In 2018, I had the opportunity to deliver a presentation at a major company to demonstrate how Artificial Intelligence (AI) could assist in automating one of their challenging and demanding tasks.
At the end of my talk, a gentleman in the audience remarked: “With 25 years of experience in this job – with all due respect – I can rely more on my gut feeling than on AI.”
I acknowledged that his confidence stemmed from his extensive professional experience. I also pointed out that not everyone possesses such experience and well-developed intuition or “gut feeling”. Intuition, honed over long years of practice, is valuable. However, it can also lead to biased or flawed decision-making.
In his influential book “Thinking, Fast and Slow”, the psychologist Daniel Kahneman introduced the concept of two distinct systems within our brains: System 1, characterised by its speed and reliance on intuition, and System 2, which operates at a slower pace and employs logical reasoning. Although conclusive scientific evidence for these systems may be lacking, their influence can be observed in our daily lives. For instance, routine actions such as fastening a seatbelt when entering a car are performed effortlessly (System 1), while solving a complex mathematical problem requires the deliberate thinking of System 2.
The human brain is remarkable and has produced countless scientific discoveries. However, our judgment is inherently shaped by our biases and experiences. This is where AI can play a significant role: when properly constructed and trained, it has the potential to mitigate these biases and enhance decision-making across various domains, including healthcare, education, law, and politics.
AI is not a recent phenomenon. Its origins can be traced back to the early 1950s. At its core, AI involves the extraction and generation of knowledge from vast amounts of data. It encompasses the creation of methods and models that enable computers to analyse and comprehend such data, learn from it, and make predictions or forecasts.
An example of AI in daily use is the ability of your mobile device to be unlocked using fingerprint or facial recognition. These applications rely on algorithms which have been exposed to a large number of images of human fingerprints or faces, allowing the devices to differentiate between individuals using their unique characteristics.
This process is commonly known as “training”. In essence, if you have a sufficient amount of data that accurately represents a specific scenario, you can leverage AI to create solutions that perform tasks for you. For example, by utilising a significant number of chest X-ray images, computers can be taught to identify whether a patient has pneumonia or not. Similarly, with a good number of samples (images/videos) showing corroded pipelines underwater, computers can be trained to detect corrosion in similar cases.
On a recent business trip to Houston, I had an interesting exchange with a border control officer at the airport, who asked what I do for a living. Upon learning that I work in AI, he exhibited both enthusiasm and apprehension about the potential risks AI might present to humanity. With a smile, he remarked: “Oh… you’re one of those guys.”
It is fascinating to see how people’s interest in AI has developed in the past few years. Five years ago, when I delivered a presentation showcasing the benefits of AI, I struggled to capture the interest of those in the room. Nowadays, merely mentioning the term is enough to spark engaging conversations with people from all backgrounds, be it a taxi driver, a border control officer, or a waitress at a restaurant. As someone working in the field, I see this as a positive sign rather than a worrying one.
The growing public awareness surrounding AI indicates an active involvement in discussions about its potential consequences. Such awareness and involvement will help people recognise the benefits of AI and data-driven applications in advancing various fields.
Throughout history, humanity has consistently displayed impressive resilience and adaptability. Concerns have been raised about job losses, AI misuse, and the monopolisation of AI research by powerful entities, much as they were during previous technological revolutions. However, just as in the past, we will likely overcome these obstacles and embrace the transformative potential of AI.
As a final point, I am often asked whether AI and robots could eradicate humanity. My short answer is that this won’t happen. It is essential to bear in mind that we label it Artificial Intelligence, not Intelligence. Hence, while it is natural to have concerns, panicking, banning, and overregulating are not justified.
Instead, our focus should be on how to adapt our practices to leverage this progress effectively. For example, in education, we must reconsider how we teach, engage, assess, and prepare our students for future job opportunities. In this era of exponential AI progress, with large language models and other generative AI applications, continuing to educate our students in the same traditional way is no longer acceptable.
AI can now handle tasks such as essay writing, solving maths equations, generating voice-based videos, preparing presentations, and more. This remarkable ability of AI to undertake tasks that were once done by humans necessitates an urgent reassessment of our practices in education, health, and other industries.
We should seize this opportunity to embrace AI’s potential and reimagine how we approach various sectors to make the most of these advancements.
As printed in The Scotsman and Scotland on Sunday.