Augmented Intelligence: How AI Will Empower, Not Replace, Human Expertise

AI is a tool that humans can train to solve problems, and it is up to us to program our values, rules and decision-making principles into the machines. When applying technology to workflows, it is essential to select a ‘good’ problem to solve, especially in industries that are not early adopters.

Technology is evolving whether we like it or not. Some people welcome this change and see the potential to work with greater efficiency, while others are more sceptical of the capabilities and worry that the ‘robots’ will replace us. No matter what your views are, being informed about the latest developments in technology and artificial intelligence will help you refine your arguments – one way or another.  

In 2018, I was at a presentation where we discussed artificial intelligence and its capabilities. The message that resonated and stayed with me was about ethics. We decide what we teach AI, and through the human decisions the bots observe, we convey our belief system, values and principles. If we take an objective look at ourselves, we might be surprised by what our own actions convey about us.

The discourse around self-driving cars is interesting, for example, because it brings up many issues. We can get from A to B in different ways: i) the fastest route, ii) the safest route. How much do we want the bots to follow the rules? How do we want them to make decisions? How do we define safe? Ultimately it is up to us to program our values, rules and decision-making principles into the machines.   

The same is true for any type of AI that we integrate into the workflow. I emphasise integration into the workflow because I don’t believe bots could replace us just yet. Everything is possible with technology – whether we like the outcome or not. My main hypothesis when applying technology to our workflows is to select a ‘good’ problem to solve, especially in industries that are not early adopters of technology. While we could automate whole workflows and design mind-blowing solutions, the truth is that most industries are not ready to take the plunge.

The least threatening approach to introducing AI is through baby steps. Which tasks are time-consuming and require a manual search for data and information? Which other tasks could you automate with human supervision? That’s right: humans supervising bots. This human-in-the-loop approach means that humans keep control of the critical aspects.

Several industries have experimented with setting up a knowledge base connected to a large language model so that people can chat with the information. Think of law firms, investment companies or even ESG teams. Working across jurisdictions and countries means that these professionals have to analyse large amounts of decentralised data that is often in local languages. How convenient would it be to ‘feed’ it all to a bot and chat with this information? Humans control the input and humans interpret the outcome. This not only addresses concerns around training-data transparency, it also helps counter the ‘black box’ criticism that AI often faces.
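
As a rough illustration of the idea (not any particular firm’s system), the sketch below wires a tiny, made-up knowledge base to a simple keyword-overlap retriever. The documents, the scoring and the point where a language model would be called are assumptions for illustration; the human asks the question and reviews the retrieved sources.

```python
# A minimal sketch of the "knowledge base + language model" idea, with the
# human firmly in the loop. The documents and the keyword-overlap scoring
# are illustrative assumptions; a real system would use proper embeddings
# and a language model of your choice.

KNOWLEDGE_BASE = {
    "uk_policy.txt": "UK subsidiaries must file the ESG disclosure annually in English.",
    "de_policy.txt": "Deutsche Tochtergesellschaften berichten quartalsweise an die Zentrale.",
    "fr_contract.txt": "Le contrat est régi par le droit français et les tribunaux de Paris.",
}

def retrieve(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Return the top_k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), name, text)
        for name, text in KNOWLEDGE_BASE.items()
    ]
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:top_k]]

def ask(question: str) -> None:
    """Humans control the input (the question and the documents) and
    interpret the output (the retrieved passages, plus any model answer)."""
    print(f"Question: {question}")
    for name, text in retrieve(question):
        print(f"  source: {name} -> {text}")
    # Here the retrieved passages would be passed to a large language model,
    # and a human expert would review the answer against the cited sources.

ask("How often must UK subsidiaries file the ESG disclosure?")
```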

This is a good place to return to the original suggestion that we should make informed criticism of technology. By informed I mean recognising that not all AI applications are the same: applying AI to manage big data and machine learning are very different concepts. The example in the previous paragraph refers to the use of AI to support humans in managing large and complex datasets. In layman’s terms, the bots are used to process data and to improve the tools for analysis and decision-making. The clear advantage of introducing the technology is that it speeds up analysis and provides better tools for interpreting the results. Machine learning, on the other hand, involves bots that follow patterns similar to human intelligence to reason, learn and solve problems. The machine learns from experience and improves its models with or without human intervention. Indeed, without the human-in-the-loop principle, we would be enabling models to improve themselves without human supervision, and this is what people fear most. However, if we establish reinforcement learning controlled by humans, the model will evolve under supervision, based on feedback from human experts.
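
To make the human-in-the-loop idea concrete, here is a toy sketch of a model that only updates itself in response to expert corrections. The keyword weights, the learning rate and the stand-in reviewer are invented for illustration; the point is the supervised feedback loop, not the model.

```python
# A toy sketch of a model that improves only under human supervision.
# The "model" is a single keyword weight table; the feedback values are
# invented for illustration.

weights = {"urgent": 0.5, "invoice": 0.5, "lottery": 0.5}   # initial beliefs
LEARNING_RATE = 0.2

def score(email: str) -> float:
    """Higher score = more likely to need attention."""
    return sum(weights.get(w, 0.0) for w in email.lower().split())

def human_feedback(email: str, predicted: float) -> float:
    """Stand-in for a human expert: returns the 'correct' score.
    In a real workflow this would be a reviewer in the loop."""
    return 0.0 if "lottery" in email.lower() else 1.0

for email in ["urgent invoice attached", "you won the lottery"]:
    predicted = score(email)
    target = human_feedback(email, predicted)          # human stays in control
    error = target - predicted
    for w in email.lower().split():
        if w in weights:
            weights[w] += LEARNING_RATE * error        # model evolves under supervision
    print(email, "-> predicted", round(predicted, 2), ", corrected towards", target)

print("updated weights:", weights)
```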

The current landscape of AI application across industries, outside the more experimental tech world, revolves around reactive and limited memory AI. Reactive AI is probably the most basic type: actions are performed according to a programmed formula. This is the true extension of a human! While I do 3 calculations, a bot does 3,000. The functionality is usually limited and excludes learning from past experience; nevertheless, this technology is perfect for search engines or labelling emails. Limited memory AI can learn from past data or experience and can make some decisions. Its most common everyday applications are chatbots and virtual assistants. The ‘limited memory’ refers to the time limit on the past data used for learning. In my view, both of these types of AI are great starting points to support the work of human experts and to enhance their performance through increased efficiency and better access to large-scale, industry-specific data.
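
A minimal sketch of the difference, with made-up labelling rules and a made-up five-message memory window: the reactive labeller always maps the same input to the same output, while the limited memory assistant keeps only a short window of recent conversation.

```python
from collections import deque

# Reactive AI: a fixed, programmed formula with no memory of past inputs.
# The labelling rules below are illustrative, not any real product's rules.
def label_email(subject: str) -> str:
    subject = subject.lower()
    if "invoice" in subject:
        return "finance"
    if "unsubscribe" in subject or "offer" in subject:
        return "promotions"
    return "inbox"

# Limited memory AI: the assistant remembers only a bounded window of the
# recent conversation and uses it when responding.
class Assistant:
    def __init__(self, memory_size: int = 5):
        self.memory = deque(maxlen=memory_size)   # older turns fall out

    def respond(self, message: str) -> str:
        self.memory.append(message)
        if any("deadline" in m.lower() for m in self.memory):
            return "You mentioned a deadline earlier; do you want a reminder?"
        return "Noted."

print(label_email("Invoice for March"))           # -> finance, every time
bot = Assistant()
print(bot.respond("The filing deadline is Friday."))
print(bot.respond("Also, book a meeting room."))  # still remembers the deadline
```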

No need to despair if your biggest fear is a bot replacing you. We humans have a great power: we can admit uncertainty or simply say ‘I don’t know.’ A bot will do neither, hence the issues around hallucinations: if it can’t find an answer, it will make one up. We don’t have to search hard to find false information or fake references created by AI to justify a made-up answer. I believe that letting bots work for us, rather than instead of us, is the way to go. After all, in every industry we can find tasks where human experts add the greatest value and tasks where they waste time. Tackling the time-wasting activities with technology will not only make us more fulfilled in our jobs but will enable us to focus on creating value.
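
To show how modest that fix can be, here is a small, hypothetical sketch of a bot that abstains. The FAQ entries and the overlap threshold are assumptions; the only rule is that the bot refuses to answer when it cannot find supporting evidence.

```python
# A minimal sketch of a bot that is allowed to say "I don't know".
# The FAQ entries and the threshold are illustrative assumptions; the idea
# is simply to refuse to answer without supporting evidence.

FAQ = {
    "what is the filing deadline": "The annual filing deadline is 31 March.",
    "who approves the report": "The board approves the final report.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    q_words = set(question.lower().split())
    best_answer, best_score = None, 0.0
    for known_q, known_a in FAQ.items():
        overlap = len(q_words & set(known_q.split())) / len(q_words)
        if overlap > best_score:
            best_answer, best_score = known_a, overlap
    if best_score < threshold:
        return "I don't know."          # abstain instead of inventing an answer
    return best_answer

print(answer("What is the filing deadline?"))   # grounded answer
print(answer("What is the capital of Mars?"))   # -> I don't know.
```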
