Demystifying AI: A General Introduction
Updated: Dec 21, 2020
Shreshth Malik demystifies terminology surrounding Artificial Intelligence (AI) and Machine Learning (ML).
From recommendation engines for your next Netflix binge, to finding the quickest route home, Artificial Intelligence (AI) and Machine Learning (ML) algorithms play a major role in our daily lives. This article gives an introduction to the field, providing the context to understand the current ecosystem.
Let’s start by summarising some of the key concepts in the field:
Artificial Intelligence (AI): A blanket term for computer models or machines that have the ability to act ‘intelligently’. The definition of ‘intelligence’ is hotly debated, but in practice it is the ability to problem-solve, learn, adapt, and make decisions to achieve goals. This is often achieved through machine learning, but it can also be achieved by other means. For example, using logic to explicitly program what an AI ‘agent’ should do was popular in the early years of AI.
Machine Learning (ML): Tasks in AI often involve taking an input (a picture, for example) and processing it to give a desired output (whether the picture has a cat in it). Machine learning is a method whereby a model learns a mathematical mapping from input to output from data, rather than being explicitly programmed. It uses ‘training’ data to iteratively adjust its mapping and improve its performance. The trained model can then be used to make predictions on unseen data. This is the most prominent subset of AI.
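To make this concrete, here is a purely illustrative sketch (not from the article) of a model ‘learning’ a mapping from training data. The toy dataset follows the rule y = 2x, and a single weight is adjusted step by step to reduce the prediction error — a miniature version of the iterative training described above.

```python
# Illustrative toy example: training pairs follow y = 2x, and a single
# weight w is adjusted iteratively to reduce the prediction error.
xs = [1.0, 2.0, 3.0, 4.0]   # inputs
ys = [2.0, 4.0, 6.0, 8.0]   # desired outputs

w = 0.0        # initial guess for the mapping y = w * x
lr = 0.01      # learning rate: size of each correction step

for _ in range(1000):  # 'training': repeatedly improve the mapping
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad     # nudge w in the direction that reduces the error

print(w)           # learned weight, close to 2.0
print(w * 5.0)     # prediction on unseen input x = 5
```

After training, the model has recovered the underlying rule from examples alone, and can generalise to an input it never saw during training.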
Deep Learning: A subfield of ML which takes inspiration from the human brain’s architecture to formulate ML models. Like the brain, the models consist of many layers of ‘neurons’ which are connected to each other. An input is fed through a neuron which transforms it before passing it on to the next layer. By combining many of these transformations, the model can learn abstract representations of the data. This has been shown to greatly improve learning ability. Most state-of-the-art systems today, such as object detectors and language models, are deep learning models.
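The layered transformations described above can be sketched in a few lines. This is a toy two-layer network with hand-picked (not learned) weights: each ‘neuron’ takes a weighted sum of its inputs and applies a non-linearity before passing the result on.

```python
import math

# A toy two-layer network (illustrative weights, not learned): each layer's
# neurons take a weighted sum of their inputs and apply a non-linearity,
# transforming the data before passing it on to the next layer.
def layer(inputs, weights):
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]])  # first layer: 2 neurons
output = layer(hidden, [[1.0, -1.0]])                   # second layer: 1 neuron
print(output)
```

Real deep learning models stack many such layers, with millions or billions of weights learned from data rather than set by hand.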
Reinforcement Learning (RL): Psychology tells us that humans and animals learn how to behave through feedback from their actions. For example, a dog learns to fetch a ball if you reward it with treats for doing so. Reinforcement learning uses similar reward-based feedback to train AI agents to achieve their objectives.
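As a purely illustrative sketch of reward-based learning, consider a ‘two-armed bandit’: the agent repeatedly picks one of two actions, observes a reward, and updates its estimate of how good each action is, gradually preferring the one that pays off more often.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy reward-based learner: action 1 secretly pays off more often, and the
# agent discovers this purely through trial, reward, and estimate updates.
true_reward_prob = [0.2, 0.8]   # hidden from the agent
values = [0.0, 0.0]             # the agent's estimated value of each action

for _ in range(2000):
    # explore a random action 10% of the time, otherwise exploit the best estimate
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = values.index(max(values))
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    # nudge the estimate for the chosen action toward the observed reward
    values[action] += 0.1 * (reward - values[action])

print(values)  # the estimate for action 1 ends up clearly higher
```

This is the same feedback loop as the dog and the treats: actions that are rewarded become more likely to be chosen again.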
Data Science: A systematic and scientific approach to analysing data to extract knowledge and/or useful insights. Data scientists generally do this through visualisation and statistical/ML methods. The focus is on application – understanding how data can be used effectively for a given situation (e.g. for a business objective).
A Venn Diagram showing the overlap between terms in the Data science and AI ecosystem.
A Brief History of AI
The field of AI was established at the Dartmouth Summer Research Project on Artificial Intelligence in 1956. Prior to this, Alan Turing, in ‘Intelligent Machinery’ (1948), introduced early formulations of machine learning and drew inspiration from the human brain. Many of these ideas for learning architectures are still used in deep learning.
The original proposal for the summer workshop sought to ‘find how to make machines… solve the kinds of problems now reserved for humans, and improve themselves’. It was thought that ‘a significant advance can be made’ if the right people worked together over the summer. It is no surprise that they did not manage to completely solve the AI problem over one summer! What the workshop did provide, though, was the foundation and direction for the field to grow.
Fast forward 20 years, and while the field had developed considerably, investors and the public still had not seen the breakthroughs promised in 1956. An ‘AI winter’ of pessimism and reduced funding followed in the 1970s and 80s.
The turn of the century, at last, brought the computational power and data required to utilise ML methods effectively. At first, it was only big businesses and banks that had access to the vast resources required to store and process their data. More recently, the development of cloud computing has enabled anyone with an internet connection to conduct powerful analyses of their data. This has allowed data scientists across industries to bring a data-driven approach to decision making. Predictive maintenance of machinery, inventory management, fraudulent payment detection and targeted advertising are all examples of ML applications in business processes. These are applications of so-called ‘narrow’ AI, where we seek to optimise or predict something very specific.
Recent Trends and Outlook
In the last ten years, breakthroughs in deep learning have enabled human and super-human level performance on tasks that were previously unimaginable. Language models such as GPT-3 can now write natural human-like text, extract information, and translate between languages. Real-time ML object detection and decision making models are accelerating the development of autonomous vehicle systems such as Tesla’s Autopilot. Reinforcement learning algorithms developed by DeepMind have achieved superhuman performance in classic arcade games, famously defeated the world Go champion, and surpassed the strongest chess engines.
These high-profile developments have propelled AI into the limelight, but we should be wary of its limitations. There is still a long way to go for ‘general’ AI, where an agent can adapt to multiple tasks and new environments like humans do. Furthermore, as we deploy more AI systems, their potential for negative impact on society also increases. There is a proven danger of existing human biases slipping into ML algorithms as they learn from our biased data. Interpreting models used for decision making and understanding their shortfalls is vital as we progress towards a data-driven future.
Investments in AI technology across industries continue to grow in spite of the pandemic. In fact, AI has even been contributing to the response. It is hard to predict where we will be in a few years; AI as a scientific field is still in its infancy. However, there is no doubt that it will have a major impact on society as we see more ML models deployed across industries including healthcare, transportation, governance, finance and many more.
The UCL Finance and Technology Review (UCL FTR) is the official publication of the UCL FinTech Society. We aim to publish opinions from the student body and industry experts with accuracy and journalistic integrity. While every care is taken to ensure that the information posted on this publication is correct, UCL FTR can accept no liability for any consequential loss or damage arising as a result of using the information printed. Opinions expressed in individual articles do not necessarily represent the views of the editorial team, society, Students’ Union UCL or University College London. This applies to all content posted on the UCL FTR website and related social media pages.