AI is one of the most exciting and groundbreaking fields in the world of computer science. It’s also one of the most complicated.
In this post, we’ll break down six key concepts in artificial intelligence (including AI itself) so that you can better understand what this field is all about.
The Key Concepts of AI
1. Artificial Intelligence
Artificial intelligence is any software that mimics our natural intelligence. For example, a calculator performs a task that we normally do with our intelligence, but it is not mimicking our ability to think. However, when you ask Siri to perform a calculation and she answers your question correctly, that is a very simple form of AI.
In most forms, AI can observe its environment to some degree (listening to your voice, for example) and use the data it gathers to make better decisions. Oftentimes, the interactions AI has with a user are taken as feedback and added to the AI’s knowledge base to be used in future decisions, which is a simple type of AI learning.
2. Machine Learning
Machine Learning (ML) is a subset of AI that focuses on the ability of a program to adapt when given new information. In simpler terms, machine learning often ignores the mimicry typically associated with AI and strictly focuses on the learning component. Without any additional coding provided by a programmer, ML software can discover new and better methods to make decisions.
Think of this like the equations you learned in algebra. You start out using equations in specific use cases and eventually realize that they apply more broadly to other areas of math. That realization — the connection between something you’ve been taught and something you’ve discovered — is the primary goal of machine learning: to teach software enough that it can begin to teach itself.
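To make this concrete, here is an illustrative sketch (not any particular ML library) of a program that is never told the rule y = 2x. It estimates the relationship from examples on its own, using gradient descent:

```python
# A minimal sketch of "learning from data": the program is never given
# the rule y = 2x; it estimates the slope from labeled examples.

def learn_slope(examples, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs by gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how far the current guess is off
            w -= lr * error * x    # nudge w to reduce the error
    return w

data = [(1, 2), (2, 4), (3, 6)]    # hidden rule: y = 2x
w = learn_slope(data)
print(round(w, 2))                 # prints 2.0
```

No programmer ever wrote "multiply by two" into the code; the value emerged from the data, which is the core idea behind machine learning.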
3. Neural Network
A neural network is a machine learning structure that represents a program as layers of interconnected nodes. This way of representing a system is loosely based on the interconnected neurons in the human brain. In other words, when you hear someone talking about neural networks, just think of it as a really primitive digital brain.
For example, you’ve likely noticed a feature in your smartphone’s photo application that can sort pictures based on the people in each photo. This is accomplished with a neural network built to recognize faces, something that can normally only be done by a human. That “digital brain” can’t hold a conversation – it’s far too simple. But it can do something that a traditional computer program can’t, which is adaptable recognition.
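To make the "node" idea concrete, here is a hedged sketch of a single artificial neuron in plain Python. The weights and bias are made-up numbers chosen purely for illustration:

```python
# A toy "node" from a neural network: it weighs its inputs, sums them,
# and outputs a value near 1 only when the weighted sum is high enough.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation squashes to (0, 1)

# Hypothetical weights: this node responds strongly to the first input.
out = neuron([1.0, 0.0], weights=[4.0, -1.0], bias=-2.0)
print(round(out, 2))                    # prints 0.88
```

A real face-recognition network chains thousands of nodes like this one, and learning consists of adjusting those weights.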
4. Deep Learning
Deep learning is another subset of machine learning that uses neural networks with many layers of nodes rather than a single layer. The word "deep" in deep learning refers to these layers. You can think of each neural network layer as a space where something new is learned from a set of data.
To put this in plain terms, picture five vertical lines, like so: I I I I I. The first one is the input layer — that’s where the deep learning software receives data. The second line, layer two, runs the data through an algorithm to learn something about that data. The third layer does the same thing using a different algorithm, which allows the software to learn a second thing about the data. The fourth layer does the same thing, with yet another algorithm, so that the deep learning software now has three things it’s learned about the initial input. In the fifth and final layer, the software outputs what it has learned.
The layers between the first and last are known as "hidden" layers, and most deep learning applications have far more than three of them. But the idea here is that rather than doing one thing with a piece of data, several things are done with it, giving the software a deeper understanding of the data.
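The five-line picture above can be sketched as ordinary Python functions. The jobs assigned to each layer here (brightness, contrast) are hypothetical, chosen only to show data flowing through stages:

```python
# Sketch of the layer picture: data flows input -> hidden layers -> output.
# Each "layer" is just a function that transforms what the previous one produced.

def layer1(pixels):   # hidden layer 1: normalize raw pixel values (hypothetical job)
    return [v / 255 for v in pixels]

def layer2(values):   # hidden layer 2: measure contrast (hypothetical job)
    return max(values) - min(values)

def layer3(contrast): # hidden layer 3: make a decision from the contrast
    return "high-contrast" if contrast > 0.5 else "low-contrast"

pixels = [10, 200, 30]                      # the input layer receives raw data
result = layer3(layer2(layer1(pixels)))     # ...and the output layer reports it
print(result)                               # prints "high-contrast"
```

In a real deep learning system the layers are learned rather than hand-written, but the flow of data through successive transformations is the same.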
5. Supervised and Unsupervised Learning
Supervised learning is a method of teaching AI by providing it with labeled training data. For example, you might give an AI a set of images labeled as either “cat” or “dog”. Then, by learning from those images, the AI would be able to identify new unlabeled images as “cat” or “dog” on its own.
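As a minimal sketch of supervised learning, here is a 1-nearest-neighbor classifier. The 2-D points are made-up stand-ins for images; each training point carries a label, and a new point gets the label of its closest labeled neighbor:

```python
# Supervised learning sketch: labeled training data, then classify new points
# by finding the closest known example (1-nearest-neighbor).

training = [((1, 1), "cat"), ((1, 2), "cat"), ((8, 8), "dog"), ((9, 7), "dog")]

def classify(point):
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2  # squared distance
    return min(training, key=lambda ex: dist(ex[0], point))[1]

print(classify((2, 1)))   # prints "cat" (near the labeled cat examples)
print(classify((8, 9)))   # prints "dog" (near the labeled dog examples)
```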
Unsupervised learning has a similar end goal — for the AI to make sense of data — but it's never given labels or corrections at all. Let's say you have an AI and you want it to tell the difference between cars and bicycles, but you want it to figure out the difference on its own. So all you do is give it a hundred unlabeled images of cars and bicycles, and the software groups them by the patterns it finds itself (wheel count, frame shape, overall size). Eventually, the AI should be able to piece together what makes a "bike" a bike and a "car" a car, without anyone ever telling it "right" or "wrong" along the way.
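A minimal sketch of unsupervised learning: a stripped-down, one-dimensional version of the k-means clustering idea. The numbers are hypothetical measurements, and crucially, no labels are ever supplied:

```python
# Unsupervised learning sketch: group unlabeled numbers into two clusters
# by repeatedly moving two "centers" toward the points nearest them.

points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]    # no labels given

def two_means(data, steps=10):
    a, b = min(data), max(data)            # start the centers at the extremes
    for _ in range(steps):
        group_a = [p for p in data if abs(p - a) <= abs(p - b)]
        group_b = [p for p in data if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)    # move each center to its group's mean
        b = sum(group_b) / len(group_b)    # (assumes neither group empties out)
    return sorted(group_a), sorted(group_b)

print(two_means(points))   # prints ([0.9, 1.0, 1.2], [7.9, 8.0, 8.3])
```

The program was never told which numbers belong together; the grouping emerged from the structure of the data itself.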
6. Reinforcement Learning
Reinforcement learning is a type of machine learning that teaches AI through trial and error. Take the lab mouse trying to find the cheese at the end of a maze. On its first attempt, the mouse may struggle to even reach the end. Each time it is placed in the maze, however, it becomes more proficient, until eventually it can make consistently perfect runs.
This type of iterative learning is one of the most valuable ways in which humans and other animals learn. We are penalized when we make mistakes and rewarded when we get things right, and eventually learn how to do something (almost) perfectly.
Reinforcement learning applies the same concept to teaching AI. The AI is given a goal, it makes attempts at that goal, and it receives feedback on how close or far each attempt came. Over many trials, the AI adjusts its behavior to maximize that feedback, until it can complete the task reliably.
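The maze example can be sketched as a tiny trial-and-error loop. The reward numbers here are made-up (they stand in for "how close to the cheese each path gets"); the agent tries each path once, then keeps choosing whichever earned more reward:

```python
# Trial-and-error sketch (illustrative numbers, not a full RL library):
# the agent explores both paths, then exploits the better one.

rewards = {"left": 0.2, "right": 0.9}     # hypothetical payoff of each path
estimates = {"left": 0.0, "right": 0.0}   # what the agent believes so far
counts = {"left": 0, "right": 0}

for trial in range(20):
    if trial < 2:
        action = ["left", "right"][trial]          # explore: try each path once
    else:
        action = max(estimates, key=estimates.get) # exploit: pick the best so far
    reward = rewards[action]                       # feedback from the environment
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running average

print(max(estimates, key=estimates.get))   # prints "right"
```

Real reinforcement learning systems add randomness to the exploration and handle sequences of decisions rather than a single choice, but the explore-then-exploit loop is the heart of it.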