The Different Learning Styles Of Artificial Intelligence
If you’re new to artificial intelligence, the concept of AI learning might be a bit hard to wrap your head around.
After all, an AI is still just a series of ones and zeroes, no different than any other computer program. So how does an AI learn?
How Does AI “Learn”?
First, let’s establish what we mean by AI learning. The term “AI” on its own doesn’t necessarily imply learning. Virtual assistants like Siri and Alexa are technically AIs, but they don’t do a whole lot of learning (though this is slowly starting to change). When an AI learns, it is essentially interacting with its own programming, on its own, to achieve a goal.
For example, say you have an AI that has control over a robotic body. You give it functions that let it move its left leg, its right leg, and both of its arms. But you don’t give it a “run” function. Instead, you tell the AI you want it to run, and by testing out its various movement functions, it eventually learns how to use its programming to run.
That, in a fairly basic form, is what we’re talking about when discussing AI learning.
The Different Types Of AI Learning
Of course, giving an AI a set of tools and a goal is just one way to help it learn. There are multiple ways to teach an AI, each with its benefits and uses, which we’ll delve into below.
Inductive Learning

Inductive learning is the type of learning you engage in at school, in college, and with your elders. Essentially, it involves being told something and then drawing a conclusion from it.
For example, if you wanted to teach an AI that fire was dangerous, you would give it datasets on the dangers of fire. This could include images of things being burned, data on the risks and costs of fire, etc. From this, the AI would learn that it is in its best interest to avoid fire.
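The fire example above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the inductive idea: the AI is handed facts (here, made-up records pairing a stimulus with an observed outcome) and generalizes a rule from them; the `induce_rules` function and the data are inventions for this sketch, not a real AI system.

```python
# Toy inductive learning: generalize "avoid"/"safe" rules from supplied facts.
from collections import defaultdict

def induce_rules(observations):
    """Draw a general rule from (stimulus, outcome) facts the AI is told."""
    harm_counts = defaultdict(lambda: [0, 0])  # stimulus -> [harmful, total]
    for stimulus, outcome in observations:
        harm_counts[stimulus][0] += outcome == "harmful"
        harm_counts[stimulus][1] += 1
    # Conclude "avoid" when the majority of the supplied facts report harm.
    return {s: ("avoid" if harmful / total > 0.5 else "safe")
            for s, (harmful, total) in harm_counts.items()}

facts = [("fire", "harmful"), ("fire", "harmful"), ("fire", "harmful"),
         ("water", "harmless"), ("water", "harmful"), ("water", "harmless")]
print(induce_rules(facts))  # {'fire': 'avoid', 'water': 'safe'}
```

Note that the rules can never reach beyond the facts supplied, which is exactly the drawback discussed next.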
Inductive learning is one of the easier styles of learning for an AI, as the transfer of information is very controlled. The drawback is that it limits the AI to the conclusions contained in the data it’s given. In other words, telling someone how to ride a bike will never be as effective as giving them a bike to practice on.
Deductive Learning

That’s where deductive learning comes in. Deductive learning is the reverse of inductive learning: instead of being given facts that lead to a conclusion, the AI experiences something, draws a conclusion from that experience, and from that conclusion extracts facts and lessons.
Returning to the fire example, an AI with the right sensors and programming could interact with fire and quickly learn that it’s dangerous without needing to be told. But it might also learn that fire is useful, that it can be controlled, how to put it out, and how to react when it’s been burned.
This style of learning is, of course, riskier and a bit more difficult for both the developer and the AI. Neither has full control over the process, and the results aren’t guaranteed, though you can be sure they will be interesting.
Supervised Learning

Supervised learning is a method of teaching AI that builds on inductive learning but uses a little less structure. It involves providing an AI with labeled training data.
For example, let’s say you give an AI a set of images correctly labeled as either “cat” or “dog”. By looking at these images, the AI starts to see patterns in the “cat” images that distinguish them from the “dog” images, and vice versa. Eventually, after it has learned everything it can from these images, the AI will be able to correctly identify new images of cats and dogs.
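Here is one minimal sketch of that idea: a nearest-centroid classifier trained on labeled examples. The two-dimensional feature vectors (imagine something like body weight and ear length) and their values are invented for illustration; real systems learn far richer features from the images themselves.

```python
# Toy supervised learning: a nearest-centroid classifier on labeled examples.
def train(examples):
    """Average the feature vectors for each label into a centroid."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid lies closest to the new example."""
    return min(centroids, key=lambda lbl: (centroids[lbl][0] - features[0]) ** 2
                                        + (centroids[lbl][1] - features[1]) ** 2)

labeled = [((4.0, 7.5), "cat"), ((5.0, 8.0), "cat"),
           ((20.0, 11.0), "dog"), ((30.0, 12.0), "dog")]
centroids = train(labeled)
print(predict(centroids, (4.5, 7.0)))   # cat
print(predict(centroids, (25.0, 10.0))) # dog
```

The key point is that the labels do the teaching: the model only has to find what separates one labeled group from the other.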
The main drawback of supervised learning is that it requires an existing labeled dataset. You need data (e.g., pictures of cats and dogs tagged as such) to get started, and there are plenty of cases where that data isn’t available to you or simply doesn’t exist at all.
Unsupervised Learning

That’s where unsupervised learning comes into play. Unsupervised learning also teaches an AI to look for patterns in data, but unlike supervised learning, it doesn’t provide labeled training data.
In an unsupervised model, you would give the AI images of cars, buses, and bicycles without labeling them as such. The AI would have to make its own inferences about these images and come up with its own categories. When given a new image, the AI would try to classify it into one of those categories, but it wouldn’t know what the category represents, such as “car” or “bicycle.” It would be forced to come up with its own definitions.
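One classic unsupervised technique is k-means clustering, which the sketch below applies to one-dimensional data. No labels are given; the algorithm invents its own two groupings. The data values (loosely imagined as vehicle lengths in meters) are made up for illustration.

```python
# Toy unsupervised learning: 1-D k-means with two clusters and no labels.
def kmeans_1d(data, iters=10):
    centers = [min(data), max(data)]            # crude initial guesses
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for x in data:                          # assign each point to the
            groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
        centers = [sum(g) / len(g) if g else c  # move centers to group means
                   for g, c in zip(groups, centers)]
    return centers, groups

lengths = [1.2, 1.5, 1.1, 8.0, 8.4, 7.9]        # e.g. bicycles vs. buses
centers, groups = kmeans_1d(lengths)
print(sorted(groups[0]), sorted(groups[1]))     # [1.1, 1.2, 1.5] [7.9, 8.0, 8.4]
```

The algorithm cleanly separates the short items from the long ones, but it has no idea one cluster is “bicycles” and the other is “buses”; naming the clusters is up to us.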
Semi-Supervised Learning

Semi-supervised learning was created to address a common problem with unsupervised learning: aimlessness. In theory, unsupervised learning is great, but it often gives the AI too little guidance to work with, resulting in minimal progress.
Semi-supervised learning involves giving the AI a small set of labeled data and a larger set of unlabeled data. The AI still has to come up with most of its own connections, but it has enough of a starting point to steer its learning in the direction the developers intend.
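A common way to do this is self-training, sketched below under heavy simplification: a few labeled points seed the process, then the model labels the unlabeled point it is most confident about (here, simply the closest one), adds it to the labeled pool, and repeats. The feature values and the `self_train` helper are invented for this illustration.

```python
# Toy semi-supervised learning (self-training): a small labeled seed set
# propagates its labels outward through a larger unlabeled set.
def nearest_label(labeled, x):
    """Return the label of the labeled point closest to x."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def self_train(seed, unlabeled):
    labeled = list(seed)
    # Label the most confident (closest-to-a-seed) points first.
    pool = sorted(unlabeled, key=lambda x: min(abs(x - f) for f, _ in seed))
    for x in pool:
        labeled.append((x, nearest_label(labeled, x)))
    return labeled

seed = [(1.0, "cat"), (9.0, "dog")]      # small labeled set
unlabeled = [1.4, 2.0, 8.5, 7.9, 5.2]    # larger unlabeled set
for feature, label in self_train(seed, unlabeled):
    print(feature, label)
```

Notice that each newly labeled point becomes evidence for labeling the next one, so two labeled examples are enough to label the whole set.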
Reinforcement Learning

Reinforcement learning is a type of machine learning that teaches an AI through trial and error.
Take the familiar example of a lab mouse trying to find the cheese at the end of a maze. On its first attempt, the mouse may struggle even to reach the end. But each time it’s placed in the maze it becomes more proficient, until eventually it can make consistently perfect runs.
In an AI context, this involves giving the AI a set of unlabeled data (pictures of dogs and cats that aren’t identified as “dog” or “cat”) and asking it to separate them into two groups. Whenever the AI groups a dog with a cat, it receives “wrong” feedback. This continues until the AI eventually figures out what makes a dog a “dog” and a cat a “cat”.
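The mouse-in-a-maze idea maps directly onto a standard reinforcement-learning technique called tabular Q-learning, sketched here on a five-cell corridor “maze” with cheese in the last cell. Every number in it (learning rate, discount, exploration rate, episode count) is an arbitrary choice for the sketch, not a recommendation.

```python
# Toy reinforcement learning: tabular Q-learning on a 5-cell corridor maze.
import random

random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0   # cheese!
        # Trial-and-error update: nudge the value toward the observed feedback.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, greedily following Q runs straight to the cheese.
policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # ['right', 'right', 'right', 'right']
```

Early episodes are long, clumsy wanders, just like the mouse’s first runs; the reward signal gradually propagates backward through the table until the perfect run becomes the default behavior.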