Artificial Intelligence 101: The Key Concepts Of AI

Artificial intelligence (AI) is one of the most exciting and groundbreaking fields in the world of computer science. It’s also one of the most complicated.

In this post, we’ll break down six key concepts in artificial intelligence (including AI itself) so that you can better understand what this field is all about and imagine the many business use cases for AI across industries.

The Key Concepts of AI

1. Artificial Intelligence

Artificial intelligence is any software that mimics our natural intelligence through various methods of AI learning; recent prime examples include applications of generative AI (like Brancher.ai) and the field of Robotic Process Automation (RPA). For example, a calculator performs a task that we normally do with our intelligence, but it is not mimicking our ability to think. However, when you ask Siri to perform a calculation and she answers your question correctly, that is a very simple form of artificial intelligence.

In most forms, AI can observe its environment to some degree (listening to your voice, for example, or, in more advanced cases, robots perceiving their surroundings and navigating challenging environments autonomously) and use the data it gathers to make better decisions. Oftentimes, the interactions an AI has with a user are taken as feedback and added to its knowledge base to be used in future decisions, which is a simple type of AI learning.
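To make that feedback loop concrete, here is a minimal sketch in Python. The “knowledge base” is just a dictionary, and the question-and-answer functions are hypothetical stand-ins invented for illustration, not a real assistant.

```python
# A toy illustration of the observe -> decide -> learn-from-feedback loop.
# The "knowledge base" here is just a dictionary of past corrections;
# a real AI system would use far more sophisticated models.

knowledge_base = {}

def answer(question: str) -> str:
    # Prefer an answer the user has previously confirmed or corrected.
    if question in knowledge_base:
        return knowledge_base[question]
    return "I don't know yet."

def record_feedback(question: str, correct_answer: str) -> None:
    # User interactions become part of the system's knowledge for next time.
    knowledge_base[question] = correct_answer

print(answer("What is 2 + 2?"))         # "I don't know yet."
record_feedback("What is 2 + 2?", "4")  # feedback is stored
print(answer("What is 2 + 2?"))         # "4"
```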

2. Machine Learning

Machine Learning (ML) is a subset of AI that focuses on the ability of a program to adapt when given new information. In simpler terms, machine learning often ignores the mimicry typically associated with artificial intelligence and strictly focuses on the learning component. Without any additional coding provided by a programmer, ML software can discover new and better methods to make decisions, which will be essential to the advancement of fields like robotics.

Think of this like the equations you learned in algebra. You start out using equations in specific use cases and eventually realize that they apply more broadly to other areas of math. That realization — the connection between something you’ve been taught and something you’ve discovered — is the primary goal of machine learning: to teach software enough that it can begin to teach itself. See our post on MLOps for robots, an example of how to set up systems where machines learn independently when given the right environment.
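As a rough illustration of software adapting to new information, here is a minimal sketch using scikit-learn’s SGDClassifier, whose partial_fit method lets a model keep updating as new examples arrive. The feature values and labels below are made up purely for illustration.

```python
# A toy example of a model that keeps adapting as new data arrives,
# using scikit-learn's SGDClassifier and its partial_fit method.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial training data: two made-up classes of 2-D points.
X_initial = np.array([[1, 0], [2, 1], [0, 2], [1, 3]])
y_initial = np.array([0, 0, 1, 1])
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later, new observations arrive; the model updates without retraining from scratch.
X_new = np.array([[3, 1], [2, 4]])
y_new = np.array([0, 1])
model.partial_fit(X_new, y_new)

print(model.predict([[4, 2], [1, 5]]))  # predicted classes for unseen points
```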

3. Neural Network

A neural network is a set of algorithms used in machine learning that model an AI as layers of interconnected nodes. This method of representing a system is loosely based on interconnected neurons in the human brain. In other words, when you hear someone talking about neural networks, just think of it as a really primitive digital brain.

For example, you’ve likely noticed a feature in your smartphone’s photo application that can sort pictures based on the people in each photo. This is accomplished with a neural network built to recognize faces, something that can normally only be done by a human. That “digital brain” can’t hold a conversation – it’s far too simple. But it can do something that a traditional computer program can’t: recognition that adapts to examples it was never explicitly programmed for.
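To show what “interconnected nodes” look like in practice, here is a minimal sketch of a single neural network layer in plain NumPy. The input values and random weights are arbitrary placeholders; a real face-recognition network would stack many such layers and learn its weights from data.

```python
# A tiny, hand-wired neural network "layer": inputs flow through weighted
# connections (the interconnected nodes) and a nonlinear activation.
import numpy as np

def layer(inputs, weights, biases):
    # Each output node sums its weighted inputs, adds a bias,
    # and passes the result through a simple activation (ReLU).
    return np.maximum(0, inputs @ weights + biases)

inputs = np.array([0.5, -0.2, 0.1])     # e.g. three pixel-like features
weights = np.random.randn(3, 4) * 0.1   # connections from 3 inputs to 4 nodes
biases = np.zeros(4)

print(layer(inputs, weights, biases))   # activations of the 4 nodes
```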

4. Deep Learning

Deep learning is another subset of machine learning that uses neural networks with many layers rather than a single layer. The word “deep” in deep learning refers to these layers. You can think of each layer as a place where something new is learned from a set of data.

To put this in plain terms, picture five vertical lines, like so: I I I I I.

The first one is the input layer — that’s where the deep learning software receives data. The second line, layer two, runs the data through an algorithm to learn something about that data. The third layer does the same thing using a different algorithm, which allows the software to learn a second thing about the data. The fourth layer does the same thing, with yet another algorithm, so that the deep learning software now has three things it’s learned about the initial input. In the fifth and final layer, the software outputs what it has learned.

The layers between the first and last layers are known as “hidden” layers, and most deep learning applications have far more than three hidden layers. But the idea here is that rather than doing one thing with a piece of data, several things are done with it to give the software a deeper understanding of the data.
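Here is a minimal sketch of that five-layer picture in NumPy: an input layer, three hidden layers, and an output layer. The layer sizes are arbitrary and the weights are random placeholders; in a real deep learning model, training would adjust them.

```python
# The five vertical lines as code: input -> three hidden layers -> output.
# Weights are random placeholders; training would adjust them.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 8, 2]   # input, three hidden layers, output

# One weight matrix and bias vector connecting each layer to the next.
weights = [rng.normal(scale=0.1, size=(a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

def forward(x):
    # Each hidden layer transforms the data again, so the network
    # "learns something new" about it at every step.
    for w, b in zip(weights, biases):
        x = np.maximum(0, x @ w + b)   # ReLU activation
    return x

print(forward(np.array([0.2, -0.1, 0.7, 0.3])))  # the output layer's values
```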

5. Supervised and Unsupervised Learning

Supervised learning is a method of teaching artificial intelligence by providing it with labeled training data. For example, you might give an AI a set of images labeled as either “cat” or “dog”. Then, by learning from those images, the AI would be able to identify new, unlabeled images as “cat” or “dog” on its own. (This type of learning is especially important for something sophisticated like creating an autonomous fleet of robots.)
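Here is a minimal sketch of supervised learning with scikit-learn. Since real images are beyond a short example, two made-up numeric features stand in for each photo; the labels are the ones a human provided up front.

```python
# Supervised learning: the model is given labeled examples up front.
# Two made-up numeric features stand in for real image data.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]  # feature vectors
y_train = ["cat", "cat", "dog", "dog"]                       # human-provided labels

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# A new, unlabeled example: the model assigns a label on its own.
print(model.predict([[0.85, 0.25]]))  # ['cat']
```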

Unsupervised learning has the same end goal (for the AI to correctly sort data) but it’s never given labeled training data. Let’s say you have an AI and you want it to tell the difference between cars and bicycles, but you want it to figure out the difference on its own. All you do is give it a hundred unlabeled images of cars and bicycles, and the AI groups them based on the patterns it finds in the images themselves, such as shapes, wheels, and proportions. Eventually, the AI should be able to piece together what makes a “bike” a bike and a “car” a car without ever being told.
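And here is the unsupervised counterpart: a minimal sketch using scikit-learn’s KMeans, which groups unlabeled points into clusters on its own. The feature values are invented for illustration, and the algorithm is never told which points are “cars” and which are “bicycles”.

```python
# Unsupervised learning: no labels are given at all.
# KMeans groups the unlabeled points into clusters based on similarity.
from sklearn.cluster import KMeans

# Made-up feature vectors: some resemble one kind of object, some another,
# but the algorithm is never told which is which.
X = [[2.0, 1.8], [2.1, 1.9], [1.9, 2.0],   # one natural group
     [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]]   # another natural group

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(model.labels_)                 # cluster assignment for each point, e.g. [1 1 1 0 0 0]
print(model.predict([[2.0, 2.0]]))   # which cluster a new point falls into
```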

6. Reinforcement Learning

Reinforcement learning is a type of machine learning that teaches AI through trial and error. Take the lab mouse trying to find the cheese at the end of a maze. On a first attempt, the mouse may struggle to even make it to the end. Each time it is placed in the maze, however, it becomes more and more proficient at the maze, until eventually, it can make consistently perfect runs.

This type of iterative learning is one of the most valuable ways in which humans and other animals learn. We are penalized when we make mistakes and rewarded when we get things right, and eventually learn how to do something (almost) perfectly.

Reinforcement learning applies the same concept to teaching AI. It gives an AI a goal, the AI makes attempts at that goal, and feedback is given on how close or far each attempt was from reaching that goal. The AI is told to complete the task to 100%, and then it’s left to do its thing.
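As a rough illustration, here is a minimal Q-learning sketch in Python: a toy agent learns, by trial and error, to walk a tiny one-dimensional “maze” toward a reward. The maze, reward values, and learning parameters are all invented for this example.

```python
# A toy reinforcement learning loop: Q-learning on a tiny 1-D "maze".
# The agent starts at position 0 and the "cheese" (reward) is at position 4.
import random

n_states, goal = 5, 4
actions = [-1, +1]                       # step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):               # many trial-and-error runs through the maze
    state = 0
    while state != goal:
        # Mostly act on what has been learned, but sometimes explore.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == goal else 0.0
        # Feedback on how close the attempt got updates the agent's estimates.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should step right (+1) toward the goal.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(goal)])  # e.g. [1, 1, 1, 1]
```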

Need help with an artificial intelligence-related project? Let’s connect.

AI is booming. Working as a consultancy on the leading edge, we’ve seen firsthand how many industries are reaping the benefits of incorporating the technology. If you have a project, don’t hesitate to reach out for an initial conversation.