Advances in computing power over the last decade have enabled Artificial Intelligence research to evolve quickly in both technical power and commercial usability.
Let’s look more closely at what AI is, as well as key concepts that can position you to better understand it and the potential it holds.
Artificial General Intelligence vs. Artificial Narrow Intelligence
One caveat before we begin: there are many levels of sophistication among the AI technologies under discussion today. They can be roughly divided into two main categories: artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Artificial general intelligence refers to technology that can perform any intellectual task that a human could. So far, AGI remains stuck in the realm of science fiction.
We’re going to focus on examples of artificial narrow intelligence since artificial general intelligence is still a ways off. We want to look at AI as it is today and use this information to predict what it might look like in the near future.
The Making of AI
Like many of the emerging fields in technology, AI is a broad, often vague, and complicated subject. The concept is simple, but the execution requires several moving parts (literally and figuratively) that all need to work harmoniously.
AI can range from something as simple as a Python program written on a $35 Raspberry Pi computer to something as complex as a robot with vision, IoT capability, and sophisticated software. In this section, we’ll explore the underlying concepts of AI as well as the resources available to get an AI project off the ground, no matter how small or grand in scale.
Key Concepts of AI Technology
Artificial intelligence is software that can complete tasks that a human would normally accomplish cognitively. In most forms, AI can observe its environment and use accumulated knowledge to maximize its success in making decisions. For example, when you ask the voice assistant on your phone to show you pictures of golden retrievers, it interprets your request, decides how to best respond, and performs some action. Often the response from the user is taken as feedback and added to the AI’s knowledge base to be used in future decisions.
Machine Learning (ML) is a subset of AI that includes the ability to adapt functionality to new information. Without any additional coding provided by a programmer, an ML application can discover new and better methods to make decisions.
Machine learning techniques work by developing models, which are equations that approximate the relationships between different items in a data set. In the simplest form, a linear regression model expresses one variable (Y) in relation to another (X) through an equation (Y = mX + b) that has the least total error across all Y values in a data set. Real models are far more complex. Machine learning techniques differ in the models they assume and the method they use to learn from data.
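The linear regression case above can be sketched in a few lines of plain Python. This is a minimal illustration with made-up data points, using the closed-form least-squares solution for the slope m and intercept b:

```python
# Fit the model Y = mX + b by ordinary least squares, using only
# built-in Python. The data points are made up for illustration and
# roughly follow Y = 2X + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Closed-form solution: this slope minimizes the total squared error.
m = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - m * x_mean

print(f"Y = {m:.2f}X + {b:.2f}")  # close to Y = 2X + 1
```

Real machine learning libraries solve much larger versions of this same problem, with many variables instead of one.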
A neural network is a particular set of algorithms used in machine learning that models a system as layers of interconnected nodes. This method of representing a system is loosely based on interconnected neurons in the human brain.
For example, you’ve likely noticed a feature in your smartphone’s Photos app that can sort pictures based on the individual people in each photo. This is accomplished using a neural network configured to recognize faces by identifying patterns that typically would only be distinguishable by a human. Neural networks can identify, group, and sort this data.
Deep learning is a subset of machine learning that uses complex, multi-layer neural networks. The word “deep” in deep learning refers to the fact that the neural network contains more than one layer of neurons and connections for learning. These “hidden” layers allow for more interactions between input data values, and these interactions are obscured to the programmer or the user.
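To make the idea of layers concrete, here is a minimal sketch of a forward pass through a network with one hidden layer. The weights are fixed, made-up numbers; in a real network they would be learned from data:

```python
import math

def relu(x):
    # A common activation function: pass positive values, zero out negatives.
    return max(0.0, x)

def sigmoid(x):
    # Squashes any number into the range (0, 1), e.g. for a yes/no output.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: each node sums its weighted inputs, then applies ReLU.
    hidden = [relu(sum(w * i for w, i in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output layer: weighted sum of the hidden activations, squashed to (0, 1).
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

inputs = [0.5, -1.0]
hidden_weights = [[1.0, -1.0], [0.5, 0.5]]  # two hidden nodes
output_weights = [1.0, 2.0]

print(forward(inputs, hidden_weights, output_weights))
```

A deep network simply stacks more hidden layers of this kind, which is what lets it capture the more intricate interactions between inputs described above.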
While some machine learning algorithms try to ‘fit’ data to existing, known statistical models, deep learning adds the capacity to extract more feature relationships from the same data. Using various neural network architectures, deep learning can find a larger number of correlation patterns within data, often without requiring human effort to target the correct patterns for a particular application.
Supervised and Unsupervised Learning
Supervised learning is a method of teaching AI by providing it with labeled training data. For example, a set of images correctly labeled as either “cat” or “dog” could be used as a training set for building a pet categorization algorithm. By mapping characteristics of known inputs to their labeled outputs, the algorithm learns to classify new cases into one of the trained labels.
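One of the simplest supervised methods is nearest-neighbor classification: label a new case the same as the most similar labeled example. The sketch below uses made-up numeric “features” for each animal (say, weight in kg and ear length in cm); a real image classifier would extract far richer features:

```python
import math

# Labeled training data: (features, label) pairs with invented values.
training_data = [
    ((4.0, 7.0), "cat"),
    ((5.0, 6.5), "cat"),
    ((20.0, 12.0), "dog"),
    ((30.0, 14.0), "dog"),
]

def classify(features):
    # Predict the label of the closest labeled training example.
    def distance(example):
        known, _label = example
        return math.dist(known, features)
    _nearest, label = min(training_data, key=distance)
    return label

print(classify((4.5, 6.8)))    # cat
print(classify((25.0, 13.0)))  # dog
```

The key point is that the labels do the teaching: the algorithm never needs to be told what a cat is, only which examples are cats.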
Unsupervised learning looks for similar patterns in data but without known correct labels. This is often called clustering. If you provide the AI with images of cars, buses, and bicycles, without labeling them as such, the algorithm makes deductions about how it should define categories.
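A classic clustering algorithm is k-means. This minimal one-dimensional sketch uses made-up values and two clusters: no labels are provided, yet the algorithm recovers the two groups on its own:

```python
def k_means(points, centroids, iterations=10):
    # Alternate between assigning points to centroids and moving centroids.
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
centroids, clusters = k_means(points, centroids=[0.0, 5.0])
print(centroids)  # roughly [1.5, 10.5]
```

Notice that the algorithm never learns names for the two groups; it only discovers that the data naturally splits into them, which is exactly the car/bus/bicycle situation described above.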
Reinforcement learning is a type of machine learning that involves teaching AI through trial and error. Take the familiar lab mouse trying to find the cheese at the end of a maze. On a first attempt, the mouse may struggle to even make it to the end. Each time it is placed in the maze, however, it becomes more and more proficient, until eventually it can make consistently perfect runs. A reinforcement learning agent improves the same way: it receives rewards for successful actions, and over many attempts it learns a strategy that maximizes its total reward.
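The mouse-in-a-maze example can be sketched with Q-learning, a standard reinforcement learning algorithm. The maze here is simplified to a row of 5 cells with cheese in the last one, and the learning parameters are illustrative choices, not tuned values:

```python
import random

random.seed(0)            # fixed seed so the sketch is reproducible
N_STATES, GOAL = 5, 4     # cells 0..4; the cheese is in cell 4
ACTIONS = [-1, +1]        # step left or step right

# Q-table: the agent's running estimate of how good each action is
# in each cell. It starts knowing nothing.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise take the best known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best follow-up action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = next_state

# After training, the greedy policy heads straight for the cheese.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

Like the mouse, the agent’s early episodes are long and aimless; the learned Q-values are what turn later runs into consistently perfect ones.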
Simulated environments allow us to observe the AI’s behaviors in the virtual world so we can more accurately predict its behavior in the real world. Simulations can also generate diverse, unique training data that would be difficult or impossible to collect in the real world.
Download our free white paper: AI Software & Hardware Options to learn how AI technology is being applied today.