As interesting as the concepts and resources behind AI are, the real fun begins once you take AI out of the lab and put it to work in the real world.
There are two major areas of practical AI application that we want to explore: how AI can alter the way we use and think about computers, and what AI can do if it's given a robot body.
The majority of AI that we see in practical application is still pretty simple (Siri and Google Assistant, for example), but that is about to change. As you'll see in the next few sections, researchers and developers are already shaping the ways in which AI will perceive and interact with us, our environments, and our technology.
The Importance of Peripherals for Advancing Human-Machine Interaction
Nowadays, especially with how far computers have come, we can interact with and control computers fairly intuitively. Despite their complexity, it’s considerably easier to accomplish basic tasks on a laptop than it is to drive a car. The reason for this is that we have created a number of peripheral tools to make interacting with computers easier. Keyboards, mice, headphones, GUIs, and touchscreens all make the process of communicating with your computer much easier than in the days of command-line interfaces.
As more peripherals arise for AI development and deployment, the technology will become more widespread. Let’s look at some of AI’s practical applications.
1. Facial Recognition: Changing the way we interact with computers
Verbal communication is just one of the ways that AI will change how we interact with computers. We're already starting to see things like facial recognition (Apple's FaceID), voice recognition ("Ok Google"), and gesture recognition (Samsung's Air Gesture) alter the way we communicate and interact with computers.
When you combine these three things with artificial intelligence, you can start to imagine a computer that no longer needs a mouse, keyboard, or even a screen. Instead of being a device you physically interface with, a computer could simply be an entity in your home or on your person that you communicate with like you would with another person.
Companies like Kairos and NTech are already creating sophisticated facial recognition AI that can not only track and identify faces, but spot them in crowds, read emotions, predict a person’s age, and more.
Kairos is one of the leaders in facial recognition software today. Its services use neural networks to analyze and intelligently recognize people's faces, giving businesses a new layer of security. In banking and finance, for example, an individual's "password" would be their face, which is much harder to crack than a typed password. The company places a strong emphasis on ethics, ensuring that the data its software relies on is never used maliciously.
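To make the face-as-password idea concrete, here is a minimal sketch of how face verification typically works under the hood. The neural network reduces a face image to a numeric "embedding" vector; verification then just checks whether a new embedding is close enough to the enrolled one. The tiny 4-dimensional vectors and the 0.8 threshold below are illustrative assumptions, not Kairos's actual system (real embeddings have 128+ dimensions, and thresholds are tuned per deployment).

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(probe, enrolled, threshold=0.8):
    # A face "passes" if its embedding is close enough to the enrolled one.
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 4-dimensional "embeddings"; a real network would produce these
# from face images.
enrolled    = np.array([0.9, 0.1, 0.3, 0.7])
same_person = np.array([0.85, 0.15, 0.28, 0.72])
stranger    = np.array([0.1, 0.9, 0.8, 0.05])

print(verify_face(same_person, enrolled))  # True
print(verify_face(stranger, enrolled))     # False
```

The security property comes from the embedding network, which is trained so that images of the same person land close together while different people land far apart.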
Like Kairos, NTech is a company that uses deep learning and artificial intelligence to provide businesses with various facial recognition services, including facial verification, face identification, face detection, age/gender detection, and emotional interpretation. The company took top honors in IARPA's Face Recognition Prize Challenge (FRPC).
2. Language Processing: Making machines more effective at understanding us
Optical character recognition (OCR) technology uses pattern recognition and feature detection to recognize characters in non-computerized text (such as handwriting or scans). However, it cannot make meaning of or understand the text.
Natural language processing (NLP), on the other hand, works to understand what the text means and sometimes even generate a response. NLP is what Amazon’s Alexa and Google Home use to understand and respond to what users say. Search engines like Google and Bing also use a form of NLP to disambiguate users’ search terms and offer suggested queries.
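The gap between recognizing text and understanding it can be illustrated with a toy intent matcher. Real assistants like Alexa use trained language models, but the core idea is the same: map an utterance to the closest known intent. The intent names and keyword sets below are made-up examples for illustration.

```python
# Toy keyword-overlap intent classifier; hypothetical intents and keywords.
INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast", "temperature"},
    "music":   {"play", "song", "music", "album"},
    "timer":   {"timer", "alarm", "remind", "minutes"},
}

def classify(utterance):
    words = set(utterance.lower().split())
    # Score each intent by keyword overlap and pick the best match.
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("Will it rain tomorrow"))   # weather
print(classify("Play my favorite song"))   # music
print(classify("Tell me a joke"))          # unknown
```

OCR would stop at producing the string "Will it rain tomorrow"; NLP is the extra step that turns that string into an actionable intent, and production systems replace the keyword overlap here with statistical models that handle synonyms, word order, and ambiguity.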
3. Robotics: Increasing machine intelligence and autonomy
As technology has progressed, several industries now rely on robotics to do tasks that are too difficult, precise, or dangerous for a human to do. However, the vast majority of these robots are still fairly simple in terms of their intelligence. Many are designed to accomplish a set task over and over again, with virtually no autonomy. As AI progresses, however, this is sure to change.
Robotics is roughly 90% perception and 10% control, that is, actually making the robot move. Giving robots the ability to intelligently interpret their environment is extremely difficult, and it was largely impossible in the past due to technological limitations.
Modern sensors are capable of providing robots with a vast amount of data, allowing them to recognize objects and people in their direct environment. One example of these sensors is LIDAR, which is a remote sensing method that uses light to measure the distance to other objects. LIDAR helps mobile robots make sense of their environments by providing information about the position of objects around them.
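The math behind LIDAR is simple: the sensor emits a light pulse, times how long the reflection takes to return, and converts that round-trip time into a distance (halved, because the light travels out and back). A minimal sketch, with an illustrative round-trip time:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_time_s):
    # The pulse travels to the object and back, so halve the total path.
    return C * round_trip_time_s / 2

# A pulse returning after ~66.7 nanoseconds hit something about 10 m away.
print(round(lidar_distance(66.7e-9), 2))
```

A spinning LIDAR unit repeats this measurement thousands of times per second in different directions, producing the 3D "point cloud" a mobile robot uses to map the objects around it.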
If it hasn’t already, artificial intelligence is likely to permanently alter the social landscape. What modern technology allows us to do in a snap, AI will allow us to do in a breath. AI is about taking the power of computers, smartphones, and IoT devices, and pushing their capabilities far beyond their current limitations.