
Human-Machine Interaction: How AI Is Revolutionizing Tech


Nowadays, with how far computers have come, we can interact with and control them fairly intuitively. Despite their complexity, it’s considerably easier to accomplish basic tasks on a laptop than it is to drive a car.

The reason for this is that we’ve created several peripheral tools to make interacting with computers easier. Keyboards, mice, headphones, GUIs, touchscreens: all of these make communicating with your computer much easier than it was in the days of command-line interfaces.

AI, however, could take the way we interact with computers to an entirely different level. In this post, we’ll discuss the impact of AI on human-machine interaction, both in present-day terms and future predictions.

The Impact Of AI On Computing

Voice-Based Interactions

Voice-based interactions have been growing in popularity over the last few years. This is due to virtual assistants like Siri and Google Assistant becoming more sophisticated, products like Amazon Echo taking off, and the gradual cultural acceptance of talking to your phone. With AI, this will only become more and more powerful.

Take the voice assistant you have on your phone, for example, and imagine what it would be like if it really worked. Rather than doing a couple of simple tasks like sending messages or getting directions, imagine that you could ask it any question or give it any computing task, and it would get it done every time, all with the same ease that you communicate with your friends.

The only thing holding this tech back is the limitations of current AI. But it might not be so long before our tactile interfaces are superseded by voice-based interfaces.
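To make the present-day picture concrete, here’s a minimal sketch of a voice-command pipeline using the open-source Python SpeechRecognition package. This is purely illustrative: the library choice and the toy “assistant” logic are assumptions, not how Siri or Google Assistant actually work under the hood.

```python
# Rough sketch of a voice pipeline: capture audio, transcribe it, then
# dispatch on the recognized text. Assumes the SpeechRecognition package
# (pip install SpeechRecognition) plus PyAudio and a working microphone.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say a command...")
    audio = recognizer.listen(source)

try:
    # Send the audio to Google's free web speech API for transcription
    command = recognizer.recognize_google(audio).lower()
    print(f"Heard: {command}")

    # Toy dispatch logic; real assistants route this through NLP instead
    if "directions" in command:
        print("Opening maps...")
    elif "message" in command:
        print("Starting a new message...")
    else:
        print("Sorry, I can only handle a couple of simple tasks so far.")
except sr.UnknownValueError:
    print("Could not understand the audio.")
```

Even this toy version shows where the current limits are: transcription is the easy part, while understanding an arbitrary request is what still requires better AI.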

Advanced Biometrics

Verbal communication is just one of the ways that AI will change the way we interact with computers. We’re already starting to see things like facial recognition (Apple’s FaceID), voice recognition (“Ok Google”), and gesture recognition (Samsung’s Air Gesture) become just as common as voice-based interactions.
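As a small illustration of how accessible facial recognition already is, here’s a sketch using the open-source face_recognition library. This is not Apple’s FaceID; the library and the image file names are assumptions chosen for the example.

```python
# Minimal face-matching sketch with the face_recognition library
# (pip install face_recognition). Compares an enrolled face to a new photo.
import face_recognition

# Encode the enrolled user's face from a reference photo
known_image = face_recognition.load_image_file("owner.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode whatever face the camera just captured
frame = face_recognition.load_image_file("camera_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

if frame_encodings:
    match = face_recognition.compare_faces([known_encoding], frame_encodings[0])[0]
    print("Unlocked" if match else "Face not recognized")
else:
    print("No face found in the frame")
```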

When you combine these three things with artificial intelligence, you can start to imagine a computer that no longer needs a mouse, keyboard, or even a screen. Instead of being a device you physically interface with, a computer could simply be an entity in your home or on your person that you communicate with like you would with another person.

When this is made fully possible, we’ll not only be able to operate our computers with our voice, but also develop a relationship with them. Probably not a relationship in the way we normally think about it, but similar to the way you have a relationship with something like Netflix. It knows your name, your favorite movies, and your preferences, and customizes your experience based on those things. With AI and biometrics, you could have an experience that’s personalized based on your mood, age, gender, and more.

Conversational Computing

Language processing technology is advancing as well. Optical character recognition (OCR) technology uses pattern recognition and feature detection to recognize characters in non-computerized text (such as handwriting or document scans). However, just because it can read that text doesn’t mean it understands it.
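As a quick sketch of that “reading without understanding” step, here’s what basic OCR looks like with the open-source pytesseract wrapper. The library and file name are illustrative assumptions, not a product mentioned above.

```python
# Basic OCR sketch: extract raw text from a scanned page using Tesseract
# via pytesseract (pip install pytesseract pillow; the Tesseract binary
# must be installed separately).
from PIL import Image
import pytesseract

scanned = Image.open("scanned_page.png")  # hypothetical document scan
raw_text = pytesseract.image_to_string(scanned)

# The OCR engine returns characters and words but has no notion of meaning;
# interpreting the text is where NLP takes over.
print(raw_text)
```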

Natural language processing (NLP), on the other hand, aims to help AI understand what that text means and sometimes even generate a response. NLP is what Amazon’s Alexa and Google Home use to understand and respond to what users say. Search engines like Google and Bing also use a form of NLP to disambiguate users’ search terms and offer suggested queries.

NLP is also used to crawl thousands of online reviews and webpages to summarize them into a useful message like “75% of reviewers gave this restaurant 5 stars, but found the overall ambiance to be lacking.” In every case, NLP uses AI to understand, manipulate, and interpret human language. While the field is certainly not new, the technology is advancing at an increasing rate due to better and more available data and rising interest.
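A rough approximation of that review-summarizing idea is shown below, using an off-the-shelf sentiment model from the Hugging Face transformers library. The reviews and the percentage logic are invented for illustration; production systems are far more involved.

```python
# Sketch: aggregate crawled reviews into a single summary statistic using a
# pretrained sentiment model (pip install transformers torch).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model

reviews = [
    "Amazing food, the best pasta I've had in years.",
    "Service was quick and friendly.",
    "Great dishes, but the dining room felt cramped and loud.",
    "Five stars for the menu, though the ambiance could use work.",
]

results = classifier(reviews)
positive = sum(r["label"] == "POSITIVE" for r in results)

print(f"{positive / len(reviews):.0%} of reviews are positive overall.")
```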

The potential here is to loosen up the way we communicate with AI. Most people would agree that they slip into a kind of “Siri Speak” when talking to a virtual assistant. You wouldn’t ask Siri something like, “How did this gum get on my shoe?” because she wouldn’t have a pre-programmed response. But with NLP advancements, it will become more and more likely that Siri and other AIs will be able to understand what you say and what they read.

Computing On Our Behalf

Another valuable way that AI could alter the way we use computers is by supplementing the work we normally do on them. A great example of this is OpenAI’s text generator, an early landmark in generative AI. In short, OpenAI created an AI-based text generator so convincing that the company initially withheld the full model from the public.

In other words, it became so easy to generate articles and papers with the software that OpenAI worried (and probably rightly so) that it would be used to spread misinformation. While a scary prospect, it’s an excellent example of where the future of computing is likely heading, and it’s a direction that oddly lines up with the automotive industry.
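The model in question (GPT-2) was eventually released in full, and a small version of it can now be run in a few lines through the Hugging Face transformers library. Here’s a minimal sketch; the prompt is invented, and the output will vary from run to run.

```python
# Sketch: generate a short continuation of a prompt with the publicly
# released GPT-2 model (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The way we interact with computers is changing because"
output = generator(prompt, max_length=60, num_return_sequences=1)

print(output[0]["generated_text"])
```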

With self-driving vehicles, it’s no longer necessary to know how to drive a car. You simply tell it where you want to go, and then you wait until you’re there. The same could happen to the rest of our computing experience. Need a slideshow put together? Just ask Google to do it for you. Want to create a logo for your brand? Talk to Siri. Accounting, editing, video production — you name it and your computer could do it.

Of course, this level of AI is still at least a few decades away, and it’s unlikely to completely replace manual computing, but it’s an interesting idea that you could essentially offload your computing work onto the computer itself. How much more would you get done in a day if your computer could do your menial work for you?


Steve Hulet

CTO

Steve is the Co-Founder and CTO at Fresh. A former Software Engineer at Amazon with over 12 years of web development experience, Steve provides technical, architectural, and engineering oversight to projects. Steve is responsible for all technology reviews related to websites. His specialties include programming languages such as C, C++, Java, Python, and PHP, and technology software including Eclipse, GLPK, jQuery, Linux, and MATLAB. Steve’s skills include automation, databases, linear programming, optimization, and testing, all of which he uses in conjunction with Fresh’s digital strategists to provide innovative solutions to clients.