Key Concepts Of AI: Artificial Intelligence 101

Artificial intelligence (AI) is one of the most exciting and fast-moving fields in computer science. It now shows up in everyday tools, from search and chatbots to robots, design software, and enterprise systems. In this post, we’ll break down key AI concepts in plain language.
The goal is to help you understand how AI works and imagine practical use cases across your business. We’ll start with AI as a whole, then move into machine learning, neural networks, and deep learning, followed by training approaches like supervised, unsupervised, and reinforcement learning, and modern applications such as generative AI and automation.
Along the way, we’ll show how these key concepts of AI show up in Fresh Consulting projects, so you can see them in action along with business use cases for AI across industries.
An overview of the key concepts of AI
What is artificial intelligence, exactly?
Artificial intelligence is technology that enables computers and machines to simulate human abilities such as learning, understanding, problem-solving, and decision-making. In simple terms, AI is any software system that can sense, reason, and act with varying degrees of autonomy and independence.
For example, a simple four-function calculator performs basic arithmetic—addition, subtraction, multiplication, and division—but it does not adapt independently or make decisions about what to calculate. By contrast, the newly released 7-Cal incorporates AI models (GPT-5.2, Gemini 3, Claude, and custom API models), allowing it to go beyond solving an equation to explaining it.
Modern AI shows up in voice assistants like Siri and Alexa, in chatbots like ChatGPT, in AI photo filters on TikTok and Snapchat, and in email integrations that can draft full replies for you. In more advanced cases, AI helps robots perceive their surroundings, plan routes, and navigate challenging environments autonomously.
Modern AI systems often have three common ingredients:
- Data: The historical information the system learns from (images, text, sensor data, logs).
- Models: The algorithms that learn patterns and make predictions or decisions.
- Feedback: Signals about success or failure that help the system improve over time.
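A toy sketch can make these three ingredients concrete. The function name and data below are purely illustrative, not from any Fresh project: data flows in one observation at a time, the “model” is a single adjustable number, and feedback (the prediction error) is what improves it.

```python
# Toy illustration of the three ingredients: data, model, feedback.

def learn_from_stream(observations, learning_rate=0.1):
    estimate = 0.0  # the "model": one adjustable number
    for value in observations:            # data arrives one example at a time
        prediction = estimate
        error = value - prediction        # feedback: how wrong were we?
        estimate += learning_rate * error # improve using the feedback
    return estimate

# The estimate drifts toward the typical value in the data it has seen.
data = [10, 12, 11, 13, 12, 11, 12]
model = learn_from_stream(data, learning_rate=0.5)
print(round(model, 2))  # → 11.67
```

Real AI systems are vastly more complex, but this data-in, feedback-driven-improvement loop is the same basic shape.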
At Fresh, these artificial intelligence core concepts show up in projects like:
- Smart digital experiences that personalize content or recommendations in real time.
- Robotics and autonomous systems that use AI to perceive, plan, and act in physical environments.
- AI-enhanced business tools that support strategy, product innovation, and operational decision making.
Our proprietary autonomous work management platform—Harmony—is a concrete example of how components of artificial intelligence come together in practice.
Harmony continuously collects telemetry and behavioral data from robots and the environments they work in, then aggregates that data in the cloud so it can be monitored and analyzed over time. On top of that data, the platform uses AI-based control and other machine learning models to help robots navigate, coordinate, and adapt more autonomously instead of relying on fixed scripts.
As teams deploy robots and run missions, they see how well routes, behaviors, and task assignments perform, then use that feedback to refine configurations, improve the models, and scale to more robots and new environments with less friction.

Machine Learning
Machine learning (ML) is a subset of AI that focuses on systems that learn patterns from data, rather than relying only on fixed, hand‑written rules. AI is the broader field concerned with getting machines to perform tasks that we consider “intelligent,” while ML is one concrete way to build such systems: by training models on examples so they can make predictions or decisions on their own.
A helpful way to think about ML is like learning to solve a Rubik’s Cube. At first, you might follow a sequence of moves someone else wrote down, step by step, for a single scrambled cube. Over time, you start to recognize that certain patterns on the cube call for specific move sequences, and you can use those same sequences to solve many different scrambles. You’ve gone from blindly following instructions to internalizing general strategies.
In ML, the algorithm “learns” general rules from many examples, then applies those learned rules to new data it has never seen before.
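To make that concrete, here is a minimal sketch of one of the simplest ML methods, a 1-nearest-neighbor classifier. The function names and toy data are illustrative, not from any Fresh system: training means storing labeled examples, and prediction means labeling a new input by whichever example it most resembles.

```python
import math

def nearest_neighbor(train, point):
    """train: list of ((x, y), label) pairs; point: (x, y) to classify."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Find the training example closest to the new point and reuse its label.
    _, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Toy "training set": small items are cheap, large items are expensive.
examples = [((1, 1), "cheap"), ((2, 1), "cheap"),
            ((8, 9), "expensive"), ((9, 8), "expensive")]

print(nearest_neighbor(examples, (1.5, 1.2)))  # → cheap
print(nearest_neighbor(examples, (8.5, 8.5)))  # → expensive
```

The model was never given a rule like “anything below 3 is cheap”; it inferred the answer from examples, which is the essence of machine learning.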
In business, machine learning powers use cases such as:
- Predicting demand or inventory needs.
- Detecting fraud or anomalies in transactions.
- Segmenting customers based on behavior.
- Improving search, recommendations, and personalization.
For robotics and smart products, ML can help machines independently discover better ways to move, grasp objects, or coordinate with other systems when given the right environment and data. This is where disciplines like MLOps become important: teams need reliable pipelines to collect data, train models, deploy them safely, and monitor performance at scale.
In our Red Dot award-winning Project Moab, machine learning enables a ball-balancing robot to move from fixed behaviors to flexible strategies it can reuse in many situations. Engineers first use Microsoft’s Project Bonsai to “teach” the system in simulation, where Moab tries different actions and gets feedback based on how well it keeps the ball balanced or catches it.
Over time, the model learns general control policies—like how to react when the ball is rolling too fast in one direction—that apply not just to a single setup but to many different throws, bounces, and obstacle layouts. Those learned policies are then deployed to the physical robot, which can apply the same underlying rules in the real world without being explicitly programmed for every possible ball position or motion.
Machine learning, one of the most foundational key concepts of AI, sets the stage for the others listed next.

Neural Networks
Another key concept of AI, the neural network, is a type of machine learning model made up of layers of interconnected nodes (also called neurons). This structure is loosely inspired by the human brain, where each neuron receives signals, processes them, and passes them on.
You can think of a neural network as a very simple digital brain. It takes inputs (like pixel values from an image or word tokens in a sentence), processes them through multiple layers, and produces an output (like “this is a dog” or “this customer is likely to churn”).
A common example is the face recognition feature on your smartphone. The phone’s photo app can group pictures by person, even though the images were taken in different lighting and angles. Under the hood, a neural network has learned to detect patterns like eyes, noses, and overall facial structure, and then recognize those patterns across new photos it has never seen before.
Neural networks are a foundation for many core AI concepts today, including:
- Computer vision (understanding images and video).
- Natural language processing (understanding and generating human language).
- Speech recognition and synthesis.
- Predictive models and recommendation systems.
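As a rough illustration (not production code), the forward pass of a tiny one-hidden-layer network can be sketched in a few lines. Each neuron sums its weighted inputs and applies an activation function; the weights below are hand-picked for the example, whereas a real network learns them from data.

```python
import math

def forward(inputs, hidden_weights, output_weights):
    """Tiny neural network: one hidden layer, one output neuron."""
    def sigmoid(z):
        # Squashes any number into the range (0, 1).
        return 1 / (1 + math.exp(-z))
    # Each hidden neuron: weighted sum of inputs, then activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output neuron: weighted sum of hidden activations, then activation.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

hidden_w = [[2.0, -1.0], [-1.5, 2.5]]  # two hidden neurons, two inputs each
output_w = [1.0, 1.0]

score = forward([0.8, 0.2], hidden_w, output_w)
print(round(score, 3))  # a value between 0 and 1, e.g. a churn likelihood
```

Scaling this structure up, with millions of neurons and learned weights, is what lets networks recognize faces, parse language, and more.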
Fresh teams work with neural networks when designing systems that must “see” and “understand” the world, like robots that recognize objects, quality-control cameras that detect defects, or AI interfaces that listen and respond to users.
In one internal proof of concept, our team built a mobile experience in 48 hours during a Design Thinking Workshop to help employees correctly recycle or dispose of their waste. The app used Apple’s Vision and Core ML frameworks on an iOS device to run object recognition in real time, processing camera input directly on the phone instead of sending video to the cloud.
In that prototype, a Core ML object detection model—powered by a single neural network—analyzed the live camera feed and identified objects like cups or containers as the user held them up. The Vision framework handled the camera frames and passed them through the neural network, then converted the model’s outputs into recognized objects the app could act on.
Combining on-device neural networks and real-time object detection made it possible to give immediate guidance about how to sort each item, while keeping the experience fast and privacy‑friendly.
Deep Learning
The next key concept of AI, deep learning, is a subset of machine learning that uses neural networks with many layers. The word “deep” refers to the depth (number) of layers, not to how “profound” the system is.
To put this in plain terms, picture five vertical lines, like so: I I I I I. The first line is the input layer — that’s where the deep learning model receives data (an image, a sound, or a sentence). The next lines are hidden layers. Each hidden layer transforms the data in some way to learn new features. The final line is the output layer, which produces a result, such as a label (“cat”) or a score (“80% likely this transaction is fraud”).
In practice, real deep learning models often have dozens or even hundreds of layers. The hidden layers gradually build up more abstract features. For example, in image recognition a deep network might:
- First learn edges and simple shapes.
- Then learn parts like eyes or wheels.
- Finally learn full objects like faces, cars, or tools.
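The layer-stacking idea can be sketched directly: “deep” simply means applying many layers in sequence, each transforming the previous layer’s outputs into new features. The weight values below are illustrative placeholders, not a trained model.

```python
def relu(z):
    # A common activation function: pass positives through, zero out negatives.
    return max(0.0, z)

def layer(inputs, weights):
    """One hidden layer: transforms its inputs into new features."""
    return [relu(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

def deep_forward(inputs, layers):
    """'Deep' just means many layers applied one after another."""
    activations = inputs
    for weights in layers:
        activations = layer(activations, weights)
    return activations

# Three hidden layers of two neurons each (a real model learns these weights).
layers = [
    [[0.5, -0.2], [0.1, 0.8]],
    [[1.0, 0.3], [-0.4, 0.9]],
    [[0.7, 0.7], [0.2, -0.5]],
]
print(deep_forward([1.0, 2.0], layers))
```

Real deep networks follow this same pattern with far wider layers, learned weights, and specialized layer types for images, text, or audio.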
Deep learning software drives many of the “wow” moments people associate with AI today, including:
- Self-driving and driver-assist systems that process video in real time.
- High-quality machine translation and speech recognition.
- Advanced medical imaging analysis.
- Generative AI models that can write, draw, or create audio and video content.
Because deep learning models can be large and complex, organizations need strong engineering practices, data pipelines, and cloud or edge infrastructure to deploy them reliably.
At Fresh, this comes to life in projects like the Delta Bot, which started as an experimental prototype to showcase Microsoft’s Project Bonsai in a playful, AI-powered air hockey experience. Our teams combined deep learning–driven perception and control with custom hardware, real-time vision, and immersive visual and audio design to turn a trade show demo into a robust, repeatable system.

Over time, that prototype evolved into a Red Dot award–winning, market-ready robotic air hockey table, integrating multiple subsystems—gantry motion, magnetic controls, displays, machine vision, and AI—into a cohesive product that feels seamless to players while quietly orchestrating sophisticated models behind the scenes.

Supervised and Unsupervised Learning
Many key concepts of AI concern how a model is trained. The two most common training approaches are supervised learning and unsupervised learning.
Supervised learning
Supervised learning uses labeled examples. Each training example includes both the input and the correct answer. The model’s goal is to learn the mapping from input to output so it can make accurate predictions on new data.
For example, you might give a model thousands of images labeled “cat” or “dog.” Over time, it learns which visual patterns belong to cats and which belong to dogs. Then, when you give it a new unlabeled image, it can predict whether it contains a cat or a dog. Supervised learning is especially important for tasks like:
- Image classification and defect detection.
- Predicting customer churn or credit risk.
- Forecasting demand or pricing.
- Classifying support tickets or routing messages.
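As a minimal illustration of the supervised setup (toy data and names, not a Fresh model), a “nearest centroid” classifier learns from labeled (input, correct answer) pairs by averaging each class’s examples, then labels new inputs by the closest class average.

```python
import math
from collections import defaultdict

def train(labeled_points):
    """Learn one average (centroid) per label from labeled examples."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x, y), label in labeled_points:
        s = sums[label]
        s[0] += x; s[1] += y; s[2] += 1
    return {label: (sx / n, sy / n) for label, (sx, sy, n) in sums.items()}

def predict(centroids, point):
    """Assign a new point to the label with the nearest centroid."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], point))

# Labeled examples: (input, correct answer) pairs.
data = [((1, 2), "cat"), ((2, 1), "cat"), ((8, 8), "dog"), ((9, 7), "dog")]
model = train(data)
print(predict(model, (1.5, 1.5)))  # → cat
print(predict(model, (8.0, 7.5)))  # → dog
```

The defining feature is that every training example came with its answer attached; the model’s only job was to learn the input-to-output mapping.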
In sophisticated systems, such as an autonomous fleet of robots, supervised learning can be used to recognize objects, lane markings, or navigation cues in labeled training data, which then guides safe behavior in the real world.
Supervised learning also shows up in Fresh’s Harmony platform for robot perception and navigation in challenging environments. For glass door detection, the team iteratively trained a YOLO deep learning model on labeled images of wooden and glass doors, along with their open, closed, and semi-open states, so the robot could reliably identify both the door and its status in real time.
By augmenting the training data—such as varying what appears behind the glass or focusing on features like handles and joints—the model learned to distinguish subtle visual cues that signal a transparent door and how open it is, while still generalizing beyond a single office.
Running this supervised model on an edge computer lets Harmony classify doors quickly enough to inform navigation decisions, helping robots decide when to move, stop, or ask for human assistance as they pass through real-world environments.

Unsupervised learning
Unsupervised learning works without labeled answers. Instead of being told “this is a car” or “this is a bicycle,” the model sees unlabeled data and tries to discover structure on its own, such as clusters or patterns.
For example, imagine you give an AI a hundred images of people, vehicles, and other subjects, but you never tell it which is which. The model groups similar images together. When you later inspect those groups, you may see that one cluster contains cars and another contains people. This approach is useful for:
- Customer segmentation based on behavior.
- Detecting unusual events or anomalies in logs.
- Exploring new datasets where labels are expensive or not yet available.
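The clustering idea can be sketched with k-means, a classic unsupervised algorithm. This toy one-dimensional version (illustrative only, with deliberately naive initialization) groups unlabeled values without ever being told what the groups mean.

```python
def kmeans_1d(values, k=2, iterations=10):
    """Toy 1-D k-means: no labels, just values grouped by proximity."""
    centers = sorted(values)[:k]  # naive init: the first k sorted values
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        # Assignment step: each value joins its nearest center's cluster.
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Mixed, unlabeled "customer spend" values; two natural groups emerge.
values = [1, 2, 2, 3, 20, 21, 22, 23]
centers, clusters = kmeans_1d(values, k=2)
print(sorted(round(c, 1) for c in centers))  # → [2.0, 21.5]
```

Nobody told the algorithm there were “low spenders” and “high spenders”; it discovered that structure on its own, which is exactly the unsupervised premise.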
In practice, many real-world systems blend supervised and unsupervised methods, or use semi-supervised learning where only part of the data is labeled. Steve Yin, Principal Engineer at Fresh, has done extensive work in this area of artificial intelligence, leveraging various key concepts of AI to build sophisticated computer vision solutions.

Reinforcement Learning
Reinforcement learning (RL) is a type of machine learning where an AI agent learns by trial and error. The agent takes actions in an environment, receives rewards or penalties, and adjusts its behavior to maximize long-term reward.
A classic example is a lab mouse trying to find cheese at the end of a maze. On the first attempt, the mouse may hit many dead ends. Over time, as it receives positive feedback for successful paths and negative feedback for bad ones, it learns the fastest route. Reinforcement learning applies this same idea to AI: the system is given a goal, explores different actions, and improves based on feedback.
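The maze analogy maps neatly onto Q-learning, a standard RL algorithm. This toy sketch (a 5-position corridor with “cheese” at the far end; all names and parameters are illustrative) shows the trial-and-error loop: act, receive reward, update.

```python
import random

def train_agent(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Q-learning on a 1-D corridor: positions 0..4, reward at position 4."""
    random.seed(0)
    goal = 4
    # Q-table: how good is each action (-1 = left, +1 = right) in each state?
    q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}
    for _ in range(episodes):
        state = 0
        while state != goal:
            if random.random() < epsilon:   # explore: try a random move
                action = random.choice((-1, +1))
            else:                           # exploit: best known move
                action = max((-1, +1), key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), goal)
            reward = 1.0 if nxt == goal else 0.0  # cheese only at the end
            best_next = 0.0 if nxt == goal else max(q[(nxt, a)] for a in (-1, +1))
            # Update: nudge the estimate toward reward + discounted future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train_agent()
print(round(q[(3, +1)], 2))  # → 1.0 (moving right next to the cheese is best)
```

Early episodes are mostly blundering; as reward signals propagate backward through the Q-table, the agent learns that moving right is valuable everywhere, just as the mouse learns the fastest route.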
Reinforcement learning has been used to:
- Train systems to play complex games like Go and real-time strategy games at superhuman levels.
- Optimize industrial control systems, such as energy usage in data centers.
- Improve robotics, where an agent learns to walk, grasp, or coordinate movements through repeated practice.
For businesses, the most practical use of RL often appears in optimization: fine-tuning pricing, recommendations, or logistics decisions based on real-time feedback and constraints. Fresh can help organizations explore RL when there is a clear environment, defined rewards, and the ability to safely experiment.
A concrete example is Fresh’s work with Microsoft Project Bonsai, a “machine teaching” platform for building autonomous industrial control systems. In that collaboration, Bonsai learns how to operate equipment—such as CNC machines—by interacting with a simulated environment, receiving rewards for efficient, safe behavior and refining its control policies over time.
Fresh helped design and build Moab, a ball-balancing robot that uses trained Bonsai “brains” to keep a ball steady or even catch moving balls, illustrating how an RL-style training loop in simulation can produce robust, autonomous behavior in the real world.

Generative AI
Since 2022, generative AI has moved from a niche research area to a mainstream capability in products and workplaces, one of the key concepts of AI that’s most familiar to non-specialists.
Generative models learn from large collections of data and then create new content, such as text, images, audio, or code, that resembles the patterns they saw during training.
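As a deliberately tiny illustration of that learn-then-generate loop (nothing like a real LLM in scale or quality), a bigram Markov model learns which word tends to follow which in its training text, then samples new text from those learned patterns. The corpus and function names here are made up for the example.

```python
import random
from collections import defaultdict

def learn_bigrams(text):
    """Learn, for each word, the words observed to follow it."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8):
    """Generate new text by repeatedly sampling a likely next word."""
    random.seed(1)  # fixed seed so the toy output is repeatable
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the robot sees the ball and the robot moves "
          "the ball rolls and the robot follows the ball")
print(generate(learn_bigrams(corpus), "the"))
```

The output resembles the training text without copying it verbatim. Modern generative models apply the same principle with neural networks, billions of parameters, and vastly larger training sets.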
Common examples include:
- Large language models (LLMs) that can draft emails, summarize documents, or act as chat-based assistants.
- Image generation models that create original illustrations from simple prompts.
- Code assistants that help engineers write and debug software faster.
Generative AI is now being embedded into many business tools and workflows, enabling:
- Faster content creation for marketing, documentation, and support.
- Richer conversational interfaces for apps, products, and services.
- Rapid prototyping of UX copy, interface states, and design variations.
- Smart agents that can chain multiple actions together, such as retrieving information, making calls to other systems, and drafting outputs.
For a consultancy like Fresh, generative AI is one part of a larger AI toolbox. It can sit alongside predictive models, rule-based systems, and robotics to create full experiences: for example, an AI copilot embedded in a web app, or a multimodal interface for a robot that understands both language and vision. Because generative AI can also produce incorrect or biased outputs, strong design, governance, and human-centered testing are essential. Fresh’s multidisciplinary teams combine UX, engineering, and AI expertise to design AI experiences that are safe, transparent, and useful for real users.
In education, Fresh has explored generative AI through DecodaBuild, a concept for helping close the literacy gap by creating engaging, decodable stories at scale. The vision is to generate controlled texts that follow explicit phonics patterns—like CVC and CVCC words or r‑controlled vowels—while still telling compelling stories with clear beginnings, middles, and ends, plus accompanying illustrations. Generative models such as GPT‑4 and modern image models are orchestrated through tools like Brancher to produce high‑decodability texts, maintain language quality, and tailor content to individual learners so teachers can better support struggling readers with personally meaningful material.

Fresh has also applied generative AI to robotics through Jinny, a hackathon-built mobile robot that uses an LLM-powered, speech-enabled web app as its “mind.” In this setup, a large language model turns natural-language voice commands into structured API calls that drive the robot, while generative audio tools synthesize speech back to the user, creating a smooth conversational loop instead of a complex control interface.
Jinny’s semantic interaction pipeline shows how generative AI can hide robotics complexity from everyday users, making it easier to command robots, prototype new behaviors, and move toward more intuitive, human-centered human–robot interaction.

Conceptual AI
Conceptual AI is a branch of artificial intelligence that helps machines understand and work with ideas, not just raw data. It combines human-style reasoning—through structured models and relationships—with AI’s ability to process and learn from large amounts of information.
Think of it as teaching a computer to grasp concepts the way people do. Instead of only spotting patterns, Conceptual AI tries to make sense of what those patterns mean and how they connect in the bigger picture.
Key points about Conceptual AI:
- Knowledge Representation: Uses conceptual graphs and models so an AI system can organize what it knows the way an expert would.
- Explainable AI (XAI): Makes AI decisions clearer and easier for people to understand by documenting and communicating how conclusions are reached.
- Human–AI Collaboration: Works alongside people—especially in creative fields like design—by blending human imagination with machine-level precision.
- Beyond Data-Driven Learning: Goes beyond simply identifying trends in data to discover the core ideas that define a domain.
Key concepts of AI and their powerful applications across industries
To put these key concepts of AI in context, it helps to look at major application areas that many organizations explore today.
- Natural Language Processing (NLP): Systems that understand and generate human language, powering chatbots, robotic process automation (RPA), search, summarization, and translation.
- Computer Vision and Machine Vision: Systems that interpret images and video, used in quality inspection, safety monitoring, autonomous vehicles, and augmented reality.
- Robotics and Autonomous Systems: Machines that sense, plan, and act in the physical world using AI for perception and decision making.
- AI in Business Operations: For large and small businesses, AI can take the shape of tools that optimize workflows, automate repetitive tasks, and support strategic decisions in areas like IT operations, customer support, marketing, and cybersecurity.
These are the domains where Fresh often brings these key concepts of AI to life — integrating AI with product strategy, UX research, interface design, industrial design, cloud and edge engineering, and organizational change.

How Fresh Consulting Helps with AI
AI is booming, and the field of artificial intelligence includes a range of subsets. But the true value of understanding the key concepts of AI comes from matching each one to the right business problems, products, and experiences. More and more organizations are moving from experimenting with AI to scaling solutions that create measurable impact.
Fresh Consulting operates at this intersection of strategy, design, engineering, and AI. That means helping clients:
- Identify where AI (including generative AI, machine learning, and robotics) can meaningfully improve customer experiences, products, and operations.
- Design human-centered AI experiences with clear interfaces, strong UX, and thoughtful change management.
- Build and integrate AI systems, from data pipelines and models to full digital products and connected devices.
- Govern and evolve AI solutions over time, focusing on safety, ethics, and continuous improvement.
If you have a project where you want to apply these key concepts of AI, we’d be glad to connect, review your ideas or challenges, and explore what’s possible.


