Red Hat Blog
AI/ML—short for artificial intelligence (AI) and machine learning (ML)—represents an important evolution in computer science and data processing that is quickly transforming a vast array of industries.
As businesses and other organizations undergo digital transformation, they’re faced with a growing tsunami of data that is at once incredibly valuable and increasingly burdensome to collect, process and analyze. New tools and methodologies are needed to manage the vast quantity of data being collected, to mine it for insights and to act on those insights when they’re discovered.
This is where artificial intelligence and machine learning come in.
What is artificial intelligence?
Artificial intelligence (AI) generally refers to processes and algorithms that are able to simulate human intelligence, including mimicking cognitive functions such as perception, learning and problem solving. Machine learning and deep learning (DL) are subsets of AI.
Specific practical applications of AI include modern web search engines, personal assistant programs that understand spoken language, self-driving vehicles and recommendation engines, such as those used by Spotify and Netflix.
There are four levels or types of AI—two of which we have achieved, and two of which remain theoretical at this stage.
4 types of AI
In order from simplest to most advanced, the four types of AI include reactive machines, limited memory, theory of mind and self-awareness.
Reactive machines are able to perform basic operations based on some form of input. At this level of AI, no “learning” happens—the system is trained to do a particular task or set of tasks and never deviates from that. These are purely reactive machines that do not store inputs, have any ability to function outside of a particular context, or have the ability to evolve over time.
Examples of reactive machines include most recommendation engines, IBM’s Deep Blue chess AI, and Google’s AlphaGo AI (arguably the best Go player in the world).
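To make the idea concrete, here's a toy sketch in Python (the observations and actions are invented purely for illustration, not drawn from any real system): a purely reactive machine is nothing more than a fixed mapping from input to action, with no stored state, so identical inputs always produce identical outputs and the behavior never evolves.

```python
# A purely reactive "machine": a fixed input-to-action mapping.
# It stores no history and never changes its behavior over time.
RULES = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "goal_visible": "approach_goal",
}

def react(observation: str) -> str:
    """Return the pre-programmed action for an observation; no learning occurs."""
    return RULES.get(observation, "stop")
```

However many times you call `react("obstacle_ahead")`, the answer is the same—which is exactly what makes this level of AI reliable within its task and useless outside it.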
Limited memory AI systems are able to store incoming data and data about any actions or decisions they make, and then analyze that stored data in order to improve over time. This is where “machine learning” really begins, as limited memory is required in order for learning to happen.
Since limited memory AIs are able to improve over time, these are the most advanced AIs we have developed to date. Examples include self-driving vehicles, virtual voice assistants and chatbots.
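The contrast with a reactive machine can be sketched in a few lines of Python (a contrived example, not a real AI system): an agent that retains its past observations can use them to refine its estimates, which a stateless system cannot do.

```python
class LimitedMemoryPredictor:
    """A toy agent that stores observations and uses them to refine a prediction."""

    def __init__(self) -> None:
        self.history: list[float] = []   # the stored data is what enables learning

    def observe(self, value: float) -> None:
        self.history.append(value)

    def predict(self) -> float:
        # With no memory this would have to be a fixed guess; with memory,
        # the estimate (here, a simple running mean) improves as data accumulates.
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)
```

Real limited memory systems are of course vastly more sophisticated, but the principle is the same: stored inputs feed back into future decisions.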
Theory of mind is the first of the two more advanced and (currently) theoretical types of AI that we haven’t yet achieved. At this level, AIs would begin to understand human thoughts and emotions, and start to interact with us in a meaningful way. Here, the relationship between human and AI becomes reciprocal, rather than the simple one-way relationship humans have with various less advanced AIs now.
The “theory of mind” terminology comes from psychology, and in this case refers to an AI understanding that humans have thoughts and emotions which then, in turn, affect the AI’s behavior.
Self-awareness is considered the ultimate goal for many AI developers, wherein AIs have human-level consciousness, aware of themselves as beings in the world with similar desires and emotions as humans. As yet, self-aware AIs are purely the stuff of science fiction.
What is machine learning?
Machine learning (ML) is a subset of AI that falls within the “limited memory” category in which the AI (machine) is able to learn and develop over time.
There are a variety of different machine learning algorithms, with the three primary types being supervised learning, unsupervised learning and reinforcement learning.
3 types of machine learning algorithms
As with the different types of AI, these different types of machine learning cover a range of complexity. And while there are several other types of machine learning algorithms, most are a combination of—or based on—these primary three.
Supervised learning is the simplest of these, and, as the name suggests, is when an AI is actively supervised throughout the learning process. Researchers or data scientists provide the machine with a quantity of data to process and learn from, as well as example results of what that data should produce (more formally referred to as inputs and desired outputs).
The result of supervised learning is an agent that can predict results based on new input data. The machine may continue to refine its learning by storing and continually re-analyzing these predictions, improving its accuracy over time.
Supervised machine learning applications include image recognition, media recommendation systems, predictive analytics and spam detection.
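Here is supervised learning in miniature, sketched in plain Python: labeled (input, desired output) pairs are used to fit a model, which can then predict outputs for inputs it has never seen. The "model" here is just a one-variable linear fit computed in closed form—a deliberately tiny stand-in for the training step, not how production ML systems work.

```python
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Least-squares fit of y = slope * x + intercept to labeled examples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1.0, 2.0, 3.0, 4.0]   # inputs
ys = [2.0, 4.0, 6.0, 8.0]   # desired outputs (the "supervision")
slope, intercept = fit_line(xs, ys)

def predict(x: float) -> float:
    # The trained model generalizes to inputs it never saw during fitting.
    return slope * x + intercept
```

The key supervised-learning ingredients are all present: example inputs, the desired outputs for those inputs, and a resulting model that predicts outputs for new data.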
Unsupervised learning involves no help from humans during the learning process. The agent is given a quantity of data to analyze, and independently identifies patterns in that data. This type of analysis can be extremely helpful, because machines can recognize more and different patterns in any given set of data than humans can. Like supervised machine learning, unsupervised ML can learn and improve over time.
Unsupervised machine learning applications include things like determining customer segments in marketing data, medical imaging, and anomaly detection.
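A minimal sketch of the unsupervised case, again in plain Python: the algorithm is handed raw numbers with no labels at all, and groups them into two clusters on its own. This is a stripped-down one-dimensional version of the k-means idea, written for clarity rather than realism (it assumes both clusters stay non-empty).

```python
def kmeans_1d(points: list[float], iterations: int = 10) -> list[float]:
    """Group 1-D points into two clusters; no labels are ever provided."""
    c1, c2 = min(points), max(points)    # crude initial cluster centers
    for _ in range(iterations):
        # Assign each point to its nearest center...
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # ...then move each center to the mean of its group.
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

# Two obvious groups emerge without any human-provided labels.
centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.8])
```

This is the essence of tasks like customer segmentation: the structure is discovered in the data, not taught by a human.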
Reinforcement learning is the most complex of these three algorithms in that there is no data set provided to train the machine. Instead, the agent learns by interacting with the environment in which it is placed. It receives positive or negative rewards based on the actions it takes, and improves over time by refining its responses to maximize positive rewards.
Some applications of reinforcement learning include self-improving industrial robots, automated stock trading, advanced recommendation engines and bid optimization for maximizing ad spend.
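The reward-driven loop can be sketched with a toy "two-lever" problem in Python (the lever names and reward values are invented for illustration): no training data set exists; the agent simply tries actions, receives rewards from the environment, and nudges its value estimates toward whatever pays off.

```python
import random

REWARDS = {"lever_a": 1.0, "lever_b": 5.0}   # the environment; hidden from the agent

def train(episodes: int = 500, epsilon: float = 0.1,
          alpha: float = 0.5, seed: int = 0) -> dict[str, float]:
    """Learn action values purely from environmental rewards."""
    rng = random.Random(seed)
    values = {action: 0.0 for action in REWARDS}   # the agent's estimates
    for _ in range(episodes):
        if rng.random() < epsilon:                 # occasionally explore...
            action = rng.choice(list(values))
        else:                                      # ...otherwise exploit the best estimate
            action = max(values, key=values.get)
        reward = REWARDS[action]                   # feedback from the environment
        # Move the estimate a step toward the observed reward.
        values[action] += alpha * (reward - values[action])
    return values

values = train()
```

After training, the agent's estimates clearly favor the better lever—learned entirely from trial, error and reward, with no examples supplied up front.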
What is deep learning?
Deep learning (DL) is a subset of machine learning that attempts to emulate human neural networks, eliminating the need for pre-processed data. Deep learning algorithms are able to ingest, process and analyze vast quantities of unstructured data to learn without any human intervention.
As with other types of machine learning, a deep learning algorithm can improve over time.
Some practical applications of deep learning currently include developing computer vision, facial recognition and natural language processing.
AI vs. machine learning vs. deep learning
So deep learning is a subset of machine learning, which in turn is a subset of artificial intelligence. But what are the actual similarities and differences between them?
A common way of illustrating how they’re related is as a set of concentric circles, with AI on the outside, and DL at the center.
As outlined above, there are four types of AI, including two that are purely theoretical at this point. In this way, artificial intelligence is the larger, overarching concept of creating machines that simulate human intelligence and thinking. The ultimate goal of creating self-aware artificial intelligence is far beyond our current capabilities, so much of what constitutes AI is currently impractical.
Machine learning, on the other hand, is a practical application of AI that is currently possible, being of the “limited memory” type.
By and large, machine learning is still relatively straightforward, with the majority of ML algorithms having only one or two “layers”—such as an input layer and an output layer—with few, if any, processing layers in between. Machine learning models are able to improve over time, but often need some human guidance and retraining.
In contrast, deep learning has multiple layers, and it’s these extra “hidden” layers of processing that give deep learning its name. Deep learning algorithms are essentially self-training, in that they’re able to analyze their own predictions and results to evaluate and adjust their accuracy over time. Deep learning algorithms are capable of independent learning.
DL is able to do this through the layered algorithms that together make up what’s referred to as an artificial neural network. These are inspired by the neural networks of the human brain, but obviously fall far short of achieving that level of sophistication. That said, they are significantly more advanced than simpler ML models, and are the most advanced AI systems we’re currently capable of building.
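To show what a "hidden" layer actually is, here is a minimal forward pass through a two-layer network in plain Python. One caveat up front: the weights below are hand-picked to compute XOR on 0/1 inputs, purely to illustrate the layered structure—a real deep network would learn its weights from data (typically via backpropagation), and would have far more units and layers.

```python
import math

def sigmoid(x: float) -> float:
    """Squash a weighted sum into (0, 1)—a classic neuron activation."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs: list[float], weights: list[list[float]],
          biases: list[float]) -> list[float]:
    """One network layer: each unit mixes all inputs, then applies the activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def xor_net(x: float, y: float) -> int:
    # Input layer -> hidden layer: two hidden units (roughly OR and NAND).
    hidden = layer([x, y], [[20.0, 20.0], [-20.0, -20.0]], [-10.0, 30.0])
    # Hidden layer -> output layer: one unit (roughly AND of the hidden units).
    (out,) = layer(hidden, [[20.0, 20.0]], [-30.0])
    return round(out)
```

Without the hidden layer, no single-layer network of this kind can compute XOR at all—a small demonstration of why those extra processing layers matter.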
Why is AI/ML important?
It’s no secret that data is an increasingly important business asset, with the amount of data generated and stored globally growing at an exponential rate. Of course, collecting data is pointless if you don’t do anything with it, but these enormous floods of data are simply unmanageable without automated systems to help.
Artificial intelligence, machine learning and deep learning give organizations a way to extract value from the troves of data they collect, delivering business insights, automating tasks and advancing system capabilities. AI/ML has the potential to transform all aspects of a business, helping organizations achieve measurable outcomes including:
Increasing customer satisfaction
Offering differentiated digital services
Optimizing existing business services
Automating business operations
AI/ML examples and use cases
That all sounds great, of course, but is on the abstract, hand-wavy side of things. So let’s take a look at some practical use cases and examples where AI/ML is being used to transform industries today.
AI/ML is being used in healthcare applications to increase clinical efficiency, boost diagnosis speed and accuracy, and improve patient outcomes.
HCA Healthcare received the Red Hat Innovation Award for its use of machine learning to develop a real-time predictive analytics product—SPOT (Sepsis Prediction and Optimization of Therapy)—to more accurately and rapidly detect sepsis, a potentially life-threatening condition.
In the telecommunications industry, machine learning is increasingly being used to gain insight into customer behavior, enhance customer experiences, and to optimize 5G network performance, among other things.
In fact, according to our State of Enterprise Open Source report published in early 2021, 66% of telco organizations expect to be using enterprise open source for AI/ML within the next two years, compared to only 37% today.
In the insurance industry, AI/ML is being used for a variety of applications, including automating claims processing and delivering usage-based insurance services.
A majority of insurers believe that the modernization of their core systems is a key to differentiating their services in a broad marketplace, and machine learning is part of those modernization efforts.
Financial services are similarly using AI/ML to modernize and improve their offerings, including to personalize customer services, improve risk analysis, and to better detect fraud and money laundering.
As the quantity of data financial institutions have to deal with continues to grow, the capabilities of machine learning are expected to make fraud detection models more robust, and to help optimize bank service processing.
The automotive industry has seen an enormous amount of change and upheaval in the past few years with the advent of electric and autonomous vehicles, predictive maintenance models, and a wide array of other disruptive trends across the industry.
And of course AI/ML is a big part of this transformation. For example, it is a key part of BMW Group’s automated vehicle initiatives.
Energy providers around the world are also in the middle of an industry transformation, with new ways of generating, storing, delivering and using energy changing the competitive landscape. Global climate concerns, market drivers and technological advancements have reshaped the industry considerably as well.
The energy sector is already using AI/ML to develop intelligent power plants, optimize consumption and costs, develop predictive maintenance models, optimize field operations and safety and improve energy trading.
Getting started with AI/ML in your organization
While AI/ML is clearly a powerfully transformative technology that can provide an enormous amount of value in any industry, getting started can seem more than a little overwhelming.
The good news is that you can start small. It’s possible to adopt AI/ML into your organization without a huge upfront investment, so you can get your feet wet and start to figure out how and where AI/ML can benefit your organization in smaller, easier-to-manage pieces.
If you’d like to know more, we’ve written a 13-point roadmap about how to start your AI/ML journey.
About the author
Deb Richardson is a Contributing Editor for the Red Hat Blog, writing and helping shape posts about Red Hat products, technologies, events and the like. Richardson has over 20 years' experience as an open source contributor, including a decade-long stint at Mozilla, where she launched and nurtured the initial Mozilla Developer Network (MDN) project, among other things.