What is deep learning?


Deep learning is an artificial intelligence (AI) technique that teaches computers to process data using an algorithm inspired by the human brain. 

Explore Red Hat AI

Deep learning, also known as deep neural learning or deep neural networking, is a technique within artificial intelligence and machine learning (AI/ML) that helps data scientists collect, analyze, and interpret large amounts of data. It teaches computers to learn through observation, imitating the way humans gain knowledge. 

The human brain contains many interconnected neurons, which act as information messengers when the brain is processing information (or data). These neurons use electrical impulses and chemical signals to communicate with one another and transmit information between different areas of the brain. 

Artificial neural networks (ANNs), the underlying architecture behind deep learning, are based on this biological phenomenon but are formed by artificial neurons made from software modules called nodes. These nodes use mathematical calculations (instead of chemical signals, as in the brain) to communicate and transmit information. This simulated neural network (SNN) processes data by clustering data points and making predictions.
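As a concrete (if simplified) illustration, the sketch below shows the arithmetic a single node might perform: a weighted sum of its inputs plus a bias, passed through a nonlinear activation function. The inputs, weights, and bias here are made-up values for illustration only.

```python
import math

def node(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation to an output between 0 and 1."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))

# Illustrative values only: two inputs feeding a single node.
print(node(inputs=[0.5, 0.8], weights=[0.4, -0.6], bias=0.1))
```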

It may help to think of deep learning as a type of flow chart, starting with an input layer and ending with an output layer. Sandwiched between these two layers are the “hidden layers” which process information at different levels, adjusting and adapting their behavior as they continuously receive new data. Deep learning models can have hundreds of hidden layers, each of which plays a part in discovering relationships and patterns within the data set. 

Starting with the input layer, which is composed of several nodes, data is introduced to the model and categorized accordingly before it’s moved forward to the next layer. The path that the data takes through each layer is based upon the calculations set in place for each node. Eventually, the data moves through each layer, picking up observations along the way that ultimately create the output, or final analysis, of the data.
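To make the flow-chart analogy concrete, here is a minimal sketch of a forward pass: data enters at the input layer, is transformed by each hidden layer in turn, and exits at the output layer. The layer sizes and random weights are illustrative assumptions, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # a common hidden-layer activation

# Illustrative layer sizes: 4 input features, two hidden layers, 1 output node.
sizes = [4, 8, 8, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Move data forward through each layer; every hidden layer transforms
    the previous layer's output before handing it on."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)                # hidden layers apply a nonlinearity
    return x @ weights[-1] + biases[-1]    # the output layer yields the final analysis

print(forward(rng.normal(size=4)))
```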


Applications that utilize deep learning are already integrated into our daily lives and have uses in many different industries. Generative AI, which now powers many AI tools, is made possible through deep learning.

The use cases for deep learning are constantly evolving, but three of the most popular technologies in use today are computer vision, speech recognition, and natural language processing (NLP). 

  • Computer vision: Computers can use deep learning techniques to comprehend images much the way humans do, enabling automated content moderation, facial recognition, and image classification. 
  • Speech recognition: Pitch, tone, language, and accent can all be analyzed by deep learning models. Not only can this be used to improve customer experience, but it is also helpful from an accessibility standpoint in cases that require real-time transcription.
  • Natural language processing (NLP): Computers use deep learning algorithms to analyze and gather insights from text data and documents. This can aid in summarizing long documents, indexing key phrases that indicate sentiment (such as positive or negative comments), and generating insight for automated virtual assistants and chatbots. NLP is the broader field that encompasses the development and application of large language models (LLMs) to understand and generate human language. A short sentiment-analysis sketch follows this list.
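As one hedged example of NLP in practice, the sketch below uses the open source Hugging Face transformers library (an assumption of this example, not a specific Red Hat tool) to index the sentiment of short comments with a pre-trained deep learning model.

```python
from transformers import pipeline  # pip install transformers

# Load a pre-trained sentiment model (downloads a default model on first run).
classifier = pipeline("sentiment-analysis")

comments = [
    "The support team resolved my issue in minutes.",
    "The checkout page kept crashing on my phone.",
]
for comment, result in zip(comments, classifier(comments)):
    print(f"{result['label']} ({result['score']:.2f}): {comment}")
```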

Some examples of how industries are utilizing deep learning principles include the following:

  • Customer service: Chatbots, virtual assistants, and dial-in customer service portals utilize tools like speech recognition. 
  • Financial services: Predictive analytics drive the algorithmic trading of stocks, assess business risks for loan approvals, detect fraud, and help manage credit and investment portfolios.
  • Healthcare: With the digitization of healthcare records, image recognition applications can support medical imaging specialists by learning to automatically detect red flags that indicate a potential medical diagnosis. See how HCA Healthcare uses predictive analysis to establish a standardized, digital approach to sepsis detection. 
  • Media and entertainment: From online shopping to media streaming services, deep learning is being used to track user activity and develop personalized recommendations.
  • Industrial automation: In factories and warehouses, deep learning applications can automatically detect when people or objects are within an unsafe distance of machines, or can assist with quality control or predictive maintenance. 
  • Self-driving cars: Automotive researchers use deep learning to train cars to detect objects like stop signs, traffic lights, crosswalks, and pedestrians.
  • Law enforcement: Speech recognition, computer vision, and natural language processing (NLP) can save time and resources by aiding in the analysis of large amounts of data. 
  • Aerospace and military: Those monitoring large geographic areas can use deep learning to detect objects, identify areas of interest from afar, and verify safe or unsafe zones for troops. 

Explore AI/ML use cases

Deep learning is a specialized form of machine learning that differentiates itself by the type of data it works with and the methods by which it learns.

Classical machine learning algorithms require some human intervention in the form of pre-processing data sets before they're introduced to the model. Specific features are defined and labeled from the input data, then organized into tables before being fed to the machine learning model. Conversely, deep learning algorithms don't require this same level of pre-processing and are able to comprehend unstructured data such as text documents, images of pixel data, or files of audio data. 
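The sketch below contrasts the two workflows under illustrative assumptions: the classical model consumes a hand-built feature table (here via pandas and scikit-learn), while a deep learning model would instead ingest the raw, unstructured input directly. The features and labels are made up for illustration.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Classical ML: a human first defines, labels, and tabulates the features.
features = pd.DataFrame({
    "word_count": [120, 8, 95, 5],
    "has_link":   [1, 0, 1, 0],
    "caps_ratio": [0.40, 0.10, 0.35, 0.05],
})
labels = [1, 0, 1, 0]  # made-up labels, e.g. spam vs. not spam

model = LogisticRegression().fit(features, labels)
print(model.predict(features))

# A deep learning model would instead ingest the raw, unstructured emails
# (text, pixels, or audio) and learn useful features on its own.
```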

Deep learning may be preferred to classical machine learning in instances where there is a large amount of data, a lack of background knowledge regarding the subject, or when there is a complex and time-consuming task at hand.

We know that deep learning works by utilizing a structure of nodes that communicate with one another within an artificial neural network. To create an ANN, calculations and parameters must be introduced to the model alongside the data we give it, and precautions must be taken to ensure that these calculations account for bias and variance. 

In the context of machine learning, bias refers to the extent to which your model is making assumptions or generalizations about the data in order to make the target function easier to learn. High bias means that the model is simplifying and creating shortcuts (to a fault) as it processes information. 

In statistics, variance measures how far each data point is from the mean, or the spread between numbers in a data set. In machine learning, and in opposition to bias, variance refers to how sensitive a model is to the training data. High variance (or sensitivity) means that the model is paying too much attention to detail and missing the underlying patterns in the data set. 

In supervised learning, when variance is too high and bias is too low, it’s called overfitting. When bias is high and variance is low, it’s called underfitting. Creating the perfect fit can be difficult, and this phenomenon is commonly referred to as the bias-variance tradeoff. 

Parameters define boundaries, and boundaries are critical for making sense of the enormous amount of data that deep learning algorithms must process. This means that often, overfitting and underfitting can be corrected by using fewer or more parameters, respectively. 
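A quick way to see this tradeoff is to vary the number of parameters in a simple curve-fitting model. In the hedged sketch below, a degree-1 polynomial underfits noisy data (high bias), while a high-degree polynomial overfits it (high variance); the data and polynomial degrees are illustrative choices, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 30)
y = np.sin(np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy samples

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)        # higher degree = more parameters
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: training error {mse:.4f}")

# degree 1 underfits (high bias): too few parameters to capture the curve.
# degree 9 overfits (high variance): low training error, poor generalization.
```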

Accounting for human-related bias 

If a deep learning model is trained on data that is statistically biased, or doesn't provide an accurate representation of the population, the output can be flawed. Unfortunately, existing human bias is often transferred to artificial intelligence, creating risk for discriminatory algorithms and biased outputs. 

As organizations continue to leverage AI for improved productivity and performance, it’s critical that strategies are put in place to minimize bias. This begins with inclusive design processes and a more thoughtful consideration of representative diversity within the collected data. 

What is a black box?

“Black box” refers to an AI program that performs a task within its neural network without showing its work. This creates a scenario where no one, including the data scientists and engineers who created the algorithm, can explain exactly how the model arrived at a specific output. The lack of interpretability in black box models can create harmful consequences when they're used for high-stakes decision making, especially in industries like healthcare, criminal justice, or finance. 

Deep learning models are able to perform more complex computing tasks without human intervention, but this means they require more processing power, sufficient infrastructure, and larger sets of training data. Cloud computing allows teams to access multiple processors at once, such as clusters of GPUs (graphics processing units) and CPUs (central processing units), creating an ideal environment for performing complex mathematical operations.
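As a minimal sketch of how this looks in practice (using PyTorch as one example framework, an assumption of this snippet), a model can be placed on a GPU when one is available and fall back to a CPU otherwise.

```python
import torch
import torch.nn as nn

# Use a GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# An illustrative model and batch, moved to whichever processor was found.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
batch = torch.randn(32, 64, device=device)  # a batch of 32 made-up samples

with torch.no_grad():
    print(model(batch).shape, "computed on", device)
```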

By designing, developing, and training deep learning models on the cloud, dev teams can scale and distribute workloads with speed and accuracy, while simultaneously cutting down on operating costs. 


Deep learning and machine learning on the edge

Working on the cloud opens up possibilities for machine learning on the edge. By establishing edge computing hubs connected to public cloud resources, information can be captured and analyzed in real time to assist in operations ranging from supply chain status updates to information on disaster evacuation sites. 

Red Hat® AI can help your teams benefit from all of the things we discussed above (computer vision, speech recognition, natural language processing) and more.

Our portfolio of AI products is built on solutions our customers already trust. This foundation helps our products remain reliable, flexible, and scalable.

Red Hat AI can help organizations:

  • Adopt and innovate with AI quickly.
  • Break down the complexities of delivering AI solutions.
  • Deploy anywhere.

Explore Red Hat AI 

A toolbox to experiment with deep learning

Whether you’d like to build machine learning algorithms or create predictive models from scratch, Red Hat OpenShift AI offers a range of tools to help you get started. 

The AI platform offers tooling such as interactive interfaces based on JupyterLab, seamless access to core AI/ML libraries, and widely used frameworks. Plus, it provides a visual editor for creating and automating data science pipelines. Combined with CI/CD pipelines, teams can improve their MLOps to increase their operational efficiency and move more AI models into production faster.

Explore Red Hat OpenShift AI 

Red Hat AI partners

Additionally, our AI partner ecosystem is growing. A variety of technology partners are working with Red Hat to certify operability with Red Hat AI. This way, you can keep your options open.

Learn more about our partners 
