What are Granite models?

Granite is a series of large language models (LLMs) created by IBM for enterprise applications. Granite foundation models can support generative artificial intelligence (gen AI) use cases that involve language and code.

Granite family models are open source, available under the Apache 2.0 license, which means developers can experiment with, modify, and distribute them for free. This makes Granite models a good choice for organizations that deal with sensitive data and want to run their own LLM rather than rely on an outside service.

See how Granite works with Red Hat

Foundation models are trained to function with a general understanding of patterns, structures, and representations of language. This “foundational” training teaches the model how to communicate and identify those patterns.

The IBM Granite AI models have this baseline of knowledge that can be further fine-tuned to perform specific tasks for almost any industry. Granite family models are trained on curated data and provide transparency into the data that’s used for training.

LLMs use gen AI to produce new content based on the prompt a user enters. Today, people often use gen AI to generate text, pictures, video, and code. Businesses can use LLM foundation models to automate various aspects of operations, such as customer-support chatbots or testing software code.

Other LLM foundation models that use gen AI include Meta’s Llama (including Llama 2 and Llama 3), Google’s Gemini, Anthropic’s Claude, OpenAI’s GPT series (known for the ChatGPT chatbot), and Mistral AI’s models. What sets the Granite AI models apart is the disclosure of their training data, which builds trust with users and makes them more suitable for enterprise environments.

Is Granite open source?

Yes, some of the Granite AI model series are available under an open source license, which means developers can easily access the model and build on it locally. Then they can fine-tune the model for their particular goals. Users even have access to a majority of the data used to train the model (PDF) so they can understand how it was built and how it functions.

When it comes to Granite models, open source means developers can customize the model with their own data to generate user-specific outputs. It doesn’t mean everyone’s private data is available to the whole open source community. Unlike public AI web services, Granite models don’t train continuously, so data you input to a Granite family model is never shared with Red Hat, IBM, or other Granite users.

Enterprises in many industries, from healthcare to construction, can use Granite in a variety of ways to help automate their operations at scale. Granite models can be trained on business-domain tasks like summarization, question answering, and classification. Here are a few examples:

  • Code generation: Granite code models can build upon or improve work done by developers to make processes more efficient. For example, developers can take advantage of autocomplete: Similar to autocomplete on a smartphone, the model can complete a line of code before the developer finishes typing. 
  • Insight extraction: When you need to simplify, summarize, or explain large data sets, Granite can identify accurate patterns and insights quickly. This saves you the hassle of combing through a lot of data. 
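As a toy illustration of the autocomplete pattern described above, the sketch below stands in a small lookup table for the model. This is not how Granite works internally (a real deployment generates completions with a full language model); the snippet only shows the interface an autocomplete feature presents to a developer, and every name in it is illustrative.

```python
# Toy sketch of the code-autocomplete pattern: given a typed prefix,
# suggest the rest of the line. A tiny lookup table stands in for a
# real code model here; nothing below is part of any Granite API.

COMPLETIONS = {
    "for i in range(": "len(items)):",
    "def main(": "):",
    "with open(path) as ": "f:",
}

def autocomplete(prefix: str) -> str:
    """Return the prefix plus a suggested completion, if one is known."""
    for start, rest in COMPLETIONS.items():
        if prefix.endswith(start):
            return prefix + rest
    return prefix  # no suggestion: return the input unchanged

print(autocomplete("for i in range("))
```

In a real editor integration, the lookup would be replaced by a call to a code model, and the suggestion would appear inline for the developer to accept or reject.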

Explore more AI use cases

  • Flexible architecture: Granite can integrate with existing systems and can be deployed on premises or in the cloud. Its interfaces are made to simplify deployment. The Granite family includes models of various sizes, so you can choose one that best matches your needs while managing your computing costs.
  • Custom solutions: Though Granite is offered as a foundation model, it’s built to be trained on business-specific knowledge. Users have the flexibility to scale and fine-tune the model to tailor it to their business needs. For example, if your business makes medical devices, you can teach the model the terminology used in the healthcare industry. 
  • Low latency: Running a Granite model on your own infrastructure means you can optimize for quick response times. The model can deliver real-time data, making it handy for critical operations. To stick with the healthcare example, access to real-time data is important for remote doctor-patient collaboration and time-sensitive care. 
  • High accuracy: Developers can fine-tune the Granite series for industry-specific tasks to make the model an expert in any subject. It can also be trained in multiple languages to maintain accuracy and accessibility on a global scale. 
  • Transparent models: Because Granite is available under an open source license, developers can see how the AI model was built and trained, as well as collaborate with an open source community.

IBM has released multiple Granite model series to meet the needs of increasingly complex enterprise applications. The model series within the Granite family follow different categories and naming conventions.

Each series serves a different purpose:

  • Granite for Language: These models deliver accurate natural language processing (NLP) in multiple languages while maintaining low latency.
  • Granite for Code: These models are trained on more than 100 different programming languages to support enterprise-level software tasks.
  • Granite for Time Series: These models are fine-tuned for time series forecasting, a method of predicting future data using data from the past.
  • Granite for GeoSpatial: IBM and NASA created this foundation model, which uses large-scale satellite observations of Earth to help track and address environmental changes.
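To make the time series forecasting task concrete (predicting future values from past values), here is a deliberately naive moving-average baseline. Granite’s time series models are pretrained transformer models, not moving averages; this sketch only shows the shape of the problem they solve, and the function name and parameters are illustrative.

```python
# Minimal illustration of the time-series forecasting task:
# predict future points from past points. A naive moving-average
# baseline stands in for a real forecasting model here.

def moving_average_forecast(history, window=3, steps=2):
    """Forecast `steps` future points, each as the mean of the last
    `window` points seen so far (observed or previously forecast)."""
    series = list(history)
    for _ in range(steps):
        series.append(sum(series[-window:]) / window)
    return series[len(history):]

# Forecast the next two points of a short daily-demand series.
print(moving_average_forecast([10, 12, 11, 13, 12], window=3, steps=2))
```

A pretrained model like Granite’s replaces this simple averaging with learned patterns (trend, seasonality, cross-series structure), but the inputs and outputs have the same form: a history in, a forecast out.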

Within each of these series, Granite offers models of different sizes and specialties. For example, Granite for Language includes:

  • Granite-7b-base, a general-purpose language model for conversations and chat purposes.
  • Granite-7b-instruct, which specializes in following task instructions.

Explore Granite models on Hugging Face

Red Hat® AI is our portfolio of AI products built on solutions our customers already trust. This foundation helps our products remain reliable, flexible, and scalable.

The Red Hat AI portfolio helps organizations:

  • Adopt and innovate with AI quickly.
  • Break down the complexities of delivering AI solutions.
  • Deploy anywhere.

With Red Hat AI, you get access to the Granite family LLMs and bring-your-own-model capabilities. In addition, our consultants can offer hands-on support for your unique enterprise use cases when building and deploying gen AI applications alongside critical workloads.

Explore Red Hat AI

 

Easily access Granite family LLMs

Red Hat Enterprise Linux® AI is a foundation model platform specifically for developing, testing, and running Granite family LLMs. Its open source approach keeps costs low and removes the barrier to entry for a wide range of users. This platform allows you to experiment with your own data and learn as you go. It’s a good place to start if you aren’t sure what your enterprise use cases are yet.

Read more about Red Hat Enterprise Linux AI

 

Start with InstructLab

Red Hat Enterprise Linux AI includes InstructLab, an open source community project for enhancing LLMs. InstructLab's features make it possible for developers with a variety of skill levels and resources to contribute easily, which makes it a good place to begin experimenting with AI models. For example, it requires far less human-generated data and far less compute during training than traditional fine-tuning approaches. In addition, InstructLab is not model specific, so it can provide supplemental fine-tuning to an LLM of your choice.

Watch how to train an LLM on InstructLab

