It feels like 2024 was the year of artificial intelligence (AI), which quickly went from an interesting experiment to seemingly the only thing anyone was talking about. It can be hard to keep up with all the news and advancements being made, but hopefully this will help. In these 11 short videos, Red Hatters cover a variety of topics, from open source AI and the new InstructLab project to identifying which large language model (LLM) is right for your organization, and more.
Grab a cup of coffee and catch up on some of what Red Hat has been up to in the world of AI.
Open source AI is community-built AI
Frank La Vigne explains why using fine-tuning technologies like InstructLab and open source models like Granite makes it easier for developers and people with other specialized knowledge to contribute to generative AI (gen AI) models.
InstructLab 101
Not really sure what InstructLab is? In this 100-second intro video, Legare Kerrison walks through what it is, how it works, and how you can get started today.
Which LLM is right for you?
There are a bewildering number of LLMs out there already, and it seems like more are being released every day. So how do you begin to evaluate which LLM is best for you, your projects and your business?
Taylor Smith discusses some of the more important factors to keep in mind when evaluating LLMs, including whether a model has transparent data sources and whether it can scale and perform well in a small form factor.
With LLMs, does size matter?
While LLMs might be what everyone's been talking about lately, we're already starting to bump up against some challenges with them, including their resource-intensive training, scalability issues and data limitations that are threatening to stall future development. But do LLMs all have to be…large?
Cedric Clyburn talks about how larger isn't always better, and how small, specialized models can be more effective and efficient overall.
RAG vs. fine-tuning: Different tools for different jobs
Retrieval augmented generation (RAG) is a popular way to inject domain-specific data into LLMs, but it's not a universal solution. Fine-tuning your models can be more effective in certain scenarios. Cedric explains the difference between these two approaches, and talks about where RAG and fine-tuning work best.
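The RAG side of that comparison can be sketched in a few lines: retrieve the most relevant snippets at inference time and inject them into the prompt, leaving the model's weights untouched. Everything below (the toy word-overlap scorer, the `build_prompt` helper and the sample documents) is a hypothetical illustration, not code from the video; a real deployment would use an embedding model and a vector store.

```python
# Minimal RAG sketch: score documents against the query, keep the top-k,
# and prepend them to the prompt. No model weights change, which is the
# key contrast with fine-tuning.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count query words that appear in the document."""
    return sum(word in doc.lower() for word in query.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest toy relevance score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject retrieved context into the prompt instead of retraining."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "OpenShift AI supports data science pipelines and model serving.",
    "Granite is a family of open source language models.",
    "RAG injects domain data at inference time; fine-tuning updates weights.",
]
print(build_prompt("How does RAG inject domain data?", docs))
```

Fine-tuning, by contrast, would bake this knowledge into the model itself, which is why it suits stable, domain-wide behavior while RAG suits fast-changing reference data.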
Demo: Red Hat documentation chatbot using RAG
Guillaume Moutier demonstrates how Red Hat customers can use gen AI, LLMs and RAG to extract valuable data out of existing documentation, knowledge bases and other data sources to provide accurate, comprehensive and context-aware answers.
The full model workflow makes use of an array of Red Hat products including:
- Red Hat OpenShift and Red Hat OpenShift AI
- Red Hat OpenStack Services on OpenShift
- Red Hat Ansible Automation Platform
- Red Hat Enterprise Linux
These are used to augment an LLM with documentation data that's pulled from an open source vector database and served on a single stack model server, all within OpenShift AI.
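The vector-database lookup at the heart of that workflow is essentially a nearest-neighbor search over embeddings. This toy Python sketch uses hand-made 3-dimensional vectors and cosine similarity; the document names and vectors are hypothetical, and a production setup would use a real embedding model and vector database rather than an in-memory dict.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector database": document name -> hypothetical embedding.
index = {
    "install guide":  [0.9, 0.1, 0.0],
    "release notes":  [0.2, 0.8, 0.1],
    "api reference":  [0.1, 0.2, 0.9],
}

def nearest(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k document names whose embeddings best match the query."""
    return sorted(index, key=lambda doc: cosine(query_vec, index[doc]),
                  reverse=True)[:k]

print(nearest([0.85, 0.15, 0.05]))
```

The retrieved documents are then passed to the model server as context, which is what lets the chatbot give accurate, source-grounded answers.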
Red Hat OpenShift AI: Features and architecture
Christopher Nuland provides a technical overview of OpenShift AI, which extends the capabilities of Red Hat OpenShift to develop, train, serve and manage AI models. He highlights the end-to-end machine learning operations (MLOps) features and tooling for both predictive AI and gen AI that run on premises or in cloud environments. He also goes over more advanced features like distributed workloads, GPU accelerator support and data science pipelines.
Red Hat OpenShift AI: Predictive and generative AI demo
Expanding on his previous video, Christopher builds an OpenShift AI demo that combines gen AI (for summarization and sentiment analysis) with predictive AI (for vehicle image detection) to create an insurance claims processing application. He walks through how to use OpenShift AI for model training and experimentation, including how to create data science projects, import data, load Jupyter notebooks and train the models.
AI inferencing at the edge using OpenShift AI
Deploying AI at the edge can help provide organizations with real-time insights into their data. OpenShift AI can be used to deploy models, predict failures, detect anomalies and do quality inspection in low-latency environments in near real-time.
Myriam Fentanes and Landon LaSmith demonstrate how an AI model can be packaged into an inference container image, and how data science pipelines can fetch models, as well as build, test, deploy and update them using a GitOps workflow.
AI-assisted farming
Through a precision agriculture example, Guillaume demonstrates how AI can be used to help farmers increase crop yield and lower costs. Using a trained computer vision model, this game-like demonstration simulates drones flying over crop fields to detect whether they are diseased or healthy. If a diseased field is found, tractors are directed on the shortest path to that field to begin treatment. This demo combines telco 5G slices, edge computing, computer vision and operations research, all orchestrated on OpenShift and developed with the help of OpenShift AI.
Watch more
Want to see more? We have a bunch of YouTube playlists for you to explore.
About the author
Deb Richardson joined Red Hat in 2021 and is a Senior Content Strategist, primarily working on the Red Hat Blog.