
The software industry has started developing a vast array of artificial intelligence (AI) applications based on large language models (LLMs). While many security threats to LLMs are similar to those affecting traditional software, LLMs and their applications also face unique security risks due to their specific characteristics. These risks can often be mitigated or reduced by applying specific security architecture patterns. Here are 10 ways to reduce security risk in LLM applications.

1. Identify, authenticate and authorize all principals

Principals include the humans and software agents that participate in the LLM application. Use sound authentication and authorization standards, such as OpenID Connect (OIDC) and OAuth2. Avoid unauthenticated access, and avoid static API keys where possible.
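
As a minimal, illustrative sketch of checking a bearer token's signature and core claims with only the standard library (the HS256 shared secret, audience and issuer values are hypothetical; in production, use a maintained OIDC/OAuth2 library and validate tokens against your identity provider's published JWKS keys):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_jwt_hs256(token: str, secret: bytes, audience: str, issuer: str) -> dict:
    """Verify an HS256 JWT's signature plus aud/iss/exp claims; raise ValueError on failure."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("aud") != audience:
        raise ValueError("wrong audience")
    if claims.get("iss") != issuer:
        raise ValueError("wrong issuer")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

Only requests carrying a token that passes all of these checks should reach the LLM; every rejected check is worth logging for the audit trail discussed below.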

2. Implement rate limiting

Leverage AI platform components such as API gateways (for example, 3scale APIcast) rather than reinventing the wheel. For instance, if you expect that only humans will access your LLM, you can cap requests at around five per second.
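
In production the gateway enforces the limit for you; as a sketch of the underlying idea, here is a token bucket kept per authenticated principal (the class name and the five-requests-per-second figure are illustrative):

```python
import time

class TokenBucket:
    """Per-client token bucket: holds up to `capacity` tokens, refilled at `rate` per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token per request
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per principal, e.g. keyed by the OIDC "sub" claim
bucket = TokenBucket(rate=5, capacity=5)  # roughly human-speed access
```

A request is served only if `bucket.allow()` returns True; bursts beyond the bucket's capacity are rejected until tokens refill.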

3. Use open models

And deploy them locally or on your own cloud instances. Open models provide a level of transparency that closed models cannot match. If your use case requires a cloud model offered as a service, choose a trusted provider, understand its security posture and leverage any security features it provides. IBM Granite models are trustworthy and open enterprise models that you can fine-tune for your own purposes.

4. Validate LLM output

LLM output cannot be fully predicted or controlled. Use mechanisms to validate it before presenting it to users or using it as input for other systems. Consider using function calling and structured outputs to enforce specific formats. Additionally, leverage AI platform solutions such as runtime guardrails (for example, TrustyAI) or sandboxed environments to enhance reliability and safety.

5. Use logging wisely

LLMs are non-deterministic, so having a log of the inputs and outputs of the LLM might help when you have to investigate potential incidents and suspicious activity. When logging data, be careful with sensitive and personally identifiable information (PII) and do a privacy impact assessment (PIA).
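
A simple sketch of that trade-off: redact obvious PII before prompts and completions reach the log sink. The regex patterns here are illustrative and deliberately not exhaustive; extend them according to your PIA findings:

```python
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Mask obvious PII before it reaches the log sink."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

logger = logging.getLogger("llm.audit")

def log_exchange(prompt: str, completion: str) -> None:
    # Keep the audit trail for incident investigation, minus sensitive values
    logger.info("prompt=%s completion=%s", redact(prompt), redact(completion))
```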

6. Measure and compare the safety of the models you choose

Some models produce more hallucinations and harmful responses than others, and this affects how much trust you can place in a model: the more harmful responses a model gives, the less safe it is. A model's safety can be measured and compared against other models. By doing this, you know that the safety of the models you use is on par with the market and with what the users of the application expect. Remember that if you fine-tune a model, the safety of the resulting model may change, regardless of the fine-tuning data used. To measure the safety of a model, you can use open source software such as lm-evaluation-harness, Project Moonshot or Giskard.
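
Whatever evaluator you use, the comparison itself boils down to something like the sketch below, which assumes you already have per-response harmful/not-harmful labels from a benchmark run (the helper names and model names are made up for illustration):

```python
def harmful_rate(flags: list[bool]) -> float:
    """Fraction of sampled responses that the evaluator flagged as harmful."""
    return sum(flags) / len(flags) if flags else 0.0

def safer_model(results: dict[str, list[bool]]) -> str:
    """Pick the candidate with the lowest harmful-response rate."""
    return min(results, key=lambda name: harmful_rate(results[name]))
```

Re-running the same benchmark after every fine-tune, and comparing the rate against your base model, catches safety regressions introduced by the tuning itself.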

7. Use models from trusted sources and review their licensing

AI models are released under a variety of different software licenses, some much more restrictive than others. Even if you choose to use models provided by organizations you trust, take the time needed to review the license restrictions so you are not surprised in the future.

8. Data is crucial in LLM applications

Protect all data sources (such as training data, fine-tuning data, models and RAG data) against unauthorized access, and log any attempt to access or modify them. If an attacker modifies this data, they may be able to control the responses and behavior of the LLM system.
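
One common complement to access controls is integrity checking: record a digest of each artifact and verify it before use. A small sketch, assuming a manifest of known-good SHA-256 digests (file names are hypothetical):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks so large model files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict[str, str], base: Path) -> list[str]:
    """Return the files whose current digest no longer matches the recorded one."""
    return [name for name, digest in manifest.items()
            if sha256_of(base / name) != digest]
```

An empty result means every artifact matches its recorded digest; anything listed has been tampered with (or updated without refreshing the manifest) and should be investigated before the LLM system uses it.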

9. Harden AI components as you would harden traditional applications

Some key AI components may prioritize usability over security by default, so you should carefully analyze the security restrictions of every component you use in your AI systems. Review the ports that each component opens, what services are listening and their security configuration. Tighten these restrictions as needed to properly harden your AI application.
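
As a quick connect-scan sketch of the port review (dedicated tooling gives a much fuller picture; the allowlist idea here is an assumption about how you document expected services):

```python
import socket

def is_listening(port: int, host: str = "127.0.0.1") -> bool:
    """Check whether something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

def unexpected_ports(candidates: range, allowlist: set[int]) -> list[int]:
    """Ports that are open but not in the documented allowlist for this host."""
    return [p for p in candidates if p not in allowlist and is_listening(p)]
```

Any port this reports is either an undocumented service to shut down or a gap in your documentation; both findings tighten the system.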

10. Keep your LLM system up to date

As your LLM system probably depends on many open source components, treat these as you would in any other software system and keep them updated to versions without known critical or important vulnerabilities. Also, where possible, try to stay aware of the health of the open source and upstream projects that create the components you are using. If you can, you should get involved and contribute to these projects, especially those that produce the key components in your system.
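
The version-floor check at the heart of that practice can be sketched as below. The package names and versions are made up, and the version parsing is deliberately naive; real scanners such as pip-audit consult vulnerability databases rather than a hand-maintained policy:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Naive numeric parse of dotted versions, good enough for this sketch."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed: dict[str, str], minimums: dict[str, str]) -> list[str]:
    """Components below the minimum version known to fix critical/important flaws."""
    return [name for name, floor in minimums.items()
            if name in installed and parse_version(installed[name]) < parse_version(floor)]
```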

Conclusion

LLM applications pose specific security risks, many of which can be mitigated or eliminated using the AI security architecture patterns discussed here. These patterns are often available through the AI platform itself. As a software architect or designer, it's important to understand the platform's built-in functionality so you can avoid reinventing the wheel or adding unnecessary workload.

Red Hat OpenShift AI is a flexible and scalable AI and machine learning (ML) platform that enables enterprises to develop and deploy AI-powered applications at scale across hybrid cloud environments, and can help achieve these security objectives.


About the author

Florencio has had cybersecurity in his veins since he was a kid. He started in cybersecurity around 1998 (time flies!) first as a hobby and then professionally. His first job required him to develop a host-based intrusion detection system in Python and for Linux for a research group in his university. Between 2008 and 2015 he had his own startup, which offered cybersecurity consulting services. He was CISO and head of security of a big retail company in Spain (more than 100k RHEL devices, including POS systems). Since 2020, he has worked at Red Hat as a Product Security Engineer and Architect.
