Even though artificial intelligence and machine learning (AI/ML) have been around for some time, only recently, as computing power has become affordable, have companies started investing in AI/ML. What was once technology accessible and affordable only to large enterprises has become widely available.

When AI/ML technology is used well, organizations can achieve a variety of benefits, such as predicting maintenance needs for hardware on a factory floor, cross-selling products to existing customers, identifying customer churn before it happens, and improving customer service, just to name a few.

Some organizations have implemented machine learning technology but have not seen the expected return on their investment. Several factors can affect the success of machine learning in streamlining operations: data quality and availability, managing the model lifecycle, retraining models, and collaboration between teams and departments. So what can you do to help ensure success with your AI/ML investment?

This post provides a roadmap for adopting AI/ML in your organization.

1. Educate yourself and your teams

This step sounds trivial, but a general understanding is vital to using this technology successfully. We recommend taking a deep dive into topics like MLOps, the model development lifecycle, and the importance of relevant data. Working in cross-functional teams is also a good way to get familiar with AI/ML basics, and there are many courses and talks publicly available on platforms such as Coursera and Udacity.

2. Select a pilot project

Start small when selecting your pilot project. Avoid attempting to solve your organization's most complex problem with AI/ML technology. Instead, find a small initiative that can make a measurable impact for a particular group or department in your organization.

There is plenty of information available online, including sample AI/ML use cases that demonstrate how AI/ML technology can play a crucial role in solving critical issues. Some of these examples can inspire similar setups to solve your own existing issues.

Remember, AI/ML will not solve every problem. In some instances, a traditional rule-based approach may solve the given problem more easily. Your pilot project needs to have AI/ML at its core if it is to drive AI/ML adoption within your organization.

Start by defining simple metrics for success. With defined metrics, you can measure and demonstrate the value of AI/ML to various stakeholders later on.

3. Get expert advice

If your organization doesn't have the capability in-house, get outside expert advice. You may need experts who can help you collaborate across teams, define new processes, and choose the right technology.

Ensure that your organization's team is available throughout this collaboration to maximize knowledge transfer. Once the pilot project has been implemented successfully, your team can establish and grow your AI/ML practices beyond the initial implementation.

4. Prepare your data

Data is the most crucial part of your project, and you will need lots of it. The more high-quality data, the better. Many organizations spend considerable time on data preparation before using it for a project or a selected use case.

Getting the data for your pilot project can be challenging. Data might be stored in different formats (both structured and unstructured) or spread across various locations and require merging. The data will also need to be cleaned. Additionally, you'll want to consider how privacy laws can restrict access, how data bias can skew models, and how data storage may limit how much data you can obtain. Maintaining awareness of data quality will be a critical factor in a successful AI/ML project.
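As a rough illustration, a typical first pass at preparation might look like the following sketch using pandas. The file names and column names (customers.csv, transactions.csv, customer_id, signup_date, monthly_spend) are hypothetical placeholders for your own sources.

```python
import pandas as pd

# Load two hypothetical sources that need to be merged.
customers = pd.read_csv("customers.csv")
transactions = pd.read_csv("transactions.csv")

# Merge on a shared key.
data = customers.merge(transactions, on="customer_id", how="inner")

# Basic cleaning: drop exact duplicates, parse dates, fill missing values.
data = data.drop_duplicates()
data["signup_date"] = pd.to_datetime(data["signup_date"], errors="coerce")
data["monthly_spend"] = data["monthly_spend"].fillna(data["monthly_spend"].median())

# Drop rows that are still unusable after cleaning.
data = data.dropna(subset=["customer_id", "signup_date"])
print(data.info())
```

Real pipelines will be more involved, but even a small script like this surfaces format mismatches and missing values early.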

5. Define the metrics for your model

This is one of the most crucial phases: the subject matter experts (SMEs) define how to validate the AI/ML model's success. There are many metrics available, such as precision, recall, and accuracy. Every use case is different, and selecting the correct validation metric is vital for a successful outcome. A model built for medical diagnosis will have different considerations than one built for spam detection.
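To make this concrete, here is a minimal sketch computing those metrics with scikit-learn. The label arrays are toy values standing in for your model's actual output.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground-truth labels from your SMEs
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

# For a medical-diagnosis model, missing a positive case is costly, so
# recall often matters most; for spam detection, precision may matter
# more, because false positives hide legitimate mail.
print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```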

6. Explore data with SMEs and run experiments

Work with SMEs or domain experts to further understand which data is useful and how to achieve the metrics defined earlier. Experiment with different algorithms and hyperparameters to find the best fit for your pilot's use case. Include stakeholders to gain their buy-in and grow their confidence before the model is used in production.
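A small experiment might look like the sketch below: a cross-validated grid search over hyperparameters for one algorithm, scored with the metric chosen in the previous step. The bundled toy dataset and the parameter grid are illustrative; substitute the data you prepared in step 4.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Hypothetical hyperparameter grid for one candidate algorithm.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, None],
}

# Score with the metric chosen in step 5 (recall here, as an example).
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="recall",
    cv=5,
)
search.fit(X, y)
print("best params:", search.best_params_)
print("best recall:", search.best_score_)
```

Tracking each experiment's parameters and scores, even in a simple spreadsheet, makes it much easier to justify the final choice to stakeholders.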

7. Train and validate your model

For training and validating your model, it is recommended to split your data into three sets: a training set (~70%), a test set (~15%), and a validation set (~15%). Ensure your training set is large enough for the model to produce meaningful results.

The test set is used to measure the accuracy of your model. Only the training and test sets are available to the model-building team (the maker team), to help avoid possible data bias.

The validation set is only available to a different team (the checker team), which validates the model's metrics against it. This maker-checker approach provides an additional way to reduce bias in the model.
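One way to produce the three-way split described above is two successive calls to scikit-learn's train_test_split, as in this sketch (the toy dataset is a placeholder). In the maker-checker approach, only X_train and X_test would go to the maker team.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# First, split off 30% to divide between test and validation.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y
)
# Then split the remainder in half: ~15% test, ~15% validation.
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42, stratify=y_rest
)
print(len(X_train), len(X_test), len(X_val))
```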

8. Implement DevOps and MLOps

Building a model is only half the job; integrating it into your end-to-end lifecycle can be challenging. When data scientists develop models on their laptops, moving those models into production often becomes a significant hurdle. Operations teams and development teams need to collaborate with data scientists in iterative ways. Automation is key, and continuous integration and continuous deployment (CI/CD) can help get the model operational quickly.
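As one possible shape for that integration, here is a minimal serving sketch, assuming a model was trained and saved with joblib as "model.joblib" (a hypothetical artifact name). A CI/CD pipeline could then build, test, and deploy an image containing this service on every model or code change.

```python
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained-model artifact


class Features(BaseModel):
    # Replace with the actual feature names your model expects.
    values: List[float]


@app.post("/predict")
def predict(features: Features):
    # Wrap the single row in a list: the model expects a 2D input.
    prediction = model.predict([features.values])
    return {"prediction": int(prediction[0])}

# If this file were named serve.py, you could run it locally with:
#   uvicorn serve:app --reload
```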

9. Move your model into production

Once the model has been built and thoroughly tested, it is ready for production rollout. If possible, roll it out to a small number of users first and monitor the model's performance over several days. Be sure to include stakeholders and SMEs in these discussions to evaluate results and provide continuous feedback. Once the stakeholders have accepted the model's performance, it can be rolled out to a broader audience.
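A gradual rollout can be as simple as routing a small fraction of requests to the new model while the rest stay on the current one. The sketch below illustrates the idea; the predict_with_canary helper, the model objects, and the 5% fraction are all hypothetical.

```python
import random

CANARY_FRACTION = 0.05  # start with roughly 5% of users


def predict_with_canary(features, current_model, new_model):
    """Send a small share of traffic to the new model and tag each
    result so the candidate's performance can be monitored separately."""
    if random.random() < CANARY_FRACTION:
        return {"model": "candidate",
                "prediction": new_model.predict([features])[0]}
    return {"model": "current",
            "prediction": current_model.predict([features])[0]}
```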

10. Keep your model relevant to the real world

Once your model is in production, you need to continuously monitor its performance and adjust for the current market situation. Changes in market conditions can be triggered by various events.

For example, models in the retail industry may be significantly impacted by events such as Black Friday or holidays like Christmas. After such events, the model may need to be retrained; otherwise, its predictions may become inaccurate and the model outdated. Drift detection is one technique that can be used to monitor current market and world conditions and help you decide when retraining is necessary.
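A basic drift check for a numeric feature might compare its distribution at training time against recent production data, as in this sketch using SciPy's two-sample Kolmogorov-Smirnov test. The synthetic data and the 0.05 threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_spend = rng.normal(100, 15, size=5_000)    # stand-in training data
production_spend = rng.normal(130, 20, size=5_000)  # e.g., a holiday spike

statistic, p_value = ks_2samp(training_spend, production_spend)
if p_value < 0.05:
    print(f"Drift detected (p={p_value:.3g}); consider retraining the model.")
else:
    print("No significant drift detected.")
```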

11. Celebrate success and promote the outcome

Once your pilot project is successful, promote and advertise it within your organization. Use internal newsletters or internal websites, or even consider having the pilot's sponsor email stakeholders to promote the successes. Some organizations use internal webinars demonstrating how the initiative has helped achieve a particular outcome. Use any opportunity to increase the visibility of what you have done, and show off your success!

12. Set up a Community of Practice (CoP)

With interest generated around the pilot, establish a CoP in your organization. In this community, team members can discuss a range of topics related to AI/ML, inform one another of key trends, discuss frameworks, and share details on potential vendors. Regular presentations by this group to the wider organization (such as webinars or lunch and learns) will help grow interest and investment in the community.

13. Consider AI/ML ethics

When rolling out this new technology, anticipate that employees might raise concerns. Will AI/ML technology replace staff? What decisions will the model make, and who will be able to control it? Who will train the model? Is there bias in the data set? What are the implications of a limited data set? How will I decide that the model is ready to go to production, and who will bear the consequences if things go wrong?

To address ethical concerns posed by AI/ML technology, create an internal resource or nominate an informed person as an "AI/ML ethics officer" to answer questions about the technology's impact. This instills confidence and demonstrates that the organization is working to address the concerns of its employees.

Selecting the best tools for your organization

When deciding how best to implement your AI/ML goals, you'll find hundreds of tools, frameworks, and platforms available. It can be challenging to choose a particular set when starting your journey. Always examine what capabilities and skills your existing team has before purchasing new tools. Review your overall IT strategy in terms of open source software versus commercial software, as well as your cloud strategy, if applicable.

These are the essential criteria to weigh in your decision-making process. Before choosing any tool, research its market popularity and the availability of product support. If you rely on contractors, check their availability, ideally in your local region, to reduce overall starting costs.

Always remember, an AI/ML journey will require time, effort, and practice. It's more than a technology change; it's also a new way of working that requires modifying how teams collaborate and improving existing processes and technology. Provide your teams with support, rely on SMEs, ask plenty of questions, and document outcomes to help ensure that your AI/ML journey is ultimately successful.

Next steps

Let us help you take this journey! Through our AI/ML Red Hat Open Innovation Labs residency, we will work with you directly to coach and mentor your team in AI/ML best practices, review and improve your MLOps processes, and deliver a pilot project throughout the residency experience. For more information, please contact us through the form on our Open Innovation Labs page!


About the author

Sandra Arps is the Head of the Open Innovation Labs & AI/ML Practice at Red Hat. In this role, she helps customers recognize the need for innovation in business transformation, product development, and cultural transformation, and helps set up high-performing teams in diverse markets like Australia, India, Indonesia, Japan, China, and Korea.
