This is a guest post by John Andersen, VP Solutions Consulting and Architecture, CognitiveScale.
At the end of the day, what do most AI developers look for? While the answer may vary depending on specific goals and industry focus, most of us just want tools that work and are easy to use. Meet CognitiveScale’s Cortex Fabric, a low-code platform for developing, deploying, and managing trusted, enterprise AI applications. Fabric is powerful because it simplifies the whole process. And, oh yeah, it runs on Red Hat OpenShift.
In this blog (the first in a series), I will provide an overview of Cortex Fabric and its components, so you can see how easy it is to build an AI blueprint, or template, that can be configured to drive a high-value industry-specific business process.
The Power is in the Profile-of-One
The power behind Cortex Fabric is what we call the Profile-of-One (Po1). This unique, patented technology drives hyper-personalization and contextualization for your end users. It makes it simple to build high-fidelity data around any entity, not just a person but anything temporal in nature, such as claims, products, and even locations, which is exactly the kind of data cognitive reasoning needs to generate highly personalized interventions.
How does it do this? To help establish a base understanding of an entity, Cortex Fabric supports over 126 different connectors, with direct integration to Red Hat’s Intelligent Data as a Service (IDaaS) as well as IBM’s Cloud Pak for Data, bringing in the data elements needed to drive these types of interactions. Data sources can span Customer Data Platforms (CDP), Master Data Management (MDM), existing Customer Relationship Management (CRM) systems, or Electronic Medical Records (EMR), and Fabric can also bring in insights from any machine learning model or analytic. This allows applications to provide the right type of intervention, or personalized outreach, to the right entity, at the right time, over the right channel.
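To make the data flow concrete, here is a minimal sketch, in plain Python rather than the Cortex SDK, of how records arriving from two hypothetical connectors (a CRM feed and an EMR feed) could be folded into a single view per entity. The record layouts and field names are illustrative assumptions, not Fabric’s actual connector API.

```python
from collections import defaultdict

# Hypothetical raw records as they might arrive from two different connectors.
crm_records = [
    {"entity_id": "cust-001", "email_opt_in": True},
    {"entity_id": "cust-002", "email_opt_in": False},
]
emr_records = [
    {"entity_id": "cust-001", "last_visit": "2021-03-14"},
]

def merge_into_profiles(*record_streams):
    """Fold records from any number of sources into one view per entity."""
    profiles = defaultdict(dict)
    for stream in record_streams:
        for record in stream:
            entity_id = record["entity_id"]
            # Later sources simply layer their attributes onto the same profile.
            profiles[entity_id].update(
                {k: v for k, v in record.items() if k != "entity_id"}
            )
    return dict(profiles)

print(merge_into_profiles(crm_records, emr_records))
```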
Over the past several years, CognitiveScale has gathered requirements from a variety of organizations across the digital commerce, financial services, and healthcare verticals on what it takes to truly provide a hyper-personalized experience. Through Profile-of-One’s event-driven architecture and scalable data persistence layer, we satisfy fairly strict non-functional requirements for response time and transactions per second, along with the ability to provide real-time insight generation, build profiles around anonymous individuals, and deliver an omni-channel experience.
How does it work? There are eight capabilities that are part of the Profile-of-One (the event-sourcing and versioning ideas are sketched in code after this list):
- Event Based: Backed by an event-sourcing model
- Schema: Records facts, interactions, and insights about an entity
- Versioning: Provides a temporal view of an entity
- Feature Store: Supports training and inference for new machine-learning models
- Auditability: Tracks changes for auditability and compliance purposes
- Visualization: Can be visualized in Cortex or other BI tools
- Feedback and Learning: Facilitates feedback as well as incremental learning
- Metrics and KPIs: Helps track business metrics and KPIs
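The first three capabilities, event sourcing, an entity schema, and versioning, fit together naturally: a profile is an append-only log of facts and interactions, and any historical version of the entity can be rebuilt by replaying that log up to a point in time. Here is a minimal sketch of that pattern in plain Python; it is an illustration of the idea, not the Cortex implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ProfileEvent:
    timestamp: str          # ISO-8601 strings keep the sketch simple
    attribute: str
    value: object

@dataclass
class Profile:
    entity_id: str
    events: list = field(default_factory=list)   # append-only log

    def record(self, timestamp, attribute, value):
        # Nothing is mutated in place, which is what makes the log auditable.
        self.events.append(ProfileEvent(timestamp, attribute, value))

    def as_of(self, timestamp):
        """Temporal view: replay events up to a point in time."""
        state = {}
        for event in sorted(self.events, key=lambda e: e.timestamp):
            if event.timestamp <= timestamp:
                state[event.attribute] = event.value
        return state

p = Profile("cust-001")
p.record("2021-01-05", "credit_score", 710)
p.record("2021-03-20", "credit_score", 685)
print(p.as_of("2021-02-01"))   # {'credit_score': 710}
print(p.as_of("2021-04-01"))   # {'credit_score': 685}
```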
Along with the capabilities listed above, the Profile-of-One has a series of out-of-the-box prediction engines that make it easy to quickly derive insights without having to write any code. The six prediction engines that derive these inferences are: a) classification, b) recommendation, c) forecasting, d) similarity, e) segmentation, and f) enrichment.
Don’t have data available to start constructing profiles? Don’t worry. Cortex has an SDK that makes it easy to generate synthetic data that mirrors the behavioral characteristics of your real data. This is all done through a simple yet powerful set of capabilities that help create a veracity model of weighted patterns, correlation bias, advanced logic gates, and causal markers.
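As a rough illustration of what weighted patterns and correlation bias mean in practice, the following sketch generates a small synthetic population in plain Python. The channel weights, age ranges, and opt-in probabilities are invented for the example and are not drawn from the Cortex SDK’s actual veracity model.

```python
import random

random.seed(7)  # reproducible sketch

CHANNELS = ["email", "sms", "push"]
CHANNEL_WEIGHTS = [0.6, 0.3, 0.1]   # weighted pattern: an email-heavy population

def synthetic_profile(i):
    channel = random.choices(CHANNELS, weights=CHANNEL_WEIGHTS, k=1)[0]
    age = random.randint(18, 80)
    # Correlation bias: older entities are less likely to opt in to push messages.
    opt_in_push = random.random() < (0.8 if age < 40 else 0.4)
    return {
        "entity_id": f"synthetic-{i:04d}",
        "preferred_channel": channel,
        "age": age,
        "push_opt_in": opt_in_push,
    }

population = [synthetic_profile(i) for i in range(1000)]
print(population[0])
```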
Goal-Driven AI Campaigns
Fabric provides open and extensible building blocks to convert any data and models into action within two weeks. Once several Profile-of-One profiles have been created, AI developers, along with business analysts and domain experts, can construct a new class of goal-driven AI applications called AI Campaigns. An AI Campaign provides a framework for tracking business KPIs by defining cohorts, goals, and interventions (for example, recommendations).
Building an AI Campaign in Fabric: Cohorts, Goals and Missions
Building an AI campaign in Fabric is pretty straightforward. The first step is to define your Cohort by targeting profiles with shared characteristics. This segment, or population of entities, is created through a simple typescript that filters profiles. Cohorts can be identified by anything from the output of a machine learning model (for example, all people with a propensity to convert greater than 0.8) to analytics across changes in data (for example, people whose credit score has dropped by 25 points in the last 60 days).
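The filter logic behind both of those examples is simple enough to show in a few lines. The sketch below uses plain Python rather than Fabric’s own filter script, and the profile fields (propensity_to_convert, credit_score_history) are hypothetical names chosen only for the illustration.

```python
from datetime import date, timedelta

def in_cohort(profile, today=date(2021, 6, 1)):
    """Hypothetical cohort filter combining a model output and a data change."""
    # Condition 1: model-driven, propensity to convert above 0.8.
    high_propensity = profile.get("propensity_to_convert", 0.0) > 0.8

    # Condition 2: data-driven, credit score dropped by 25+ points in 60 days.
    # The history is assumed to be a chronological list of (date, score) pairs.
    history = profile.get("credit_score_history", [])
    recent = [score for d, score in history if d >= today - timedelta(days=60)]
    score_drop = len(recent) >= 2 and (recent[0] - recent[-1]) >= 25

    return high_propensity or score_drop

profile = {
    "entity_id": "cust-001",
    "propensity_to_convert": 0.55,
    "credit_score_history": [(date(2021, 4, 10), 710), (date(2021, 5, 25), 680)],
}
print(in_cohort(profile))   # True, because of the 30-point score drop
```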
The second step is to establish a goal. Goals are KPIs or business objectives that are tracked throughout the life of a campaign, and they can be configured through the same typescript language used to identify a cohort. When creating a goal, you identify the baseline, the target or end state of the application, how long the KPIs should be tracked, and how often feedback should be measured. Goals are inherently important for the next step, which is the development of a Mission.
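A goal therefore reduces to a small amount of configuration plus a way to measure progress against it. Here is a hedged sketch of that structure in plain Python; the field names and the progress calculation are assumptions made for the example, not Fabric’s goal schema.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """Hypothetical goal definition: baseline, target, tracking window, cadence."""
    kpi: str
    baseline: float
    target: float
    duration_days: int        # how long the KPI is tracked
    feedback_every_days: int  # how often feedback is measured

    def progress(self, current_value):
        """Fraction of the way from baseline to target, clamped to [0, 1]."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (current_value - self.baseline) / span))

conversion_goal = Goal(
    kpi="conversion_rate",
    baseline=0.04,
    target=0.06,
    duration_days=90,
    feedback_every_days=7,
)
print(conversion_goal.progress(0.05))   # about 0.5, halfway to the target
```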
The last component in an AI campaign is the creation of a Mission, which defines the events that take place to help achieve the predefined goal. Missions allow you to configure an intervention, that is, a recommendation or outreach to a specific profile, and it is through the orchestration of interventions that missions deliver an omni-channel experience to their target audience. Beyond being able to configure all the different ways of reaching individuals, the most impactful feature of a mission is the ability to simulate the effectiveness of the application before deploying it.
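At its simplest, orchestrating interventions means deciding which outreach each profile should receive on which channel. The sketch below shows one hypothetical rule (prefer the profile’s preferred channel) in plain Python; real missions in Fabric are configured rather than hand-coded like this.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    channel: str          # e.g. "email", "sms", "push"
    message: str

def choose_intervention(profile, interventions):
    """Hypothetical orchestration rule: prefer the profile's preferred channel,
    fall back to the first intervention otherwise."""
    preferred = profile.get("preferred_channel")
    for intervention in interventions:
        if intervention.channel == preferred:
            return intervention
    return interventions[0]

mission_interventions = [
    Intervention("retention-offer-email", "email", "20% off your next renewal"),
    Intervention("retention-offer-sms", "sms", "Reply YES for 20% off"),
]
profile = {"entity_id": "cust-001", "preferred_channel": "sms"}
print(choose_intervention(profile, mission_interventions).name)  # retention-offer-sms
```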
If you reflect on how traditional omni-channel applications are validated, you will most likely think of predefined rules for journey orchestration or a series of A/B tests to measure the effectiveness of a particular channel. The problem with these approaches is that they lengthen time to value, often stretching the verification period to several months.
Imagine an alternative where you could begin evaluating the effectiveness of different types of personalized outreach before ever deploying them. This is the power that mission simulation offers: it lets domain experts fine-tune what the omni-channel experience should look like and determine with confidence which series of events will accelerate and drive the most value toward the overall goal of the AI Campaign.
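Conceptually, a simulation of this kind amounts to running the mission against the cohort many times under assumed response rates and comparing the outcomes. The Monte Carlo sketch below illustrates the idea in plain Python; the response rates and channel mixes are invented inputs, and Fabric’s simulator is far richer than this.

```python
import random

random.seed(42)

# Assumed (not measured) response rates per channel -- the whole point of a
# simulation is to explore these before anything is deployed.
ASSUMED_RESPONSE_RATE = {"email": 0.05, "sms": 0.12, "push": 0.08}

def simulate_mission(cohort_size, channel_mix, trials=200):
    """Monte Carlo estimate of conversions for a given channel mix.

    channel_mix maps channel -> fraction of the cohort reached on that channel."""
    results = []
    for _ in range(trials):
        conversions = 0
        for channel, fraction in channel_mix.items():
            reached = int(cohort_size * fraction)
            rate = ASSUMED_RESPONSE_RATE[channel]
            conversions += sum(1 for _ in range(reached) if random.random() < rate)
        results.append(conversions)
    return sum(results) / len(results)

print(simulate_mission(5000, {"email": 0.7, "sms": 0.3}))   # email-heavy mix
print(simulate_mission(5000, {"email": 0.3, "sms": 0.7}))   # sms-heavy mix
```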
Agent Composer
Solving the “Last Mile” of AI Development
Agent Composer is a visual workbench that offers a solution to the “Last Mile” problem by automating the messaging and connections to external systems identified through the Profile-of-One and AI Campaigns. Agent Composer orchestrates atomic assets called Skills and composes them into more intelligent insights that feed into an existing UI.
A Cortex Skill is an abstract representation of a functional component that can be a machine learning model, a rules engine, or even an RPA bot. The Cortex Skills specification has been open sourced and can be found here. Cortex has a series of out-of-the-box skills available for use without any coding required. And, being an open development environment, Cortex has SDKs, CLIs, APIs, and plug-ins available to take any existing asset (for example, a machine learning model) and consistently and repeatedly publish it into the Cortex Trusted AI Hub, a marketplace/repository that makes all of these skills discoverable for the Agent Composer to use.
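Conceptually, a skill is a named, declared, callable unit, and an agent is an ordered composition of skills over a shared payload. The sketch below expresses that idea in plain Python; the Skill fields and the propensity-scorer example are assumptions made for illustration and do not follow the open-sourced Skills specification.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """Illustrative stand-in for a skill: a named, declared, callable unit."""
    name: str
    inputs: list
    outputs: list
    invoke: Callable[[dict], dict]

def score_propensity(payload: dict) -> dict:
    # Stand-in for a real model; any callable could sit behind a skill.
    score = 0.9 if payload.get("recent_visits", 0) > 3 else 0.2
    return {"propensity_to_convert": score}

propensity_skill = Skill(
    name="propensity-scorer",
    inputs=["recent_visits"],
    outputs=["propensity_to_convert"],
    invoke=score_propensity,
)

def run_agent(skills, payload):
    """An agent is just an ordered composition of skills over a shared payload."""
    for skill in skills:
        payload.update(skill.invoke(payload))
    return payload

print(run_agent([propensity_skill], {"entity_id": "cust-001", "recent_visits": 5}))
```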
Finally, promoting the full bill of materials that makes up an AI Application can be difficult. Cortex Fabric packages all of the components of an AI Application (for example, datasets, configurations, models, and APIs) and seamlessly publishes them across different logical environments, from development to stage and eventually production, significantly reducing the complexity of DevOps cycles and accelerating the operationalization of all outcomes driven through AI.
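To picture what such a package might contain and how it would move between environments, here is a small hypothetical sketch in plain Python. The manifest layout, component names, and the promote function are illustrative assumptions, not Fabric’s packaging format or deployment API.

```python
import json

# Hypothetical bill of materials for one AI application.
manifest = {
    "application": "retention-campaign",
    "version": "1.3.0",
    "components": {
        "datasets": ["customer-profiles-v7"],
        "models": ["propensity-scorer:2.1"],
        "configurations": ["cohort-filter.yaml", "goal-definition.yaml"],
        "apis": ["interventions/v1"],
    },
}

ENVIRONMENTS = ["development", "stage", "production"]

def promote(manifest, environments):
    """Publish the same immutable package to each logical environment in order."""
    for env in environments:
        # In practice this would call a deployment API; here we just serialize it.
        print(f"publishing {manifest['application']} {manifest['version']} to {env}")
        _ = json.dumps(manifest)

promote(manifest, ENVIRONMENTS)
```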