
Data can be a key differentiator between one financial institution’s products and another’s. It has solidified its place at the core of building offerings that target individual customer needs, rather than treating each customer as just a member of a segment. But financial services organizations may have to up their data game to deliver the smart, innovative solutions and services that their customers expect.

In recent years, digital interactions have improved with the adoption of practices such as agile development. Delivering each incremental change faster has created a level of responsiveness that is now being applied in new business contexts.

Now the industry is taking things a step further, moving from agile development practices focused primarily on the logic and software that manipulate data to ones that prioritize data-generated logic. Data is as important as code and should be treated as such. A financial company that manages data and data models as well as it manages source code may find it easier to innovate faster and better differentiate its services.

Orchestrating data and data models along with application logic is one means of building high-value apps. Training data and source code are the two basic inputs to intelligent apps, which are built to better contextualize customer interactions and meet customer needs. The goal is to personalize each interaction, even predicting what a customer’s needs might be in order to proactively recommend well-suited communications.

In other words, it concerns optimizing the services provided to clients, predicting what their needs will be to attract, retain, and engage them. And these same principles apply to internal audiences as well, better serving their information needs within the firm.

Financial services companies can consider using open source as they pursue this optimization. Machine learning (ML) and deep learning methodologies are naturally suited to building models from substantial sources of data, and these compute-intensive, advanced applications often run in Linux environments and are distributed as containers orchestrated by Kubernetes.

Open Data Hub: A spur to developing intelligent applications

Making data widely available to craft better customer experiences can, however, be a challenge: it increases demand for the data that analysts and data scientists need to build models.

To help with this, Red Hat has created Open Data Hub, an open source community project that streamlines machine learning pipelines and helps firms move beyond their current limitations.

Open Data Hub provides open source AI tools for running large, distributed workloads on OpenShift Container Platform. These tools can help financial services companies experiment with and develop intelligent applications without having to master the complexity of ML and artificial intelligence (AI) software.

This could allow users to create models for things like pricing, product-adoption propensity, and anti-money laundering, among many other applications. We are also working with our partners to build better toolkits for collecting data and for streamlining activity through to deployment.
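As an illustrative sketch of one of these use cases, the short Python example below trains a product-adoption propensity model with scikit-learn. Everything here — the feature names, the synthetic data, and the model choice — is a hypothetical placeholder for illustration, not part of Open Data Hub itself or any specific Red Hat tooling.

```python
# A minimal propensity-model sketch using synthetic data.
# Feature names (tenure, balance, products held) are assumed
# placeholders, not a real customer schema.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical customer features.
X = np.column_stack([
    rng.uniform(0, 20, n),       # tenure in years
    rng.normal(5000, 2000, n),   # average monthly balance
    rng.integers(1, 6, n),       # number of products already held
])

# Synthetic label: adoption likelihood rises with tenure and products held.
logits = 0.15 * X[:, 0] + 0.5 * X[:, 2] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Propensity scores: estimated probability each customer adopts the product.
scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice, a platform like Open Data Hub is where pipelines of this shape get their data access, distributed training, and deployment path; the modeling step itself stays this simple in outline.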

Open Data Hub is built on Red Hat’s Kubernetes-based OpenShift Container Platform, Ceph object storage, and Kafka/Strimzi. We invite you to join us in the Open Data Hub community. For more information about our direction in this space, check out Chris Wright’s keynote at Red Hat’s digital leadership in financial services virtual event, available on demand, where he discusses advancements in AI and ML and how we are pushing the range and scope of insights that financial services firms can derive from data.


About the author

Described as a pioneer and one of the most influential people by CRMPower, Fiona McNeill has worked alongside some of the largest global organizations, helping them derive tangible benefit from the strategic application of technology to real-world business scenarios.

During her 25-year professional tenure, she has led teams, product strategy, marketing, and consulting across a wide range of industries, while at SAS, IBM Global Services, and others. McNeill co-authored Heuristics in Analytics with Dr. Carlos Andre Pinheiro, has previously published in both academic and business journals, and has served on the board of the Cognitive Computing Consortium. She received her M.A. in Quantitative Behavioral Geography from McMaster University and graduated with a B.Sc. in Bio-Physical Systems from the University of Toronto.

