It’s been the stuff of science fiction and our imagination for years, but artificial intelligence (AI) is finally getting its due. Already used to generate and understand human speech, drive autonomous cars, and interpret highly complex data, AI has the potential to revolutionize every aspect of our lives. But that’s only half the story. While considerable attention has been paid to the algorithms and analytics that will make up these applications, a question remains: what ecosphere will support this new form of life?
AI applications won’t find purpose or meaning without data. They will need an environment to draw resources from, and one that enables freedom of mobility. And while there are a number of innovative technologies influencing AI’s ascent, I believe mobile networks could be the perfect medium for AI applications to live in and traverse the globe, thereby enabling even more capabilities and services.
Call it an AI ocean – an ecosphere essential to the inception, development and ongoing support of discrete units of intelligence. In that ocean are the mobile devices on which both simple and complex AI applications can live – a striking parallel to organisms living in a physical ocean. And given the pervasive nature of mobile communications, AI applications will then have access to every corner of the globe. By the way, a shameless but accurate plug: Red Hat’s various open source platforms can serve as the foundations of this new ocean (more on that later).
Let’s consider how we arrived at the notion of an AI ocean by first looking at the evolution of life. Billions of years ago, single-celled life began to emerge in our primordial oceans. These cells eventually evolved into multi-cellular organisms, and their architectures grew more complex. Over time, invertebrates emerged and flourished, with a wide assortment of functions and capabilities, but all remained fully dependent on the environment that gave them life and sustenance.
A similar pattern has played out in the computing world, where simple programs have grown larger and more intricate over time, yet remain fully dependent on the compute and storage environments in which they reside. IT mimics human architecture: servers and storage, for example, are primitive representations of the human mind’s core functions. And the human body itself is an environment, with various cells, viruses and bacteria traveling and residing within its physical boundaries. Various IT subcomponents (memory, CPU, etc.) and architectures (virtualization, network topologies, etc.) may have developed separately, but they are now slowly coalescing into a sophisticated ecology, or "body," that will support AI elements. Virtual machines and container architectures enable this next step by unchaining software-based applications from their physical, server-based limitations so they can move in a predetermined manner within a virtualized IT network. AI applications draw energy and “nutrients” from the platforms that provide them with elemental resources: electricity, underlying processing capability, and data. Expanding on that, global cloud federation will unify network-connected compute and memory resources into a single large-scale image, yet another step in the formation of an AI ecosphere.
Overall, the virtualization and containerization of applications has given AI applications an unprecedented level of mobility and independence. They can now reside and thrive on a rapidly growing number of platforms and devices, and move among those platforms and devices as needed. The rudimentary aspects of AI social interaction will soon grow in sophistication and complexity as these self-learning applications become more powerful.
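To make that mobility concrete, here is a rough sketch in Python of what a host-agnostic AI service might look like: everything it needs arrives via environment variables and the network, so a container platform could schedule it on any node and relocate it as demand shifts. The variable names and the toy scoring function are placeholders of my own, not a real model.

```python
# Minimal sketch: a host-agnostic "AI" service that a container platform
# could schedule on any node and move as needed. MODEL_VERSION, PORT and
# the toy scoring logic are illustrative placeholders.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_VERSION = os.environ.get("MODEL_VERSION", "demo-0.1")
PORT = int(os.environ.get("PORT", "8080"))

def score(features):
    # Stand-in for real inference: a weighted sum of the inputs.
    return sum(0.5 * x for x in features)

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = {"model": MODEL_VERSION, "score": score(payload.get("features", []))}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Nothing here refers to a specific host: configuration comes from the
    # environment, inputs and outputs travel over the network.
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```

Because nothing in the code is tied to a particular server, an orchestration layer is free to run many copies, stop them, and restart them elsewhere, which is exactly the kind of freedom of movement the ocean metaphor depends on.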
So, back to mobile devices. These are more than communications devices; they are multifaceted neurons with a number of powerful senses, including auditory, visual, and verbal. They can process and interpret events at a rudimentary level. They have considerable storage and processing power. And AI applications could roam on these mobile device neurons at will, with little or no latency.
Already, on a smaller scale, mobile devices are interactively monitoring vital signs and biochemical markers to identify early-stage illnesses and help prolong life. Consider specialized AI applications that could travel the network to remote locations to assist nurses and physicians with everything from real-time guidance during surgeries to individually tailored treatment plans. Or, on an even larger scale, consider a visual timeline of history created from the billions of photographs and videos, both stored and in real time, on mobile devices. They could all be catalogued through AI facial and structural recognition, cross-referenced by time and date stamps and satellite locations, and correlated with names and identifiers through database or photo mining. The timeline could then be transposed into a virtual reality format, allowing one to return to and accurately experience human history at any given place and time.
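The cataloguing step of that timeline is already within reach of ordinary code. Here is a rough Python sketch that assumes facial recognition and EXIF extraction have already produced per-photo metadata, and simply groups photos into time-and-place buckets; the record fields, grid size and sample data are placeholders, not real photos or people.

```python
# Minimal sketch of the cataloguing step: given per-photo metadata that
# recognition and EXIF extraction have already produced, build a timeline
# index keyed by time and place. Fields, grid size and sample data are
# illustrative placeholders.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class PhotoRecord:
    photo_id: str
    taken_at: datetime   # device clock / EXIF timestamp
    lat: float           # device's satellite (GPS) fix
    lon: float
    people: List[str]    # names matched by facial recognition

def timeline_index(records, grid=0.1):
    """Group photos into (rounded location, hour) buckets, oldest first."""
    buckets = defaultdict(list)
    for r in sorted(records, key=lambda r: r.taken_at):
        cell = (round(r.lat / grid) * grid, round(r.lon / grid) * grid)
        hour = r.taken_at.replace(minute=0, second=0, microsecond=0)
        buckets[(cell, hour)].append(r)
    return buckets

# Toy usage: two photos taken near the same place within the same hour end
# up in one bucket that a virtual-reality layer could replay together.
records = [
    PhotoRecord("img_001.jpg", datetime(2016, 11, 10, 9, 15), 37.77, -122.42, ["A. Example"]),
    PhotoRecord("img_002.jpg", datetime(2016, 11, 10, 9, 40), 37.79, -122.40, []),
]
for (cell, hour), photos in timeline_index(records).items():
    print(hour.isoformat(), cell, [p.photo_id for p in photos])
```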
As I’ve said, Red Hat’s various open source platforms (its operating systems, orchestration/virtualization frameworks, automation capabilities and middleware) are all well suited as foundational layers for the AI ocean. Open source systems aren’t bound to the whims of a single entity; they are developed, managed and advanced by communities, and they are flexible and scalable. AI applications will require a memory footprint to survive, and the provisioning of memory instances (large enough to hold multiple AI applications as they queue and await assignment) will need to be spread across the network in a low-cost, high-performance manner.
Some of the initial applications will resemble today’s business intelligence software, particularly the subset of functions dedicated to analytics. In some cases, these programs run in a constant "high idle" state, ready to quickly consume and quantify data as it arrives, and that standby posture alone places an ongoing burden on compute resources. Much of this can be handled today, but bandwidth and the underlying hardware will likely require ongoing improvement as these complex applications come to reside and move within the network en masse.
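For what it’s worth, the "high idle" pattern is easy to picture in code. The sketch below (Python, standard library only) keeps an analytics worker resident and blocked on an incoming queue so it can process events the instant they arrive; the event fields and the running-total "analytics" are placeholders for real models.

```python
# Minimal sketch of the "high idle" pattern: a resident analytics worker
# blocks on an incoming queue and consumes data the moment it arrives.
# The event fields and the running total are illustrative placeholders.
import queue
import threading
import time

events = queue.Queue()

def analytics_worker():
    running_total = 0.0
    while True:
        event = events.get()      # blocks (idles) until data arrives
        if event is None:         # shutdown sentinel
            break
        running_total += event.get("value", 0.0)
        print(f"consumed event {event['id']}, running total = {running_total}")
        events.task_done()

worker = threading.Thread(target=analytics_worker, daemon=True)
worker.start()

# Simulate data trickling in from the network; the worker holds a thread
# and its working memory the whole time, even while nothing is arriving.
for i in range(3):
    time.sleep(0.5)
    events.put({"id": i, "value": 1.5 * i})

events.put(None)   # tell the worker to stop
worker.join()
```

Multiply that always-on footprint by thousands of roaming applications and the pressure on bandwidth and hardware becomes clear.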
Of course, AI applications will need to be grounded in practicality. Mobile device architectures will need to be modified to enable AI applications, and in turn, advanced security mechanisms will be required. Best practices and processes will need to be established for federated global clouds and to mitigate the implications of roaming AI applications. Cross-border data provisions at the international policy level will also need to be taken into account, as some AI applications might be considered illegal and prohibited from traveling into certain nations and data centers.
Finally, a variety of ethical considerations will also come into play before these applications are given full freedom of navigation. A concerted effort will be needed to imbue the more advanced and powerful applications with the best aspects of humanity, including protective instincts for the preservation of human life and strong elements of kindness and compassion.
It may be early days for the AI ocean, but all the pieces are coming together. I’d really like to hear your thoughts on all of this. Also, I’ll be attending The Machine Learning Conference in San Francisco on Nov. 10, and would love to discuss this in person. Hope to see you there!