Virtual event

Develop. Deploy. Deliver continuously.

October 10, 2019, 15:00 UTC

A virtual event by app makers for app makers.

Growing cloud adoption has created demand for applications optimized for cloud computing infrastructure. Cloud-native development helps teams speed up the development and deployment cycle and respond faster to changing conditions.

During this virtual event, we’ll discuss how cloud-native application development is evolving to make full use of the scale multicloud and hybrid cloud infrastructures can provide. With experts as your guide, take a technical dive into containers, microservices, API-driven integration, and DevOps automation. Learn how to combine these with sharp developer focus to produce lasting application architectures that can help you quickly get to market.


Live event date: Thursday, October 10, 2019 | 11 a.m. ET 

On-demand event: Available for one year afterward.

Brian Gracely, Director, Product Strategy, Red Hat

Mike Piech, Vice President and General Manager, Middleware, Red Hat

Innovating in a hybrid business world

Brian Gracely, Director, Product Strategy, Red Hat

It's been nearly a decade since software began eating the world and developers became the new kingmakers. But app makers are still frustrated: they can't build fast enough or deploy fast enough, and they still have to worry about other layers of the stack. In this keynote, we'll discuss why companies face hybrid opportunities and challenges at the business level, and how this impacts app makers. We'll also highlight how Red Hat is bringing together technology, innovation, and culture to remove friction for app makers in ways that will help them succeed with existing and future applications.

Cloud-native development with Red Hat Middleware

Mike Piech, Vice President and General Manager, Middleware, Red Hat

Modern business requires the ability to roll out functionality to customers and employees faster than ever before, yet still demands extreme reliability in service delivery. While new technologies offer unprecedented opportunities to boost productivity, wholesale platform changes are rarely possible, and continuity remains critical.

In this talk, we'll provide an overview of key architecture strategies that can boost developer productivity, improve robustness, and enable long-term evolution of IT environments. Topics will include the impact of containerization, APIs, next-generation integration, process automation, and development processes. We'll also walk through examples that show how this kind of application environment flexibility can have a major impact on business outcomes.

Designing applications with Kubernetes patterns

Bilgin Ibryam, Principal Architect, Red Hat

The way we design, develop, and run applications on cloud-native platforms such as Kubernetes differs significantly from the traditional approach. When working with Kubernetes, there are fewer concerns for developers to think about, but at the same time, there are new patterns and practices for solving everyday challenges. In this talk, we'll look at a collection of common patterns for developing cloud-native applications. These patterns encapsulate proven solutions to common problems and help you avoid reinventing the wheel. We'll look at the following pattern categories (a code sketch follows the list):

- Foundational patterns, describing a number of fundamental principles that containerized applications must comply with in order to become good cloud-native citizens.
- Structural patterns, focused on organizing containers in a Pod to satisfy different use cases.
- Behavioral patterns, exploring the communication mechanisms and interactions between Pods and the managing platform.
- Configuration patterns, focused on customizing and adapting applications with external configurations for different environments.

In the end, you will have a solid overview of how common cloud-native problems can be solved with proven Kubernetes patterns.
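
As a taste of the structural category, here is a minimal sketch of the Init Container pattern, written with the fabric8 Kubernetes client for Java. This is illustrative only: the talk may well use plain YAML manifests, and the pod, container, and image names below are hypothetical.

```java
// Structural pattern sketch: an init container that must run to
// completion before the application container starts.
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

public class InitContainerPattern {
    public static Pod orderServicePod() {
        return new PodBuilder()
            .withNewMetadata()
                .withName("order-service") // hypothetical name
            .endMetadata()
            .withNewSpec()
                // Init container: block until the database's DNS name resolves.
                .addNewInitContainer()
                    .withName("wait-for-db")
                    .withImage("busybox:1.31")
                    .withCommand("sh", "-c", "until nslookup orders-db; do sleep 2; done")
                .endInitContainer()
                // The application container itself.
                .addNewContainer()
                    .withName("order-service")
                    .withImage("quay.io/example/order-service:1.0")
                .endContainer()
            .endSpec()
            .build();
    }
}
```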

Simplify your API strategy with Istio

Nicolas Masse, Technical Marketing Manager - 3scale, Red Hat

With the advent of microservices architecture, the number of APIs has skyrocketed: companies that once dealt with tens of APIs now have to deal with hundreds or thousands. Discover how a service mesh such as Istio can complement your API strategy and extend your possibilities. In this session, you will learn:

- The difference between an API management solution and a service mesh
- How to position the two
- The benefits of having API management and a service mesh integrated together

You will also see a demo of an API deployed in a service mesh (Istio) and managed by 3scale. Key takeaways:

- Service mesh and API management fit nicely together.
- The value proposition of the 3scale Istio adapter is: "upgrade a service from your mesh to a full-fledged API."
- The underlying technology is real and working.

Integration patterns in a serverless world

Claus Ibsen, Senior Principal Software Engineer, Red Hat

Cloud-native applications of the future will consist of hybrid workloads: stateful applications, batch jobs, microservices, and functions, wrapped as Linux containers and deployed via Kubernetes on any cloud. In this session, we'll explore the key challenges with function interactions and coordination, addressing these problems using classic integration patterns and modern approaches with the latest innovation from the Apache Camel community: Camel K, a lightweight integration platform that enables enterprise integration patterns to be used natively on any Kubernetes cluster. When used in combination with Knative, a framework that adds serverless building blocks to Kubernetes, and the subatomic execution environment of Quarkus, Camel K can mix serverless features such as auto-scaling, scaling to zero, and event-based communication with the outstanding integration capabilities of Apache Camel. We will show how Camel K works. We'll also use examples to demonstrate how Camel K makes it easier to connect cloud services or enterprise applications using some of the 250+ components that Camel provides.
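
To make the Camel K model concrete, here is a minimal sketch of an integration written as a plain Java route. Assuming the kamel CLI is installed and pointed at a Kubernetes cluster, a file like this can be deployed with `kamel run TimerToLog.java`; the endpoints are illustrative.

```java
// A Camel K integration sketch: fire a timer every 5 seconds and log
// a message -- the "hello world" of enterprise integration patterns.
import org.apache.camel.builder.RouteBuilder;

public class TimerToLog extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:tick?period=5000")
            .setBody().constant("Hello from Camel K")
            .to("log:info");
    }
}
```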

Microservices and functions with Red Hat OpenShift

Marius Bogoevici, Principal Specialist Solutions Architect, Red Hat

The key to modern application development is delivering value quickly while keeping development and operations costs under control. Often, this balance involves a trade-off between focusing on experimentation and dealing with unpredictable loads (where functions shine) and focusing on predictable performance and operating costs (where microservices are a better answer). The immediate answer is a mix-and-match approach, but that can't happen by naively combining disjointed technologies and platforms. In this session, we'll demonstrate how the Kubernetes ecosystem, and in particular Red Hat OpenShift, allows you to use both microservices and functions cohesively by taking advantage of the underlying platform and layering technologies, such as Istio and Knative, on top of it. This session will introduce the technologies, compare and contrast microservices and functions, point out which use cases are best served by each, and provide developers with practical guidance and demos on how to take advantage of both in their applications.

Building microservices on Azure with Java and MicroProfile

Brian Benz, Senior Cloud Advocate, Microsoft & James Falkner, Technical Marketing Manager, Red Hat

This session explores the world of Java, Quarkus, and Eclipse MicroProfile and their combined strengths on the Microsoft Azure platform. Hear from the experts how adopting cloud-native open source projects and open standards, while using the tools, APIs, and Java specifications developers already know, can help you achieve superior productivity. This session includes an overview of MicroProfile and demos of how to put it into practice in the cloud using Red Hat OpenShift and Microsoft Azure.
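
As a flavor of what MicroProfile code looks like, here is a minimal sketch of a JAX-RS endpoint using MicroProfile Config; the property name and default value are hypothetical. The same class runs unchanged on MicroProfile-compatible runtimes such as Quarkus.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/hello")
@ApplicationScoped
public class HelloResource {

    // Resolved at startup from system properties, environment variables,
    // or microprofile-config.properties.
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String greeting;

    @GET
    public String hello() {
        return greeting + " from MicroProfile";
    }
}
```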

Cloud-native development at local speed

Jan Kleinert, Developer Advocate, Red Hat & Jorge Morales, Principal Product Marketing Manager, Red Hat

The popularity of cloud-native applications, along with the pressure to build faster, has led to sweeping changes in the software engineering field, and to the rise of DevOps practices. However, deploying applications to the cloud has brought a host of concerns that slow down developers. This problem is highlighted during the write-deploy-test phase of the development cycle, also known as the inner loop, when applications are deployed in an environment similar to production to test them in real-world conditions. Converting applications to a set of linked services, packaging them into containers, and instructing the target cluster to deploy the application (and its dependencies) are all important considerations. Moreover, Kubernetes, the de facto orchestrator on which cloud applications run, brings its own concepts that need to be understood, while not being essential to the core functionality of applications. All of this increases the development effort just to get an app up and running, not to mention the slowness of the process itself as containers are built and then deployed.

What can be done to improve the day-to-day experience of developers targeting Kubernetes clusters? What can make this inner loop faster and bring the focus back on code? In this session, we'll look at the friction points that slow development early in a project, and then we'll see where things can be improved.

Future-proof monolithic applications with modular design

Eric Murphy, Architect, Application Practice, Red Hat & Ales Nosek, Container Application Architect, Red Hat

When building an MVP software application, you may immediately jump to a microservices architecture because it's the new norm for building cloud-native applications. You may also be skeptical about starting with a monolith MVP because of the stigma that monoliths are a relic of the past. We'll buck the microservices trend by showing how to evolve a monolith MVP in a highly controlled way using modular design principles. We'll end by demonstrating a future-proof Quarkus + Vert.x application that can run both as a monolith and as microservices, using the same code and modular design.
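
A hypothetical sketch of the underlying idea: callers depend only on a module's interface, so the same business logic can be wired in-process (monolith) or behind a remote client (microservice) without changing the callers.

```java
// The module boundary: everything outside the module depends only on this.
public interface OrderService {
    String placeOrder(String item);
}

// Monolith wiring: the module lives in the same JVM.
final class LocalOrderService implements OrderService {
    @Override
    public String placeOrder(String item) {
        return "order-accepted:" + item;
    }
}

// Microservice wiring: same interface, backed by a remote call
// (the HTTP client is omitted here for brevity).
final class RemoteOrderService implements OrderService {
    private final String baseUrl;

    RemoteOrderService(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public String placeOrder(String item) {
        // e.g., POST item to baseUrl + "/orders" with a Vert.x web client
        throw new UnsupportedOperationException("remote call omitted");
    }
}
```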

Quarkus: Container-native Java

Emmanuel Bernard, Consulting Software Engineer, Red Hat

The traditional Java VM (HotSpot), for all its strengths, is at a disadvantage in a Kubernetes world that demands fast startup times and low memory usage (for higher cluster density). Historically, Java has been optimized for throughput and high dynamism at the expense of boot time and memory usage (JIT warm-up, a high fixed memory cost, Java frameworks that rely heavily on reflection and metadata gathering at startup, and so on). Enter Quarkus and GraalVM, an ahead-of-time, closed-world approach to immutable deployments that is rocking the Java ecosystem to its core. Come discover how they can enable Java applications to rival the startup time and density of Golang apps, while still allowing you to leverage the benefits of the entire Java ecosystem.
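
For context, a minimal Quarkus REST endpoint looks like ordinary JAX-RS code; the class below is a sketch, not taken from the talk. In a standard Quarkus project it can be compiled to a GraalVM native binary with `./mvnw package -Pnative`, and the resulting executable typically starts in tens of milliseconds.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/hello")
public class GreetingResource {

    @GET
    public String hello() {
        return "hello";
    }
}
```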

Scaling DevOps for hybrid cloud

Steve Speicher, Senior Principal Product Manager, Red Hat

When development (dev) and operations (ops) get together, good things happen for the business. Often associated with containers, microservices, and public clouds, DevOps is first and foremost a cultural transformation focused on collaboration facilitated by automation. DevOps methodologies help developers and IT operations teams break down silos by aligning on standard configurations, security profiles, SLAs, and self-service provisioning policies. Automation eliminates operational friction and frees developers to rapidly develop, test, and release applications. Similarly, because apps are built on standard platforms, IT operations is able to provision and scale resources on demand, as needed—regardless of whether apps run on virtualized servers, private clouds, container platforms, or public clouds. In this session, geared toward developers and IT operations leaders, you'll learn how to simplify and automate DevOps security and operations at scale using Red Hat Ansible Automation with Red Hat OpenShift Container Platform.

Implementing DevSecOps: Lessons learned

William Henry, Senior Distinguished Engineer, Portfolio Architectures, Red Hat & Lucy Kerner, Senior Principal Security Global Technical Evangelist and Strategist, Red Hat

Security doesn't happen in one place in the infrastructure or application life cycle. Instead, security must be addressed continuously across the application pipeline, infrastructure pipeline, and the supply chain. And all of these areas need to be continuously monitored. In this session, we'll:

- Discuss how developers, operators, and security teams can achieve DevSecOps through automation, standardization, everything-as-code, centralized management and visibility, and automated security compliance.
- Examine how this process provides built-in security in the application and infrastructure pipelines and secures the supply chain, in addition to monitoring, logging, and proactive security and automated risk management.
- Share DevSecOps lessons learned from various Red Hat Innovation Labs residencies, including best practices, techniques, and tools that can be used to improve security while reducing the workload of security professionals, developers, operators, and managers.
- Discuss how participating in a Red Hat Innovation Labs residency can be like implementing "DevSecOps-in-a-box." In other words, we'll learn how Red Hat Innovation Labs residencies can help build a starting point for DevSecOps and help customers successfully adopt DevSecOps best practices.
- Detail how Red Hat Innovation Labs residencies helped customers accelerate their adoption of automating security, development, and operations, all simplified by using Red Hat OpenShift Container Platform and Red Hat Ansible Automation.

Persistent data implications for apps and microservices

Michael St-Jean, Principal Product Marketing Manager - Storage, Red Hat

As organizations strive to transform their business, cloud-native application and microservices development has gained popularity and adoption. However, delivering on ever-shrinking timelines, and being more adaptive and innovative in developing these cloud-native apps, requires a different approach and platform to design, develop, and deploy solutions. Containers have gained overwhelming acceptance for such workloads due to the agility and flexibility they offer to dev/ops communities. Still, many operations teams overlook the important role of the underlying storage infrastructure before deploying a container-based environment. Stateful applications require persistent storage, and while there are several ways to provide persistent volumes to containers, delivering a cutting-edge dev/ops platform on top of an archaic, clumsy storage platform can seriously impede success. Today's development teams need software-defined, container-based storage that is easy to use, highly available, and flexible, and that allows faster development cycles for their stateful applications and services.
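
For illustration, requesting persistent storage for a stateful container boils down to a PersistentVolumeClaim. Here is a minimal sketch built with the fabric8 Kubernetes client for Java; the claim name, size, and storage class are hypothetical and cluster-specific.

```java
import io.fabric8.kubernetes.api.model.PersistentVolumeClaim;
import io.fabric8.kubernetes.api.model.PersistentVolumeClaimBuilder;
import io.fabric8.kubernetes.api.model.Quantity;

public class AppDataClaim {
    public static PersistentVolumeClaim build() {
        return new PersistentVolumeClaimBuilder()
            .withNewMetadata()
                .withName("app-data") // hypothetical claim name
            .endMetadata()
            .withNewSpec()
                // One node at a time may mount this volume read-write.
                .withAccessModes("ReadWriteOnce")
                .withStorageClassName("standard") // cluster-specific
                .withNewResources()
                    .addToRequests("storage", new Quantity("10Gi"))
                .endResources()
            .endSpec()
            .build();
    }
}
```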

What are my microservices doing?

Juraci Paixão Kröhling, Senior Software Engineer, Red Hat

Microservices have become the standard for new architectures. However, the microservices architecture presents some new challenges. One of them is the so-called "observability problem," where it is hard to know which services exist, how they interrelate, and how important each one is. In this talk, we'll give a live demo of an application that includes three Java microservices, deployed both on bare metal and on OpenShift. We'll compare how observable the application is in each case, based on tracing information extracted using OpenTracing and Jaeger, across three scenarios: a "no instrumentation" approach, a "framework instrumentation" approach, and something in between, where we use service mesh instrumentation via Istio.
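
As a hint of what instrumentation looks like under the hood, here is a minimal sketch of manually creating a span with the OpenTracing API; framework or service mesh instrumentation produces equivalent spans automatically. Spans like this are what Jaeger collects and visualizes, and the operation and tag names below are illustrative.

```java
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class OrderHandler {

    public void handleOrder(String orderId) {
        Tracer tracer = GlobalTracer.get();
        // Start a span covering this unit of work.
        Span span = tracer.buildSpan("process-order").start();
        try {
            span.setTag("order.id", orderId);
            // ... business logic goes here ...
        } finally {
            // Finish the span so it is reported to Jaeger.
            span.finish();
        }
    }
}
```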