OpenShift makes it easy to deploy your containers, but it can also slow down your development cycle. The core problem is that containers in a Kubernetes cluster run in a different environment than your development environment, for example, your laptop. Your container may talk to other containers running in Kubernetes or rely on platform features like volumes or secrets, and those features are not available when you run your code locally.

So, how can you debug them? How can you get a quick code/test feedback loop during the initial development?

In this blog post we'll demonstrate how you can have the best of both worlds, that is, the OpenShift runtime platform and the speed of local development. We will be using an open source tool called Telepresence.

Telepresence lets you proxy a local process running on your laptop to a Kubernetes cluster, whether that's a local Minishift/Minikube cluster or a remote one. Your local process gets transparent access to the live environment: networking, environment variables, and volumes. In addition, network traffic from the cluster is routed to your local process.

Preparation

Before you begin, you will need to:

  1. Install Telepresence.
  2. Make sure you have the oc command line tool installed.
  3. Have access to a Kubernetes or OpenShift cluster, for example, using Minishift.
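
A minimal sanity check before continuing might look like the following, assuming both command-line tools are on your PATH and you are already logged in to the cluster (output omitted):

$ telepresence --version
$ oc version
$ oc whoami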

Running in OpenShift

Let's say you have an application running inside OpenShift; you can start one like so:

$ oc new-app --docker-image=datawire/hello-world --name=hello-world
$ oc expose service hello-world

You'll know it's running once the following command shows a pod in Running status whose name doesn't include "deploy":

$ oc get pod | grep hello-world
hello-world-1-hljbs 1/1 Running 0 3m

To find the address of the resulting app, run the following command:

$ oc get route hello-world
NAME HOST/PORT
hello-world example.openshiftapps.com

In the above output the address is http://example.openshiftapps.com, but you will get a different value. It may take a few minutes before the route goes live; in the interim you will get an error page. If you do, wait a minute and try again. Once it's running you can send a query and get back a response:

$ curl http://example.openshiftapps.com/
Hello, world!

Remember that for the above command to work you need to substitute your route's actual address.

Local development

The source code for the above service looks mostly like the code below, except that this version has been modified slightly to return a different response. Create a file called helloworld.py on your machine with this code:

from http.server import BaseHTTPRequestHandler, HTTPServer

class RequestHandler(BaseHTTPRequestHandler):
    # Answer every GET request with a fixed plain-text response.
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(b"Hello, world, I am changing!\n")

# Listen on all interfaces on port 8000.
httpd = HTTPServer(('', 8000), RequestHandler)
httpd.serve_forever()

Typically, testing this new version of your code would require pushing to your upstream repository, rebuilding the image, redeploying, and so on, and that can take a while. With Telepresence, you can just run a local process and route traffic to it, giving you a quick develop/test cycle without slow deploys.
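
For comparison, a manual redeploy loop might look roughly like the following; the registry address and tag are purely illustrative, and the deployment config and container names assume the hello-world app created above:

$ docker build -t registry.example.com/myproject/hello-world:v2 .
$ docker push registry.example.com/myproject/hello-world:v2
$ oc set image dc/hello-world hello-world=registry.example.com/myproject/hello-world:v2
$ oc rollout status dc/hello-world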

We'll swap out the hello-world deployment for a Telepresence proxy, and then run our updated server locally in the resulting shell:

$ telepresence --swap-deployment hello-world --expose 8000
@myproject/192-168-99-101:8443/developer|$ python3 helloworld.py

In another shell session we can query the service we already started. This time requests will be routed to our local process, which is running the modified version of the code:

$ oc get route hello-world
NAME HOST/PORT
hello-world example.openshiftapps.com
$ curl http://example.openshiftapps.com/
Hello, world, I am changing!

The traffic is being routed to the local process on your machine. Also, note that your local process can now access other services, just as if it were running inside the cluster.
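
For example, if another service named backend were running in the same project and listening on port 8080 (a hypothetical service, not part of this walkthrough), you could reach it from inside the Telepresence shell by its service name:

@myproject/192-168-99-101:8443/developer|$ curl http://backend:8080/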

When you exit the Telepresence shell the original code will be swapped back in.

Wrapping Up

Telepresence gives you the quick development cycle and full control over your process that you are used to from non-distributed development: you can use a debugger, add print statements, or use live reloads if your web server supports them. At the same time, your local process has full networking access, both incoming and outgoing, as if it were running in your cluster, as well as access to environment variables and, admittedly a little less transparently, to volumes.
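
For instance, you could start the same server under Python's built-in debugger from inside the Telepresence shell; cluster traffic will still reach the process, so you can set a breakpoint in do_GET with the break command, continue, and then inspect live requests as they arrive (a sketch, assuming the helloworld.py file from above):

@myproject/192-168-99-101:8443/developer|$ python3 -m pdb helloworld.py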

To get started, check out the OpenShift quick start or tutorial on debugging a Kubernetes service locally. If you're interested in contributing, read how it works, the development guide, and join the Telepresence Gitter chat.

