
OpenStack Summit Vancouver 2018 Recap (1/2)

OpenStack Summit Vancouver was held from May 21st to 24th this year, and I'd like to share my time there as both an attendee and a speaker over two posts. In this post I'll briefly introduce OpenStack and cover some of the sessions I attended. In the following post, I'll share the content of our own presentation.

What is OpenStack?

First, let's quickly go through the basic concept of OpenStack. OpenStack is open source software for building cloud infrastructure like that of AWS or GCP. The term "cloud" sounds neat and simple, but once you look into the details it is anything but, with a wide variety of use cases and target services behind it.

The first way to categorize clouds is by who they serve. A public cloud offers resources as products to the general public, as AWS and GCP do. A private cloud provides resources to internal developers for their applications, clients, and services, as we do at LINE. The way resources are provided varies too: IaaS offers products such as VM servers and block storage, while PaaS offers additional services ranging from application deployment to management features.
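
To make the IaaS style of use concrete, here is a minimal sketch of asking an OpenStack cloud for a VM through the openstacksdk library; the cloud name and the image, flavor, and network IDs are placeholders, not anything specific to LINE's setup.

```python
# Minimal sketch: provisioning a VM from an OpenStack cloud via openstacksdk.
import openstack

# Reads credentials for the named cloud from clouds.yaml (placeholder name).
conn = openstack.connect(cloud="my-private-cloud")

# Request a VM; the IDs below are hypothetical placeholders.
server = conn.compute.create_server(
    name="demo-vm",
    image_id="IMAGE_UUID",
    flavor_id="FLAVOR_UUID",
    networks=[{"uuid": "NETWORK_UUID"}],
)

# Block until the VM reaches ACTIVE state, then print its status.
server = conn.compute.wait_for_server(server)
print(server.status)
```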

To satisfy these varied use cases, OpenStack lets you integrate various components and plugins, and even develop your own plugins for it. At LINE we have also integrated other modules to customize OpenStack for our internal needs.

Considering all the options available, OpenStack is not just software for building cloud infrastructure, but also a library or framework for building one.

Sessions attended

We attended one session from the Private Cloud & Hybrid Cloud track and several from the Container Infrastructure track:

Private Cloud & Hybrid Cloud

Root your OpenStack on a solid foundation of leaf-spine architecture!

The focus of this session was on how to design the data center network underneath OpenStack, rather than OpenStack itself. It was so good that I wish it had been given as a full 40-minute session instead of a 10-minute lightning talk. Many parts overlapped with our own session, but unlike ours, its core topic was the data center network. Listening to it will give you a better understanding of our presentation, which I'll introduce in the next post.

Container Infrastructure

Kata Containers: The way to run virtualized containers

This session introduced how Kata Containers works, a new container technology announced by the OpenStack Foundation in December 2017. The session was easy enough for beginners to follow, so even those with no idea what a container is could understand it, and it was quite useful as well. As the Kata Containers official website explains, it is a project for handling lightweight VMs like containers, which solves the biggest weakness of containers: sharing the host kernel.

The presenter showed how Kata Containers creates lightweight VMs as containers, and what it does to access resources from Docker or Kubernetes ('k8s' hereinafter). He also shared how KSM and a shared rootfs are used to reduce the overhead of the lightweight VMs, which is a concern when designing a system.
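
As a rough illustration of what this looks like from the Docker side, here is a small sketch assuming the Docker daemon has been configured with kata-runtime as an additional runtime; the image and command are only examples.

```python
# Sketch: launching an ordinary container image with the Kata runtime via
# the Docker SDK. The workload then runs inside a lightweight VM with its
# own kernel rather than sharing the host kernel.
import docker

client = docker.from_env()

# "kata-runtime" must already be registered in the Docker daemon config.
output = client.containers.run(
    "ubuntu:18.04",
    command="uname -r",          # prints the guest kernel, not the host's
    runtime="kata-runtime",
    remove=True,
)
print(output.decode())
```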

What intrigued me was how they made it possible to use hardware devices not supported by a particular host kernel. Apparently, you can pass through an unsupported device by taking advantage of the fact that each container can run its own kernel version.

Friendly coexistence of Virtual Machines and Containers on Kubernetes using KubeVirt

KubeVirt is an extension of Kubernetes for managing VMs like Kubernetes Pods. The first thing that might come to your mind is Kata Containers, but the two are completely different.

Recently it has become common practice for developers to use containers and to deploy them with k8s, and newly developed software is likely to be designed with containers in mind. In practice, though, we cannot turn every service in a system into a container. For example, we may have to support old software that must be deployed on a VM because of its OS dependencies. In such cases some services run in containers and some on VMs, and we end up needing two tools, one for managing containers and one for managing VMs, which complicates management.

KubeVirt saves us from this complication by extending Kubernetes, a container management tool, to bring VMs into its management scope. A VM is added as a new resource type using a k8s custom resource definition, and a custom controller watches the corresponding etcd keys; when a change is detected, the VM is scheduled and deployed. Check out the following video to learn more.
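
As a rough sketch of this mechanism, assuming the KubeVirt CRDs are installed, a VM is just another custom resource that can be watched through the ordinary Kubernetes API; the group and version below match early KubeVirt releases and may differ in current versions.

```python
# Sketch: watching KubeVirt VirtualMachine custom resources, the same way a
# custom controller would observe changes and react to them.
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

# Stream add/update/delete events for VirtualMachine objects in "default".
w = watch.Watch()
for event in w.stream(
    api.list_namespaced_custom_object,
    group="kubevirt.io",
    version="v1alpha2",        # early KubeVirt API version; may differ now
    namespace="default",
    plural="virtualmachines",
):
    vm = event["object"]
    print(event["type"], vm["metadata"]["name"])
```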

Bringing Istio to OpenStack

Istio is an application service mesh framework. Nowadays many applications adopt container technology and a microservice architecture, and it is no longer rare to use many containers in a single system. In practice, running such a system means frequently facing problems in controlling the dependencies between containers. Suppose we have a system with a hundred types of containers: if one container has trouble or is hit by a critical bug, how many others will be affected? Or say we want to upgrade a container and send 20% of the traffic to the old version and the rest to the new one; how would you handle that? This is where an application service mesh framework comes in to solve these dependency issues.
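
As an illustration of the 20/80 traffic split mentioned above, here is a hedged sketch that creates an Istio VirtualService through the Kubernetes API; the service name and the "old"/"new" subsets are hypothetical, and a matching DestinationRule defining those subsets is assumed to exist.

```python
# Sketch: weighted routing between two versions of a service with Istio.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "my-service"},        # hypothetical service name
    "spec": {
        "hosts": ["my-service"],
        "http": [{
            "route": [
                # 20% of requests keep going to the old version...
                {"destination": {"host": "my-service", "subset": "old"},
                 "weight": 20},
                # ...and the remaining 80% go to the new one.
                {"destination": {"host": "my-service", "subset": "new"},
                 "weight": 80},
            ]
        }],
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```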

On top of introducing Istio, they presented how they use it on k8s and OpenStack, the networking between a LoadBalancer-type Service and the Pods, and a demo of adding a VM to Istio's service mesh with a cloud-init script. Those who are interested should check out the session.

Leveraging Serverless Functions in Heat to deliver Anything as a Service (XaaS)

The main topic of this session was integrating Fission, a serverless framework running on k8s, with OpenStack. You put your code into a Heat template and create a Heat stack, and the Fission plugin for Heat then builds a Fission package from the resource definition and runs it.
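
To give a feel for that workflow, here is a very rough sketch assuming a Heat plugin that exposes Fission functions as stack resources; the resource type name Fission::Function and its properties are hypothetical, since the talk summary does not give the actual names, and only the general python-heatclient usage is standard.

```python
# Sketch: creating a Heat stack whose template embeds function source code,
# which a (hypothetical) Fission plugin would package and run.
from keystoneauth1 import loading, session
from heatclient import client as heat_client

# Placeholder credentials and endpoint.
loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3",
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default",
)
heat = heat_client.Client("1", session=session.Session(auth=auth))

template = {
    "heat_template_version": "2017-02-24",
    "resources": {
        "hello_function": {
            "type": "Fission::Function",   # hypothetical plugin resource type
            "properties": {
                "environment": "python",
                "code": "def main():\n    return 'Hello from Fission'\n",
            },
        }
    },
}

heat.stacks.create(stack_name="fission-demo", template=template)
```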

A brief demo was given of running a simple API server. I learned that they are looking into triggering functions with timers or events as well. To implement event notifications, they are considering having each OpenStack component send a notification about resource changes to a message queue.
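
As a sketch of that event side, OpenStack services already emit notifications through oslo.messaging, so a consumer along these lines could listen on the notification topic and trigger functions from there; the transport URL and topic are placeholders.

```python
# Sketch: consuming OpenStack notifications from the message queue.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@controller:5672/")
targets = [oslo_messaging.Target(topic="notifications")]


class Handler(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # e.g. react to "compute.instance.create.end" by invoking a function
        print(publisher_id, event_type)


listener = oslo_messaging.get_notification_listener(
    transport, targets, [Handler()], executor="threading")
listener.start()
listener.wait()
```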

If all it can do is create a serverless API server, I'm not so sure. But if it can handle more than APIs, say, triggers such as events and timers, it certainly has great potential for extending OpenStack.

When you build and run applications on a cloud, there are many tasks you would like to have handled for you: deployment, automatic scaling, dealing with application outages, and keeping audit logs of particular resources for governance. Some teams implement these tasks outside the cloud to improve management efficiency. However, we think it is best for the cloud itself to take care of such cloud-specific management concerns, so that application developers can spend their resources on implementing business logic, and this is how we work at LINE too. We are currently reviewing a system that lets developers link their code with events. Fission integration with OpenStack is at an early stage, so we can't say for sure that we will choose it as our solution, but it certainly has our attention and we will keep an eye on it.

Session summary

Those are the sessions we attended. Recently there has been a movement toward using containers for our products, and we have been receiving container-related requests, such as evaluating container orchestrators and supporting containers as first-class citizens in our private cloud. We have also been having a lot of discussions about these requests, as there are many contexts and possible implementations for supporting them.

There were other sessions worth considering in our discussions, covering various container-related technologies such as Zun, a native container orchestrator for OpenStack, and Helm, a package manager for k8s. Many of these technologies started from the same spot but turned into totally different products after a year, because the underlying ideas were different from the beginning. So our approach is not to be overwhelmed by these new technologies, but to focus on what each of them aims to solve, watch how they grow, and then decide whether to adopt them.

Summary

The buzz around the conference seemed to have diminished by a third or even by half compared to two or three years ago. My guess is that back then most attendees were still deciding whether to go with OpenStack for their systems, or had just started using it and had a handful of problems to solve. This year, attendees seemed to be further down the road: some faced scaling issues, and some had rather unique and interesting use cases, such as wanting to manage k8s and/or application workloads with OpenStack across more than a thousand hypervisors and 20 clusters.

Eight years have passed since the initial release in 2010, which has given OpenStack time to mature and reach stability. This stability must be built on the best practices produced by the teams and companies who have run OpenStack in medium-sized systems. LINE's OpenStack-based IaaS cloud is fairly stable too, but we keep striving to provide our developers with an easy-to-use and stable cloud.

If you are interested in OSS, middleware, the topics below, and working in Japan, please check out our job opening here (although the posting is on the Fukuoka site, positions are available in both Tokyo and Fukuoka).

      • Making IaaS expansion and multi-cluster management more efficient
      • Making development and management more efficient for application developers
      • Implementing an event handler (like AWS Lambda) for a programmable private cloud
      • Providing managed middleware services, including k8s

In our second post, we will share our presentation, Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet fully redundant.