How to visualise a Kubernetes cluster

A Kubernetes cluster provides a platform that abstracts away the complexity of a distributed, microservices-based web offering. However, the developer or operator of a Kubernetes cluster will sometimes need to inspect or monitor its various components: for optimisation, for cost reduction, to handle system malfunctions or to verify bug-fixes.

The components that comprise a Kubernetes cluster include nodes, namespaces, services, replication controllers, replica sets, daemon sets, pods and containers, which have associations that may be one-to-one or one-to-many. Since this creates an arrangement that cannot be characterised by a simple, up-down, hierarchical display, the widely accepted way to represent a Kubernetes cluster on a two-dimensional display is in the form of a node-graph or topology.

However, most Kubernetes clusters are too complex to be shown in full on a single display, especially when the components shown need to have identifying icons and labels. For this reason Cobe does not attempt to show everything; instead it shows the user the parts of the cluster they are interested in.

"Cobe screenshot"

Initially, they are presented with a visual representation of the uppermost tier of the cluster, from which they can click on components to reveal their respective sub-components and so on, exploring more of the cluster as they go. Alternatively, they can use the search facilities to find particular components, types of component or anomalies and then explore up or down the cluster from there.

"Close up of cluster with anomaly"

These ways of exploring the cluster are also very good for investigating anomalies since issues affecting one component in the cluster can be traced down to the root causes.

Release Notes for 2018/01/29

On the 29th of January 2018, Cobe released several improvements to its Kubernetes and Docker visualisation and monitoring platform. Both performance and stability have seen considerable progress.

Migration to Cobe Cloud is now complete. Various small caching changes have been added which should improve performance of the Cobe website, UI and APIs.

Version 0.43.0 of the Cobe Agent has been released. Significant performance gains have been made for both Kubernetes and Docker monitoring. Several bug fixes are also included, making the agent more resilient when accessing external data sources:

  • COBE-386: “No such container” when discovering Docker volumes.
  • COBE-426: Recent performance improvements causing “discarding buffers” warning.
  • COBE-409: Missing “podIP” when discovering Kubernetes probes.
  • COBE-410: Error generating relationships when discovering Kubernetes namespaces.
  • COBE-411: “Container is not running” when discovering Docker containers.

Changes to model indexing have dramatically increased indexing throughput. In worst-case scenarios, indexing is now 20 times quicker than it was previously. Reliability has also been improved through updates to the indexing backend.

Support for Docker labels has been improved. They now behave the same way as Kubernetes labels — making it possible to compare the same label across Docker and Kubernetes environments.

A new iteration of the Cobe UI makes small adjustments to improve overall user experience. Delineation between Kubernetes and Docker containers is less ambiguous, and the metrics panel has been made less intrusive.

The UI has also received a couple of important optimisations. Highlighting of Kubernetes namespaces and groups is now far more responsive on first application load. Memory usage has been significantly reduced for several core components — saving up to approximately 30% in some cases.

Unified Reporting for Containerized Business Services

Cobe describes itself as: “a cloud-hosted SaaS platform that identifies anomalies, pinpoints impact and visualizes interdependencies to restore service faster”. It is an open-source technology that incorporates additional entities from outside containerised environments to enrich the entire model with metadata from third-party systems.

It’s time for Service-Centric Intelligence

“Service-Centric Intelligence” is an umbrella term Cobe have coined to describe the concepts and methods that enable visualization, understanding, and enrichment of containerized business services.

But, what is Service-Centric Intelligence? How does it contribute to microservices deployments? And, who stands to benefit?

Cobe’s CTO, Alan Simpson, tells us more:

Keeping pace with microservices and containerization

As more and more organizations deliver microservices in containers as part of agile development projects, digital transformation strategies, or transition from monolithic/legacy applications to cloud native — a whole new set of challenges are being uncovered.

The benefits of microservices are well documented: a faster, more cost-effective and efficient approach to provisioning software, one that supports multiple teams working independently whilst simultaneously delivering solutions faster through collective software development.

The issue is that microservices can appear, and disappear, in seconds. This makes them difficult to track in real time. And as the environment expands, a new level of complexity is introduced as you try to decipher how a particular service is supporting your customers and end users, and to get a clear understanding of exactly what is happening at a given moment in time.

The challenge is shared by many

Business leaders who are attempting to understand exactly what is happening in their business are struggling to find a way to accurately report on the success (or failure) and costs associated with these projects. Start-up organizations with hyper-scale web applications and established software vendors using agile development and a microservices architecture; enterprise organizations with legacy applications that need updating or rationalising as part of digital transformation or cloud adoption; and MSPs (Managed Service Providers) supporting clients on their digital transformation journey with tools such as Containers as a Service (CaaS): all face the same set of challenges when adopting a containerized approach.

New approaches for a brave new world

It’s no longer just about looking at the underlying infrastructure and identifying that CPU has hit 90%. Who cares? Your business service, if architected correctly and optimised to benefit from the auto-scaling properties of an orchestration engine like Kubernetes, will be automatically provisioned across a broader range of services — possibly across data centres globally.

All the customer / end user cares about is consistent, constant access to technology that will complete their purchase, book their flights, get their quote, and so on.

Simplifying complexity with Service-Centric Intelligence

The aim of Service-Centric Intelligence is to aggregate a host of relevant information and present a simplified view that allows you to bring order to containerization chaos and let you understand how a business service is being delivered.

Microservice and containerized environments are almost impossible to effectively map. Getting a clear view of the relationships between the underlying hosts, containers, pods and namespaces that are running, and easily understanding how they all interconnect and collectively deliver the business service, is no mean feat.

This is where Service-Centric Intelligence comes into play. This approach enables you to automatically track dynamic applications, even as they horizontally scale out through business logic and rules which have mandated that the app needs to spawn another instance in another data centre in another part of the world. It gives you a business-centric view of how your business services are being supported by your underlying infrastructure.

You can aggregate ALL the individual processes and resources, wherever they may be, and gain a deep level of insight and understanding of how a business service is delivered. This approach also dovetails with compliance and reporting regulations, cost management and chargeback processes.

The bottom line is that anyone with an application that has to scale in response to peaks in demand will benefit from getting a clearer view of their containerized environment.

Service-Centric Intelligence delivers value across the organization

For IT Operations teams, or developers who have had operational responsibility thrust upon them as a result of the brave new world of DevOps, Service-Centric Intelligence visualizes interdependencies, identifies impact and, ultimately, enables faster restoration of service.

Within the office of Finance: even the most modern organisation will still have existing technology languishing in a data centre. This technology costs money, and that money needs to be apportioned to a customer, team or department. Service-Centric Intelligence can apportion costs, not only in public cloud, but also in your data centre. Open APIs integrate with financial management tools to deliver a unified, rich view of all the associated costs of a business service, be that people or tech.

The ability to easily see these environments allows Cobe to drive deeper understanding and enrich, and extend the value of, containerized business services.

Pruning a Private Docker Registry

Why do we need a private Docker registry?

We use Jenkins to run our build and test process, and we currently have two slaves running on our Jenkins cluster. When a pull request is created or updated, our process builds the Docker files that are stored within the code repository. Once a slave has built a Docker image, ideally the other slave should be able to access the newly built image. One way to achieve this is to have a centrally accessible registry.

How to run a Docker registry

This is a relatively easy step, as there is a Docker registry image available on Docker Hub. Currently our registry is running on our Jenkins master server. Execute this command to run it:

docker run -d -p 5000:5000 --name registry \
    --restart=unless-stopped -e REGISTRY_STORAGE_DELETE_ENABLED=true registry

I’ll go through each command option briefly:

docker run
Uses Docker to run a container based on an image.
-d
Run in detached mode.
-p 5000:5000
Map port 5000 on the host to port 5000 in the registry container.
--name registry
Names the container to make it easier to reference.
--restart=unless-stopped
Tells Docker to keep this container running unless manually stopped.
-e REGISTRY_STORAGE_DELETE_ENABLED=true
Configures the registry to allow DELETE requests.
registry
The image to run from Docker Hub.

This will run a Docker registry that allows delete requests on port 5000.
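Once the container is up, you can sanity-check the registry's HTTP API before wiring it into Jenkins. A minimal sketch in Python (standard library only; it assumes the registry is reachable at localhost:5000, matching the port mapping above):

```python
# Quick check that a local Docker registry's v2 API is answering.
# Assumes the registry container above is listening on localhost:5000.
import json
import urllib.request

REGISTRY_URL = "http://localhost:5000/v2/"

def parse_catalog(payload):
    """Extract repository names from a /v2/_catalog JSON response."""
    return json.loads(payload)["repositories"]

def list_repositories():
    """Fetch and parse the registry's repository catalog."""
    with urllib.request.urlopen(REGISTRY_URL + "_catalog") as resp:
        return parse_catalog(resp.read())

if __name__ == "__main__":
    # An empty list just means the registry is up but nothing
    # has been pushed to it yet.
    print(list_repositories())
```

If this prints a list (even an empty one), the registry is up and serving the v2 API.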

Persisting the registry

If the registry container is removed and recreated, it loses the images it stores. This is solved by using a Docker volume to store the images.

docker volume create registry-vol

And adding the following argument to the Docker run command above:

-v registry-vol:/var/lib/registry:rw

So the full command is now:

docker run -d -p 5000:5000 --name registry \
    --restart=unless-stopped \
    -e REGISTRY_STORAGE_DELETE_ENABLED=true \
    -v registry-vol:/var/lib/registry:rw registry

Clearing out unused images

As all of the images that Jenkins pushes are tagged as latest, our goal is to search through all of the repositories in the registry and delete all of the images tagged as latest.

To do this we first get all of the repositories using this method.

import requests

REGISTRY_URL = "https://registry:5000/v2/"

def get_repositories():
    resp = requests.get(REGISTRY_URL + "_catalog")
    return resp.json()['repositories']

For each of our repositories we get a list of tags.

def get_tags(repository):
    resp = requests.get(REGISTRY_URL + repository + "/tags/list")
    tags = resp.json()['tags']
    return tags if tags else []

In order to delete an image we need its digest.

def get_digest(repository, tag):
    url = "{}{}/manifests/{}".format(REGISTRY_URL, repository, tag)
    headers = {"Accept": "application/vnd.docker.distribution.manifest.v2+json"}
    resp = requests.get(url, headers=headers)
    return resp.headers.get("Docker-Content-Digest")

And we use this method to submit a delete request.

def delete_digest(repository, digest):
    requests.delete(REGISTRY_URL + repository + "/manifests/" + digest)

So to tie all of this together we use this method.

def clear_registry_tag(tag="latest"):
    for repository in get_repositories():
        for found_tag in get_tags(repository):
            if found_tag == tag:
                digest = get_digest(repository, found_tag)
                delete_digest(repository, digest)

For brevity, error handling and printing have been removed, but a full version of the Python script can be downloaded: pruning-docker-registry.py.
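For reference, the snippets above can be combined into a single self-contained sketch. This version uses only the standard library rather than `requests`, and the `manifest_url` helper is ours, not part of the registry API; it assumes the registry described earlier, with deletes enabled:

```python
# Sketch: prune all images tagged "latest" from a v2 Docker registry.
# Assumes REGISTRY_URL points at a registry started with
# REGISTRY_STORAGE_DELETE_ENABLED=true.
import json
import urllib.request

REGISTRY_URL = "https://registry:5000/v2/"
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def manifest_url(repository, reference):
    """Build the manifests URL used both to look up a digest (by tag)
    and to delete an image (by digest)."""
    return "{}{}/manifests/{}".format(REGISTRY_URL, repository, reference)

def fetch_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def fetch_digest(repository, tag):
    # The digest is returned in a response header, not the body.
    req = urllib.request.Request(manifest_url(repository, tag),
                                 headers={"Accept": MANIFEST_V2})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Docker-Content-Digest")

def clear_registry_tag(tag="latest"):
    for repository in fetch_json(REGISTRY_URL + "_catalog")["repositories"]:
        tags = fetch_json(REGISTRY_URL + repository + "/tags/list").get("tags")
        if not tags or tag not in tags:
            continue
        digest = fetch_digest(repository, tag)
        if digest is None:
            print("No digest for {}:{}, skipping".format(repository, tag))
            continue
        req = urllib.request.Request(manifest_url(repository, digest),
                                     method="DELETE")
        urllib.request.urlopen(req)
        print("Deleted {}:{} ({})".format(repository, tag, digest))

if __name__ == "__main__":
    clear_registry_tag()
```

One caveat worth knowing: the registry deletes by digest, so this removes the manifest for every tag pointing at that digest, and disk space is only actually reclaimed once the registry's garbage collector is run.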

Maximizing Microservices in Containerized Environments at DockerCon

Are you running Docker containers on a Kubernetes cluster and struggling to get a clear and detailed performance view of business services?

Docker containers exist to support applications that scale rapidly and update frequently. The challenge is that this makes dynamic containerized microservices environments notoriously difficult to track and decipher — and problems can arise easily.

The worlds of containers and microservices are coming together soon at this year’s DockerCon EU 2017.

If getting a clear view of microservices in containerized environments is a challenge you face, Cobe will be at DockerCon Europe in Copenhagen, October 16-18 (booth #E6) to discuss how Cobe delivers an easy way to see your entire microservices environment, visualize problems, identify interdependencies and restore service faster.

We call this Service-Centric Intelligence. You’ll visualize your entire environment and will benefit from seeing: if there is a problem, where it is, the wider context of your cluster with associated potential causes, issues and/or implications between your microservices, and how they will impact your other apps throughout the business.

Anyone with an application that has to scale in response to peaks in demand will benefit from getting a clearer view of their containerized environment.

The aim of Cobe is to aggregate a host of relevant information and present a simplified view that allows you to bring order to containerization chaos and let you understand how a business service is being delivered.

Cobe helps you get your app tested and deployed faster as you develop in CI/CD environments, without breaking your front-end services.

See you at DockerCon?

Really looking forward to demonstrating our technology to the Docker Community and hearing first-hand how it will help those managing containerized microservices environments.

And it won’t just be our technology that gets people in a spin!

At DockerCon we have 200 awesome fidget spinners available to give away AND are running prize draws to win an Amazon Echo Dot. We look forward to meeting you at booth #E6!

Meet the Dockers

Unless you’ve been living under a rock, chances are that you’ve heard of Docker. Wikipedia describes Docker as:

… an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.

But there are MUCH better ways of introducing this technology!

Let’s start with: What is a container?

“Container” is a generalized term in the industry, and the concept is very simple. Docker is the container of choice and has many advantages over other implementations of containers.

  • Developers can package dependencies with their apps in Docker containers for portability and predictability.
  • Docker allows developers to create a container on any machine that can host containers.
  • The package will come up in the same form, with everything your app requires, regardless of where you run it.

Users can create a Docker image anywhere they want (on their own local machine, in AWS, in VMs or on bare metal, for example) and run the image in a pre-determined, guaranteed fashion.

Container Dependencies

When deploying a Docker app, developers will use libraries specific to the application. For example, an app that does image manipulation will need access to a set of libraries for handling GIFs, PNGs and so on. These are the app dependencies that developers need to decide on, i.e. which libraries the app will call and which functions are needed. That creates a dependency list.

All of those dependencies will be packaged alongside the app in the container. What’s important for the developer is to have transparency into these dependencies — if one library is updated or a dependency changes, how will that impact the container?

Having a visual representation of your container dependencies can be extremely beneficial as Docker containers are scaled.

Best reason to use Docker

Docker offers many benefits from continuous deployment and testing to portability across multi-cloud platforms to version control, isolation and security. The main reason I like Docker is its ease of shipping. I can orchestrate my containers, scale them and ensure that they are resilient.

Containers by themselves are not a new technology. By providing standard APIs that made containers easy to use and creating a way for the community to collaborate around libraries of containers, Docker has radically changed the face of the technology landscape.

What happens next …

Docker containers are a great way to develop and deploy microservice-based applications. As these become more widely adopted there comes a point where organisations will need a solution to effectively schedule and manage containers.

This is where orchestration tools like Kubernetes come into play by automating deployment, scaling, and management of containerized applications.

The missing piece of the puzzle

We recently described some of the common mistakes people make with Kubernetes, and the problems with Kubernetes clusters — Kubernetes can be a complex and time-consuming undertaking, often requiring manual intervention.

Cobe provides a visual representation of microservice-based applications, and their dependencies, in Kubernetes. This significantly reduces the amount of human intervention required to identify anomalies, understand interdependencies and categorise impact.

Cobe Is Attending DockerCon Europe 2017

DockerCon Europe 2017 is the community and container conference for those developing next generation distributed apps.

We are pleased to be exhibiting and look forward to demonstrating how Service-Centric Intelligence for microservices provides a “top down” view that delivers complete understanding of your business services, including interconnectivity, relevance and impact analysis.

"DockerCon Europe 2017; 16-19 October"

Cobe is a “must see” technology for anyone running Docker containers on Kubernetes.

Are you running Docker containers on a Kubernetes cluster and struggling to get a clear and detailed performance view of business services?

Cobe provides a clear and detailed performance view of business services. Our technology addresses the wealth of complexity and uncertainty that can be introduced in these environments, and helps you support digital transformation and cloud migration projects with a built-for-purpose technology that bridges the gap between cloud and traditional environments. Cobe provides clarity on the cost of delivery (and failure) and reduces MTTR with unified reporting for containerized microservice environments.

If getting a clear view of microservices in containerized environments is a challenge you face, visit us on booth #E6 and see how Cobe delivers an easy way to see your entire microservices environment, visualize problems, identify interdependencies and restore service faster.

And it won’t just be our technology that gets people in a spin! At DockerCon we have awesome fidget spinners available to give away AND are running prize draws to win an Amazon Echo Dot.

Hope to see you there! Genuinely looking forward to demonstrating our technology to the Docker Community and can’t wait to hear what you think!

Problems With Your Kubernetes Cluster?

Imagine: no more problem hunting for hours at the command line. A quick look at the cluster, a fix, and you are back to what you wanted to be doing.

One of the best parts of being a DevOps engineer is taking advantage of the new waves of tools appearing on the market. Containers and microservices have taken over our daily lives. We work with pioneering technologies from innovative organisations like Docker and Google. But, we all know that these come with certain challenges, issues and limitations.

Does this sound familiar: You discover there’s a problem with your deployed service, fire up Kubectl to gain command line access to Kubernetes, and then feel a level of frustration with the long-winded, multi-step manual search for the root cause of the problem. It’s a common issue: Kubernetes is just a bit … well … clunky. There is so much disparate simplicity that we are faced with complexity.

"I love Kubernetes"

Don’t get us wrong, we LOVE Kubernetes

But we found navigating our clusters to identify issues painful from the command line.

We wanted a faster, simpler way to manage Kubernetes clusters. So we set out to cut the time and effort needed to traverse and explore Kubernetes, fast-tracking the diagnosis and resolution of deployment problems. Eighteen months ago we started development and last month our product became generally available.

Resolving issues within your cluster

Cobe creates a live, searchable model that captures all the relationships, performance data and actionable alarms, in full visualised context. Through search, Cobe enables you to identify and surface relevant telemetry so you can quickly identify and diagnose the root cause of issues.

In its most simple form: when you hit an issue, all you need to do is open a browser, log in to Cobe, and you’ll be presented with a complete topology of your environment.

As more and more microservices are added, there is an increase in the number of moving parts — all of which need to interact. This increases the number of potential points of failure. There are no issues when all the pieces work harmoniously together, but when one of these parts fails, tracing the root of the problem across multiple tiers can be difficult.

You can pinpoint anomalies

See that problem in the wider context of your cluster with associated potential causes, issues and/or implications between your microservices.

A needle in a haystack.

You don’t just find the pod that’s not working in your code; instead you see all the pieces of the puzzle. Which container failed? Why is the app crashing? Is there an issue with the node? Is there a bigger problem? What other things are impacted? All this in visualised cluster context, minimising the need to go to the command line.

We built Cobe because we were frustrated at interfacing “blind” with Kubernetes from the terminal and believe that this will enable teams to focus on the most urgent services, and restore service faster than ever before.

Avoid these common mistakes and kick off Kubernetes in style

The momentum for Kubernetes is continuing to build. The Cloud Native Computing Foundation Executive Director, Dan Kohn, gave an excellent summary supporting the claim that Kubernetes is one of the highest development velocity projects in the history of open source.

Apprenda have recently claimed that there will soon be more than 7,000 people with Kubernetes skills listed in their LinkedIn profiles and more than 1,000 matches for jobs on Indeed. A few months ago those metrics were half of what they are today.

I’ve had extensive experience working with Kubernetes since its release in July 2015. During that time, I’ve come across some common mistakes people make with Kubernetes. Here’s a list of six.

Kubernetes is so simple your toddler can code

One of the main benefits of Kubernetes is that it’s simple: sign up to AWS or GCE and there you go. But with that simplicity come issues. Just because deployment is easy, it doesn’t mean it’s right. There’s a lot of naivety about deploying “stuff” into your environment and setting it free on the world. Just getting to your terminal and spinning up a Kubernetes cluster is a good start, but what then? Lots of people shoot themselves in the foot because they get the basics wrong (schoolboy errors!) like not considering security, logging or resource exhaustion. I’ve seen lots of microservice applications die because people let the disks fill up with log entries.

Resource constrained

The best way to showcase this common problem is with a website. It may serve a million requests daily, but as soon as a million and a quarter requests come in, it stops working. In most cases, it’s because no one acknowledged that there is a finite environment and so they’re not monitoring resources. Kubernetes is the same. When you have a Kubernetes cluster, you have a big collection of compute power. It’s very important that you are careful about the resources used. You can’t keep pressing the accelerator by using the cluster as if it’s a bottomless pit. Your apps and services will start to degrade in a non-linear and unpredictable fashion.

Kubernetes self-healing can throw you!

Kubernetes is self-healing which is one of the major benefits. But consider this: if something goes wrong, the system will automatically fix itself and you won’t know that it happened or keeps happening. The impact might not be that significant (after all that’s why you want to use Kubernetes), but it might be disguising a huge underlying problem — are you losing 50 webpages a day?

Kubernetes and costs

So you’ve set up Kubernetes with Google or AWS. You pay in a very flexible, yet convoluted way for your environment. And it’s hard to align spend with resources used; in fact, it’s almost impossible to figure out how resource usage is associated with the line of business. If you need to split your costs across different lines of the business, forget about it! You won’t easily be able to make decisions on how to save money in the cloud when you’re running Kubernetes (but watch this space because Cobe wants to solve it).

Deploying Kubernetes to a containerized architecture

Development teams are developing software on a rapid cycle. No longer do we spend two years of our lives building a piece of software. Today, the schedule is much tighter: a matter of months or possibly weeks. The software changes regularly. As such, the whole development pipeline has to be run in a smarter, more modern way. Enter agile development. But with Kubernetes, you need to ensure that when you deploy your architecture in this fast-paced environment, you do it properly. If you don’t realize you have problems with your code while you develop (or see point three), when you move to production or release software updates, you might face problems.

Speed of change

Both Docker and Kubernetes are going through massive changes, which means you are dependent on how they make those changes. Docker, for example, might decide to change an API without considering how you’ve developed your app or services. And with Docker’s fast-paced release cycle you might be adapting your app or service continuously, and you may even lose a feature. The same holds true for Kubernetes. Ensure you are continuously looking out for app dependencies, because if one feature disappears you need to consider the knock-on effect across your app.

When using Kubernetes, watch out for these common problems. One way to ensure you are covered is to deploy tools that offer a visual representation of your app and its dependencies. That’s where Cobe comes in. Cobe delivers a service-centric visualization of your microservices architecture to see problems, identify interdependencies and restore service faster. Our open platform, released on 5th June, allows you to incorporate additional entities outside your containerized environment to enrich the entire model with metadata from third party systems.


Cobe Unveils Service Centric Intelligence for Microservices

As we were getting ready to launch Cobe, we had a conversation with our newly appointed CTO — Alan Simpson — who talked about why he had joined the business:

When I started using Kubernetes and microservices, I soon realized there was a problem. Such a cool new technology, yet with every line of code, an app or service might break. That’s when I realized there needed to be a fix. Enter Cobe.

Cobe delivers a service-centric visualization of your microservices architecture to see problems, identify interdependencies and restore service faster.

Service-Centric Intelligence

You can visualize your services easily with Cobe. It discovers your universe and creates a service-centric view of business applications to provide contextual insights for DevOps.

Impact Analysis

You gain a searchable and live view of your world. It allows you to investigate problems in real-time and assess the impact of issues.

Easy to use

The best part? It’s easy to use. You can get started in minutes without heavy lifting, as Cobe is a cloud-based, SaaS-hosted service.

What Makes Cobe Different?

Cobe is fundamentally different from other debugging and monitoring solutions in that it:

  • Provides a visual topological graph of your Kubernetes-deployed infrastructure that is intuitive and simple to navigate, based on a topological model that Cobe maintains. This enables you to investigate the context of an infrastructure problem with the ability to explore and understand performance issues of related services and resources.
  • Visualizes your application services, having aggregated their constituent infrastructure entities — such as processes, containers, pods etc. — and their performance metrics. Services are, however, more deeply explorable at constituent entity level, as desired.
  • Has a powerful search capability that enables rapid navigation to the area of your performance or debugging interest.

Those are just some of the reasons I joined, very much looking forward to the weeks and months ahead.

Service-Centric Intelligence has arrived.

It’s Easy to Get Started

Download and install the Cobe agent in your Kubernetes environment to send data into the model via a single command, then go to your browser and — ‘Bob’s yer Uncle!’ — you see your infrastructure in its full topological graph glory!

We are excited that Cobe is “out in the open” and we want as many people as possible to try it so we can plan the roadmap based on your feedback. Simply sign up for free and we will provide you with information on how to give feedback.