Coming Up In Kubernetes

Kubernetes 1.3

If you didn't know, we are currently in the middle of migrating our infrastructure from VMs on AWS to Kubernetes on VMs on AWS. We are holding off for version 1.3, which is due to be released on June 24th. In the meantime we are preparing as much as we can: testing on 1.2, planning our migration and writing as much of the configuration as possible.

I thought I'd write about two major additions the new version brings.

PetSets

PetSets solve a problem with stateful applications and services. As you may know, the smallest unit of deployment in Kubernetes is a Pod. Pods are ephemeral by design: a Pod is akin to a running instance of a container image that gets nuked when it stops. When a Pod dies it is gone for good, replaced by a new instance with a fresh filesystem, a new network identity and all the rest.

This is generally fine, except when an instance of your application needs to survive restarts and stops with its filesystem and identity intact, as in the case of a database node.

PetSets solve this problem by giving each Pod a unique and stable identity. This matters for clustered services that need stable identities to refer to when bootstrapping a cluster or seeding additional nodes. The stable identity also lets a Pod retrieve the data (volume) associated with it, meaning db.node1 still holds the same data between restarts.

Ubernetes (aka Kubernetes Cluster Federation)

This is basically what it says on the tin. Kubernetes, as of version 1.2, officially supports only single-master, multi-slave deployments. This works fine but leaves a single point of failure in the master node, which handles cluster state changes and hosts the Kubernetes API.

Ubernetes aims to place a control plane on top of individual Kubernetes clusters to support things like failover between clusters running in different availability zones. Hopefully, in practice this means automatic and dynamic rescaling of services and applications in response to the failure of a cluster and/or availability zone.

Ubernetes actually goes a step further than that. It aims to support the use case of multiple Kubernetes clusters hosted across different cloud providers (e.g. GCE and AWS) and, optionally, on-premise bare metal. This is nice, but not something we are likely to use, being quite comfortable in Amazon's warm embrace.

One more thing to mention is the changes to the script used to bootstrap a Kubernetes cluster. Called kube-up.sh, it handles deploying the master and minion nodes, their network configuration and so on. In the case of AWS this means picking the AMI, setting up a VPC, gateways, subnets and more. It is being reworked in v1.3 to support Ubernetes, which should remove the manual work needed to set up the same thing in v1.2.

That's all

A minor disclaimer: everything I've written here was gleaned from GitHub issues and discussions. The general concepts of PetSets and Ubernetes are blockers to the 1.3 release for the Kubernetes team, but their implementation and particular details may change before release, so do your own research on whether they are right for you.

We're looking forward to lots of things from Kubernetes. Our experiments with v1.2 suggested it could remove a lot of the pain of dealing with heterogeneous infrastructure and applications, each configured in different and special ways.

Originally posted on MetaBroadcast's blog.

Java Socket Programming With Netty

It’s (hopefully) quite infrequent that one needs to work with network sockets directly to chuck bytes around. Normally in an application you’ll use an existing application-level protocol like REST over HTTP to pass data around. The reasons for this include, but are not limited to: convenience, reliability, interoperability and sanity.

That said, should you find yourself in a position where you need better performance or more flexibility than an existing protocol offers, it’s useful to know where to start.

For example, I recently worked with raw sockets in an IoT project where it would have been time-consuming and inefficient to deal with HTTP clients in embedded C++ code.

Netty

Netty is an NIO (non-blocking input/output) client-server framework for Java. It simplifies the process of writing servers and clients that, under the hood, talk to each other using your typical DatagramSocket, ServerSocket and Socket classes. In this example I’ll show you how to write a very simple server that accepts connections over a TCP port, reads and decodes JSON, and does something with it.

In real life you’re probably more likely to use something binary like Thrift, Protocol Buffers or Smile, instead of JSON.

Getting started

I am assuming you have imported Netty using the dependency manager of your choice and are ready to start typing code.

NioEventLoopGroup acceptorGroup = new NioEventLoopGroup(2); // 2 threads
NioEventLoopGroup handlerGroup = new NioEventLoopGroup(10); // 10 threads

First off we need instances of NioEventLoopGroup. This class implements a multi-threaded event loop, that is, something that continuously polls IO abstractions for work to do, such as reading data or accepting a new connection. There is also EpollEventLoopGroup available if you’re on Linux, which makes use of the more performant epoll.
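
For what it’s worth, on Netty releases that ship the native transport, picking epoll at runtime can look something like the sketch below. This is an optional variation, not part of the original setup; if you go this route you’d also pass EpollServerSocketChannel.class to the .channel() call we’ll see shortly.

// A sketch: use the native epoll transport when available (Linux only), otherwise fall back to NIO.
// Requires the netty-transport-native-epoll dependency; classes live in io.netty.channel.epoll.
EventLoopGroup acceptorGroup = Epoll.isAvailable()
        ? new EpollEventLoopGroup(2)   // 2 threads
        : new NioEventLoopGroup(2);
EventLoopGroup handlerGroup = Epoll.isAvailable()
        ? new EpollEventLoopGroup(10)  // 10 threads
        : new NioEventLoopGroup(10);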

We need two of these groups: one to accept new connections and one to handle existing connections. If you’ve worked with an HTTP server you’ll know it typically uses the same acceptor/worker split.

Configuring the server

Next we must configure the server proper. Let’s walk through it.

ServerBootstrap b = new ServerBootstrap();
b.group(acceptorGroup, handlerGroup)
        .channel(NioServerSocketChannel.class)
        .childHandler(new MySocketInitialiser())
        .option(ChannelOption.SO_BACKLOG, 5)
        .childOption(ChannelOption.SO_KEEPALIVE, true);

b.localAddress(port).bind().sync();
LOG.info("Started on port {}", port);

ServerBootstrap is a helper of sorts that lets you avoid configuring every single aspect of the highly complex ServerChannel implementations. It basically does what it says on the tin: it bootstraps a server for us.

It needs setting up with a few things. First we give it the event loops we created earlier, which allow our server to accept and handle connections.

Next is a call to .channel() with a class. Netty will create instances of this class and use them to accept new connections. In this case that’s NioServerSocketChannel, which is an implementation of ServerChannel.

Then comes a call to .childHandler() with an instance of ChannelHandler. This is where the interesting things happen: it sets up the pipeline that accepted connections are handled through. Here I’m using a class called MySocketInitialiser, a creation of my own that we’ll come back to shortly.

Calls to .option() let us set TCP options on the acceptor. In this case SO_BACKLOG tells the server to queue at most 5 pending connections and refuse any beyond that.

Finally, calls to .childOption() let us set TCP options on the accepted connections. SO_KEEPALIVE enables TCP keepalive, so the operating system periodically sends probe packets to check that the other end of each connection is still alive.

We then start the server by telling it to bind to a port at the local address, and call .sync() to wait for the bind to complete.
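
If you also want the starting thread to block until the server is actually shut down, and to release the event loop threads on the way out, the usual idiom looks roughly like the sketch below (reusing the names from the snippets above; exact method availability depends on your Netty version).

ChannelFuture bindFuture = b.localAddress(port).bind().sync(); // waits for the bind to complete
LOG.info("Started on port {}", port);
try {
    // Block until the server channel is closed, e.g. by calling channel().close() elsewhere.
    bindFuture.channel().closeFuture().sync();
} finally {
    // Release all threads and resources held by the event loops.
    acceptorGroup.shutdownGracefully();
    handlerGroup.shutdownGracefully();
}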

Setting up a pipeline

Back to MySocketInitialiser to see where the magic happens.

/**
 * Performs the initial set up of sockets as they connect to Netty.
 * Registers the pipeline of handlers that received messages are passed through
 */
public class MySocketInitialiser extends ChannelInitializer<SocketChannel> {

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();

        // Split the incoming byte stream into frames on newlines (max 256 bytes per line)
        pipeline.addLast(LineBasedFrameDecoder.class.getName(),
                new LineBasedFrameDecoder(256));

        // Decode each frame into a UTF-8 String
        pipeline.addLast(StringDecoder.class.getName(),
                new StringDecoder(CharsetUtil.UTF_8));

        // Decode each JSON String into a Person
        pipeline.addLast(JsonDecoder.class.getName(),
                new JsonDecoder<>(Person.class));

        // Finally, do something with the decoded Person
        pipeline.addLast("stdoutHandler",
                new ChannelInboundMessageHandlerAdapter<Person>() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, Person person) throws Exception {
                        System.out.println(
                                "Your name is " + person.getFirstName() + " " + person.getLastName() + "!"
                        );
                    }
                }
        );
    }
}

The initChannel() method of this class is called by Netty whenever it receives a new connection. A SocketChannel is simply the channel abstraction over a TCP/IP socket.

Each SocketChannel has a pipeline associated with it. You can think of the pipeline as an ordered list of handlers, each feeding its output as the input to the next one. There are caveats to this but we can ignore them for now.

In the example pipeline above we have in order the following:

  1. A LineBasedFrameDecoder, which delimits messages by detecting newline bytes (i.e. \n or \r\n)
  2. A StringDecoder, which decodes bytes into UTF-8 Strings (or any other encoding of your choice)
  3. A JsonDecoder, which decodes Strings using Gson into objects of type Person (or any other type of your choice)
  4. An anonymous handler that simply prints the name of our decoded Person to standard output

JsonDecoder is not part of Netty; its implementation is as follows:

/**
 * Decodes a JSON String into an instance of the given class using Gson.
 */
public class JsonDecoder<T> extends MessageToMessageDecoder<String, T> {

    private static final Gson GSON = new GsonBuilder().create();

    private final Class<T> clazz;

    public JsonDecoder(Class<T> clazz, Class<?>... acceptedMsgTypes) {
        super(acceptedMsgTypes);
        // checkNotNull is assumed to be statically imported, e.g. from Guava's Preconditions
        this.clazz = checkNotNull(clazz);
    }

    @Override
    public T decode(ChannelHandlerContext ctx, String msg) throws Exception {
        return GSON.fromJson(msg, clazz);
    }
}
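
The Person class isn’t shown anywhere above; it’s just a POJO for Gson to bind to. Judging by the getters used in the handler and the JSON we’ll send later, a minimal version could look like this:

// A minimal POJO for Gson to populate; field names match the JSON keys sent by the client.
public class Person {

    private String firstName;
    private String lastName;

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }
}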

That’s everything we need for our example Netty server to do stuff.

Seeing it in action

First we start up the server. How you do this will depend on the way your project is structured:

INFO  [2016-02-16 12:06:38,880] nettytest.SocketServer: Started on port 9000

We can then use telnet to open a socket to our server:

# jamie at eduD692.kent.ac.uk in ~ [12:08:52]
$ telnet localhost 9000
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"firstName":"Jamie", "lastName":"Perkins"}

And then in standard out we’ll see:

Your name is Jamie Perkins!
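
If you’d rather poke the server from code instead of telnet, a plain java.net.Socket client is enough, since all the server expects is newline-delimited JSON. A quick sketch (not part of the original setup; the class name is made up):

import java.io.PrintWriter;
import java.net.Socket;

public class TestClient {
    public static void main(String[] args) throws Exception {
        // Open a TCP connection to the server and send one newline-terminated JSON message.
        try (Socket socket = new Socket("localhost", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println("{\"firstName\":\"Jamie\", \"lastName\":\"Perkins\"}");
        }
    }
}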

Wrapping up

There is a huge amount of detail I’ve glossed over for the sake of making this a very easy introduction and to get you off the ground quickly. The Netty user guide goes into more depth and is a good place to start when learning more. If you want to read about NIO in general the Oracle docs are also helpful.

Originally posted on MetaBroadcast's blog.