Wednesday, November 5, 2014

Docker.io 3D


Yes, I have to admit that I’m a bit envious of all those cloud projects that can virtualize their infrastructure and run whatever they want.
As you may know, in a renderfarm we can’t use full virtualization because of the performance loss. So we have to set aside all those fascinating new technologies like elastic infrastructure and auto-provisioning, configure our render nodes with everything we might need for current and future scenes, and hope that all that software, plugins and libraries doesn’t turn the nodes into a complete mess.
In a commercial renderfarm like RenderFlow this scenario is even worse, because we have to support all kinds of configurations for Maya, Softimage, Arnold standalone, etc. We also have to be able to replicate the workspace of every client on our nodes: plugins, shaders, directory structure and so on.
Our first approach to this problem was to configure every node so it could render any kind of scene, no matter which 3D software, render engine or plugin it uses, with everything set up in advance. As you may guess, it is really hard to fix and debug problems on a node configured that way, and adding a new feature for one client can break the configuration for the rest of the users.

Docker.io to the rescue

This new, revolutionary cloud technology has come to Linux and in a couple of years has become one of the biggest hypes in IT. But before explaining what Docker is, I have to explain what a Linux Container is.

Linux Containers


A Linux Container is similar to a lightweight virtual machine, but it runs on the same kernel as the host. It doesn’t need a hypervisor: the applications run in the same context as the host, but in an isolated way. That means you have the advantages of virtual machines without the overhead of a hypervisor and a complete OS running inside another OS:
  • Portability: You can create your containers on your laptop and then run them on the render nodes. The only requirement is a compatible kernel. 
  • Isolation: Changes in one container don’t affect the others. You can configure a live render node to accept a new type of render engine even while the node is still rendering. 
  • Lightweight and minimal overhead: Containers don’t require additional infrastructure; they just use the host kernel. 
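
A quick way to check that last point: the kernel a container reports is the host’s own kernel (using the same CentOS image that appears later in this post):

docker run --rm centos:centos6 uname -r    # prints the host kernel version
uname -r                                   # same output on the host itself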

What is Docker

Docker is a management tool on top of Linux Containers that offers a really useful set of tools. See how easy it is to get a CentOS container up and running:

docker run -t -i -v /repository:/repository centos:centos6 /bin/bash

This command will download a basic CentOS 6 image from the Docker registry; after that, you get a bash console to start configuring the container. Yes, you don’t have to install the OS from an ISO, configure the services to run and all those boring things. Docker uses its public registry (https://registry.hub.docker.com) to provide ready-to-use images of almost any Linux flavour, on which you can then install whatever you want.
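
Browsing and pulling other flavours works the same way; for instance (these tags are just examples of public images that the hub provides):

docker search centos
docker pull ubuntu:14.04
docker images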

We have to give a special mention to the last part of the above command: /bin/bash. This is the init command of your container, which means you don’t need the classical sysvinit, since you only need to run one single command in your container: the render command.

This simplifies the render use case: you don’t have to write a special script to launch the render when the container starts. Just pass the render command as the last argument of the docker command and the container will run it. When the render finishes, the container terminates as well.
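
For example, launching a batch render directly as the container’s command could look like this (the image name and paths here are only illustrative; the Render binary path is the standard Maya one used later in this post):

docker run -ti --rm -v /projects:/projects centos6_maya2015 \
    /usr/autodesk/maya/bin/Render -rd /projects/output /projects/scene.mb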

Distributing containers

Of course, as with a virtual machine, you can just export the container to a file and then import it on each render node, but Docker offers some other solutions:

  • Docker registry: You can distribute your containers over the network by uploading the images to a private registry. If you try to execute a container whose image is not available locally, Docker will look it up in the registry and download it before execution. 
  • Dockerfile: You can write a script that, starting from a base distribution, configures a render node. With this file you can create the render image just by executing the script on the render node (see the sketch after this list). 
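
As an example, a minimal Dockerfile for one of these render images could look something like the sketch below. The package list, the installer script and the registry address are placeholders, since the real recipe depends on your own Maya/Arnold installers:

FROM centos:centos6
# Example runtime libraries Maya usually needs (placeholder package list)
RUN yum install -y libXp libXmu mesa-libGLU
# Hypothetical location of your Maya + MtoA installers and install script
COPY installers/ /tmp/installers/
RUN /tmp/installers/install_maya_mtoa.sh && rm -rf /tmp/installers

Building the image and publishing it to a private registry (myregistry:5000 stands for your own registry address) is then just:

docker build -t maya2015_mtoa_1.1.1.1 .
docker tag maya2015_mtoa_1.1.1.1 myregistry:5000/maya2015_mtoa_1.1.1.1
docker push myregistry:5000/maya2015_mtoa_1.1.1.1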

How we use it

We have a Docker registry available from all render nodes and we have uploaded some containers to it. Each container has a name describing what it contains; for example, we have a maya2015_mtoa_1.1.1.1 image. Then we use a script to run the render command from a specific container.


render.sh maya2015_mtoa_1.1.1.1 test.mb username

The script looks more or less like this:

#!/bin/bash
# Usage: render.sh <container image> <scene file> <username>
CONTAINER=$1
SCN=$2
USERNAME=$3

# Mount only the user's storage, launch the Maya batch render inside the
# container and remove the container once the render has finished.
docker run -ti --rm -v "/storage/$USERNAME:/storage" "$CONTAINER" /usr/autodesk/maya/bin/Render -rd /storage/output "$SCN"


This command will:
  • Create a new container from the image specified in the first argument. 
  • Mount the host directory /storage/<username> at /storage inside the container. This gives access to the user’s assets and nothing else. 
  • Run the Maya Render command inside the container and wait until it finishes. 
  • Close and remove the container instance when the render is done. The render result is stored in the /storage/<username> directory of the host. 
  • Leave the node ready for a new job. 
We are still working on and learning how to improve our usage of Docker in our renderfarm, but I must already say that it clearly offers really useful advantages over the traditional methodology we’ve been using.

We keep discovering fascinating new uses for this technology. For example, we hope that upcoming versions of our Arnold renderfarm will be able to execute Linux containers provided by our clients. This will give you more control over how your scenes are rendered in RenderFlow: imagine uploading to RenderFlow the very same container you use to render the scene in your own renderfarm. That would let us start rendering your scene without delay and help you meet your deadline.

If you want to play a bit with Docker, install it (https://docs.docker.com/installation/#installation) and enjoy the user guide: https://docs.docker.com/userguide/