Hi! So, we're going to talk about Docker in production, because that's one of the topics that people get excited about. After they start playing with Docker, generally they get excited and say "OK, this is great, I'm going to package my stuff into containers", and then when they want to move to production they have new challenges, and they're like "well, this is hard", especially compared to how easy the first steps were in the beginning. So this talk is a bunch of steps, so that the big step from Docker in dev to Docker in production becomes something that anyone can climb more easily.

OK, so, that's me: before Docker I was at dotCloud, which is the former name of Docker, and I was running a PaaS built on Linux containers and all that stuff, so it was kind of natural to move to Docker afterwards. One of my favorite pastimes with Docker is to run absolutely anything in Docker, including Docker itself, to make recursive jokes: you can run VMs in Docker, Docker in VMs, Docker in Docker, and so on.

So this is what I'm going to talk about. First, a quick recap on what's new, since the last release was a few months ago. Then the problems that Docker already solves, kind of the easy stuff. Then service discovery: how do you get multiple containers to talk with each other? And orchestration: how do you run multiple containers on multiple machines? Then a little about performance: what is the real performance of containers, and can we improve it? Then we will brush over configuration management and how it fits with Docker, and then a little bit about the boring stuff that sysadmins have to do when you run in production, like logging, backups, and remote access.

So, Docker 1.0 is here: we announced it at DockerCon a few months ago, and since then we made a release with a bunch of interesting features. There is a whole litany of features, but the most exciting ones are pause and unpause, which among other things let you take consistent snapshots of containers: you can pause a container, take a snapshot, and unpause it. There is also SELinux support, because some people rely on SELinux for extra security, and we have an integration contributed by the folks at Red Hat who work on SELinux itself. Then something that, as a network guy, I'm particularly excited about: the --net option, which gives you native network performance in containers, but also an easy way to implement SDN and custom network topologies. But more importantly, Docker Inc. now has something to sell: instead of just buying Docker stickers, you can also buy support and training, and we can actually start making money.
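To make those two features concrete, here is a quick sketch; the container and image names are made up:

    # freeze all processes of a running container (via the freezer cgroup),
    # for instance to take a consistent snapshot, then resume it
    docker pause mydb
    docker commit mydb mydb-snapshot
    docker unpause mydb

    # share the host's network stack: no network namespace, no overhead
    docker run -d --net host nginx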

So, the first steps with Docker: how to install it? The software is a single binary, on the order of 25 megabytes, and it works everywhere; you can even grab a static build and drop it directly on your servers, with nothing else to install. People often ask us which distribution they should use to run Docker. The right answer is: continue to use whatever you use, just make sure you have a recent enough version. If you're an Ubuntu person, use the latest; if you're a Red Hat person, use RHEL or CentOS 7 (you can get away with 6.5, but try to stay pretty current), because Docker itself doesn't need much from the distribution. What we do need is a recent kernel, so make sure you have one, say 3.8 or later; that's about it. You don't even need a general-purpose distribution: you can look at CoreOS and Project Atomic, which are distributions designed specifically to run containers. These are pretty new, though, so before betting on them, make sure you know what you're getting into: it's a brand-new way to manage servers, so don't jump on them just because they're shiny.

Now, how do we build container images? We use Dockerfiles. The Dockerfile is kind of the new Makefile: a way to make builds accessible and repeatable for everybody. If you have never seen a Dockerfile: you define a starting image and then the build steps to execute on it. The nice thing is that you get the easy way of writing things that you get with a simple shell script, but you also get the convenience and quickness of having snapshots at each step, which means that whenever you change something, Docker will be smart enough to resume the build from the step that actually changed. In a small example it doesn't change much, but if you have a really long installation scenario, where you download tons of packages and build some source code and so on and so forth, then when you change just the last few lines, or just some lines of code, it won't rebuild everything: it jumps immediately to whatever changed, like a Makefile, but you still get the guarantee that if you rerun the build from scratch, you get the same result.

Now, Dockerfiles have a couple of drawbacks. The first: if you use a Dockerfile to build a small 10-megabyte JAR file, but to do that you need to install a 5-gigabyte build environment, it's not great. The way Docker works, with its system of layers, means that for each step of the Dockerfile you get a new layer. At the end you might say, "well, I just installed tons of packages just to build a JAR file or a binary, so now I'm going to remove all those extra files because I don't need them." Good idea, except not so good, because when you download the image you still get the intermediary layers: instead of getting just the final image, Docker pulls each intermediary step, and one of those steps contains the information "remove those files", but it's too late, you already downloaded all the files anyway. That's a drawback of Dockerfiles today; we're working on it. Meanwhile, you can use two separate Dockerfiles: one to produce the binary or JAR file or whatever the build artifact is, and another with the runtime environment that integrates that artifact and runs it.

Another drawback is when you have secret information: SSH keys, credentials, API tokens, stuff that you need only for the build process, typically Git repository credentials. You could say, "sure, I will have a layer that gets my credentials, then downloads the code and builds it, and at the end I will just rm -rf the credentials", and that solves everything. Again, wrong: the credentials and the code will be in the previous layers, and when people download the image they will get them anyway. So again, either you do that in two steps, or you keep those Dockerfiles and images private. We're working to improve that too, with a way to kind of squash images so that this problem goes away.
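To recap the mechanics with a concrete sketch (the package and binary names are just for illustration): each instruction below produces a cached layer, so editing your source code only re-runs the steps from the ADD onwards, while the slow package installation stays cached.

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y build-essential
    # changing your code invalidates the cache only from this point on
    ADD . /src
    RUN cd /src && make
    CMD ["/src/myapp"]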
Now, how do we distribute and ship images? There is the Docker Hub: you can build a container image, push that image to the Docker Hub, then go to any Docker host anywhere, pull that image, and run it. One of the most frequently requested features for the Docker Hub is the ability to run it in-house, so that your precious code and source and everything never ever goes over the wild Internets. This is coming, but meanwhile you can run the registry. The registry is the storage component of the Docker Hub, and it means just storage: no authentication, users, automated builds, all that stuff. But that part, which is the sensitive part, is open source, and you can run it in-house. If you want to scale it, it supports putting the layers on any kind of cloud object storage: S3, or Swift if you're on OpenStack, or Elliptics, the crazy Russian storage system.
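Running the open-source registry is a one-liner; here is a rough sketch for the registry of that era, backed by S3 (the bucket name is a placeholder, and the credentials are elided):

    # start a local registry storing its layers in an S3 bucket
    docker run -d -p 5000:5000 \
      -e SETTINGS_FLAVOR=s3 \
      -e AWS_BUCKET=my-layer-bucket \
      -e AWS_KEY=... -e AWS_SECRET=... \
      registry

    # push an image through it
    docker tag myimage localhost:5000/myimage
    docker push localhost:5000/myimage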

There is one caveat: if you have huge images, say multiple gigabytes, and they change often and they change a lot, you will see that the registry adds some overhead. In that case you can make a full dump of your image with docker save, put that dump on some shared storage, whatever that is, and docker load it on the other side; that is extremely fast in that scenario. We're also working on a pluggable transport system, because there are things like Git, rsync, or BitTorrent that work extremely well to move bits around, and we would like to use those instead of just HTTP.

Now let's talk a little bit about service discovery, because when you have multiple containers, they need to find each other. The simplest scenario: I have a database and a web server, and my web server needs to connect to the database. Usually you edit your source code, or you put the address or DNS name of the database in the web server configuration. That's not so great, because when you want to deploy the same code to dev and QA and production and so on, you don't want to edit those settings each time. So you want proper service discovery. How does that work? There are multiple ways to do it, and we will do a quick review.

First, an overview. One way to inject the location of the database into your web container is environment variables: with the docker run -e option, you can set environment variables in your container and use them. You can bind-mount a configuration file into the container; I will give some details about that, but basically it means kind of para-dropping a configuration file into the container. You can use a key-value store like ZooKeeper, etcd, or Consul: you put all the location information in there, and then you retrieve it from the service. The only thing you need to bootstrap your whole application is the address of that ZooKeeper or etcd (or even Redis, even if it's not a highly available store) to look up the addresses of all the other containers. And last but not least, you can resolve everything through DNS, because that works pretty well.

Before going one by one through those methods, let's talk about links. Links were introduced in Docker a while ago, and the idea is: you start your database, then you start your web container and you link it with the database. Linking means telling Docker, "hey, this web server is going to talk to that database, so the web server should be told about the location of the database." The result is that you get a bunch of environment variables in the web container, telling it exactly that the database is running at that address, on that port; you can even pass information like login, password, and other credentials to connect to it. It also creates host entries, so that in your code you can just use the link alias (in the example here, that would be "sql") as the address of the database, and it will connect seamlessly. The only problem with links is that currently they don't work across multiple Docker hosts, but we will see shortly why this is not such a big deal.
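Here is roughly what links look like on the command line; the web image name is made up:

    # start the database, then link the web container to it under the alias "sql"
    docker run -d --name db postgres
    docker run -d --link db:sql mywebapp

    # inside the web container, Docker injects variables along these lines:
    #   SQL_PORT_5432_TCP_ADDR=172.17.0.2
    #   SQL_PORT_5432_TCP_PORT=5432
    # plus a host entry, so "sql" resolves to the database container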
So, first method of service discovery: environment variables. It's super easy to integrate in your code, because every programming language out there has a way to read environment variables. It's super easy to set up: you start the first container, you get the port that was allocated by Docker, and once you have that information, you start the other container. It's even easier if you're using links, because Docker does the work for you: it looks up the ports and creates the variables for you. However, it's static: if your database moves from one place to another, the web server is now connecting to nothing and has to be restarted, so you need something extra if you want to be able to move services around. So I'm only giving grade B to environment variables, because they are static.

Next, bind-mounting a configuration file. A few words about bind mounts first, because bind mounts are a little bit magic. It's not like copying a file into a container; it's more like making a kind of symlink, except it's a symlink that can cross boundaries, in this case the filesystem and container boundary. I have a file on my local machine, on my Docker host, and I bind-mount that file into the container. It means that whenever I change that file on my host, it also changes in the container. And it's not something like "oh, there is a process watching my file and making a copy each time it changes"; no, it's exactly the same file, so any change made on one side is reflected on the other side instantly.
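For instance (the paths and image name here are made up):

    # bind-mount a config file from the host into the container
    docker run -d -v /etc/myapp/config.json:/config.json mywebapp
    # editing /etc/myapp/config.json on the host changes /config.json
    # inside the container instantly: it is literally the same file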

So the idea here is: if you have cyclic dependencies between your services, a case where links or the previous method wouldn't work, you can create an empty config file, put all your configuration information in it, and bind-mount it into the containers. You can add further configuration information later, and the containers can pick it up on the fly. It's very easy to integrate in your code, because if you write JSON or YAML, every programming language out there will be able to parse it. It's also super easy to set up. It's kind of dynamic, because you can update the configuration file while the apps are running; but it's also not quite dynamic, because even if you change the file, you still have to tell the web container that the file has changed so that it reloads it. That's some additional logic that you might or might not want to add. So this one is also quite good but not perfect: only grade B.

Now let's talk about key-value stores. The idea is that you have something that is always up, like ZooKeeper or etcd; or, on a single host, think Redis, because OK, it's not HA, but if you have only one machine anyway, the only thing that can go wrong is that the machine goes away, and then everything goes away with it, so it's good enough. Whenever you start a container like the database, you put a key (db_host, db_port, whatever) into that key-value store, and when your application starts, like the web tier, it gets that information from etcd, ZooKeeper, and so on. This is properly dynamic, because all those systems have a way to watch a value: whenever you move your database from one place to another, you update the location, and the process watching that location gets a notification saying "hey, that key in your ZooKeeper has changed, you probably want to reconfigure yourself." It's nice, but it means you have to run an extra process or agent to watch for key changes, maybe regenerate a configuration file, and restart the process. That's a lot of extra logic within your application, and I personally don't like that: first, I'm lazy, and mostly I like when things remain simple and self-contained in their own container. I don't want to sprinkle my web app with extra logic and processes that are there just to track the location of my database. So I'm giving this grade D. Oh, and I also forgot to mention that if you want to use ZooKeeper, you have to deploy ZooKeeper, which is usually not a very pleasant experience.
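As a sketch, with etcd's HTTP API; the addresses and key names are invented for the example:

    # when the database starts, publish its location
    curl -s http://127.0.0.1:4001/v2/keys/services/db -XPUT -d value="172.17.0.2:5432"

    # the web tier looks it up at startup
    curl -s http://127.0.0.1:4001/v2/keys/services/db

    # and an agent can block until the key changes, to reconfigure on the fly
    curl -s "http://127.0.0.1:4001/v2/keys/services/db?wait=true"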
Now, what if we used DNS? Because DNS is easy. It's super easy to integrate in the code: you just connect to "db", and you don't look up variables or query a key-value store or anything like that. It's easy to set up if you do something static; if you want something dynamic, you need a DNS server that you can update very easily, and there are servers like that. For instance, there is something called Skydock, which combines SkyDNS and Docker to give you a dynamic DNS. Now, the DNS system doesn't let you push changes: if a record changes, there is no way to tell everyone who queried that record that it has changed. But there is the TTL system, which means that when someone resolves a record again later, they get the new value. So what happens here: if your database crashes and gets moved elsewhere, most likely the web tier will fail too, because some database request will fail, since there is no database server at the old address anymore. Usually, most frameworks will just try to reconnect, and unless you're particularly unlucky, they will re-resolve the address: they connect to "db", do a DNS request for it, get the new address of the database, and things work again. So: grade B, because of that almost-dynamic behavior.
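Just to illustrate; the exact record naming scheme depends on the tool, so take these names as invented:

    # with a dynamic DNS like Skydock/SkyDNS answering on the docker0 bridge,
    # resolving a container might look like this
    dig @172.17.42.1 db.myapp.docker +short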

At this point, we haven't found a single method that works great for all purposes: simple to deploy, simple to use, and able to handle dynamic cases. So we'll look at links again, and we will introduce the ambassador pattern. The idea of ambassadors: on the left you have the database host running a database container; on the right, the web host running the web container; and on each of those hosts, a wiring container, an ambassador. On the web host, the wiring container acts as a kind of proxy for the database: when I start my web container, I link it to the wiring container as if it were the database. On the database host, I start the database, then I start the database ambassador and link it with the real database container. Then the two wiring containers talk to each other, one way or another, to find out where exactly they are running and to route traffic. The key thing is that for my web and database containers, nothing has changed: they are not aware that they are actually running on different machines. There is nothing to change in the code; everything is business as usual. Except it's not, but all of that is abstracted away by the ambassadors.

Now, how do we get communication going between the two ambassadors? The best practice is to use flame-throwing unicorns ridden by cats. More seriously, it means there is no single solution that will work for everybody. If you're running in a cloud, you will probably want SSL/TLS encapsulation between the different hosts; if you're running on-prem, on your own machines, you can leverage VLANs or VXLAN or some kind of layer-2 encapsulation; and those are just a couple of examples. There are already some ambassador implementations out there, but the rough idea is: the ambassador on the left, the one next to the database, registers its location into something like ZooKeeper, for instance; then the ambassador on the right, the one being talked to by the web container, looks up that location and connects to the other ambassador. So again, those two wiring containers, those two ambassadors, set up some arrangement to talk to each other, probably using some other system (it could be ZooKeeper, it could be multicast discovery with Avahi, something like that), and that abstracts away the network and the complexity. This is easy to integrate in your code, because you're still just using environment variables. And the key thing is that it's easy to set up in dev, because you can use normal links when both containers are on the same machine; it only gets harder when you go to production. That's important, because there is no free lunch: when you do something complicated, with containers all around, it won't work out of the box just like that. But at least we can have a solution that remains simple for simple scenarios and gets complex only for complex scenarios. That's great. Last thing: the ambassadors can reroute traffic; they can do load balancing, traffic engineering, things like that. For that reason, we give them grade A. It's still a work in progress, but you can already set up ambassadors using the projects that I showed earlier.
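As a rough sketch, here is what the wiring could look like with the classic socat-based ambassador image from the documentation of that era; the host address and web image name are placeholders:

    # on the database host (say 10.0.0.5): publish the db through an ambassador
    docker run -d --name db postgres
    docker run -d --link db:db -p 5432:5432 --name db-ambassador svendowideit/ambassador

    # on the web host: a local ambassador relaying to the database host
    docker run -d --name db-ambassador --expose 5432 \
      -e DB_PORT_5432_TCP=tcp://10.0.0.5:5432 svendowideit/ambassador

    # the web container links to the local ambassador as if it were the database
    docker run -d --link db-ambassador:db mywebapp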
Another way is to use overlay networks. In that case, in addition to the normal Docker network, you set up an extra network where each container has its own "global" IP address. I put quotes around "global" because it could be global as in a real public IP address, but it could also be just an internal address that is valid only within the scope of an application. There are multiple emerging projects out there doing that, and none of them is a clear winner on everything, but you can look at Rudder, from CoreOS, which was initially made to work with Kubernetes; there is Weave, which just came out last week and looks promising; and there is pipework, which is a hack for when you really need some custom network setup on your containers and nothing else works, but it's kind of convenient. This is also work in progress, so if you want to go to production tomorrow, maybe think twice before picking one; but one, or maybe several, of those projects will be very useful.

Now, orchestration: how do I run more than one container on more than one host? I made a kind of flowchart to help you pick the best orchestration mechanism. First, if you want to, or have to, use OpenStack, there are three projects you can use. Solum is something that will evolve into being like the container service for OpenStack, or the PaaS for OpenStack; it's still pretty new, but if that's what you want (i.e., specifically a containers API, or something that really looks like a PaaS), contribute to Solum, especially if you're a Python shop, because OpenStack is Python. If you have VMs and you're moving them to containers, you want to use Nova. If you're in neither of those two scenarios, use Heat. If you don't use OpenStack, then: are you looking for a PaaS? If yes, I have good news and bad news. Good, because there are many options to pick from; bad, because there are many options to pick from.

If you don't feel like rolling a die, you can see that I tried to pick one specific characteristic of each of those PaaSes. As you can see, actually more than half of them are written in Go, which one year ago would have sounded crazy (why are all those projects in Go?), but Go has kind of become the language of the PaaS space. One word about PaaS, to kind of poop the party: some people think that private PaaS is not really ready yet. There is a blog post that I recommend reading so you can make your own opinion; the gist is that there is a big difference between public and private PaaS. Public PaaS is stuff like Heroku, dotCloud, and so on and so forth, where people run the stuff for you and do the dirty job for you. Private PaaS means you run it yourself, in-house, and in that case you have to do the hard things, like scaling, HA, and all that stuff. So it sounds like you're helping your developers by giving them a PaaS, but your ops folks will have a lot of work to do to operate it.

OK, if you're not looking for a PaaS, just for something generic to run containers, the question is: how many machines do you want to run? If you only want to run one machine per application or environment (like one machine for dev, one machine for prod, one machine for this other application), then you can use fig; and then you should use fig, because it's probably the easiest solution for that.
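A fig.yml for the web-plus-database example from earlier might look like this (a sketch; the images and port are placeholders), and then "fig up" starts everything:

    web:
      build: .
      links:
        - db
      ports:
        - "8000:8000"
    db:
      image: postgres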
But fig currently handles only one machine. So while we are adding multi-machine superpowers to fig, you can look at tools like maestro-ng or Helios, which let you ramp up to a few tens or a few hundreds of machines. You can also use configuration management: the same way you deal with machines and applications in your Puppet manifests or Chef cookbooks or whatever, you can use configuration management to instruct your machines to boot containers. If you have lots and lots and lots of machines, so thousands, then you probably want to look at Mesos, because last week Mesos 0.20 came out, and it's Mesos with native support for Docker containers. Mesos is already in production at Airbnb and Twitter and a few other small companies like that, on thousands if not tens of thousands of machines, so it works great; the new part is not the scale, it's the fact that you can use Docker.

A word about libswarm. libswarm is a library, kind of a toolkit, for when you want to build distributed applications with Docker. There is someone here working on libswarm, so if you have questions about libswarm, he will be able to answer them later. But basically, the idea is that currently the Docker API deals with containers and images; when you want to talk about hosts, storage, networking, data centers, racks, and all those other concepts, we need something wider, broader, and that's libswarm.

A few words about performance. When you want metrics about your containers, you can use cgroups. Cgroups have been around for a very long time, they are very stable, they are one of the key building blocks of containers, and they give you extremely fine-grained stats about CPU usage and memory usage; actually, the memory stats from cgroups are better than what you would get without cgroups, without containers. One of the things cgroups don't tell you, though, is network usage. How to tweak performance? Good news: you don't have much to tweak, because everything is already pretty fast. CPU speed will be native; I/O speed will be native if you're using volumes; memory speed can be native if you disable memory accounting, so if your workload requires getting rid of that small memory overhead incurred by memory accounting, you can turn it off. And as I showed earlier, we added the docker run --net option: with the --net host mode, you can completely get rid of the network overhead.

Configuration management: there are many ways to integrate with configuration management, but we kind of don't recommend creating Docker images with configuration management. However, it's definitely OK to use configuration management to spin up containers.

Now let's quickly wrap up by talking about the boring stuff. Backups: the idea is to use volumes. Volumes are a way to share directories between multiple containers, so you can start your database container with /var/lib/mysql being a volume, and then share that volume with your backup container.
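In practice it can be as simple as this (the backup step here is a generic tar, and the container names are made up):

    # the database container exposes its data directory as a volume
    docker run -d --name db -v /var/lib/mysql mysql

    # the backup container mounts the same volume with --volumes-from
    docker run --rm --volumes-from db ubuntu \
      tar czf - /var/lib/mysql > db-backup.tgz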

The backup container can then scrub that directory and save it somewhere, without touching the database container. Logging, now: there are two ways to do logs, the legacy way and the new Docker way. The legacy way is when you just dump logs into a directory; in that case, just like for the backup thing, we share that directory with another container, and that other container picks up the logs and ships them to Logstash or Loggly or Splunk or whatever you use. The new way is when your application just writes to standard output; then you can use the logs API to manage those logs.
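When the app writes to stdout/stderr, Docker captures it, and you can stream it from the host (the container name is made up):

    # follow the output of a container
    docker logs -f mywebapp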
Remote access: well, there would be a lot to say here, but we're almost out of time, so I'll just recommend checking the blog post here, which is kind of "my life in containers without SSH". It started as "well, you don't really need to run an SSH daemon in containers" and evolved into "well, it's actually kind of complicated, but I like challenges, so let's see how I can solve that".

One word about moving containers around, which is something we didn't usually do with normal applications: it's not every day that sysadmins decide they need to move something from one place to another, and when they do, they buy stuff like VMware's vMotion or whatever. With containers there is an interesting story. If your container is stateless (it's a web server, it doesn't store anything locally, it relies on a database and some cloud object store), then you just arrange for the container image to be on the new host, which is easy thanks to the Docker Hub or your on-prem registry; then you start the new container and you switch traffic on your load balancer so that it goes to the new machine. If the container is stateful (if it's a database, or something that has local files and changes them), then you want to use Flocker, which is made by the ClusterHQ folks; if you have questions about that, ask them after the talk. Flocker basically uses ZFS so that you can move not only a container but also its volumes from one machine to another, and it also takes care of network plumbing: if it's a database, you connect to one host and it nicely redirects traffic, so that when you move the database, the traffic continues to flow. Nice. That's it; we might have like 30 seconds for questions, something like that. Thanks!

[Audience] We discovered that we want not just CPU shares but more isolation: for example, limiting things with ulimit, and using CPU throttling instead of CPU shares for production usage, so we get more isolation and guarantees. Are there plans to increase the isolation levels in Docker? CPU and memory are just the least common denominator, I would say.

So, plans to increase isolation, to have even more controls?

[Audience] Yeah, more controls. There are many things, also network, for example, to restrict outgoing traffic.

Absolutely. Currently it's super easy to limit memory and CPU, but down the road we also want to add ways to limit I/O and network. It's not really hard; it's just that it wasn't a life-or-death situation for anyone yet. When it becomes mission-critical, we expect that some people will contribute it, and we will be able to review and merge it.

[Audience] OK, thanks. IP version 6: when does it come?

IPv6? Sorry; oh, I believe in it, yes: as soon as someone contributes it! A short word about that, because many people ask us about the roadmap and everything. As you might know, the core team at Docker Inc. is like half a dozen people, and even though they are all mutants with superpowers (they don't sleep, they have 36-hour days and everything), there is only a limited amount of code they can write and review. So now, when people ask for a feature, we say "contribute it!"; sometimes people react badly to that, and then we cry a lot. But the thing is, we are progressively moving to add the enterprise features, like provenance and image signing: stuff that might not be super exciting for the broad community, but which is useful to make sure that we can actually live off our customers and not off our investors. It means there are lots of features that we would be extremely happy to see, but that come with a slightly lower priority. It doesn't mean that we don't want those features or that we don't care about them; it means that we will implement them later.

But you can make "later" be tomorrow if you help by contributing. More than 90 percent of the contributors are now outside of Docker Inc.; if you look at the number of lines of code, I think more than half of the code is written outside; and even among the maintainers, maybe one quarter are not working for Docker Inc., they are at Google and other companies. So don't hesitate to contribute, because that's the easiest way to get a feature in. And if you absolutely need some feature, but the evil Docker salespersons tell you it's scheduled for, like, Q4 2018, then we can also tell you who you can contract with and who you can bring on board to help with that. All right, thank you very much!