VMware Integrated Containers (VIC) vs Boot2Docker

I was able to download a copy of the recent VIC bits and I have to say that I am not impressed. I can see where some of it makes sense, but I don't understand the logic of creating a whole lot of VMs that each run a single container. I can see maybe consolidating all the containers in one "app" (Apache, MySQL, other components) into one VM, but I don't see what running one container per VM really does to boost containers on vSphere. My guess is that it is a way for VMware to stay in the game, so to speak.

Update: VMware answered this question, stating that the design is meant to keep workloads segregated and to help with monitoring, since the container would be the only workload on the VM. There is no easy answer to monitoring a container today.
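For what it's worth, about the only thing built into Docker itself right now is docker stats, which just streams raw resource counters per container. A quick sketch (the container names here are made up):

```sh
# Stream live CPU, memory, network, and block IO counters for
# the named containers. This is all you get out of the box --
# nothing about what the app inside is actually doing.
docker stats web-1 db-1
```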

I was working inside VMware Workstation the other day and figured out that I can deploy from Docker to Workstation using a Docker plugin. That's how I discovered the Boot2Docker ISO, which is great because it is so easy to use. First you grab the Boot2Docker ISO and put it on an ISO datastore that is shared to all hosts. Then you boot the VM from an Ubuntu ISO and format the VM's disk with EXT3 and a label of "docker-data" (I will double-check this label). Next you set the VM to always boot from the datastore ISO, and you are done; a rough sketch of the disk prep and first deployment is below.

When you need to upgrade, you just mount the upgraded ISO image, since Boot2Docker pulls all of its configuration from that VM disk. Then you can deploy as many containers as you need, with less IO overhead than VIC. I will grant that it is not as easy to manage, but that is a small trade-off in my mind.
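Here is roughly what that looks like from the shell. This is a sketch, not a recipe: the device name (/dev/sda), the VM's IP, and the partition label are assumptions, and as noted above the label needs to match whatever the Boot2Docker init scripts actually scan for, so verify it before relying on it.

```sh
# --- From the Ubuntu live ISO booted inside the VM ---

# Create a single partition spanning the blank VM disk
# (assumes the disk shows up as /dev/sda).
sudo parted -s /dev/sda mklabel msdos mkpart primary ext3 1MiB 100%

# Format it EXT3 with the label Boot2Docker will look for
# when it hunts for its persistent data partition. The post
# says "docker-data"; double-check this against the
# Boot2Docker docs for your release.
sudo mkfs.ext3 -L docker-data /dev/sda1

# --- After rebooting the VM from the Boot2Docker ISO ---

# Point a Docker client at the VM and deploy containers.
# The IP is hypothetical; this assumes the daemon listens
# unauthenticated on 2375 (newer Boot2Docker releases
# default to TLS on 2376 instead).
docker -H tcp://192.168.1.50:2375 run -d --name web nginx
docker -H tcp://192.168.1.50:2375 ps
```

Because everything that matters lives on that labeled data partition, the upgrade path really is just swapping a newer Boot2Docker ISO into the VM's CD drive and rebooting.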

I am sure VIC will come into its own, but to me it's not ready for prime time. I am playing with Pivotal Cloud Foundry and will report back on that next.
