Articles tagged in openvz

  1. Future of OpenVZ. An article asking whether there is a future for OpenVZ, which starts with an unavoidable comparison with Xen. I am not sure why they need to be compared at all -- paravirtualisation and container-based virtualisation are two completely different technologies, and both have their pros and cons. On one hand, if you need a hypervisor and want to run a full-blown OS, then you can't do that with OpenVZ. On the other hand, if you want burstable memory allocation, a shared filesystem, and to really utilise your hardware, then Xen is not the best choice. Simple.

  2. Setting up OpenVZ Node at Work

    Sorry I have not been blogging over the past few days. Even Vivian can tell that my blog has somehow been deserted! (Yes, she checked for all the gossip.) I have been quite busy at work, felt tired when at home, and was too mentally blocked to put things down in blog entries.

    I was having "fun" today at work setting up a new Dell PowerEdge SC2850 to be our new development server. It's quite a beefy machine: 2x dual-core Xeons with hyper-threading on each core, so /proc/cpuinfo on Linux shows a total of 8 3.6GHz 64-bit CPUs! Not to mention a big array of 15,000RPM RAID disks and lots of RAM.

    It is going to run Gentoo Linux, just like all our other development boxes. However, instead of running one single OS, we thought we would give virtualisation a try, and get OpenVZ to work.

    OpenVZ is operating system level virtualization, and is different from virtual machine or paravirtualization technologies like Xen and VMWare. You are basically running a jailed environment on the same Linux kernel, so you can be "root" of your own environment. Moreover, OpenVZ provides many parameters that allow you to fine-tune each virtual environment with different memory allocation, CPU priority, disk quota, etc.
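    As a rough sketch of what that tuning looks like (the VE ID, template name, and values below are made up for illustration, not our actual config), the vzctl tool sets these parameters per virtual environment:

    ```shell
    # Create a VE from an OS template (template name is an example),
    # then tune its resources. All values here are illustrative.
    vzctl create 101 --ostemplate gentoo

    # Memory: barrier:limit in 4KB pages -- the gap between the two
    # numbers is the burstable allocation.
    vzctl set 101 --privvmpages 65536:131072 --save

    # CPU priority relative to the other VEs on the node.
    vzctl set 101 --cpuunits 1000 --save

    # Disk quota: soft:hard limits.
    vzctl set 101 --diskspace 10G:11G --save

    # Boot the VE and get a root shell inside it.
    vzctl start 101
    vzctl enter 101
    ```

    The --save flag writes the setting to the VE's config file so it survives a restart; without it the change only applies to the running VE.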

    We are hoping to use this technology to divide up that beefy box for different services. We can then upgrade different VEs without worrying about messing up other stable services running on the same piece of hardware.

    Just managed to get DHCP to work from inside the VEs, via a bridge between the virtual ethernet devices and the box's second ethernet card. More to do on Monday...
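    For the record, the bridged setup looks roughly like this (the VE ID, bridge name, and interface names are illustrative, not our exact config):

    ```shell
    # On the host node: give VE 101 a veth interface; OpenVZ creates
    # the host-side device veth101.0 automatically.
    vzctl set 101 --netif_add eth0 --save

    # Bridge the host-side veth device with the second physical NIC,
    # so the VE's DHCP broadcasts can reach the real LAN.
    brctl addbr vzbr0
    brctl addif vzbr0 eth1        # the box's second ethernet card
    brctl addif vzbr0 veth101.0   # VE 101's host-side veth device
    ip link set vzbr0 up

    # Inside the VE, eth0 can then be configured by DHCP as usual,
    # e.g. by running a DHCP client such as dhcpcd against eth0.
    ```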

    Yeah. I have been playing with quite a bit of virtualization these days. Virtual PC on a PowerPC-based iBook. VMWare Player/Server on my Pentium M notebook. Xen hosting at unixshell. OpenVZ hosting at VPSLink. Yes. Hardware is arbitrary, and virtualization really makes management and server utilisation much easier.

    A comment like this on Slashdot really makes me sigh -- when I look at the arrays of rack servers we have deployed for our clients at work. When it comes down to hosting, the law of large numbers rules, and I believe virtualization plays an important role in making it feasible. Instead of treating each physical server as a discrete unit, you can treat the entire server farm as one big unit. With SAN and virtualization, you can migrate CPU resources to the VM that needs them the most (actually it is the other way around, but "resources go to the needing node" sounds better). You can probably also reduce the amount of physical hardware deployed -- in our case more than half of the servers are sitting half idle while the others are working hard. Load could be much better distributed if app servers sitting inside their own VEs could be migrated on the fly...

    Back to OpenVZ. It is an interesting technology. Not as "polished" as VMWare, and it might not be useful if you need to run a different OS or a different kernel in each of the VEs. However, it suits us well, and it gives us virtualization with very minimal overhead.

    I should be blogging more about it in the future.