5107R ISO's and OS templates
What's this smell? Something's cookin'? :o)
5107R ISO images and OS templates are ready.
During this week I took a break from my "other project" (the house building) due to problems with my right shoulder and instead continued to work full bore on the 5107R transition.
With most of the ducks already lined up, it was time to build both an ISO image for testing and preferably also an OS template for Aventurine and OpenVZ. At the time of writing this article both goals have been achieved.
5107R ISO image:
As mentioned in an earlier news post, I used a Scientific Linux 6.0 VPS as build environment and on it I used Revisor to roll up an ISO. The process itself is so underwhelmingly simple that I immediately binned our old ISO building mechanism and integrated Revisor into the set of makefiles that we use to build, sign and publish our ISO images. On the shell the main menu of our cd-builder now looks like this:
+--------------------------------------+
BlueOnyx CD Generator
version 5107R-SL-6.0-20110715-Beta-05
+--------------------------------------+
Options:
version - Modify Version
spec - Modify specfile for custom install script
installer - Make RPM for custom install script
revisor - Run revisor to roll up new tree
import - Import the Revisor tree
kick - Modify kickstart.cfg
readme - Modify readme on the CD
backup - Back up the entire CD
iso - Create a .ISO image
clean - Deletes ISO & Backup Image
comps - Modify the "comps.xml" file
store - Publish ISO to mirror
The steps involved in building a new CD are these:
make version
This simply updates the version number of the CD.
make spec; make installer
This creates an updated blueonyx-cd-installer RPM and pushes it to a local YUM repository.
make revisor
This fires up revisor to pull updated RPMs from the Scientific Linux 6 YUM repositories and the BlueOnyx YUM repositories. When all is said and done, this leaves a directory on the server which could be burned directly to a CD and used as an installer. However, we want to modify the ISO a bit, so we just take parts of this generated tree in the next step.
make import
This fires up a script that dives into the output directory where revisor left the CD data, takes just a few files and folders and copies them into our usual CD building directory, where we already have our own kickstart script and the BlueOnyx splash image that's shown during CD startup.
make kick; make readme
These allow us to update the kickstart scripts on the CD and also the Readme file on the CD.
make backup
Allows backing up old ISO images and old work directories for archiving.
make iso
Fires up the cooker and builds the ISO image from the work directory.
make store
Signs the ISO and publishes it to the BlueOnyx mirror.
make comps
This is a leftover from the "old days". It allows editing the comps.xml file manually if need be.
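Put together, a full rebuild of the CD then simply means running those targets in order. Roughly like this (the exact invocations on our build box may of course differ a bit):
make version
make spec
make installer
make revisor
make import
make kick
make readme
make backup
make iso
make store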
During the week I built about a dozen BlueOnyx-5107R-SL-6.0-201107*-Beta-*.iso's and released them to the public mirror without really announcing them to anyone but the fellow developers. I didn't always increase the version number, as some fixes were minor. Most of the CDs (except the really early ones) installed fine after some bugs and issues in the kickstart scripts were worked out. I really hate how Anaconda sometimes changes the syntax for commands. So initially we had problems partitioning the disks, because "clearpart --all" didn't clear all existing partitions, "list-harddrives" suddenly listed not only harddrives but also the partitions on them, and "part <target> --size=0 --grow --ondisk=<disk> --asprimary" complained that a size had to be specified. "--size=0" no longer works (even with "--grow") and "--size=1 --grow" now has to be used.
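For reference, a partitioning stanza that keeps the new Anaconda happy now looks roughly like this. Mind you, the disk name, mount points and sizes here are just illustrative and not our actual kickstart:
clearpart --all --initlabel --drives=sda
part /boot --size=200 --ondisk=sda --asprimary
part swap --size=1024 --ondisk=sda --asprimary
part / --size=1 --grow --ondisk=sda --asprimary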
But after these minor issues were worked out, the ISO installs started to go through and it was time to check if really everything we needed was aboard. I spotted a few missing items along the road: two missing Perl modules, "dialog", which our post-install scripts need, and "mod_perl", which AdmServ needed but which initially wasn't present <doh!>. After that the next <doh!> came in the form of discovering that "xinetd", "telnet" and "telnet-server" were also missing and that it wouldn't hurt to have "logwatch", "lynx", "jwhois" and "tree" aboard as well.
After amending the dependency list, the next ISO's left the cooker in fine shape and initial tests confirmed that the ISO was indeed working just fine. Feedback from the beta testers was quite positive and some lingering small bugs and issues were found, which were fixed by publishing updated RPMs and building yet another ISO.
The current 5107R ISO is this: BlueOnyx-5107R-SL-6.0-20110715-Beta-05.iso and it can be found on the public mirrors in the usual directory where all the other ISO's are.
Status of the ISO:
Is it production ready? The answer to that is: "Depends."
Let's just say I still consider it a "Beta Release" and would feel a lot more comfortable if more people tested it out thoroughly. Our own tests are positive and all in all the 5107R Beta-5 is better rounded and works better than the initial 5106R ISO we went public with two and a half years ago.
There may still be some small bugs in the GUI and especially the Java implementation and perhaps our included mod_jk.so isn't compiled with all required options. But if you're willing to be an early adopter and can live with a bumpy initial ride, then there is nothing that speaks against using 5107R productively.
PKGs for 5107R:
This surely will get asked repeatedly. NO, you cannot use PKGs designed for 5106R on 5107R. They will not install, nor would it be wise to force their installation. Things will break (and you can keep both pieces!).
Just give the usual vendors some time to release 5107R versions of their PKGs. :o)
5107R OS template for Aventurine / OpenVZ
The problem:
Now THAT was a bumpy ride this time around and it kept me busy for the better part of three days. So far I had used a highly modified version of "vzpkgcache" to build CentOS4, CentOS5, Fedora Core, BlueOnyx and BlueQuartz OS templates. Unfortunately OpenVZ deprecated "vzpkgcache" a long time ago, and with a lot of tweaks I could still keep my customized version of it ticking along.
But not for building OS templates for SL6 or CentOS6. The problem is that more modern RHEL6 clones use Python-2.6 and RPM-4.8 instead of the Python-2.2/2.4 and RPM-4.4/4.3 that RHEL4 and RHEL5 clones used. This makes it impossible to trick YUM and RPM on a (say) CentOS5 box into downloading and installing SL6 or CentOS6 RPMs into an empty OpenVZ container for OS template building.
"vzpkgcache" used to work around this by providing modified "rpm" and "yum" binaries and/or by injecting YUM with a custom Python script. The separate "rpm" binary was even statically compiled to inlcude all dependencies. Getting this compatible with SL6 would be such a major effort, that it wasn't worth it.
There once used to be a free replacement for "vzpkgcache" named "vzpkg2". However, the download site for it had vanished, and although I managed to get hold of a copy in the meantime, it doesn't really do what I want it to do either.
What doesn't work:
The usually recommended method for building OS templates is sort of a mess, as Parallels doesn't release their in-house tools for that. Which I can understand, as supporting them would eat up a chunk of their available manpower. That recommended method boils down to creating a "live" VPS, using YUM to add/remove RPMs and to make all the desired mods, and THEN stopping it and packing it up as an OS template. It leaves a lot to be desired.
If we did that, then all the BlueOnyx OS templates that we ship would have the same SSL certificate for AdmServ, Dovecot and so on. Also, the GUI backend would already be pre-populated with some data that we'd rather not have in there yet. Sure, that can be fixed with some extra cleaning up. For doing it once that's fine, but not if you want to build new OS templates every so often.
So what I needed for OS template creation was a method similar to "vzpkgcache" or "vzpkg2" in that it would install the RPMs into a stopped, "empty" container and would then allow us to do some post-install fixes before packing it all up for shipping.
Initial attempts at a solution:
One of the key problems of OS template building is: you want to install the latest RPMs of everything you need. AND you need to know WHAT you need. So there must be some kind of dependency resolution. "vzpkgcache" and "vzpkg2" used YUM to solve both issues.
At first I took a slightly wrong route, which would have worked too, but was less comfortable: I already had Revisor handling the dependency resolution and the download of all RPMs required for an updated ISO image. So I could just take the "Packages" directory from the CD, remove some unwanted RPMs that we don't need in an OS template and add the extra "dummy" RPMs that the OS template needs to start.
From there on I would then use "rpm --root=<directory> -ivh *.rpm" on my SL6 build box to let RPM install all of those RPMs into a separate directory. The "--root" switch is designed for this and will also create a separate RPM database in the specified path.
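In shell terms that first approach looked roughly like this. The paths are just illustrative and not the actual build layout:
mkdir -p /vz/private/999
rpm --root=/vz/private/999 --initdb
rpm --root=/vz/private/999 -ivh /path/to/cd/Packages/*.rpm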
I got it working along those lines, but it was a really rough ride, as a lot of important RPM post-install scripts were failing.
The right solution:
Rickard Osser then pointed me towards the "--installroot" switch of YUM, which basically does the same as the "--root" switch of RPM: it allows downloading and installing RPMs into a separate directory, where it also sets up a separate RPM database.
With surprisingly little scripting effort we then set up a build environment which uses three YUM calls (two of them CHROOTED into the build directory) to download and install a minimal SL6 first, which is then populated with all the BlueOnyx extras.
At the end three post-install scripts run to do some cleanup and fixes and then the OS template is generated by packing up the file area of the empty container.
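Stripped of all the housekeeping, the core of it looks roughly like this. The paths, the group name and the package list are illustrative placeholders and not the exact contents of our build.sh:
# 1) install a minimal SL6 into the empty container area
yum -c extras/yum.conf --installroot=/vz/private/1000 -y groupinstall Core
# 2) and 3) chrooted into that area, pull in the BlueOnyx packages and updates
chroot /vz/private/1000 yum -y install <blueonyx-package-list>
chroot /vz/private/1000 yum -y update
# afterwards the post-install scripts run and the file area gets packed up
tar -czf blueonyx-5107R-i386-minimal.tar.gz -C /vz/private/1000 .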
All in all the scripting effort is really minimal compared to "vzpkgcache":
[root@devel6 build_ost]# tree
.
├── build.sh
├── extras
│   ├── admserv_ssl.sh
│   ├── finish_install.sh
│   ├── installer.sh
│   ├── RPMS.vz
│   │   ├── vzdev-1.0-7.swsoft.noarch.rpm
│   │   ├── vzdummy-init-fc13-1.0-1.noarch.rpm
│   │   └── vzdummy-kernel-el6-2.6.32-SOL1.i386.rpm
│   └── yum.conf
├── logs
├── os-template
│   └── openvz
└── yum-cache
So all that's now needed to build updated BlueOnyx 5107R OS templates is to run the "build.sh" script. Sure, our method has two known drawbacks:
- It can only build OS templates based on the same OS flavour that the "build.sh" script runs on. So you can't build an SL6 template on a CentOS5 box, or a CentOS5 template on a Fedora Core box. But we can live with that.
- Opposed to "vzpkg2" it cannot build Debian flavoured OS's. Which we don't need either.
So all things considered, our improvised and improved OS template building procedure does what we want it to do in the easiest and most comfortable fashion. Which makes me a really happy camper. Thanks again to Rickard for putting me on the right track and for providing the initial set of scripts!
What's still left to do:
I tested the new OS template a bit and it still has a few issues and imperfections, which I will need to work out over the next days. In my initial build Apache didn't start by itself the first time around, but on container restart it came up fine, or when started manually. Additionally, during the web-based initial setup there seems to be a small hiccup near the end, where it hangs on a blank page for a few seconds before it continues.
Previously the OS templates for BlueOnyx 5106R used special constructors to deal with some OpenVZ related network issues. These are no longer necessary and the code of base-network.mod in BlueOnyx 5107R has been adjusted accordingly. Whether this all works as desired remains to be seen, so further changes may be necessary sometime down the road.
Where to get the OS template:
For now (this will change later on) you can get the 5107R Beta OS templates here. Just grab them and put the tarball (without unpacking it!) into /vz/template/cache/ on your Aventurine or OpenVZ master node. On Aventurine you need to restart CCEd afterwards to be able to see the new template in the GUI ("/etc/init.d/cced.init restart").
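On the node itself that boils down to something like this. The tarball name is just a placeholder for whatever the downloaded file is called:
cp BlueOnyx-5107R-OS-Template.tar.gz /vz/template/cache/    # do NOT unpack the tarball
/etc/init.d/cced.init restart    # Aventurine only: lets the GUI see the new template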