Saturday, November 13, 2010

Netboot Server with Gentoo and AUFS

Abstract

This Howto describes the installation of a gentoo server for a netboot system that uses aufs for per-user write layers, logical volume management (lvm), software raid (mdadm) and ubuntu as the guest os. I assume you start with nothing at all and have to install the server os first.

BOOT FROM CD

There are a number of different ways to install gentoo. Here we do it from scratch, as that will hopefully give you some understanding of what you are doing and how this system works. Fetch the systemrescuecd from the link supplied below, burn it, put it into your drive and boot from it. You may also use your favorite installation disk, provided it includes lvm and mdadm. http://www.sysresccd.org/Download

SETUP DISK(s)

Partitions

We create a software raid using mdadm. Assuming we have two physical disks, we create two partitions on each. The first partition will be very small and is only needed for the /boot folder. Grub supports only version 1.0 of the mdadm metadata, which is why we use --metadata=1.0. We also use raid1 here, because grub does not support raid 10. The second partition comprises the rest of the available disk space and can, for example, be of raid type 10, so that at any time one disk may fail and we can still recover our data. A raid 10 with two disks thus behaves like a raid 1. On top of this raid 10 we set up the logical volume manager and use logical volumes for our data, which keeps us flexible with space distribution in case we add disks in the future. The layout of the logical volumes below is only a proposition that has proven practical. You may want to create a different setup, but you will want at least one extra partition for the guest os. A sketch of how to create the partitions follows after the layout.
/dev/sda:
-/dev/sda1, set boot flag, >= 200mb (this will be the boot partition)
-/dev/sda2 = rest (this will be our gentoo server and client system)
/dev/sdb: create EXACTLY the same layout as for /dev/sda
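
If you have not partitioned disks by hand before, one way to create this layout (a sketch, double-check the device names before writing any partition table):
fdisk /dev/sda # create sda1 (boot flag, >= 200mb) and sda2 (rest) interactively
sfdisk -d /dev/sda | sfdisk /dev/sdb # copy the exact layout to the second disk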

Software Raid

create your raid devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 --metadata=1.0
mdadm --create /dev/md1 --level=10 --raid-devices=2 /dev/sda2 /dev/sdb2
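
You can check the arrays and watch the initial sync with the usual mdadm tools:
cat /proc/mdstat # shows the sync progress of md0 and md1
mdadm --detail /dev/md1 # verify level, member devices and state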

LVM

create lvm for "server" and "client" disk
vgcreate system /dev/md1
lvcreate -n server-root -L 20G system
lvcreate -n server-swap -L 4G system # this should be twice your ram size
lvcreate -n client-boot -L 200M system
lvcreate -n client-root -L 50G system # no need to save space here
lvcreate -n client-home -L 500G system # as much as you need
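
Depending on your lvm2 version you may have to initialize the physical volume explicitly before vgcreate; the listing commands are also handy to verify the result (a sketch):
pvcreate /dev/md1 # run this before vgcreate if it complains about a missing physical volume
pvs; vgs; lvs # verify physical volume, volume group and logical volumes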

Filesystems

mkfs.ext2 /dev/md0
mkfs.ext4 /dev/system/server-root
mkswap /dev/system/server-swap
mkfs.ext2 /dev/system/client-boot
mkfs.ext4 /dev/system/client-root
mkfs.ext4 /dev/system/client-home

Mount

mkdir /mnt/gentoo
mount /dev/system/server-root /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md0 /mnt/gentoo/boot
mkdir /mnt/gentoo/dev
mkdir /mnt/gentoo/proc
mount -t proc none /mnt/gentoo/proc

GENTOO INSTALLATION

We set up a basic gentoo installation. Nothing special here. Just follow the instructions and you’ll be fine. If you’re not familiar with gentoo, you can get a pretty good idea from the official gentoo howtos at the link below.
follow the instructions from the official handbook to obtain a stage and a portage snapshot and unpack them: http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=1&chap=5
cp -L /etc/resolv.conf /mnt/gentoo/etc/
mount --bind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
env-update; source /etc/profile
cp /usr/share/zoneinfo/Europe/Berlin /etc/localtime
nano /etc/locale.gen
en_US ISO-8859-1
en_US.UTF-8 UTF-8
de_DE ISO-8859-1
de_DE@euro ISO-8859-1
locale-gen
emerge --sync

INSTALL SERVER

Here we begin customization. Aside from standard tools, we install tftp-hpa, which basically is the netboot service, plus mdadm and lvm. After the installation we add the services to the default runlevel. We continue by creating the directories in which our guest operating system, ubuntu, will be installed and set up /etc/fstab accordingly. Feel free to choose different paths if you like. The installation of nfs-utils should be clear. In /etc/hosts.allow we define which machines are allowed to boot via the network. You will probably have a different setup here; use ip ranges according to your needs. /etc/hosts.deny is consulted only after hosts.allow, so you might want to deny everything else there. Then we set the hostname and the path in which tftpd will look for a kernel to boot via the network; this is where we will put the guest os /boot dir. The setup of mdadm follows: here we specify which disks go into which raid array. We use genkernel to create a kernel, ramdisk and modules for our purpose, then patch the kernel with aufs and set the module to autoload. This is necessary, since aufs is not an official part of the kernel. After installing grub, we're good to reboot.
eselect profile set 7
passwd # set a secure server password, for example 7531
emerge -av =gentoo-sources-2.6.34-r1
emerge -av sysklogd vixie-cron ssmtp ntp eix htop dhcpcd openssh tftp-hpa mdadm grub genkernel
ACCEPT_KEYWORDS="~x86" emerge -av =sys-fs/lvm2-2.02.72
rc-update add sysklogd default; rc-update add vixie-cron default; rc-update add sshd default; rc-update add ntpd default; rc-update add ntp-client default
nano /etc/conf.d/net

config_eth0=( "dhcp" )
mkdir -p /tftpboot/static/root
mkdir -p /tftpboot/static/home
mkdir -p /tftpboot/static/boot
nano /etc/fstab

/dev/md0 /boot ext2 defaults 0 0
/dev/system/server-root / ext4 defaults 0 1
/dev/system/server-swap none swap sw 0 0
/dev/system/client-boot /tftpboot/static/boot ext2 defaults 0 0
/dev/system/client-root /tftpboot/static/root ext4 defaults 0 0
/dev/system/client-home /tftpboot/static/home ext4 defaults 0 0
none /proc proc defaults 0 0
USE="selinux nonfsv4 tcpd" emerge -av nfs-utils
rc-update add nfs default
nano /etc/hosts.allow

ALL: 10.11.0.0/16
ALL: 10.10.0.0/16
ALL: 10.20.0.0/16
nano /etc/hosts.deny

ALL: ALL
nano /etc/conf.d/hostname

HOSTNAME="moros"
nano /etc/conf.d/in.tftpd

INTFTPD_PATH="/tftpboot/static/boot"
rc-update add in.tftpd default
nano /etc/mdadm.conf

DEVICE /dev/sda*
DEVICE /dev/sdb*
ARRAY /dev/md0 metadata=1.0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 metadata=1.1 devices=/dev/sda2,/dev/sdb2
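
If you prefer not to write the ARRAY lines by hand, mdadm can generate them from the running arrays; review the appended lines afterwards:
mdadm --detail --scan >> /etc/mdadm.conf
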
genkernel --install --menuconfig --lvm --mdadm all

ACCEPT_KEYWORDS="~x86" USE="nfs kernel-patch" emerge aufs2
nano /etc/modules.autoload.d/kernel-2.6

aufs
nano /boot/grub/grub.conf

default 0
timeout 30
title Gentoo Linux 2.6.34-r1
root (hd0,0)
kernel /boot/kernel-genkernel-x86-2.6.34-gentoo-r1 root=/dev/ram0 real_root=/dev/system/server-root domdadm dolvm
initrd /boot/initramfs-genkernel-x86-2.6.34-gentoo-r1
grub # starting the grub shell might take some time (about 7 min)
grub> device (hd0) /dev/sda # /dev/hda for ide
grub> root (hd0,0)
grub> setup (hd0) # this might take some time
grub> device (hd1) /dev/sdb # /dev/hdb for ide
grub> root (hd1,0)
grub> setup (hd1) # this might take some time
grub> quit
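
As a shortcut, grub-legacy's grub-install may achieve the same result, but it does not always cope with a raid1 /boot, which is why the manual shell method above is the safe route (a sketch, assuming /boot is mounted):
grub-install --no-floppy /dev/sda
grub-install --no-floppy /dev/sdb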

———————–

DHCP INSTALLATION / SETUP

Booting from the network requires the client to send broadcasts and listen for a response from a dhcp server that, roughly speaking, tells it which kernel to boot, so we need to either install a new dhcp server or add a few lines to an existing one. The critical lines here are "next-server", which is the ip address of your netboot server, and "filename"; just leave the filename as displayed, you will understand shortly. There are several possibilities to tell a client which root path to use, i.e. which nfs export to mount as /. Since we use aufs to give every client the chance to customize the system in its own way while still sharing a common base, we use a different root path for each machine. Each of those root paths consists of a static layer that comprises all the shared data and a writable layer in which per-client data is saved. The creation of these aufs filesystems follows later. Use the information below to find the dhcp setup that suits you best.

emerge dhcp
nano /etc/conf.d/dhcp

INTERFACES="eth0"
nano /etc/dhcp3/dhcpd.conf

subnet 192.168.5.0 netmask 255.255.255.0 {
  range 192.168.5.100 192.168.5.254;
  option domain-name-servers 10.7.0.1;
  option routers 192.168.1.253;
  option broadcast-address 192.168.5.255;
  default-lease-time 600;
  max-lease-time 7200;
  next-server 192.168.5.1;

  ## for each host
  host 192.168.5.100 {
    hardware ethernet 00:25:64:8e:16:c4;
    fixed-address 192.168.5.100;
    filename "pxelinux.0";
    # this is perhaps the most sophisticated method to get your root fs mounted; see another possibility below
    option root-path "/tftpboot/dynamic/10.7.0.<ip>";
  }
}
————————

SETUP PXELINUX

Think of pxelinux as a kind of network grub. We emerge syslinux but are interested only in one file: pxelinux.0. This is the file you specified via "filename" in dhcpd.conf. It uses pxelinux.cfg and boot.txt, which we will create shortly, to display a menu with options on which kernel to boot. We define two options. The default is ubuntu and it is loaded after 3 seconds. The other, admin, has proven helpful if you want to install new programs: being the admin, you don't want to install them in a per-user layer, but in the shared base. To go into admin mode, hit a key early at network boot, type "admin" and hit enter.
mkdir -p /tftpboot/static/boot/pxelinux.cfg
emerge syslinux
cp /usr/share/syslinux/pxelinux.0 /tftpboot/static/boot
nano /tftpboot/static/boot/pxelinux.cfg/default

DISPLAY boot.txt
DEFAULT ubuntu

LABEL ubuntu
kernel /vmlinuz
append initrd=initrd.img rw root=/dev/nfs ip=dhcp

LABEL admin
kernel /vmlinuz
append initrd=initrd.img rw root=/dev/nfs nfsroot=10.11.2.2:/tftpboot/static/root ip=dhcp --

PROMPT 1
TIMEOUT 30
nano /tftpboot/static/boot/boot.txt

- Boot Menu -
=============
ubuntu
admin
#boot admin for refsys administration

rm /etc/udev/rules.d/70-persistent-net.rules
exit; reboot # now boot from the hard disk

INSTALL CLIENT SYSTEM AND CREATE NETBOOT KERNEL AND RAMDISK

Now you have to fetch a working ubuntu installation and stuff it into your shared nfs folder. There are several possibilities to get this done. I do it by installing a normal ubuntu into a virtual machine, then using tar to create an archive containing the whole filesystem, copying it to the server and unpacking it there. The command is: $tar -cpP --absolute-names -f stage-ubuntu.tar /
copy stage-ubuntu.tar to server
on the server in the nfsroot: #tar -xpvf stage-ubuntu.tar
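
If you would rather skip the virtual machine, debootstrap can bootstrap ubuntu directly into the nfs root. A minimal sketch, assuming the lucid release, an i386 target and the paths used in this howto (the gentoo package name is from memory, check with eix):
emerge -av dev-util/debootstrap
debootstrap --arch i386 lucid /tftpboot/static/root http://archive.ubuntu.com/ubuntu
chroot /tftpboot/static/root /bin/bash # then configure, install packages and build the ramdisk as below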

Instead of the virtual machine, you could also run debootstrap and chroot, as sketched above. Configure your system as you wish, install any packages, do some customization, whatever pleases you. When you're done, we have to create a netboot ramdisk. Ubuntu comes with a nice tool to help us create it. The ubuntu kernel and the created ramdisk will then be stored on the netboot server. We also rearrange the filesystem on the server a bit, since initially we created an extra partition for /home of the guest os. Finally we configure /etc/fstab and our network interfaces.
$cp /boot/vmlinuz-`uname -r` /root/vmlinuz
nano /etc/initramfs-tools/initramfs.conf

MODULES=netboot
BOOT=nfs
$mkinitramfs -o /root/initrd.img
remember to set initramfs.conf back to

MODULES=most
BOOT=local
when finished, run the tar command as explained above: #tar -cpP --absolute-names -f stage-ubuntu.tar /
copy your stage to your nfsroot /tftpboot/static/root and unpack, using:

$tar -xpvf stage-ubuntu.tar
$mv /tftpboot/static/root/home/* /tftpboot/static/home
$mv /tftpboot/static/root/root/* /tftpboot/static/boot
cp /etc/resolv.conf /tftpboot/static/root/etc
nano /tftpboot/static/root/etc/fstab

/dev/nfs / nfs rsize=8192,wsize=8192,noatime,async 0 0
192.168.5.1:/tftpboot/static/home/ /home nfs rsize=8192,wsize=8192,noatime,async 0 0
none /proc proc nodev,noexec,nosuid 0 0
none /tmp tmpfs defaults 0 0
nano /tftpboot/static/root/etc/network/interfaces

auto lo
iface lo inet loopback
#auto eth0
iface eth0 inet manual # this is important, otherwise the system won't boot

SETUP FOLDERS AND MOUNTS AND EXPORTS ON THE SERVER

I provide a little script here that you can use as an idea of how to set up your per-client root paths. We create a directory tree for every client machine that contains the folders "root" and "tmpfs", then call aufs to:
set /tftpboot/static/root as the read-only layer (we discussed this)
set /tftpboot/dynamic/<ip>/tmpfs as the write layer
and show it on /tftpboot/dynamic/<ip>/root
We then declare /tftpboot/dynamic/<ip>/root as an nfs export
—–snip——
#!/bin/bash

#get param
IP=$1

#create dirs
mkdir -p /tftpboot/dynamic/10.11.4.$IP/root
mkdir -p /tftpboot/dynamic/10.11.4.$IP/tmpfs

#aufs
mount -t aufs -o br=/tftpboot/dynamic/10.11.4.$IP/tmpfs=rw:/tftpboot/static/root=ro none /tftpboot/dynamic/10.11.4.$IP/root

echo "/tftpboot/dynamic/10.11.4.$IP/root 10.11.0.0/16(rw,async,fsid=$IP,no_subtree_check,no_root_squash,no_all_squash,no_acl)" >> /etc/exports

exportfs -r
——snap—–
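
A hypothetical usage example, assuming the script is saved as /root/make-client.sh and the client should get the address 10.11.4.100:
bash /root/make-client.sh 100 # creates /tftpboot/dynamic/10.11.4.100/{root,tmpfs}, mounts the aufs and exports it
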
That done, we delete /tftpboot/static/root/etc/udev/rules.d/70-persistent-net.rules
$sudo rm /tftpboot/static/root/etc/udev/rules.d/70-persistent-net.rules
That’s because this file contains a static binding of a network adapter that can be problematic, given the fact that many clients with different network adapters will boot this system. Restart the services and it’s done.

$/etc/init.d/nfs restart
$/etc/init.d/in.tftpd restart

RUN A NETBOOT CLIENT AND ENJOY

Remember, for administration, you probably want to make use of the admin mode we named earlier.

Author: Fabian Schütz

Debian from scratch on lvm2 and software raid

Debian usually comes with a great installer that lets you use menu-based configuration tools to set up many useful features, among them lvm2 and software raid using mdadm. But if you install Debian from scratch using debootstrap, you have to set up these features yourself, and if you want the root partition on lvm and raid, you need to consider a few things so your system will be able to boot.

Debian does not use any custom "boot flags" as Gentoo does, where you specify "dolvm" and "domdadm" as kernel parameters in the grub configuration, but it offers a tool to create a ramdisk suited for the job. $update-initramfs can be called from the command line, but first some settings need to be made. update-initramfs reads /etc/mdadm/mdadm.conf to retrieve the configuration needed to assemble the raid arrays. Basically you want to have something like this

———snip———-
ARRAY /dev/md0 metadata=1.0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 metadata=1.1 devices=/dev/sda2,/dev/sdb2
———snap———-
in your mdadm.conf.

Furthermore, update-initramfs will look into /etc/fstab and /boot/grub/menu.lst to gather information about the root device/partition. I fiddled a bit here, but in the end it seemed that devices whose paths contain "mapper" are identified as logical volumes, thus enabling lvm on boot. I tried
——————–
menu.lst
kernel /vmlinuz-xxx root=/dev/system/root ro quiet
——————–
fstab
/dev/system/root / ext3 defaults 0 1
——————–
first, but that didn't work. So I put it this way
——————–
menu.lst
kernel /vmlinuz-xxx root=/dev/mapper/system-root ro quiet
——————–
fstab
/dev/mapper/system-root / ext3 defaults 0 1
——————–
and my system would boot. With "system" being my volume group, you basically need a path of this scheme: /dev/mapper/[volume-group]-[volume] in both /etc/fstab and /boot/grub/menu.lst.
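
If you are unsure about the exact names, the device mapper lists them directly:
ls -l /dev/mapper/ # e.g. system-root, one entry per logical volume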

That done, run $update-initramfs -u if you want to update an existing ramdisk, or create a new one using $update-initramfs -c -k <version>. The version label is only a name and can entirely be made up.
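
Putting the Debian steps together, a minimal sketch (run inside the installed system; in a chroot, pass the target kernel version to -k instead of $(uname -r)):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf # or write the ARRAY lines by hand, as above
nano /etc/fstab /boot/grub/menu.lst # point root at /dev/mapper/system-root in both files
update-initramfs -c -k $(uname -r) # create a ramdisk for the installed kernel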

Author: Fabian Schütz