More PXE development

Following in the previous post’s footsteps, there has been quite a lot of progress with the PXE setup.

First of all I copied the menu configuration from the Fedora 22 Live image and started to build on top of it:

[screenshot: snap973]

[screenshot: snap975]

I am not happy with the colors, and the background is also missing, so I will most likely fix those later.

Here is Fedora Workstation 22 installing:

[screenshot: snap972]

All from local server, no internet required.

And if someone wants to set this up themselves, here is my default isolinux configuration:

default vesamenu.c32
timeout 0
menu background 

menu clear
menu title Select Bootable
menu vshift 8
menu rows 18
menu margin 8
#menu hidden
menu helpmsgrow 15
menu tabmsgrow 13

menu color border * #00000000 #00000000 none
menu color sel 0 #ffffffff #00000000 none
menu color title 0 #ff7ba3d0 #00000000 none
menu color tabmsg 0 #ff3a6496 #00000000 none
menu color unsel 0 #84b8ffff #00000000 none
menu color hotsel 0 #84b8ffff #00000000 none
menu color hotkey 0 #ffffffff #00000000 none
menu color help 0 #ffffffff #00000000 none
menu color scrollbar 0 #ffffffff #ff355594 none
menu color timeout 0 #ffffffff #00000000 none
menu color timeout_msg 0 #ffffffff #00000000 none
menu color cmdmark 0 #84b8ffff #00000000 none
menu color cmdline 0 #ffffffff #00000000 none

menu tabmsg Press Tab for full configuration options on menu items.
menu separator
menu separator

label linux0
  menu label ^Start Fedora Live
  kernel systems/fedora_22_workstation_live/boot/vmlinuz
  append ro rd.live.image quiet rhgb rd.luks=0 rd.md=0 rd.dm=0 rootflags=loop ip=dhcp initrd=systems/fedora_22_workstation_live/boot/initrd.img root=live:http://192.168.1.1/systems/fedora_22_workstation_live/root/LiveOS/squashfs.img
  menu default
label linux1
  menu label ^Start Fedora Install
  kernel systems/fedora_22_workstation_install/boot/vmlinuz
  append ip=dhcp initrd=systems/fedora_22_workstation_install/boot/initrd.img inst.stage2=http://192.168.1.1/systems/fedora_22_workstation_install/root inst.repo=http://192.168.1.1/systems/fedora_22_workstation_install/root
menu separator
menu begin ^Diagnostics
  menu title Diagnostics Tools
label tool0
  menu label Ultimate Boot CD
  kernel menu.c32
  append ubcd/menus/syslinux/main.cfg
  text help
      A collection of diagnostic tools for PCs
  endtext
label tool1
   menu label System Rescue CD 32bit
   kernel systems/systemrescuecd/root/isolinux/rescue32
   append initrd=systems/systemrescuecd/root/isolinux/initram.igz netboot=http://192.168.1.1/systems/systemrescuecd/root/sysrcd.dat
   text help
      Lightweight graphical rescue tool, 32-bit version
   endtext
label tool2
   menu label System Rescue CD 64bit
   kernel systems/systemrescuecd/root/isolinux/rescue64
   append initrd=systems/systemrescuecd/root/isolinux/initram.igz netboot=http://192.168.1.1/systems/systemrescuecd/root/sysrcd.dat
   text help
      Lightweight graphical rescue tool, 64-bit version
   endtext
menu separator
label local
  menu label Boot from ^local drive
  localboot 0xffff
menu separator
label returntomain
  menu label Return to ^main menu.
  menu exit
menu end

And the Fedora initrd comes from the netinst ISO. The ordinary ones do not have the dracut livenet module built in and cannot load the root image over the network like this.

All the rest is pretty much trial and error and figuring things out. A pretty straightforward process.

Some resources which may or may not be helpful:

  1. http://www.das-werkstatt.com/forum/werkstatt/viewtopic.php?f=24&t=2054
  2. https://unix.stackexchange.com/questions/186302/fedora-network-install-via-pxe-boot
  3. https://major.io/2013/11/03/speed-up-your-fedora-pxe-installations-by-hosting-the-stage2-installer-locally/
  4. https://lists.fedoraproject.org/pipermail/users/2013-July/437593.html
  5. https://bugzilla.redhat.com/show_bug.cgi?id=1154670

RHEL, CentOS, and all the other Fedora derivatives should follow the same procedure. Some of those may not include the needed dracut modules, and you might need to modify the initrd yourself. It is also apparently not packed as a single ordinary cpio archive, so unpacking it is still a bit of a mystery.

The install repository was copied with rsync from a local Fedora mirror:

rsync -4 -Pavr rsync://rsync.nic.funet.fi/ftp/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/22/Workstation/x86_64/os/ .

And keeping it updated via a simple script shouldn’t be a problem.
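Such a script could be as small as the following sketch (the destination path is an assumption; adjust it to wherever the PXE tree actually lives). It is written to a file and only syntax-checked here, since the real run hits the network:

```shell
# Sketch of a cron-able mirror sync script; drop it into /etc/cron.daily/.
cat > /tmp/sync-fedora-mirror.sh <<'EOF'
#!/bin/sh
MIRROR=rsync://rsync.nic.funet.fi/ftp/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/22/Workstation/x86_64/os/
DEST=/var/www/html/systems/fedora_22_workstation_install/root/
rsync -4 -Pavr --delete "$MIRROR" "$DEST"
EOF
chmod +x /tmp/sync-fedora-mirror.sh
sh -n /tmp/sync-fedora-mirror.sh && echo "syntax OK"
```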

CentOS 6 i386

Also added CentOS 6 installation to the menu.

[screenshot: snap976]

Everything seems to work.

[screenshot: snap980]

The CentOS 6 repository was once again cloned with rsync:

rsync -4 -Pav --delete rsync://rsync.nic.funet.fi/ftp/pub/mirrors/centos.org/6.7/os/i386/ .

The correct kernel and initrd are available under the images/pxeboot directory, so when those are used they automatically boot the latest system after each sync.
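For reference, the menu entry would look something like this (the directory names follow the layout of my Fedora entries above and are otherwise assumptions; CentOS 6’s loader takes the install tree via the method= parameter):

```
label linux2
  menu label ^Start CentOS 6 Install
  kernel systems/centos_6_i386_install/boot/vmlinuz
  append ip=dhcp initrd=systems/centos_6_i386_install/boot/initrd.img method=http://192.168.1.1/systems/centos_6_i386_install/root
```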

Anaconda

I had a problem with the local CentOS 6 repository, and because of that I branched into Anaconda installation scripting (kickstart), which is quite powerful: the installation can be fully automated.

But because it was possible to feed the CentOS 6 Anaconda stage2 and repo as boot parameters, that fits me better than scripting the whole installation, which isn’t really what I am after.

  1. https://wiki.centos.org/HowTos/NetworkInstallServer
  2. http://www.server-world.info/en/note?os=CentOS_6&p=pxe&f=3
  3. https://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-kickstart2-options.html
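If I ever do go the automation route, a minimal kickstart served over HTTP and passed via ks= on the append line would look roughly like this (entirely hypothetical values):

```
# hypothetical minimal kickstart
# pass with: append ... ks=http://192.168.1.1/ks/centos6.cfg
install
url --url=http://192.168.1.1/systems/centos_6_i386_install/root
lang en_US.UTF-8
keyboard fi
timezone Europe/Helsinki
rootpw changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
```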

Tuning syslinux

And because when this needs to be used, something has most probably gone horribly wrong and we might be in a bad mood, we had better make it suit our emotions:

[screenshot: snap982]

It isn’t perfect, but damn good enough for me! Some graphics.

More about vultr

Spun up a 16-core, 32 GB instance. It required sending a message to their support because they had a security limit on how much I can spend, but within minutes they asked which instance I would like to spin up and promised to raise the limit to match it. And when I said 8 cores, they apparently gave me at least 16. So “always exceeding your customer expectations” is well understood at Vultr, which is always nice.

16 cores is overkill because I am only doing latency testing, but since my $20 will expire in a month, I had better spend it.

So now I am testing a CentOS 7 install with 16 cores and 32 GB of memory, to see the difference. I do not expect much because this is such a simple operation: it doesn’t really depend on the CPU, nor on any resource other than perhaps single-core speed and the SSD, which I would expect to be identical between these machines.

But perhaps with this one I will go with a full desktop GNOME installation? Because why not? I have 16 cores, 32 gigs of RAM and VNC.

Also, I cannot understand how they can let me upload 4 GB ISO images to their servers. That bends the mind. And at these prices they are still able to do it. Makes you wonder how much other providers rip you off. So I am now uploading the full CentOS 7 DVD to get realistic performance for the installation. It seems to be limited to two ISO images, but that is still two more than anyone else offers.

Main server heavily loaded

My soon-to-be 15-year-old server is still going strong, but its age is starting to show as quite a lot of stuff is going on:

[screenshot: snap953]

But it does a wonderful job: the disks are all encrypted, and the system has grown quite a bit over the past two years. I am still not considering doing anything about this because it can still serve everything I need. CentOS 6 will provide maintenance updates until 2020, so that will be the last day this server serves its current purpose. More likely, though, since full updates are halted in mid-2017, this server will then turn into a router and my current virtual host will turn into a general-purpose server.

Routing doesn’t take many resources, so an old server can still perform excellently in any sort of routing setup. Currently the server has 14 gigabit Ethernet ports, so it is perfect as a router.

Average loads have risen slowly but steadily:

[graph: average load]

The scale is in tenths, so the average fifteen-minute load is about 2.50, and the server has two cores (four threads): on average there are more runnable processes than cores, but fewer than threads. I wouldn’t be too worried about this as long as the 15-minute load stays under 6 or so. The problem is that every five minutes certain heavy tasks must run. Ironically, these are mainly related to updating these very graphs, so the more I add, the more the load increases. They do provide valuable information, though.

Postfix and IPv6

Since I now have a /64 IPv6 network at my new VPS, I decided to finally make my e-mail IPv6 compatible.

http://www.postfix.org/IPV6_README.html

Pretty straightforward. And just because I can, I will give it its own IPv6 address, because there are plenty to spare.

http://www.cyberciti.biz/faq/redhat-centos-rhel-fedora-linux-add-multiple-ip-samenic/
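The relevant main.cf bits are small; something like the following (the address is an example from the documentation prefix, not my real one):

```
# /etc/postfix/main.cf
inet_protocols = all              # listen and deliver over both IPv4 and IPv6
smtp_bind_address6 = 2001:db8::25 # outgoing mail uses the dedicated address
```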

And to mention it: I use ClouDNS as a free DNS provider. Great service, with a nice and simple old-school front-end for management.

Seems to work.


Testing PHP 7

So I am testing the new PHP 7 from Git on the same platform this blog runs on. It is very convenient to take an lxc-clone of it, do the necessary tests, and if everything works, make the switch.

And one good thing is that PHP 7 seems to compile on i386, which is both surprising and extremely welcome. Reminds me of 2004 or so, when PHP was still compiled from source. Along with Apache.

Following roughly the php7.sh in this tutorial: http://www.intracto.com/nl/blog/running-symfony2-on-php7

But there also seem to be CentOS 6 RPM packages available here:

https://webtatic.com/packages/php70/

So the blog you are now reading runs on the new PHP 7, and to me it feels quite a bit faster. It could be placebo, but it certainly feels like it loads a lot faster. The hardware serving this blog is very old, so the effect could easily be amplified.

CentOS 7 and firewalld: not very wise decision

What is the purpose of taking an existing and working firewall program called iptables and abstracting it so that one is required to learn a new syntax for something that already existed?

I have no answer to this, but it is very stupid. It has been the principle in CentOS, as far as I can tell, to make things “easier” by making them increasingly complicated.

iptables has been the de facto program for manipulating the firewall, but CentOS has decided it is no good and created a wrapper around it to hide it. I suppose they wanted a framework people could work with, but anyone who has worked with iptables for any number of years is perfectly capable of creating their own structures.

I simply cannot understand this sort of “development” where new things are created simply because. It makes no sense to me. By creating such wrappers one always loses some capability. And when one then tries to use the iptables underneath the wrapper to overcome these limitations, it is no longer compatible with the upper wrapper layer.

Some distributions are probably better at these things, namely Debian and other old-school distros such as Slackware, which rely on well-established systems. It is this CentOS way of dumbing things, and people, down. Or adding support for people who can’t figure out something as simple as iptables.

How can anyone think this is a good idea?


Looking into QEMU QCOW2 live/online backup

Reading a couple of documents:

http://www.gonzalomarcote.com/2014/kvm-live-backups-with-qcow2/

http://wiki.qemu.org/Features/QAPI/GuestAgent

http://dustymabe.com/2013/06/26/enabling-qemu-guest-agent-anddddd-fstrim-again/

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/Fedora19.agent'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>

A great number of commands are now available:

virsh # qemu-agent-command 2-base-centos-7.0-vm1 '{"execute":"guest-info"}'
{
 "return": {
 "version": "2.1.0",
 "supported_commands": [
 {
 "enabled": true,
 "name": "guest-set-vcpus",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-get-vcpus",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-network-get-interfaces",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-suspend-hybrid",
 "success-response": false
 },
 {
 "enabled": true,
 "name": "guest-suspend-ram",
 "success-response": false
 },
 {
 "enabled": true,
 "name": "guest-suspend-disk",
 "success-response": false
 },
 {
 "enabled": true,
 "name": "guest-fstrim",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-fsfreeze-thaw",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-fsfreeze-freeze",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-fsfreeze-status",
 "success-response": true
 },
 {
 "enabled": false,
 "name": "guest-file-flush",
 "success-response": true
 },
 {
 "enabled": false,
 "name": "guest-file-seek",
 "success-response": true
 },
 {
 "enabled": false,
 "name": "guest-file-write",
 "success-response": true
 },
 {
 "enabled": false,
 "name": "guest-file-read",
 "success-response": true
 },
 {
 "enabled": false,
 "name": "guest-file-close",
 "success-response": true
 },
 {
 "enabled": false,
 "name": "guest-file-open",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-shutdown",
 "success-response": false
 },
 {
 "enabled": true,
 "name": "guest-info",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-set-time",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-get-time",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-ping",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-sync",
 "success-response": true
 },
 {
 "enabled": true,
 "name": "guest-sync-delimited",
 "success-response": true
 }
 ]
 }
}

One complaint I have: QEMU has no documentation on the QMP commands, or at least it is not easy to find. There is nothing coherent under docs in their Git, nor anywhere else to be found.

The problem that came up is that I have libvirt 0.10, which is the latest available for CentOS 6, and it does not support everything I need. So I am now compiling a new libvirt.

After some two hours of compiling and seeking answers, everything in the end went without a glitch. I wonder why they do not supply CentOS 6 with the latest versions, since they work perfectly well.

./configure \
  --with-numactl \
  --with-dbus \
  --with-pciaccess \
  --with-udev \
  --with-qemu \
  --with-lxc \
  --with-storage-dir \
  --with-storage-fs \
  --with-storage-lvm \
  --with-storage-iscsi \
  --with-qemu-user=qemu \
  --with-qemu-group=qemu \
  --with-yajl \
  --with-readline

I first compiled it with --with-qemu-user without supplying any parameter, and this resulted in a very weird error. The documentation says I should have still been able to set the user and group in qemu.conf, but it did not register that at all: it had compiled in the username ‘yes’ and ignored the configuration parameters in qemu.conf.

But now I am running QEMU 2.2.1 and libvirt 1.2.17, both latest [stable] releases.

Testing the live backup

So now that I have the latest tools I can get back to where I began: testing the online backup.

Did a quick test and it seems to work. So that’s now in check.
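The rough sequence, per the articles above (domain and disk names here are examples, not my actual ones): take an external disk-only snapshot so the original qcow2 becomes a stable backing file, copy it away, then merge the overlay back.

```
# guest keeps running; new writes go to an overlay file
virsh snapshot-create-as vm1 backup-snap --disk-only --atomic --quiesce

# ... copy the now-quiescent original qcow2 away with rsync/cp ...

# merge the overlay back into the original and pivot the guest onto it
virsh blockcommit vm1 vda --active --pivot
```

The --quiesce flag is what needs the guest agent from the channel configuration above, and active blockcommit is exactly the feature missing from libvirt 0.10.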

Replacing the in-distro libvirt with the compiled one also broke python-virtinst, which I use to clone my virtual machines, so I had to rebuild that too with pip. But it is now at the latest version as well, and cloning works once more.

Latest QEMU 2.2.0 on CentOS 6

I wanted to do this for a long time and finally made it happen. And it went very well, without any problems.

./configure \
 --prefix=/opt/qemu-stable-2.2.0 \
 --target-list=i386-softmmu,x86_64-softmmu,arm-linux-user,i386-linux-user,x86_64-linux-user \
 --disable-gtk \
 --disable-virtfs \
 --disable-cocoa \
 --disable-xen \
 --disable-vnc-tls \
 --disable-vnc-sasl \
 --disable-vnc-jpeg \
 --enable-vnc-png \
 --disable-vnc-ws \
 --disable-curses \
 --disable-curl \
 --disable-fdt \
 --disable-bluez \
 --disable-slirp \
 --enable-kvm \
 --disable-rdma \
 --enable-vhost-net \
 --disable-rbd \
 --enable-libiscsi \
 --disable-libnfs \
 --disable-libusb \
 --disable-smartcard-nss \
 --disable-usb-redir \
 --enable-lzo \
 --disable-glusterfs \
 --disable-archipelago \
 --disable-tpm \
 --enable-numa

I am currently running the “pc” machine type, which translates into pc-i440fx-2.2, a chipset design from 1996, but everything works so I am not too concerned about that. Eventually I want to upgrade to Q35, but my current virsh configuration does not support it and gives an error message about missing PCI.

After it was compiled, I simply pointed my virsh configuration to use the newly compiled qemu-system-x86_64 binary.
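Concretely, that is just the emulator element in the domain XML; a fragment like this (machine type line shown for context, using the --prefix from the configure above):

```
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
</os>
<devices>
  <emulator>/opt/qemu-stable-2.2.0/bin/qemu-system-x86_64</emulator>
</devices>
```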

Sadly this update doesn’t seem to have changed the fact that my disk IO is still extremely slow. I was hoping there was something wrong with my four-year-old QEMU 0.12, but this does not seem to have been the case.

Testing HHVM for PHP performance for WordPress on CentOS 7

Looking into HHVM right now to see if it could give me some extra oomph.

Rather than directly interpreting PHP code or compiling it to C++, HHVM compiles Hack and PHP into an intermediate bytecode. This bytecode is then translated into x64 machine code dynamically at runtime by a just-in-time (JIT) compiler. This compilation process allows for all sorts of optimizations that cannot be made in a statically compiled binary, thus enabling higher performance of your Hack and PHP programs.

There are no binaries for CentOS, or at least I could not find any, so I am compiling it right now.

I know very little about HHVM or how it integrates with nginx, so we will see how it goes.

[screenshot: snap550]


Progress is happening, but we are not there yet:

[screenshot: snap551]

http://code.tutsplus.com/articles/using-hhvm-with-wordpress--cms-21596
https://github.com/facebook/hhvm/wiki/Building-and-installing-hhvm-on-CentOS-7.x
https://ckon.wordpress.com/2014/07/18/hhvm-centos-7/
http://www.nginxtips.com/wordpress-hhvm-nginx/
https://gist.github.com/EloB/7295596

But now we are:

[screenshot: snap552]

And I must say, the moment that page loaded I was impressed, because it loaded immediately. Faster than it should have if it were an unconfigured plain PHP setup.
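For reference, the nginx side is just an ordinary FastCGI proxy; with HHVM running in FastCGI server mode on its default port 9000, the server block fragment looks something like this (paths assumed):

```
# hand .php requests to HHVM over FastCGI
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```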

I cannot go much further because HHVM requires a 64-bit system, and my only 64-bit system available for this has too slow disk IO for now. But it certainly should work after I get that sorted.

hpacucli, CentOS 6.5 i386, ProLiant DL380 G3, Smart Array 5i

http://www.datadisk.co.uk/html_docs/redhat/hpacucli.htm

yum install ftp://ftp.hp.com/pub/softlib2/software1/pubsw-linux/p414707558/v68034/hpacucli-9.0-24.0.noarch.rpm

# hpacucli
HP Array Configuration Utility CLI 9.0-24.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

=> ctrl all show config

FIRMWARE UPGRADE REQUIRED: A firmware update is recommended for this controller
                           to prevent rare potential data write errors on a
                           RAID 1 or RAID 1+0 volume in a scenario of
                           concurrent background surface analysis and I/O write
                           operations.  Please refer to Customer Advisory
                           c01587778 which can be found at hp.com.


Smart Array 5i in Slot 0 (Embedded)

   array A (Parallel SCSI, Unused Space: 0 MB)


      logicaldrive 1 (33.9 GB, RAID 1, OK)

      physicaldrive 2:0   (port 2:id 0 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:1   (port 2:id 1 , Parallel SCSI, 36.4 GB, OK)

   array B (Parallel SCSI, Unused Space: 0 MB)


      logicaldrive 2 (101.7 GB, RAID 5, OK)

      physicaldrive 2:2   (port 2:id 2 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:3   (port 2:id 3 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:4   (port 2:id 4 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:5   (port 2:id 5 , Parallel SCSI, 36.4 GB, OK)

Seems to work.

   Array: A
      Interface Type: Parallel SCSI
      Unused Space: 0 MB
      Status: OK



      Logical Drive: 1
         Size: 33.9 GB
         Fault Tolerance: RAID 1
         Heads: 255
         Sectors Per Track: 63
         Cylinders: 4427
         Strip Size: 128 KB
         Full Stripe Size: 128 KB
         Status: OK
         Array Accelerator: Enabled
         Unique Identifier: 600508B100184155435041414B4E0010
         Disk Name: /dev/cciss/c0d0
         Mount Points: /boot 320 MB
         OS Status: LOCKED
         Logical Drive Label: A0001572FE34

      physicaldrive 2:0
         SCSI Bus: 2
         SCSI ID: 0
         Status: OK
         Drive Type: Data Drive
         Interface Type: Parallel SCSI
         Transfer Mode: Ultra 3 Wide
         Size: 36.4 GB
         Transfer Speed: 160 MB/Sec
         Rotational Speed: 15000
         Firmware Revision: HPB3
         Serial Number: A0A1P4508GN50419
         Model: COMPAQ  BF036863B5

      physicaldrive 2:1
         SCSI Bus: 2
         SCSI ID: 1
         Status: OK
         Drive Type: Data Drive
         Interface Type: Parallel SCSI
         Transfer Mode: Ultra 3 Wide
         Size: 36.4 GB
         Transfer Speed: 160 MB/Sec
         Rotational Speed: 10000
         Firmware Revision: HPB6
         Serial Number: UPL1P460K0ML0425
         Model: COMPAQ  BD03686223