Building more backups

So I have quite a good plan for Linux server and Linux desktop backups, but virtual machines, and especially VMware, have been neglected. I even thought maybe I don't need a backup there, but that is fundamentally a bad idea, and because that VMware host is the core of the infrastructure, it must have a backup.

So Veeam was something I had heard of, and I decided to give it a go.

[Screenshot: Veeam backup job running]

Not fast with the current setup, but it doesn't matter much because it will still finish in a couple of hours. With compression it will probably take less than 20 GiB for a good number of guests.

Also, my rented Hetzner backup server was delivered today, and looking at the upload speeds it seems I have approximately 1000 GiB of daily upload capacity, so the network will not become a bottleneck for quite a while. I expect to use somewhere around 20-35 GiB daily.

The server has so much power that I can easily compress everything with gzip -9. Storage isn't going to be a problem either, because I expect to use roughly 500 GiB for daily backups, which will leave me with over 2500 GiB of unused space.
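
The compression step itself is nothing fancier than a pipe along these lines (the paths here are made up for illustration):

# Stream a backup directory into a maximally compressed tarball
tar -cf - /backup/daily | gzip -9 > /storage/daily-$(date +%F).tar.gz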

And I must say that the other systems I have built are pretty damn good as well. This is a clear step forward in terms of a versatile backup plan.

Notes

So that I remember later.

# lvcreate -Z y -n thin1 -l100%FREE -T -r 0 vg0_sda5
# lvcreate -Z y -n thin1 -l100%FREE -T -r 0 vg0_sdb5
# lvcreate -n pool -r 0 -T -V 500G vg0_sda5/thin1
# lvcreate -n pool -r 0 -T -V 500G vg0_sdb5/thin1

And that is how a thin pool is created: the first two commands turn all free space in each volume group into a thin pool, and the last two carve a 500 GiB thin volume out of each pool. I then create the filesystem on top of these thinly provisioned volumes, which enables a couple of tricks and makes the setup more versatile.
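
The main trick is cheap snapshots: once a filesystem sits on the thin volume, a snapshot costs only the changed blocks. A minimal sketch; the filesystem choice and mount point are my assumptions, not from the notes above:

# Put a filesystem on the thin volume and mount it
mkfs.ext4 /dev/vg0_sda5/pool
mount /dev/vg0_sda5/pool /mnt/backup

# Snapshot the thin volume; -kn clears the activation-skip flag so the snapshot can be used directly
lvcreate -s -kn -n pool_snap vg0_sda5/pool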

Linux is very nice (replacing disk)

PXE booted into a Fedora live image, dd'd the old disk as a whole to the new one, and re-created the partitions to give Linux all the new space; booted the system, and the whole operation took less than 15 minutes. Linux is very nice.

Afterwards, a couple of tasks, and now there is 61 GiB more space available:

# Grow the LUKS mapping to fill the resized partition
cryptsetup resize luks-eac0d97c-5848-4c83-87ea-82b37eddd1b4
# Grow the LVM physical volume on top of it
pvresize /dev/mapper/luks-eac0d97c-5848-4c83-87ea-82b37eddd1b4
# Extend the root logical volume and the XFS filesystem on it
lvextend -L+5G system/root
xfs_growfs /dev/mapper/system-root

And now there is plenty of space.

Network modernization (Open vSwitch)

So the current networking scheme is 100% pure Linux, or a very naive implementation in some sense. It is laborious as it grows more complicated: changes have to be repeated in a lot of places, and it is very prone to failing if incorrect changes are made.

pfSense was previously mentioned as a possible system of choice for firewalling; now I am looking into whether Open vSwitch could be something that should be deployed. I am not completely sure what Open vSwitch is, so I am studying it to get an idea of whether it would be a good choice.

I was also looking at standalone gigabit switches today, but using those would sort of void the whole idea of having a dedicated Linux machine, plus it would add costs. So if Open vSwitch could simplify this configuration and turn the dedicated Linux machine with 12 gigabit NICs into a powerful router, that would be perfect.

The question of whether I can then add pfSense on top of that, or whether Open vSwitch has these sorts of functions built in, remains open. Technically my firewalling is nothing but iptables, so it wouldn't require that much; all I need is flexibility for rules, QoS, VLANs, and possibly link aggregation.
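
From what I have read so far, the basic configuration looks pleasantly simple. A rough sketch of what the 12-NIC box might look like under Open vSwitch; all bridge, bond, and interface names here are made up:

# One bridge for the physical NICs
ovs-vsctl add-br br0

# An access port on VLAN 10
ovs-vsctl add-port br0 eth1 tag=10

# Link aggregation: bond two NICs into one logical port with LACP
ovs-vsctl add-bond br0 bond0 eth2 eth3 lacp=active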

There is a three-year-old introductory talk about it; since it covers the fundamentals, the principles should not have changed too much.

Also, web-based management would be a big plus.

Support for 4-monitor desktop system (Linux problems)

Previously I talked about how I wanted a third monitor, and even bought a second graphics card for that, but it didn't work out because the system became, graphically, extremely slow. Probably because that graphics card was extremely low-end and the driver somehow couldn't handle them both.

Both of those cards were NVIDIA, but despite that, the driver didn’t work properly, and it wasn’t worth using.

Yesterday I was browsing a local eBay alternative, probably for servers, and found this baby:

Matrox M9148 LP PCIe x16


The Matrox M9148 LP PCIe x16 quad graphics card renders pristine image quality on up to four DisplayPort™ monitors at resolutions up to 2560 x 1600 per output for an exceptional multi-monitor user experience. With 1 GB of memory and advanced desktop management features, the M9148 LP PCIe x16 card supports both independent or stretched desktop modes and drives business, industrial, and government applications with extraordinary performance. Its low-profile form factor makes it easy to integrate into a wide variety of systems. It offers multiple operating system support, and can be paired with another M9148 graphics card to support up to eight DisplayPort monitors from a single system.

Now this supports four displays and has DisplayPort, and the sale includes three DisplayPort-to-DVI cables, so hopefully I can hook my old monitors up to it.

At the local electronics supermarket these go for 650 €, but I managed to score one for just 58 €. Even on eBay they have been sold for over 200 €, and sometimes higher. A couple have gone for pennies in auctions, but generally this card seems to be quite professional and highly regarded.

They even advertise Linux support on their site, so I am quite hopeful that Fedora 22 will understand and work with the card. I didn't check beforehand, so fingers crossed that nothing surprising comes along.

Two monitors work just fine, but for electronics design I have noticed that it isn't quite enough. Ideally two monitors would be occupied, one by the schematic and one by the PCB; a third monitor could hold documentation, and even a fourth would come in handy, because Digikey or some other parts catalog could be open there all the time.

And then a fifth for personal stuff.

The same goes for system administrators: two monitors for work, a third for documentation, and a fourth and fifth could be occupied by static content such as system monitoring.

Then something like this would be quite optimal:

[Photo: multi-monitor workstation]

But let's start with three.

According to Matrox, their M-Series has supported Fedora since version 10, so no problems should be expected.


http://www.matrox.com/graphics/en/products/graphics_cards/m_series/linux_distributions.html

Linux problems

The driver does not support Xorg versions newer than 1.15. So if you intend to use the M9148 in Linux, make sure you are running Xorg 1.15 or older.

I have kindly asked Matrox technical support to make the drivers support 1.17, but who knows if they still support this device. It is only two years old, so in my opinion they should. And there are some indications that when the same problem previously existed with 1.15, they made the changes.

But for now there is no support for Xorg 1.16, 1.17, or anything newer.
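
To check which version a machine is actually running, the stock X utilities are enough; nothing Matrox-specific:

# Print the X server version
Xorg -version

# Or read it from the log of the running session
grep "X.Org X Server" /var/log/Xorg.0.log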

Solving the problem

Since 60 € was already spent on this graphics card, I am going to spend 80 € more on a Samsung 850 EVO 120 GB and try to switch from Fedora 22 to Debian 7, which is still a supported distribution but ships an older version of Xorg, and should therefore support the card.

So it's not a Linux problem per se; the card simply won't function in Fedora 22, because that is a distribution riding the cutting edge of software development, whereas Debian is more stability and compatibility oriented. Debian 7 also has GNOME 3, so hopefully not too many things will change.

Waiting for Matrox to release a new version of the drivers might take forever, and it still might not happen.

Three hours later

Well that was money well wasted.

The device works perfectly well in Windows 7, but at no point did I get any signal to a second display using Debian 7. In Windows the driver even has a nice, professional-looking control panel, but in Linux you are on your own.

One display works perfectly fine. So perhaps there is some chance that with enough dedication, time, and will you can make it work, but it will probably require some insight into the X server. There is one user who got it to work using xrandr.
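
For reference, the xrandr route would look roughly like this; the output names are hypothetical and depend on what the driver exposes:

# List the outputs the driver exposes
xrandr --query

# Place a second output to the right of the first
xrandr --output DP-2 --auto --right-of DP-1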

Here’s a thread: http://ubuntuforums.org/showthread.php?t=2158868

And his Xorg configuration:

Section "ServerLayout"
    Identifier     "X.org Configured"
    Screen      0  "Screen0" 0 0
    Screen      1  "Screen1" RightOf "Screen0"
    Screen      2  "Screen2" RightOf "Screen3"
    Screen      3  "Screen3" Below "Screen0"
    InputDevice    "Mouse0" "CorePointer"
    InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
    ModulePath   "/usr/lib/xorg/modules"
    FontPath     "/usr/share/fonts/X11/misc"
    FontPath     "/usr/share/fonts/X11/cyrillic"
    FontPath     "/usr/share/fonts/X11/100dpi/:unscaled"
    FontPath     "/usr/share/fonts/X11/75dpi/:unscaled"
    FontPath     "/usr/share/fonts/X11/Type1"
    FontPath     "/usr/share/fonts/X11/100dpi"
    FontPath     "/usr/share/fonts/X11/75dpi"
    FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
    FontPath     "built-ins"
EndSection

Section "Module"
    Load  "glx"
EndSection

Section "InputDevice"
    Identifier  "Keyboard0"
    Driver      "kbd"
EndSection

Section "InputDevice"
    Identifier  "Mouse0"
    Driver      "mouse"
    Option      "Protocol" "auto"
    Option      "Device" "/dev/input/mice"
    Option      "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
    Identifier   "Monitor0"
    VendorName   "Monitor Vendor"
    ModelName    "Monitor Model"
EndSection

Section "Monitor"
    Identifier   "Monitor1"
    VendorName   "Monitor Vendor"
    ModelName    "Monitor Model"
EndSection

Section "Monitor"
    Identifier   "Monitor2"
    VendorName   "Monitor Vendor"
    ModelName    "Monitor Model"
EndSection

Section "Monitor"
    Identifier   "Monitor3"
    VendorName   "Monitor Vendor"
    ModelName    "Monitor Model"
EndSection

Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
        ### <percent>: "<f>%"
        ### [arg]: arg optional
        #Option     "NoAccel"               # [<bool>]
        #Option     "SWcursor"              # [<bool>]
        #Option     "Independent"           # [<bool>]
        #Option     "UseKernelModule"       # [<bool>]
        #Option     "mon0_forcedvi"         # [<bool>]
        #Option     "mon1_forcedvi"         # [<bool>]
        #Option     "mon2_forcedvi"         # [<bool>]
        #Option     "mon3_forcedvi"         # [<bool>]
        #Option     "ICDOP1"                # [<bool>]
        #Option     "ICDOP2"                # [<bool>]
    Identifier  "Card0"
    Driver      "m9x"
    BusID       "PCI:1:0:0"
        Screen      0
        Option      "Independent"
EndSection

Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
        ### <percent>: "<f>%"
        ### [arg]: arg optional
        #Option     "SWcursor"              # [<bool>]
        #Option     "kmsdev"                # <str>
        #Option     "ShadowFB"              # [<bool>]
    Identifier  "Card1"
    Driver      "m9x"
    BusID       "PCI:1:0:0"
        Screen      1
        Option      "Independent"
EndSection

Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
        ### <percent>: "<f>%"
        ### [arg]: arg optional
        #Option     "ShadowFB"              # [<bool>]
        #Option     "Rotate"                # <str>
        #Option     "fbdev"                 # <str>
        #Option     "debug"                 # [<bool>]
    Identifier  "Card2"
    Driver      "m9x"
    BusID       "PCI:1:0:0"
        Screen      2
        Option      "Independent"
EndSection

Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
        ### <percent>: "<f>%"
        ### [arg]: arg optional
        #Option     "ShadowFB"              # [<bool>]
        #Option     "DefaultRefresh"        # [<bool>]
        #Option     "ModeSetClearScreen"    # [<bool>]
    Identifier  "Card3"
    Driver      "m9x"
    BusID       "PCI:1:0:0"
        Screen      3
        Option      "Independent"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
    Monitor    "Monitor0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Card1"
    Monitor    "Monitor1"
EndSection

Section "Screen"
    Identifier "Screen2"
    Device     "Card2"
    Monitor    "Monitor2"
EndSection

Section "Screen"
    Identifier "Screen3"
    Device     "Card3"
    Monitor    "Monitor3"
EndSection

In the article an ordinary NVIDIA card was used, so that should work.

What next?

I will probably try an NVIDIA GeForce GTX 670.


Revisiting the home data center architecture

If all goes well, I will be adding one or two new, extremely powerful servers in the coming months.

Those servers use 2.5″ disks, so the only question is how to implement a large-scale storage system. I have an old E6600-based server which would be perfectly fine if two 1 Gbit connections were trunked together to get a 2 Gbit iSCSI connection.
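
The trunking itself would be standard Linux bonding; a minimal iproute2 sketch, with all interface names hypothetical:

# Create an LACP (802.3ad) bond and enslave two gigabit NICs
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

One caveat: a single iSCSI TCP connection normally hashes onto one link, so reaching the full 2 Gbit may need iSCSI multipathing on top rather than LACP alone.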

2 TB in the 2.5″ form factor seems to be the most cost-effective, and prices for 3 TB are beyond economical. So if one server could take four disks, a mirrored configuration would give 2 TB of storage, plus some faster storage in the form of SSD space left over from the L2ARC and SLOG.

The old DL360 G3 would be dedicated to working only as a firewall and traffic shaper, and routing and switching would be moved to dedicated managed gigabit switches.

Also, right now all servers boot from NFS, which has proven to be good, but problematic in case of a failure in the NFS server, which has the potential to either lock up or bring down all the other servers. So NFS would be removed in favor of an SSD-based mirrored ZFS root.

One question mark is my current networking setup, which relies heavily on Linux and would need to be ported to managed switches. It shouldn't be a problem, though, since it is technically all VLAN-based, with some bridges carrying more specific rules; those would need to be addressed somehow.

Something like pfSense could also be considered. But for the firewall and router, if such a system is used, I would like to move from i386 to a 64-bit architecture, because currently there have been problems with insufficient memory. An HP ProLiant DL380 G5 might suit the purpose perfectly as a low-cost server.

Quad gigabit PCIe network cards seem to be quite cheap, so with three slots it would act as a 12-port gigabit router. That would enable either the current Linux-based routing scheme or a transition to something like the BSD-based pfSense. BSD has a reputation for being a network-oriented system, and some studies have demonstrated that it performs extremely well as a router.

But one thing to remember with Linux/BSD-based routers is to make absolutely certain that the driver support for the network cards is perfect; otherwise the stack will fall apart. Dedicated routing hardware works perfectly because hardware and software were built for exactly one purpose: to be a router and nothing more.

So if the new QEMU/KVM hypervisor sets me back 400 €, disks perhaps 500 €, the router 300 €, one or two additional small switches yet another 200 €, and a 1400 VA UPS 250 €, then the price tag would be 1 650 €, which isn't too bad.

That cost would hopefully give me room for at least another three years, with 2 TB of storage and the possibility of expanding that storage to 14 TB by using the router as an FC-based storage node, dropping four gigabit ports to accommodate the FC card.

ZFS: L2ARC and SSD wear leveling

I noticed this in arcstats:

l2_write_bytes                  4    260231528448

In less than 24 hours it has written 240 GB into a 20 GB partition. That's quite a hell of an impact on such a small area of an SSD, but I assume much of this is because I had to move large amounts of data back and forth.

But this is definitely something that must be monitored, because my daily backups could theoretically eat away that SSD quite fast. Especially since I am in the process of building a new backup system which would verify large amounts of previous backups every single day.

Also, the hit ratio is extremely poor:

l2_hits                         4    2496
l2_misses                       4    5801535

That is 2,496 hits against 5.8 million misses, a hit ratio of about 0.04%. So it might not even be smart to use an L2ARC at all for this pool; the access pattern seems more random than ZFS can make use of.

And here is the drive's SMART wear attribute:

233 Media_Wearout_Indicator 0x0032   000   000   000    Old_age   Always       -       655
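
A quick way to keep an eye on both numbers; the kstat path is from ZFS on Linux, and the pool and device names are hypothetical:

# Compute the L2ARC hit ratio from the kstats
awk '$1=="l2_hits"{h=$3} $1=="l2_misses"{m=$3} END{printf "%.2f%%\n", h/(h+m)*100}' /proc/spl/kstat/zfs/arcstats

# If the cache device isn't earning its keep, dropping it is one command
zpool remove tank sdc1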


Linux network encapsulation

I had to build more networks to isolate the new QEMU portion of my virtual machines here at home, and something called GRE came up. It seems it can be used to encapsulate Ethernet over IP, which would have provided the necessary isolation.

But then came a quick thought: why not use the existing VLAN and change my access port into a trunk instead? So that was the easiest way to go.

But it is good to know these things exist in the Linux kernel to take advantage of, if need be, in the future.
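
For my own notes, both options in iproute2 terms; all names and addresses are made up:

# GRE with an Ethernet payload ("gretap"): carries L2 frames over IP
ip link add gretap1 type gretap local 192.0.2.1 remote 192.0.2.2
ip link set gretap1 up

# The route taken instead: a tagged VLAN sub-interface on the trunked port
ip link add link eth0 name eth0.42 type vlan id 42
ip link set eth0.42 up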

Also a word about Fedora

Fedora's graphical network management tool isn't all that bad.

In past years these tools have never been usable beyond the most ordinary tasks, but in Fedora 22 it is at least possible to create VLAN devices, which is quite remarkable. Bridges are also possible.
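
The same can be done from the command line with NetworkManager's nmcli, for the record; the device names and IDs here are hypothetical:

# Create a VLAN interface with ID 10 on top of eth0
nmcli connection add type vlan ifname eth0.10 dev eth0 id 10

# Create a bridge
nmcli connection add type bridge ifname br0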

So thumbs up for Fedora for that. My desktop setup is beginning to look quite impressive:

[Screenshot: desktop network configuration]

LGA 771 MOD problem update

Had one little glitch, and the Linux kernel reported:

Kernel panic - not syncing: Timeout: Not all cpus entered broadcast exception handler

Long story short: I decided to force a microcode reload:

# echo 1 > /sys/devices/system/cpu/microcode/reload

After which dmesg showed the following:

# dmesg|grep "microcode"
[ 0.379109] microcode: CPU0 sig=0x1067a, pf=0x40, revision=0x0
[ 0.379122] microcode: CPU1 sig=0x1067a, pf=0x40, revision=0x0
[ 0.379133] microcode: CPU2 sig=0x1067a, pf=0x40, revision=0x0
[ 0.379144] microcode: CPU3 sig=0x1067a, pf=0x40, revision=0x0
[ 0.379227] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[ 947.508200] microcode: CPU0 sig=0x1067a, pf=0x40, revision=0x0
[ 947.509150] microcode: CPU0 updated to revision 0xa0b, date = 2010-09-28
[ 947.512629] microcode: CPU1 sig=0x1067a, pf=0x40, revision=0x0
[ 947.513623] microcode: CPU1 updated to revision 0xa0b, date = 2010-09-28
[ 947.516941] microcode: CPU2 sig=0x1067a, pf=0x40, revision=0x0
[ 947.517003] microcode: CPU2 updated to revision 0xa0b, date = 2010-09-28
[ 947.521374] microcode: CPU3 sig=0x1067a, pf=0x40, revision=0x0
[ 947.522365] microcode: CPU3 updated to revision 0xa0b, date = 2010-09-28

The first lines are from boot, the later ones from the action above. So it couldn't update the microcode at boot for some reason, and when the kernel was manually asked to do so, it found that the revision was 0x0 and updated the microcode to revision 0xa0b. In other words, the microcode was available, but it couldn't be found at boot time. Perhaps Fedora 22 ships old microcode in its stock kernel.

So hopefully this will make the system stable. There has been only one panic since the mod was made, so the machine has been stable otherwise, no doubt about that.
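
If Fedora's stock microcode really is stale, one possible fix is to update the distribution's microcode package and rebuild the initramfs so that early loading picks the new files up at boot. Package and tool names are as used on Fedora:

# Update the CPU microcode data files
dnf update microcode_ctl

# Rebuild the initramfs so early microcode loading sees the new files
dracut -f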

Main server heavily loaded

My soon-to-be 15-year-old server is still going strong, but age is starting to show now that quite a lot is going on.

[Screenshot: main server activity]

But it does a wonderful job: the disks are all encrypted, and the system has grown quite a bit over the past two years. I am still not considering doing anything about this, because it can still serve everything I need. CentOS 6 will provide maintenance updates until 2020, so that will be the last day this server serves its current purpose. More likely, though, since full updates are halted in mid-2017, this server will then turn into a router and my current virtual host will turn into a general-purpose server.

Routing doesn't take many resources, so an old server can still perform excellently in any sort of routing setup. Currently the server has 14 gigabit Ethernet ports, so it is perfect as a router.

Average loads have risen slowly but steadily:

[Graph: load average trend (values in tenths)]

The values are in tenths, so the average fifteen-minute load is about 2.50, and the server has two cores with four threads; on average there are more runnable processes than cores but fewer than threads. I wouldn't be too worried about this as long as the 15-minute load stays under 6 or so. The problem is that every five minutes certain heavy tasks must be run, and quite ironically these are mainly related to updating these very graphs. So the more I add, the more the load increases. They do provide valuable information, though.

Libvirt's odd default bridging behavior

By default, libvirt adjusts the Linux kernel bridge module settings so that bridged traffic is not sent to iptables, and apparently not to ebtables either.

This is of course a problem, because it removes the core Linux mechanism for controlling the networking behavior and firewalling of these guests.

http://wiki.libvirt.org/page/Net.bridge-nf-call_and_sysctl.conf

So thumbs down for libvirt for this.

This will probably be rectified at the next boot, when I add an SSD for the virtual servers. My favorite shop is out of them, so it will have to wait.

Issue fixed

This page, with its small blinking notification, explained why those keys didn't exist on my system: http://ebtables.netfilter.org/documentation/bridge-nf.html

Since Linux kernel 3.18-rc1, you have to modprobe br_netfilter to enable bridge-netfilter.


So after modprobing that module, the keys came into existence, the settings were loaded, and now the traffic goes through iptables and ebtables. But this was a rather well-hidden change.
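
For future reference, the whole fix in one place; the module and sysctl key names are from the pages linked above, and the modules-load.d path is the usual systemd convention:

# Load the bridge-netfilter module (required since kernel 3.18-rc1)
modprobe br_netfilter

# Now the keys exist and bridged traffic can be passed to the filters
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
sysctl net.bridge.bridge-nf-call-arptables=1

# Make the module load survive reboots
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf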