Great score!

Scored an HP ProLiant DL385 G6 for 100 €. Easily worth 200 €. The only problem appears to be with one of the Broadcom chips: the server cannot PXE boot from those ports, nor is Linux able to configure them. lspci shows the two NICs, but they never appear as network interfaces in the system. That's a small problem, though, because with one additional riser card I can have 6 PCIe expansion cards, and the other two NICs do work, so installation will be easy.
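
A quick way to see the mismatch is to compare what the PCI bus reports against what the kernel actually registered as interfaces. A minimal sketch in Python, assuming a Linux host with lspci installed:

    import subprocess, os

    # List Ethernet-class devices the PCI bus knows about.
    lspci = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    pci_nics = [line for line in lspci.splitlines() if "Ethernet controller" in line]

    # List interfaces the kernel actually created.
    ifaces = [i for i in os.listdir("/sys/class/net") if i != "lo"]

    print(f"PCI Ethernet devices: {len(pci_nics)}")
    for line in pci_nics:
        print("  " + line)
    print(f"Kernel network interfaces: {len(ifaces)} -> {', '.join(ifaces)}")
    # On this DL385 the counts differ: four PCI devices, two usable interfaces.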

Also included were two 160GB Hitachi SATA disks and two 36GB Seagate SAS disks. Considering that I paid 100 € for a DL365 G1, this was a much, much better deal.

It has a single hexa-core AMD Opteron 2431, which according to AMD should be about 50% faster than the previous-generation (23xx) quad-core that the DL365 G1 has. So ESXi will be moving to this server.

I am also hopeful, based on what I have read, that these six-core Opterons have an AMD IOMMU, so I can do true passthrough for all the disks, bypassing the ESXi storage layer.

And thumbs up to the HP engineer who implemented the very nice soft ramp-up for those fans; it sounds professional when the fans don't go full blast immediately but ramp up gently, and the same when they settle down.

But again, as with the 146GB SAS disk I ordered earlier, this server too came with disks that still contain data. Apparently people in general aren't too concerned about security. Of course these disks will be imaged for further forensic study.
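
For the record, imaging a found disk before touching it is simple. A minimal sketch of the idea (the device name /dev/sdb is a hypothetical placeholder; run as root):

    import hashlib

    SRC = "/dev/sdb"          # hypothetical: the found disk
    DST = "image-160gb.raw"   # output image file
    CHUNK = 4 * 1024 * 1024   # read in 4 MiB blocks

    sha = hashlib.sha256()
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            dst.write(block)
            sha.update(block)

    # Record the hash so the image can later be shown to be unmodified.
    print(f"sha256({DST}) = {sha.hexdigest()}")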

How to set it up?

With 8 bays I can now use two disks for ESXi in hardware RAID0, two disks for the ZFS ZIL, and then configure the remaining four as I wish. L2ARC probably doesn't make sense, so striped mirrors might be a nice option.
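
As a sanity check on that choice, here is a rough usable-capacity comparison for the four remaining bays, striped mirrors versus raidz1 (the disk size is a placeholder, not what is actually in the bays):

    def usable_tb(disks, size_tb, layout):
        """Rough usable capacity, ignoring ZFS overhead."""
        if layout == "striped-mirrors":   # pairs of mirrors, striped together
            return (disks // 2) * size_tb
        if layout == "raidz1":            # one disk's worth of parity
            return (disks - 1) * size_tb
        raise ValueError(layout)

    SIZE = 1.0  # TB per disk; hypothetical placeholder
    for layout in ("striped-mirrors", "raidz1"):
        print(f"4 x {SIZE:.0f}TB as {layout}: {usable_tb(4, SIZE, layout):.0f}TB usable")
    # striped mirrors: less capacity but better random IO and faster resilvers
    # raidz1: more capacity, slower rebuilds and random IO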

I should also be able to move the existing 16GB to this machine; luckily it is PC2-6400P, which this server supports. And with 16 slots I can keep those and later add more, bigger sticks. A fully loaded memory configuration would be 192GB, which is more than I currently have disk space available for on this machine. With 192GB it would be possible to do pretty much anything.

The server now has a DVD-RW drive, but I might want to replace it with a tape drive and start taking backups to tape. Tapes are relatively cheap, easy to handle and, when stored correctly, can last for 30 years.

The broken NIC issue

It seems the problem is with Broadcom firmware: http://www.wooditwork.com/2014/04/25/warning-hp-g2-g7-server-nics-killed-firmware-update-hp-spp-2014-02/

So it might simply be because of this. It would fit well, because the NIC is complaining that it cannot start. And my server has

  • HP NC382i DP Multifunction Gigabit Server Adapter

which is on the list of affected adapters.

Problems

The Smart Array P400 has no cache module, so it is not possible to create as many logical drives as I need. I might therefore go and get a P800 with battery and cache.

Upgrades

The following upgrades are likely to be made:

  • Secondary power supply 40€
  • 64GB PC2-5300P 50€
  • 512MB BBWC + Battery 30€

Then later on also a CPU upgrade to 12 cores total:

  • AMD Opteron 2431 ~40€
  • Heatsink 20€
  • Two fans for second CPU 30€

(40 + 50 + 30 + 40 + 20 + 30) € comes to 210 €, so more or less fully equipped and redundant, the total price for this DL385 G6 would be 310 €, which is quite cheap. The same from eBay would be twice that.

Another issue is dust: a couple of days ago I had a disk fault on one server, and when I replaced the disk, the old one was full of whatever floats in the air.

Second look at the HP Omen and its quality

I must say that I am deeply disappointed.

The packaging of the machine itself was good, but the accessories (cables) were poorly packaged: they came in ugly, wrinkly plastic bags.

Then when I booted the machine I saw an HP logo that was pixelated. They had not used an SVG or PNG but a JPEG.

The third problem came with the charging plug. The charger did not fit properly, and the pin inside the plug seems to have been bent. So either it shipped bent, or they used a poor connector so loose that it lets the user accidentally bend the pin.

So they promised a lot, but it seems most of it was surface, and the small things were left to luck.

Design: style over everything

Design actually implies function, while style is artistic and does not have to be functional. In HP Omen terms this means that because they styled the device the way they did, the backside connectors are extremely difficult to use.

You cannot see the connectors from a shallow angle above the machine; you need to reach right over, because they sit at a negative angle under a lid of some sort. So I suspect the machine was designed by art-school students of some sort and no real-world testing was ever done on the design. It ended up sucking quite a bit.

This picture illustrates the problem quite well:

[Image: the rear connectors recessed under the lid]

The connectors are in one of those deep slopes. Hard to reach.

Also, why have all the connectors been placed on the back? It might work OK, but it feels odd, because laptops have traditionally had connectors on the side: the sides were reserved for commonly used things such as USB, and the back for VGA, Ethernet and the like. Here every single connector is on the back. This too comes down to their wish for style over functionality; someone decided on the style, and everything else was sacrificed.

Another small annoyance is that the keyboard LEDs are not individually controllable; the keyboard is divided into a few arbitrary sectors of some number of LEDs each, controllable only as a group. But it still catches the eye, so I am nitpicking; this is a non-issue compared to the others.

But more on that later.

But I mean, come on, HP, why use a poor-quality JPEG for your logo? It is the first thing the customer sees, and it is grainy. Somebody in quality assurance didn't do their job.

And going back to the ugly accessory packaging: after you have unpacked the machine from its beautiful box, finding wrinkled plastic bags is like finding gum stuck under the seat of your Rolls-Royce. It spoils the whole experience.

Take Apple: you won't find mistakes anywhere in their chain. There are no wrinkled plastic bags from a cheap Chinese manufacturer; everything is polished. When the machine looks this good you start to notice the little things, and then the show sort of falls apart.

I am quite certain the machine performs well, but because it was designed to look good, everything in and around it should have followed the same standard. HP should not have used their normal supplier for the accessories; they should have made the machine 50 € more expensive, or cut their margins, and manufactured a batch of custom-built accessories and packaging. However vain that may sound, it would have been the cherry on top of the cake.

More issues

Their mouse pad isn’t very good. Two-finger scrolling functionality doesn’t quite work. It cannot recognize properly that there are two fingers on the pad. So that same problems applies to simulator right click. So perhaps these are the things that you won’t get in quality when all the budget went into more visible things such as graphics card and large SSD and things like that. So while Lenovos come with less SSD and fewer cores, and with no LEDs, they would probably have highly superior components otherwise.

Display

Display isn’t too high quality. Backlight is clearly visible from few points on the edge of the screen. when background is black and surroundings are dark.

Other than that I cannot see any problems with it. The touch screen is an extremely nice feature, and you fall in love with it quite fast.

Revisiting the home data center architecture

If all goes well I will be adding one or two extremely powerful new servers in the coming months.

Those servers use 2.5″ disks, so the only question is how to implement a large-scale storage system. I have an old E6600-based server which would be perfectly fine if two 1Gbit connections were trunked together into a 2Gbit iSCSI link.

2TB seems to be the most cost-effective capacity in the 2.5″ form factor, and 3TB prices are beyond economical. So if one server could take four disks, a mirrored configuration would give 2TB of storage, with some faster storage in the form of SSDs left over from L2ARC and SLOG use.

The old DL360 G3 would be dedicated to working only as a firewall and traffic shaper, while routing and switching would move to dedicated managed gigabit switches.

Also, all servers currently boot from NFS, which has proven good but problematic: a failure in the NFS server has the potential to lock up or bring down all the other servers. So NFS would be removed in favor of an SSD-based mirrored ZFS root.

One question mark is my current networking setup, which relies heavily on Linux and would need to be ported to managed switches. It shouldn't be a problem, though, since it is technically all VLAN-based, with some bridges carrying more specific rules; those would need to be addressed somehow.

Something like pfSense could also be considered. If such a system is used for the firewall and router, I would like to move from i386 to a 64-bit architecture, because there have been problems with insufficient memory. An HP ProLiant DL380 G5 might suit the purpose perfectly as a low-cost server.

Quad-port gigabit PCIe network cards seem to be quite cheap, so with three slots it would act as a 12-port gigabit router. That would enable either the current Linux-based routing scheme or a transition to something BSD-based like pfSense. BSD has a reputation as a network-oriented system, and some studies have shown it performs extremely well as a router.

One thing to remember with Linux/BSD-based routers is to make absolutely certain that driver support for the network cards is solid; otherwise the stack will fall apart. Dedicated routing hardware works so well precisely because it was built to be one thing: a router and nothing more.

So if the new QEMU/KVM hypervisor sets me back 400 €, disks perhaps 500 €, the router 300 €, one or two additional small switches another 200 € and a 1400VA UPS 250 €, the price tag would be 1 650 €, which isn't too bad.
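
A quick sum keeps the budget honest (the prices are the estimates above):

    # Estimated build-out costs in euros, from the figures above.
    costs = {
        "QEMU/KVM hypervisor": 400,
        "disks": 500,
        "router": 300,
        "small switches": 200,
        "1400VA UPS": 250,
    }
    for item, price in costs.items():
        print(f"{item:>22}: {price:>4} EUR")
    print(f"{'total':>22}: {sum(costs.values()):>4} EUR")  # 1650 EUR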

That cost would hopefully give me room for at least another 3 years, with 2TB of storage and the possibility of expanding to 14TB by using the router as an FC-based storage node, dropping 4 gigabit ports to accommodate the FC card.

HP gaming laptop, HP Omen, pre-thoughts

This is my new work laptop, believe it or not. My first thought was "this is way too much", and I felt ashamed for asking for one. But when I afterwards made comparisons and looked for alternatives, I found there were few.

Initially I thought I wanted a MacBook Pro, but it doesn't compare with the HP Omen and seems highly inferior. Obviously the price is much lower too, but still. And I was after a Windows-based laptop, so a Mac wouldn't have been the best choice.

But I didn’t kind of want a gaming laptop because it isn’t “professional” device, if you understand what I mean. If you look at the device it looks gaming device, not professional and serious, which wasn’t something that I was after.

So I went looking for a good model from the obvious choice, Lenovo, but to my disappointment their comparably priced models were inferior in purely technical terms, at least on the surface: fewer cores, less SSD, less memory.

http://store.hp.com/us/en/pdp/Laptops/hp-omen—15t-quad-touch-select-lapto-p0a65av-1

So despite the fact that it is a gaming laptop, a hell of a powerful one, and quite expensive, there seems to be little competition. It looks ridiculously "cool": when you first see it, it genuinely turns your head, and the specification is nice.

If I could have had what I ultimately wanted, it would have been a laptop with ECC memory, but those are difficult to find. Since I use proper servers, ECC would have been the obvious choice for me.

I will get back to this once I receive the device. Expect QEMU virtualization at some point, hopefully with VGA passthrough.

It has one minus: because it is so large (15.6″), you cannot simply drag it around carelessly. But other than that, it packs plenty of power in a compact enough package for any other possible use case.

Ideal server

If I am willing to pay 150 € for the server, then the additional upgrades would come in as follows:

  • HP Smart Array P800
    • 50 €
    • The included E200i does not support JBOD
  • 5 pieces of 10K RPM SAS 300GB
    • 270 €
    • 600GB of fast SAS storage plus one spare
  • CPU upgrade to Dual Quad-Core X5470
    • 100 €
  • Intel 320 120GB
    • 40 €
    • To be used as SLOG

So this new server, which began as a 75 € idea, would suddenly cost over 600 €.

But that would then be quite a beast.

And then my home data center would also require a UPS, which would add another 200 €. It is good to have dreams.

Updated calculations

Still looking for this server but with modified specs:

  • HP Smart Array P800
    • 50 €
    • The included E200i does not support JBOD
  • 3 pieces of 10K RPM SAS 300GB
    • 170 €
    • 300GB of fast SAS storage plus one spare
  • CPU upgrade to Dual Quad-Core E5440
    • 55 €
  • Intel DC S3500 120GB
    • 117 €
    • To be used as SLOG

And the upper limit for a locally available, pick-up-only server, after which it becomes cheaper to get this from eBay, is 135 € rather than 150 €.

The Intel 320 does not seem to be widely available any more, so the S3500 should replace it. Also, some suggest that one could define each disk on the original E200i controller as a single-disk RAID0 volume and use them like that. It would save that 50 €.

So that would be an approximately 520 € server with 22.64GHz of Core 2 Harpertown architecture (two quad-core E5440s at 2.83GHz: 8 × 2.83 = 22.64), 32GB of memory and 300GB of mirrored 10K SAS with an SSD SLOG.

Lack of VT-d support

This is the big question: am I willing to spend this much money when I would really like to have VT-d? http://ark.intel.com/search/advanced?VTD=true

The next generation (G6) additionally has DDR3 and is probably quite a bit more advanced otherwise as well; it also supports 12 cores. So I might put this G5 on hold, because it would cost close to 400 € regardless. Perhaps with 200 € more I can get the G6.

DL360 G6 setup

The following DL360 G6 setup would cost about 650 €, only 130 € more than the G5, so the G5 is out of the question.

  • Base server with 8GB of memory
    • 210 €
  • Heatsink for second CPU
    • 38 €
  • CPU upgrade to Dual Quad-Core X5570
    • 50 €
  • Memory upgrade to 32GB
    • 72 €
  • 3 pieces of 10K 300GB SAS
    • 170 €
  • Intel DC S3500 120GB
    • 117 €

Surprisingly cheap. The disks are the most expensive upgrade here and could perhaps be postponed. I need to be on the lookout for a good base machine on top of which to buy the upgrades. That seems to be the cheapest route.

A little bit better setup

Replacing the three spinning SAS disks with two OCZ Vector 180 240GB drives would reduce the cost by 6 €, but more importantly guarantee superior performance under all conditions. 60GB less space and no spare, but also no used disks: everything brand new and extremely fast.

There of course was a catch behind that price:

For cost reasons, OCZ didn’t go with full power loss protection similar to enterprise SSDs and hence PFM+ is limited to offering protection for data-at-rest. In other words, PFM+ will protect data that has already been written to the NAND, but any and all user data that still sits in the DRAM buffer waiting to be written will be lost in case of a sudden power loss. (http://www.anandtech.com/show/9009/ocz-vector-180-240gb-480gb-960gb-ssd-review)

So we are back to square one. The alternative would be to use the Intel DC S3500, but that would limit the space even further, to 120GB, at which point things get so tight that not all the capability of the server can be realized. If you want to test something, you must have disk space available; the 300GB alongside the super-fast SSD is for that.

What about AES-NI?

I am glad you asked, because it turns out that the three six-core Xeons supported by the G6 have AES-NI. The downside is that they are 6-core/12-thread parts (and cost $100+ each), and I would have nothing to burn those cycles on. Currently all my disk I/O goes through AES encryption, and if that is sped up, there is really nothing left to spend the CPU capacity on.
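
Checking whether a given CPU has the extension is trivial on Linux; a minimal sketch that reads the flags from /proc/cpuinfo:

    # Check /proc/cpuinfo for the AES-NI flag (Linux).
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break

    print("AES-NI supported" if "aes" in flags else "no AES-NI: AES runs in software")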

So that’s a positive problem. At this point it would seem that the best way would be to go and purchase that Intel DC S3500 and use it to greatly increase the performance of my current DL140 G3.

I love this planning but I also hate it because there are so many ways to go.

Main server heavily loaded

My soon-to-be-15-year-old server is still going strong, but its age is starting to show now that quite a lot is going on.

It does a wonderful job: the disks are all encrypted, and the system has grown quite a bit over the past two years. I am still not considering doing anything about it, because it can still serve everything I need. CentOS 6 will provide maintenance updates until 2020, so that will be the last day this server serves its current purpose. More likely, though, since full updates stop in mid-2017, this server will then turn into a router, and my current virtual host will become a general-purpose server.

Routing doesn’t take resources so an old server can still perform excellently well with any sort of routing setup.Currently the server has 14 gigabit ethernet ports so it is perfect as a router.

Average loads have risen slowly but steadily:

[Graph: load averages over time]

The scale is in tenths, so the average fifteen-minute load is about 2.50. The server has two cores and four threads, so on average there are more runnable processes than cores but fewer than threads. I wouldn't be too worried as long as the 15-minute load stays under 6 or so. The problem is that certain heavy tasks must run every five minutes, and these, quite ironically, are mainly related to updating these very graphs. The more I add, the more the load increases. They do provide valuable information, though.
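
The same sanity check works on any Linux box; a minimal sketch:

    import os

    # 1, 5 and 15 minute load averages straight from the kernel.
    load1, load5, load15 = os.getloadavg()
    cores = os.cpu_count()  # logical CPUs, i.e. hardware threads

    print(f"15 min load: {load15:.2f} on {cores} logical CPUs")
    if load15 < cores:
        print("on average, every runnable process gets a hardware thread")
    else:
        print("processes are queuing for CPU time")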

Eyeing a DL360 G5 server

Seeing if I can get one of these for a cheap price.

[Image: HP ProLiant DL360 G5]

It would come without hard drives but with 32GB of memory and both CPU sockets occupied with something. There is one other bidder at 25 € with 8 days to go. I am not sure if there is a reserve price, which I suspect there might be, but I would easily be willing to bid 100 €.

And according to this video it is also extremely quiet.

So it would make a great desktop machine to replace my old one, or a replacement for my virtual host, which runs an X5350, while this one could support the X5470.

And the maximum memory is 192GB, which would be perfect even for a commercial system. Apparently this one supports only 64GB, but that would still be double my current maximum of 32GB.

Four SAS/SATA ports would also allow raidz1 plus one SSD for caching, which would deliver extraordinary performance with 15K disks.

One sad thing is that, if memory serves, 5400-series processors do not support PCI passthrough, so it would not be possible to virtualize Windows on top of Linux with native graphics support. The same goes for the network, but these are minor issues.

Calculating the cost

I am beginning to warm to the idea of paying up to 150 € for this machine, as the 32GB of memory in it is alone worth 50-70 €.

The rest of the upgrades, for either desktop or server use, would cost something like this:

  • CPU upgrade from 4 low-end cores to 8 high-end cores 120 € (X5470)
  • SSD upgrade for desktop/server use 70 € (half decent Corsair)
  • 15K RPM SAS disk upgrade for server 140 € (3x300GB)

So fully equipped with 32GB for high-performance server use, the total cost would be close to 500 €. It would be more cost-efficient to buy two SSDs, mirror them, and use cheap iSCSI storage for bulk data.

That would lower the cost from 480 € to 340 €, which is quite a good deal for a safe, high-performing machine with two SSDs. So I don't know. If it goes for under 150 €, I am quite certain I will take it.

The other server

I have three servers, one of which runs 24 hours a day. That's my main server, which mostly does routing and all the basic infrastructure. It is used to test all sorts of things, mainly network-related; lately, of course, IPv6 and SixXS.

By the way, the SixXS IPv6 address space they kindly gave me is so big that it can accommodate 65,536 subnets, each roughly 18 million trillion (2^64) addresses wide.
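
Those numbers are easy to verify with Python's ipaddress module. The prefix below is the IPv6 documentation prefix standing in for the real allocation; the math assumes a /48, which is what the 65,536 figure implies:

    import ipaddress

    # 2001:db8::/48 is the documentation prefix, standing in for the real allocation.
    alloc = ipaddress.ip_network("2001:db8::/48")

    subnets = 2 ** (64 - alloc.prefixlen)    # number of /64 subnets in a /48
    hosts_per_subnet = 2 ** (128 - 64)       # addresses in each /64

    print(f"/64 subnets in {alloc}: {subnets}")          # 65536
    print(f"addresses per /64: {hosts_per_subnet:.3e}")  # ~1.845e+19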

But the other server I was going to talk about is my HP DL140 G3, which is currently being equipped. I intend to replace the current low-end dual-core CPU with two high-end quad-core ones.

In addition it will get 16 GiB of memory at first, and later, if needed, another 16 GiB.

The disks are going to be enterprise grade. It has two bays, so one will house a 3 TB WD Re and the other either a generation 5 or generation 6 WD VelociRaptor 10K.

The Re is expensive, but it has over double the MTBF of any consumer-grade disk. The VelociRaptor, of course, is legendary and runs neck and neck with SSDs with no trouble at all. That one is going to be 300, 500 or maybe 600 GB, depending.

That is quite an expensive investment, perhaps around $500, but combined with 8 cores and the memory I should be able to run a very generous number of virtual machines of any sort.

And the enterprise grade of these disks should guarantee they last a long time.

The rationale for the 3 TB disk is backup and long-term data, whereas the faster (in access time) VelociRaptor will house the working data, such as virtual machine images.

The CPU count is extremely important to me, since it is hard (practically impossible) to find SATA drives with hardware encryption. Sadly, the CPUs I am getting do not have the x86 instruction set extensions for the Advanced Encryption Standard.

But given the CPUs will be relatively modern, high-end Xeons, I expect enough throughput that the disks, not the processing power, will be the bottleneck.

And if one core must be sacrificed for encryption, then so be it; there will still be 7 left for actual processing.
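
Whether one core keeps up can be tested in advance: cryptsetup ships a built-in benchmark that measures in-memory AES throughput. A minimal sketch wrapping it (assumes cryptsetup is installed; the disk throughput figure is a rough assumption for these spinning drives):

    import subprocess

    # "cryptsetup benchmark" measures in-memory cipher throughput,
    # which approximates the ceiling for dm-crypt encrypted disk IO.
    out = subprocess.run(["cryptsetup", "benchmark"],
                         capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        if "aes" in line:      # keep only the AES cipher results
            print(line)
    # If the software AES figure exceeds the disks' sequential throughput
    # (assumed here to be roughly 150-200 MB/s), the CPU will not be the
    # bottleneck even without AES-NI.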