Upgrading main server to dual PSU

Not currently in production but will be once it has been properly equipped.

26 € delivered:

[Photo: the eBay listing for the power supply]

Although I don't quite understand why people bother with dual PSUs, because in my experience quality power supplies of this sort simply don't fail. Of course, if you run hundreds of servers the odds are that some of them will fail eventually, and if it happens to a production server, that is a problem.

But a good deal nevertheless. Next in line: an upgrade from 6 cores to 12 and from 32 GB to 64 GB of memory.

More Israeli shopping

They have some good stuff at good prices, and the sellers there certainly know how to make a deal.

[Photo: the HP network adapters]

These are available for $10 plus a small postage fee, but I'm still trying to negotiate a few dollars off the price:

[Screenshot: the eBay listing]

I don't really need these, but I might put together a pfSense firewall, load balancer and router just because it's simple and fun. There are too many advanced things going on all the time, so a small project would be a nice change for once.

With these the router would have 8 gigabit Ethernets which is just about enough for a basic setup.

I have an old ProLiant that these would go into, and I have had a similar but 32-bit machine running non-stop for the last 3 years without a single hiccup, so these old ProLiants are workhorses that never seem to fail.

[Photo: the eBay listing]

And the seller accepted the offer, so unless these get stuck to the customs, that was $31 well spent.


Great cheap score

Scored an old old machine for pennies. Or perhaps more accurately, for spares.

The ad said it's a DL350 G1, but I don't believe such a machine exists; judging by the specifications it is actually a DL385 G1.

  • 2x AMD Opteron 250 64-bit 2.4 GHz, 1MB L2 cache
  • 8 GB (2x 2 GB, 4x 1 GB, original HP)
  • Smart Array 6i Ultra320
    • 3x 36 GB Ultra 160 SCSI 10000 RPM
    • 3x 147 GB Ultra 320 SCSI 10000 RPM
  • 2x PSU

I have a serious disk shortage on my current G3 machine, and these 147 GB disks should rectify that situation. The whole thing cost me 20 €; had I gone out and bought those 147 GB disks separately, it would have cost many times that.

Update

And talking about great 20 € score:

Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 26.58

Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 4.42

Two of the disks are brand new! But one 2 GB memory stick was bad, so its pair cannot be used either, leaving 4 GB of memory. Still, everything is practically free. It's a dual single-core Opteron 250 machine, so I may upgrade to Opteron 285s to get four cores.
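
For reference, that "number of hours powered up" line is what smartctl prints for SAS/SCSI drives under the vendor factory information. A minimal check across the bays could look like this (the /dev/sg* device names are just examples, not necessarily what this box uses):

# print power-on hours for every SCSI/SAS disk (device names are examples)
for d in /dev/sg1 /dev/sg2 /dev/sg3 /dev/sg4 /dev/sg5 /dev/sg6; do
    echo "== $d =="
    smartctl -a "$d" | grep -i "hours powered up"
done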

The only problem with this server is that it casually sips 320 W of power doing nothing. Compare that to the other, newer server, which is an order of magnitude faster but takes only 100 W at idle. So while it is cheap, the cost of electricity and the trouble of dealing with the heat are the real issues. But as a stand-by server or some sort of emergency router it would be great.

Now the problem is the noise this thing makes. I believe the fans are running at full throttle and the machine doesn't realize that everything is actually cool, so the next stop is HP's management tools from the SDR/SPP repository:

http://downloads.linux.hpe.com/SDR/repo/spp/RedHat/7.1/x86_64/current/

But the problem is that hp-health on this hardware (DL385 G1) depends on hpasmd, which isn't included in CentOS 7/RHEL 7. So I am forced back to CentOS 6, because I have neither the time nor the interest to figure out how to hack it into working. I am sure it could be done somehow.

OK. So after installing CentOS 6 I now have hpasmcli access and it shows fans like this:

hpasmcli> SHOW FANS
Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   PROCESSOR_ZONE  Yes     NORMAL  18%     Yes        2        Yes           
#2   PROCESSOR_ZONE  Yes     NORMAL  18%     Yes        1        Yes           
#3   I/O_ZONE        Yes     NORMAL  18%     Yes        1        Yes           
#4   I/O_ZONE        Yes     NORMAL  18%     Yes        1        Yes           
#5   PROCESSOR_ZONE  Yes     NORMAL  18%     Yes        1        Yes           
#6   PROCESSOR_ZONE  Yes     NORMAL  18%     Yes        1        Yes           
#7   POWERSUPPLY_BAY Yes     NORMAL  18%     Yes        1        Yes           
#8   POWERSUPPLY_BAY Yes     NORMAL  18%     Yes        1        Yes

So it hasn't ramped them up to 100%, which means I could in fact use CentOS 7 and simply live without this information. But the machine is much louder than the DL380 G3, even though it uses the exact same fans.
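
To see whether the fans are actually tracking the temperatures rather than sitting at a fixed duty cycle, hpasmcli can also dump the thermal sensors; a quick non-interactive check I would run alongside SHOW FANS:

# one-shot queries without entering the interactive prompt
hpasmcli -s "show temp"
hpasmcli -s "show fans; show powersupply"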

My choice of kernel for these servers is always the latest long-term release; in short, a long-term kernel is a stable Linux kernel release that continues to receive bug fixes for an extended period.

Disk performance

I am surprised by the performance of this thing: with only two cores it manages 77 MB/s on an encrypted three-disk RAIDZ1, while raw single-disk performance is only 66 MB/s, and a scrub went through at a little under 150 MB/s. Read speed is 150 MB/s. All of these are CPU bound, so an upgrade to quad-core Opteron 285s is coming. But it takes 400 W of power to read data from three disks and decrypt it, which is a lot.
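
For the record, there is nothing scientific behind these numbers, just streaming reads and writes plus the scrub rate reported by the pool; roughly the following, with the device, pool and dataset names being examples rather than my actual ones:

# raw single-disk streaming read
dd if=/dev/sda of=/dev/null bs=1M count=4096

# streaming write into the encrypted RAIDZ1 pool
# (assumes compression is off; otherwise zeros give inflated figures)
dd if=/dev/zero of=/tank/bench/bigfile bs=1M count=8192

# kick off a scrub and watch the reported rate
zpool scrub tank
zpool status tank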

Great score!

Scored an HP ProLiant DL385 G6 for 100 €, easily worth 200 €. It only appears to have a problem with one of the Broadcom chips: the server cannot PXE boot from those ports, nor is Linux able to configure them. lspci shows the two NICs/devices, but they never appear as interfaces in the system. That is a small problem, though, because with one additional riser card I can have 6 PCIe expansion cards, and the other two NICs do work, so installation will be easy.

Also included were two 160 GB Hitachi SATA disks and two 36 GB Seagate SAS disks. Considering that I paid 100 € for the DL365 G1, this was a much, much better deal.

It has a single hexa-core AMD Opteron 2431, and according to AMD that should be about 50% faster than the previous-generation (23xx) quad-core that the DL365 G1 has. So ESXi will be moving to this server.

I am also hopeful, based on what I have read, that these six-core Opterons have an AMD IOMMU, so I could do true passthrough for all the disks and bypass ESXi's storage layer entirely.
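
Once the machine is here, the quick way to see whether the IOMMU is actually usable is to boot a Linux live image and look for AMD-Vi in the kernel log; a trivial check, nothing specific to this exact box:

# AMD-Vi must be enabled in the BIOS and initialized by the kernel
dmesg | grep -i -e AMD-Vi -e IOMMU

# on newer kernels, populated IOMMU groups are the real confirmation
find /sys/kernel/iommu_groups/ -type l | head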

And thumbs up to the HP engineer who designed the very nice soft ramp-up for those fans; it sounds professional when the fans don't go full-on immediately but ramp up gently, and the same when they settle down.

But again, just as with the 146 GB SAS disk I ordered earlier, this one came with disks that still contain data. Apparently people in general aren't too concerned about security. Of course these disks will be backed up for further forensic study.

How to set it up?

With 8 bays I can use two disks for ESXi in hardware RAID 0, two disks for the ZFS ZIL, and then configure the remaining four as I wish. L2ARC probably doesn't make sense here, so striped mirrors might be the nicer option.
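
A minimal sketch of that layout, assuming hypothetical device names (behind a Smart Array these would really be single-disk logical volumes) and a made-up pool name:

# four data disks as two striped mirrors, two disks as a mirrored SLOG
zpool create tank \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde \
    log mirror /dev/sdf /dev/sdg

zpool status tank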

I should also be able to move the existing 16 GB to this machine, which luckily is PC2-6400P, since this server supports those speeds. And with 16 slots I can get more, and bigger, sticks in there. A fully loaded memory configuration would be 192 GB, which is more memory than I currently have disk space for on this machine. With 192 GB it would be possible to do pretty much anything.

The server now has a DVD-RW drive, but I might replace it with a tape drive and start taking backups to tape. Tapes are relatively cheap, easy to handle and, when stored correctly, can last for 30 years.
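
If the tape drive materializes, plain tar plus mt would be enough to get started; a rough sketch, assuming the drive shows up as the usual /dev/st0:

tar -cvf /dev/st0 /etc /home /srv   # write one backup set to tape
mt -f /dev/st0 rewind               # rewind to the beginning of the tape
tar -tvf /dev/st0                   # verify by listing the archive
mt -f /dev/st0 offline              # rewind and eject when done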

The broken NIC issue

It seems the problem is with Broadcom firmware: http://www.wooditwork.com/2014/04/25/warning-hp-g2-g7-server-nics-killed-firmware-update-hp-spp-2014-02/

So it might simply be due to that. It would fit well, because the NIC complains that it cannot start, and my server has

  • HP NC382i DP Multifunction Gigabit Server Adapter

which can be affected.
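
A quick way to confirm the symptom, i.e. chips visible on the PCI bus that never come up as interfaces (nothing here is specific to this exact card; bnx2 is simply the usual driver for these Broadcoms):

# the NICs are enumerated on the bus...
lspci | grep -i broadcom

# ...but no matching interfaces exist
ip link show

# and the driver typically complains in the kernel log
dmesg | grep -iE 'bnx2|firmware'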

Problems

I have no cache module on the Smart Array P400, so it is not possible to create as many logical drives as I need. So I might go and get myself a P800 with battery and cache.

Upgrades

The following upgrades are likely to be made:

  • Secondary power supply 40€
  • 64GB PC2-5300P 50€
  • 512MB BBWC + Battery 30€

Then later on also CPU upgrade to 12 cores total:

  • AMD Opteron 2431 ~40€
  • Heatsink 20€
  • Two fans for second CPU 30€

(40 + 50 + 30 + 40 + 20 + 30) € comes to 210 €, so more or less fully equipped and redundant, the total price for this DL385 G6 would be 310 €, which is quite cheap. The same from eBay would cost twice that.

Another issue is dust. A couple of days ago I had a disk fault on one server, and when I replaced the disk, the old one was covered in whatever floats around in the air.

More on new HP ProLiant DL365 server

BBWC

I believe it stands for Battery-Backed Write Cache, and it is what enables the read/write cache of the P400i RAID controller. Without one, the cache does nothing.

The battery is the part below; the upper part is the actual RAM. With the battery fully charged it can retain the cache contents for up to 72 hours, which is plenty for any short-term, sudden loss of electricity.

[Photo: the BBWC module, cache memory on top and battery below]

I already have the 256 MB cache module installed, since I have the performance model, but for some odd reason there is no battery. I wonder if the person I bought the server from took parts out because I offered 50 € less than the asking price. Who knows; he could easily have done it. These sorts of specifications were not listed, nor did I ask about them.

A new battery is 45 €, delivery included, so I am choosing between a new battery alone and a new battery plus the 512 MB cache module; both cost practically the same. My concern is how these batteries age: Ni-MH does suffer if it is left to self-discharge too far.

So I will probably just get the new battery.

On the other hand..

AAh. You have a RAID controller with on-card RAM. Based on my testing with 3 different RAID controllers that had RAM and benchmark and real world tests, here’s my recommended settings for ZFS users:

1. Disable your on-card write cache. Believe it or not this improves write performance significantly. I was very disappointed with this choice, but it seems to be a universal truth. I upgraded one of the cards to 4GB of cache a few months before going to ZFS and I’m disappointed that I wasted my money. It helped a LOT on the Windows server, but in FreeBSD it’s a performance killer. :(
2. If your RAID controller supports read-ahead cache, you should be setting to either “disabled”, the most “conservative”(smallest read-ahead) or “normal”(medium size read-ahead). I found that “conservative” was better for random reads from lots of users and the “normal” was better for things where you were constantly reading a file in order(such as copying a single very large file). If you choose anything else for the read-ahead size the latency of your zpool will go way up because any read by the zpool will be multiplied by 100x because the RAID card is constantly reading a bunch of sectors before and after the one sector or area requested.

So perhaps I should simply save the 45 € and spend it on a NIC instead? Needs more research.
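
If I end up testing recommendation #1 on this hardware, hpacucli can toggle the cache per logical drive; the slot and drive numbers below are just examples, not taken from my actual configuration:

# show current controller and cache settings
hpacucli ctrl slot=0 show detail

# turn the array accelerator (read/write cache) off for one logical drive
hpacucli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=disable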

Power consumption

With one PSU it draws about 15 W while shut down.

While booting and doing very little, it draws about 300 W with five 10K SAS disks plus one 15K SAS disk.

After booting into ESXi 5.5 it settles at around 240 W, and with only one PSU it drops to about 218 W.

When heavily loaded, both disks and CPU cores, it maxes out at about 360 W.

Memory upgrade

Glad that I checked, because it turns out the server has the more expensive PC2-6400P memory and not the ordinary PC2-5300P, so it was a pretty good deal. But I am still running short on memory, and I still have boxes to configure.

So I am looking to upgrade to at least 24 GB. But these 800 MHz modules are not that common, so they are either hard to find or expensive.

Currently I am giving 10 GB to the ZFS back-end storage, and it looks like this:

# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp zvpsm scsi_vhci zfs mpt sd ip hook neti arp usba kssl stmf stmf_sbd sockfs md lofs random idm cpc crypto fcip fctl fcp smbsrv nfs ufs logindmux nsmb ptm sppp ii
nsctl rdc sdbc sv ]
> ::memstat
Page Summary                 Pages             Bytes  %Tot
----------------- ----------------  ----------------  ----
Kernel                      408421              1.5G   16%
ZFS Metadata                 37449            146.2M    1%
ZFS File Data              1525761              5.8G   58%
Anon                         44523            173.9M    2%
Exec and libs                 1709              6.6M    0%
Page cache                    8864             34.6M    0%
Free (cachelist)                 0                 0    0%
Free (freelist)             578290              2.2G   22%
Total                      2621327              9.9G

ZFS will of course consume all the memory available on the machine, within its configured limits. No tuning of any parameters has been done, so the performance probably isn't optimal and certainly has not been matched to this particular setup and system.
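
If I ever do want to tune it, the obvious first knob on this Solaris-derived storage VM is capping the ARC; a sketch, assuming I wanted to hold it to roughly 6 GB of the 10 GB:

# /etc/system -- cap the ZFS ARC at 6 GiB (value in bytes); needs a reboot
set zfs:zfs_arc_max=6442450944

# verify after reboot
echo ::arc | mdb -k | grep -i c_max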

HP Smart Array without-cache limitations

I cannot remember whether I covered this earlier, but I had to halt the work on the DL385 G6 because I couldn't create more than two logical drives, while I needed one for ESXi and at least four for ZFS so that it has access to individual disks. Today I got the 512 MB BBWC, and with that in place I could create all the drives I needed. So the controller perhaps requires more memory to deal with larger configurations, or it is some sort of HP licensing "penalty".
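
For reference, the per-disk logical drives can then be created one bay at a time with hpacucli; the slot number and the physical drive addresses below are from my head rather than from the actual box:

# list the physical drives the controller can see
hpacucli ctrl slot=1 physicaldrive all show

# one RAID 0 logical drive per physical disk, so ZFS sees "individual" disks
hpacucli ctrl slot=1 create type=ld drives=1I:1:3 raid=0
hpacucli ctrl slot=1 create type=ld drives=1I:1:4 raid=0
hpacucli ctrl slot=1 create type=ld drives=1I:1:5 raid=0
hpacucli ctrl slot=1 create type=ld drives=1I:1:6 raid=0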

Revisiting the home data center architecture

If all goes well I will be adding one or two extremely powerful and new servers in the coming months.

Those servers use 2.5″ disks, so the only question is how to implement a large-scale storage system. I have an old E6600-based server which would be perfectly fine if its two 1 Gbit connections were trunked together into a 2 Gbit iSCSI link.
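
On the Linux side the trunk could be a plain LACP bond; the interface names, bonding mode and address below are assumptions, and the switch end would need a matching configuration. (For iSCSI specifically, two separate paths with multipathing is often the better way to use both links, since a single iSCSI session hashes onto just one physical link in an LACP bond.)

# bond two gigabit ports with LACP (802.3ad); names and addresses are examples
modprobe bonding
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.50.10/24 dev bond0

# verify that LACP negotiated with the switch
cat /proc/net/bonding/bond0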

2 TB in the 2.5″ form factor seems to be the most cost-effective, and prices for 3 TB are beyond economical. So if one server could take four of those disks, a mirrored configuration would give 2 TB of storage, with some faster storage in the form of left-over SSDs for L2ARC and SLOG.

The old DL360 G3 would be dedicated to working only as a firewall and traffic shaper, and routing and switching would be moved to dedicated managed gigabit switches.

Right now all servers boot from NFS, which has proven to work well but is problematic if that NFS server fails, since such a failure can lock up or bring down all the other servers. So NFS would be dropped in favor of an SSD-based mirrored ZFS root.

One question mark is my current networking setup, which relies heavily on Linux and would need to be ported to managed switches. It shouldn't be a problem, though, since it is technically all VLAN based, with some bridges carrying more specific rules; those would need to be addressed somehow.
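
For context, the Linux side is essentially this pattern repeated per VLAN (the VLAN ID, names and address are placeholders), and that is what would have to be reproduced as port and VLAN assignments on the managed switches:

# one tagged VLAN on a physical port, attached to a bridge with an address
ip link add link eth0 name eth0.10 type vlan id 10
ip link add br10 type bridge
ip link set eth0.10 master br10
ip link set eth0.10 up
ip link set br10 up
ip addr add 10.0.10.1/24 dev br10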

Something like pfSense could also be considered. But for the firewall and router, if such a system is used, I would like to move from i386 to a 64-bit architecture, because there have already been problems with running out of memory. An HP ProLiant DL380 G5 might suit the purpose perfectly as a low-cost server.

Quad-port gigabit PCIe network cards seem to be quite cheap, so with three slots the box could act as a 12-port gigabit router. That would allow either the current Linux-based routing scheme or a transition to something like the BSD-based pfSense. BSD has a reputation of being a network-oriented system, and some studies have shown it performs extremely well as a router.

But one thing to remember with Linux/BSD-based routers is to make absolutely certain that the driver support for the network cards is solid; otherwise the stack will fall apart. Dedicated routing hardware works so well because it was built to be exactly one thing: a router and nothing more.

So if the new QEMU/KVM hypervisor were to set me back 400 €, disks perhaps 500 €, the router 300 €, one or two additional small switches another 200 € and a 1400 VA UPS 250 €, then the price tag would be 1 650 €, which isn't too bad.

That cost would hopefully buy me room for at least another 3 years, 2 TB of storage, and the possibility to expand that storage to 14 TB by using the router as an FC-based storage node, dropping 4 gigabit ports to accommodate the FC card.

HP gaming laptop, HP Omen, pre-thoughts

This is my new work laptop, believe it or not. My first thought was "this is way too much" and I felt ashamed for asking for one. But when I afterwards made comparisons and tried to find an alternative, I found there were few.

Initially I thought I wanted a MacBook Pro, but it doesn't compare with the HP Omen and seems highly inferior; obviously the price is much lower too, but still. And I was after a Windows-based laptop, so a Mac wouldn't have been the best choice anyway.

At the same time I didn't really want a gaming laptop, because it isn't a "professional" device, if you know what I mean. Looking at it, it looks like a gaming device rather than something professional and serious, which wasn't what I was after.

So I went looking for a good model from the obvious choice, Lenovo, but to my disappointment their comparably priced models were inferior in pure technical terms, at least on the surface: fewer cores, less SSD, less memory.

[Screenshot: HP Omen configuration from the HP store]

http://store.hp.com/us/en/pdp/Laptops/hp-omen—15t-quad-touch-select-lapto-p0a65av-1

So despite the fact that it is a gaming laptop, a hell of a powerful one, and quite expensive, there seems to be little competition. It looks ridiculously "cool"; when you first see it, it genuinely turns your head, and it has a nice specification.

If I could have gotten what I ultimately wanted, it would have been a laptop with ECC memory, but those are hard to find. Since I use proper servers, ECC would have been the obvious choice for me.

But I will get back to this once I receive the device. Expect QEMU virtualization at some point, hopefully with VGA passthrough.

It has one minus: because it is so large (15.6″), one cannot simply drag it around carelessly. But for the little carrying it will see, it packs plenty of power in a compact enough package for any other possible use case.

Ideal server

If I am willing to pay 150 € for the server, then the additional upgrades would come in as follows:

  • HP Smart Array P800
    • 50 €
    • The included E200i does not support JBOD
  • 5 pieces of 10K RPM SAS 300GB
    • 270 €
    • 600GB of fast SAS storage plus one spare
  • CPU upgrade to Dual Quad-Core X5470
    • 100 €
  • Intel 320 120GB
    • 40 €
    • To be used as sLOG

So this new server, which began as a 75 € idea, would suddenly cost over 600 €.

But that would then be quite a beast.

And then my home data center would also require a UPS, which would add another 200 €. It is good to have dreams.

Updated calculations

Still looking for this server but with modified specs:

  • HP Smart Array P800
    • 50 €
    • The included E200i does not support JBOD
  • 3 pieces of 10K RPM SAS 300GB
    • 170 €
    • 300GB of fast SAS storage plus one spare
  • CPU upgrade to Dual Quad-Core E5440
    • 55 €
  • Intel DC S3500 120GB
    • 117 €
    • To be used as sLOG

And the upper limit to pay for a locally available, pick-up-only server (beyond which it becomes cheaper to get this from eBay) is 135 €, not 150 €.

The Intel 320 does not seem to be widely available any more, so the S3500 should replace it. Some also suggest that one could define each disk on the original E200i controller as its own RAID 0 logical drive and use them like that, which would save that 50 €.

So that would be about a 520 € server with 22.64 GHz of Core 2 Harpertown architecture, 32 GB of memory and 300 GB of mirrored 10K SAS with an SSD SLOG.

Lack of VT-d support

This is the big question: am I willing to spend this much money when what I would really like to have is VT-d? http://ark.intel.com/search/advanced?VTD=true

The next generation (G6) additionally has DDR3 and is probably quite a bit more advanced otherwise as well; it also supports up to 12 cores. So I might put this G5 on hold, because it would cost close to 400 € regardless. Perhaps if I put in 200 € more I can get the G6.

DL360 G6 setup

The following DL360 G6 setup would cost about 650 €, only 130 € more than the G5, so the G5 is out of the question.

  • Base server with 8GB of memory
    • 210 €
  • Heatsink for second CPU
    • 38 €
  • CPU upgrade to Dual Quad-Core X5570
    • 50 €
  • Memory upgrade to 32GB
    • 72 €
  • 3 pieces of 10K 300GB SAS
    • 170 €
  • Intel DC S3500 120GB
    • 117 €

Surprisingly cheap. The disk is the most expensive upgrade here and could perhaps be postponed. I need to be on the lookout for a good base on top of which to buy the upgrades; that seems to be the cheapest route.

Little bit better setup

Replacing the three spinning SAS disks with two 240 GB OCZ Vector 180s would reduce the cost by 6 €, but more importantly guarantee superior performance under all conditions. That is 60 GB less space and no spare, but also no used disks: everything brand new and extremely fast.

There of course was a catch behind that price:

For cost reasons, OCZ didn’t go with full power loss protection similar to enterprise SSDs and hence PFM+ is limited to offering protection for data-at-rest. In other words, PFM+ will protect data that has already been written to the NAND, but any and all user data that still sits in the DRAM buffer waiting to be written will be lost in case of a sudden power loss. (http://www.anandtech.com/show/9009/ocz-vector-180-240gb-480gb-960gb-ssd-review)

So we are back to square one. The alternative would be to use the Intel DC S3500, but that limits the space even further, to 120 GB, at which point things get so tight that not all of the server's capability can be realized. If you want to test something, you must have disk space available; the 300 GB alongside a super-fast SSD is there for exactly that.

What about AES-NI?

I am glad you asked, because it turns out that the three six-core Xeons supported by the G6 do have AES-NI. The downside is that they are 6-core/12-thread parts (and cost $100+ each), and I would have nothing to burn those extra cycles on. Currently, whenever I do disk I/O all of that traffic goes through AES encryption, and if that is sped up there is really nothing left to spend the CPU capacity on.
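
Checking for and measuring AES-NI is simple enough; the second openssl run goes through the EVP interface, which is the path that actually uses the instructions:

# does the CPU advertise AES-NI?
grep -m1 -o aes /proc/cpuinfo

# software AES versus hardware-assisted AES
openssl speed aes-256-cbc
openssl speed -evp aes-256-cbc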

So that's a good problem to have. At this point it seems the best move would be to purchase that Intel DC S3500 and use it to greatly increase the performance of my current DL140 G3.

I love this planning but I also hate it because there are so many ways to go.

Main server heavily loaded

My soon-to-be 15-year-old server is still going strong, but its age is starting to show now that quite a lot is going on.

[Screenshot: the server's current workload]

It still does a wonderful job even though the disks are all encrypted and the system has grown quite a bit over the past 2 years. I am not considering doing anything about it, because it can still serve everything I need. CentOS 6 will provide maintenance updates until 2020, so that is the longest this server will keep its current purpose. More likely, since full updates stop in mid-2017, this server will then turn into a router and my current virtual host will turn into a general-purpose server.

Routing doesn't take much in the way of resources, so an old server can still perform excellently in any sort of routing setup. The server currently has 14 gigabit Ethernet ports, so it is perfect as a router.

Average loads have risen slowly but steadily:

[Graph: 15-minute load average, rising slowly over time]

The graph is in tenths, so the average fifteen-minute load is about 2.50. The server has two cores, or four threads, so on average there are more runnable processes than cores but fewer than threads. I wouldn't be too worried about this as long as the 15-minute load stays under 6 or so. The problem is that certain heavy tasks must run every five minutes, and quite ironically these are mainly related to updating these very graphs. So the more graphs I add, the more the load increases. They do provide valuable information, though.
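
The rule of thumb above, roughly in command form (purely illustrative):

# 1, 5 and 15 minute load averages, then the number of logical CPUs (threads)
cat /proc/loadavg
nproc
# 15-minute load below the thread count is fine; sustained load above ~6 is not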