Got this card and it works well, it cost 15 euros, and it has a beautiful, professional-looking management menu; but it doesn’t work on the machine I wanted to use it with. So we are back to the drawing board, and the options now are to either buy a new LGA771 motherboard for 60 €, buy a new main workstation for 230 €, or keep using the machine without the card and simply upgrade the hard drives to larger ones.
So regardless of how nice it would be to finally have a proper Xeon workstation, I believe I will simply replace the small disks with larger ones. Then, when the dying motherboard finally decides to give up and die, I will consider actually buying the workstation, pass the components from this one to the storage server, and use the card above with all of the disks.
Doesn’t that sound like a good plan? Good enough for me.
But I don’t understand why the card won’t work. It is PCIe 1.1 and I believe my chipset is 1.1 as well, and even if it were 1.0a it should still work, because 1.1 is supposed to be backwards compatible. But it may be that the many capacitors I had to replace are having an effect, because I believe there was one other card too which didn’t work. So there may be something wrong with that motherboard.
But meanwhile I get myself one of these, to test some shit with SAS disks:
“Ordinary” spinning disks use PMR, but it is nearing its limits and manufacturers have been developing new magnetic recording technologies to replace it.
HGST has its Helium drives which use SMR technology in place of the more common PMR, and the Seagate Archive 8TB is also an SMR-based hard drive.
This of course was not mentioned on the local store’s web pages, but luckily I did some research into the drive.
It turns out these drives have internal firmware which takes care of all the intricacies of SMR, since it is somewhat complicated; there should be no problems using them like any other hard drive, but because of their differences they may behave differently.
That is a highly technical talk from HGST and some discussion with OpenZFS developers. I watched it yesterday before falling asleep and couldn’t really catch all the details of how SMR differs, but it seems that it is better at sequential writes than random writes, which are problematic, while reads, be they random or sequential, should be relatively fast.
The bottleneck is random writes, for which I believe these drives have an internal “normal” PMR-type section used as a staging area, so that performance is not completely destroyed.
But since ZFS is a copy-on-write filesystem, random writes should not play that big a part. That is my understanding.
These drives are so cheap, and they are meant more or less for cold data, so I would probably go for the Seagate Archive 8TB myself; it is available from Germany for 233 €, which is pennies.
The cheapest 3TB drive I can find is 88 €, so the price per gigabyte is practically identical, but with fewer drives, or with room for more 8TB drives and more storage capacity in a single machine.
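A quick back-of-the-envelope check of that price-per-terabyte claim, using the figures above (awk used as a calculator):

```shell
# Euro per terabyte for the two options above
awk 'BEGIN { printf "8TB: %.1f EUR/TB  3TB: %.1f EUR/TB\n", 233/8, 88/3 }'
```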
HGST uses helium because of its low density: lower density reduces drag, which reduces heat and makes the spinning platters more stable, and hence the drive more reliable. I am not sure whether Seagate uses helium – probably not, but they are still able to offer a 36-month (3-year) warranty.
So probably not for ordinary usage, but for cold storage it should be fine: place the data once, rarely if ever modify it, and read it once in a while.
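As a sketch of what that cold-storage use could look like under ZFS — the pool name, device names and the raidz2 layout here are my assumptions, not anything from the drive documentation:

```shell
# Hypothetical cold-storage pool built from six archive drives
zpool create -o ashift=12 cold raidz2 sdb sdc sdd sde sdf sdg
zfs set compression=gzip-9 cold   # cold data: favour ratio over speed
zfs set atime=off cold            # no access-time writes on rarely read data
```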
Cache type and size: The drives use a persistent disk cache of 20GiB and 25GiB on the 5TB and 8TB drives, respectively, with high random write speed until the cache is full. The effective cache size is a function of write size and queue depth.
This here illustrates the basic operating principle:
To decrease the bit size further, SMR reduces the track width while keeping the head size constant, resulting in a head that writes a path several tracks wide. Tracks are then overlapped like rows of shingles on a roof.
So the write head writes over multiple tracks while writing one single track, and the tracks that would be overwritten as a side effect are copied to a different place prior to overwriting them.
When the drive gets full there are no empty tracks left, and all new data must first copy tracks to safe places, which might have to copy data to safe places, which might have to copy data to safe places [et cetera], and only then can the new data be stored.
And all this because the write head cannot be made any smaller.
What was said one paragraph earlier, about the cascading relocations of data, is put differently in this document as well:
Modifying any of this data, however, would require reading and re-writing the data that would be damaged by that write, and data to be damaged by the re-write, etc. until the end of the surface is reached. This cascade of copying may be halted by inserting guard regions — tracks written at the full head width — so that the tracks before the guard region may be re-written without affecting any tracks following it, as shown in Figure 2.
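A toy way to see the cost of that cascade: assume the guard regions split the surface into bands of, say, 16 tracks (the band size is my assumption), so modifying track K forces a rewrite of every track from K to the end of the band:

```shell
N=16                                   # tracks per guard-bounded band
for K in 1 8 16; do
  echo "modify track $K -> $(( N - K + 1 )) tracks rewritten"
done
```

The closer to the start of a band the modified track sits, the more of the band has to be rewritten; the guard region is what stops the cascade at track N.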
It isn’t using the memory bus to transmit the data, but it is using it for its physical space! That’s one problem I am facing, because 1U servers are so compact that there isn’t much room to spare. So it’s a very novel idea, but for very limited use cases.
But perhaps needing tricks like this is an indication that you need new hardware. I do not know what the margins here are, or whether the product is otherwise superior, but my feeling is that once you need to resort to this sort of thing, the better alternative might be to get a new server.
But perhaps this would work fine in some extremely urgent situation where you absolutely need the space but cannot find any other way to get it. Or some other odd situation like that. Perhaps you have a server which was designed to be used as a RAM server but you aren’t using it for that, so you could equally well get a SATA controller and use those slots for disk space. It might save you the price of a new server.
Works just fine, unlike the HDD tests previously. I have no ZIL nor L2ARC for that pool, but because the DDT is on the SSD and is therefore fast, the problem of the DDT being evicted from ARC doesn’t become such an issue.
DDT-sha256-zap-duplicate: 130595 entries, size 286 on disk, 141 in core
DDT-sha256-zap-unique: 841291 entries, size 301 on disk, 157 in core
dedup = 1.14, compress = 2.36, copies = 1.00, dedup * compress / copies = 2.68
With a small 120GB SSD, that additional 14% saving comes in handy.
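Those zdb figures also let me estimate how much RAM the DDT needs when fully in core: entries times the “in core” bytes per entry, both tables summed.

```shell
# duplicate table: 130595 entries * 141 B; unique table: 841291 entries * 157 B
echo "$(( (130595 * 141 + 841291 * 157) / 1024 / 1024 )) MiB"
```

Under 150 MiB, which is why keeping the DDT fast on the SSD, rather than fighting over ARC space, works out fine here.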
Edit: and after all the images were copied, the deduplication ratio went up quite a bit, along with the compression:
# zdb -D vm
DDT-sha256-zap-duplicate: 277827 entries, size 288 on disk, 141 in core
DDT-sha256-zap-unique: 1251538 entries, size 303 on disk, 158 in core
dedup = 1.36, compress = 2.49, copies = 1.00, dedup * compress / copies = 3.39
So it’s storing much more data than the whole drive’s capacity, giving me essentially a 170GB SSD for the price of a 120GB one. The server and setup aren’t high-end and there is no need for superior performance, so the hit from deduplication combined with heavy compression doesn’t affect me much.
The additional things I can do with that extra 50GB are warmly welcomed.
Went and acquired a cheap (65 €) Kingston HyperX Fury 120GB SSD just to test out how the performance improves.
I am using it as L2ARC for ZFS, but not for ZIL, because it isn’t good enough for that. The remaining space is dedicated to virtual host disk images, and a preliminary test gives quite impressive results.
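The split looks roughly like this — the partition numbers, sizes and the HDD pool name are assumptions for illustration (the `vm` pool is the one from the zdb output above):

```shell
# e.g. sdb1 ~20GB for L2ARC, sdb2 ~100GB for the VM image pool
zpool add tank cache /dev/sdb1   # L2ARC: safe to lose, so no redundancy needed
zpool create vm /dev/sdb2        # separate SSD pool for virtual machine images
```

An L2ARC device holds only cached copies, so losing it costs performance, not data; that is why a single cheap consumer SSD is acceptable here where it would not be for a ZIL.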
The disk itself is nowhere near capable of these sorts of figures, so the ZFS ARC must be interfering with these tests somehow. Or something else is going on. Neither the disk nor the 3Gb/s SATA link is capable of 923MiB/s read performance, so that must come from a cache of some sort. But the results should still be somewhat accurate for at least any ordinary work.
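One way to take the ARC out of a read benchmark is to restrict caching on a scratch dataset (the dataset and file names here are assumptions); with `primarycache=metadata`, ZFS keeps only metadata in ARC, so data reads have to hit the disk:

```shell
zfs set primarycache=metadata vm/bench   # cache metadata only, not file data
dd if=/vm/bench/testfile of=/dev/null bs=1M
zfs inherit primarycache vm/bench        # restore the default afterwards
```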
If I want to get ZILs online I am going to have to spend a pretty penny, because a UPS will cost 250 € and two data-center-grade SSDs will probably cost an additional 250 € minimum. And they probably still won’t be SLC.
And these numbers are with GZIP-9 compression enabled. Theoretically that can add quite a bit to these numbers, depending on the sort of data the application is writing. If it compresses well, then these can certainly be real-world figures coming from the disk.
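For reference, this is how that compression is set and how the achieved ratio can be checked afterwards (using the `vm` pool from the zdb output above):

```shell
zfs set compression=gzip-9 vm   # heaviest gzip level; costs CPU on every write
zfs get compressratio vm        # reports the achieved ratio, e.g. 2.49x above
```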
I did some simple dd testing on Linux without any encryption (the figures above are AES-256 results) and got a read speed of about 280MB/s and a write speed of about 230MB/s.
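The dd test was roughly of this shape (the block size, file size and path are my assumptions; on real hardware the test file should be larger than RAM so the page cache cannot flatter the numbers):

```shell
f=$(mktemp)                                                              # scratch file
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1   # write speed
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1                           # read speed
rm -f "$f"
```

Note that on a compressed dataset a stream of zeros from /dev/zero compresses to almost nothing, so for honest numbers there the input should be incompressible (e.g. /dev/urandom).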
The first lines are from boot, the later ones from the action above. So it couldn’t update the microcode at boot for some reason, and when the kernel was manually requested to do so, it found that the revision was 0x0 and the microcode was updated to revision 0xa0b. In other words, microcode was available but it couldn’t be found at boot time. Perhaps Fedora 22 ships old microcode in its stock kernel.
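The loaded revision can be checked, and the reload triggered, like this (the sysfs node is the standard Linux microcode reload interface and needs root):

```shell
grep microcode /proc/cpuinfo | sort -u              # currently loaded revision
echo 1 > /sys/devices/system/cpu/microcode/reload   # ask the kernel to reload
```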
So hopefully this will make the system stable. There has been only one panic, and since the mod was made the machine has been stable, no doubt about that.
I can confirm that E5430 runs very smoothly with Linux and Gigabyte P43-ES3G.
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 10
cpu MHz : 3352.053
cache size : 6144 KB
8x419MHz with no settings tuning.
The socket was easy to modify with a sharp carpet knife, and the board booted right after the CMOS was cleared with the jumper.
Some simple tasks, such as switching browser tabs, are faster, and it can play 1080p 50fps video from YouTube.
This Xeon would definitely do more than a 419MHz FSB, but with this motherboard and this chipset it is not worth the hassle.
Was this worth spending 50 €? It is difficult to say. My usage is not processing-intensive, but when working with a large number of tabs open and a lot going on, the two extra cores help a lot. If you have the heatsink ready, it is definitely an upgrade worth doing. It will extend the life of that LGA775 system by another year or two. And this isn’t even the highest-rated CPU supported by this motherboard, so there is still room to squeeze as prices of those higher-end Xeons keep dropping.
One core always runs hotter than the rest, and I am not sure what’s up with that. It may have gotten damaged when it ran momentarily without a heatsink. But the system is perfectly stable, and it does not seem to be uncommon to have one core running hotter.