I believe it stands for Battery-Backed Write Cache, and it enables the read/write cache of the P400i RAID controller. Without the battery, the cache does nothing.
The battery is the part below; the upper part is the actual RAM. With the battery fully charged it can hold the cache contents for up to 72 hours, which is plenty for any short-term, sudden loss of electricity.
I already have 256MB of cache RAM installed since I have the performance model, but for some odd reason there is no battery. I wonder if the person from whom I bought the server took parts out because I offered 50 € less than he was asking. Who knows, he could easily have done it. These sorts of specifications were not listed, nor did I ask about them.
New batteries are 45 € apiece, delivery included, so I am deciding between a new battery alone and a new battery plus 512MB of cache RAM; both options cost practically the same. My concern is how these batteries age: Ni-MH does suffer if it is left to self-discharge too deeply.
So I will probably just get the new battery.
On the other hand…
Aah. You have a RAID controller with on-card RAM. Based on my testing with three different RAID controllers that had RAM, in both benchmarks and real-world tests, here are my recommended settings for ZFS users:
1. Disable your on-card write cache. Believe it or not, this improves write performance significantly. I was very disappointed by this finding, but it seems to be a universal truth. I upgraded one of the cards to 4GB of cache a few months before going to ZFS, and I’m disappointed that I wasted my money. It helped a LOT on the Windows server, but in FreeBSD it’s a performance killer.
2. If your RAID controller supports a read-ahead cache, you should set it to either “disabled”, the most “conservative” (smallest read-ahead) or “normal” (medium read-ahead). I found that “conservative” was better for random reads from lots of users, and “normal” was better for workloads that read a file sequentially (such as copying a single very large file). If you choose anything larger for the read-ahead size, the latency of your zpool will go way up, because every read issued by the zpool gets amplified: the RAID card constantly reads a bunch of sectors before and after the one sector or area that was actually requested. (Rough hpacucli equivalents for an HP Smart Array are sketched after this list.)
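For the P400i and other HP Smart Array controllers, the closest equivalents are set with hpacucli. A minimal sketch, assuming the controller sits in slot 0 and logical drive 1 is the one in question (check with show config detail first); note that Smart Array cards only expose a read/write cache ratio rather than the granular read-ahead sizes described above, and option names can vary between controller generations and hpacucli versions:

# hpacucli ctrl slot=0 show config detail
# hpacucli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=disable
# hpacucli ctrl slot=0 modify cacheratio=100/0

The first command lists the current cache and logical drive settings, the second switches the on-card cache (the “array accelerator”) off for one logical drive, and the third keeps the cache but gives 0% of it to writes, which is the gentler way to follow point 1; on the controllers I have seen, the ratio is given as read/write.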
So perhaps I should simply save the 45 € and spend it on a NIC instead? Needs more research.
With one PSU the server takes about 15W while shut down.
While booting and doing very little, it takes about 300W with five 10K SAS disks plus one 15K SAS disk.
After booting into ESXi 5.5 it settles at 240W, and with only one PSU it drops to about 218W.
When heavily loaded, both disks and CPU cores, I can get it to about 360W at maximum.
Glad that I checked, because it turns out the server has the more expensive PC2-6400P memory and not the ordinary PC2-5300P, so it was a pretty good deal. But I am still running short on memory, and I still have boxes to configure.
So I am looking to upgrade to at least 24GB. But these 800MHz modules are not that common, so they are either hard to find or they cost a lot.
Currently I am giving 10GB to the ZFS back-end storage, and memory usage looks like this:
# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp zvpsm scsi_vhci zfs mpt sd ip hook neti arp usba kssl stmf stmf_sbd sockfs md lofs random idm cpc crypto fcip fctl fcp smbsrv nfs ufs logindmux nsmb ptm sppp ii nsctl rdc sdbc sv ]
> ::memstat
Page Summary             Pages             Bytes  %Tot
-----------------  ----------------  ----------------  ----
Kernel                      408421              1.5G   16%
ZFS Metadata                 37449            146.2M    1%
ZFS File Data              1525761              5.8G   58%
Anon                         44523            173.9M    2%
Exec and libs                 1709              6.6M    0%
Page cache                    8864             34.6M    0%
Free (cachelist)                 0                 0    0%
Free (freelist)             578290              2.2G   22%
Total                      2621327              9.9G
But ZFS will of course happily use all the memory available on the machine, within its configured limits. No parameters have been tuned, so performance probably isn’t optimal and has certainly not been matched to this particular setup and system.
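If the ARC ever needs to be reined in, the usual knob on a Solaris-derived system is zfs_arc_max in /etc/system. A minimal sketch, with the 8 GiB figure purely as an example for this 10GB VM rather than anything I have tuned or tested:

* /etc/system: cap the ZFS ARC at 8 GiB (value in bytes), effective after a reboot
set zfs:zfs_arc_max = 0x200000000

The current ARC size can then be compared against the cap with kstat:

# kstat -p zfs:0:arcstats:size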
HP Smart Array without-cache limitations
I cannot remember if I covered this earlier somewhere, but I had to halt the work on the DL385 G6 because I couldn’t create more than two logical drives, while I needed one for ESXi and at least four for ZFS so that it has access to the individual disks. But today I got the 512MB BBWC, and with that I could create all the drives I needed. So the controller perhaps requires more memory to deal with larger configurations, or it is some sort of HP licensing “penalty”.
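For reference, the per-disk logical drives can also be created from the command line once the cache module is in place. A sketch with hpacucli, assuming the controller is in slot 0; the actual port:box:bay addresses for your drives come from show config, so the ones below are only placeholders:

# hpacucli ctrl slot=0 show config
# hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
# hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0

One single-disk RAID 0 logical drive per physical disk is about as close to raw disks as the Smart Array will let ZFS get, since the controller has no true JBOD passthrough mode.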