In a previous post, I noticed that the SSD I'm using in my server showed only about 50 MB/s write speed. That's odd, because the specs claim around 145 MB/s.
So I decided to investigate further. I removed the SSD from the zpool and ran some dd tests directly against it.
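For the record, pulling the SSD back out of the pool is just a zpool remove. Assuming it was attached as a cache device under a GPT label along the lines of gpt/ssd-cache (that label is a placeholder, not the real one), it was something like:
server# zpool remove tank gpt/ssd-cache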
server# dd if=/dev/zero of=/dev/ada3 bs=4M count=32
32+0 records in
32+0 records out
134217728 bytes transferred in 1.587130 secs (84566307 bytes/sec)
server# dd if=/dev/zero of=/dev/ada3 bs=4M count=128
128+0 records in
128+0 records out
536870912 bytes transferred in 5.888121 secs (91178650 bytes/sec)
server# dd if=/dev/zero of=/dev/ada3 bs=4M count=1024
1024+0 records in
1024+0 records out
4294967296 bytes transferred in 45.720133 secs (93940394 bytes/sec)
server# dd if=/dev/zero of=/dev/ada3 bs=4M count=16384
^C6528+0 records in
6527+0 records out
27376222208 bytes transferred in 277.460367 secs (98667145 bytes/sec)
That last run (which I interrupted) showed 94 MB/s.
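As an aside, FreeBSD's diskinfo has a built-in benchmark (diskinfo -t) that measures seek times and read transfer rates. It doesn't exercise writes, so it's not a substitute for the dd runs above, but it's a quick sanity check that the drive and controller path are behaving:
server# diskinfo -t /dev/ada3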
The disks I am using (in a mirror) as a ZIL are much smaller. Let’s check their write performance:
server# zpool status
  pool: backup
 state: ONLINE
  scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        backup            ONLINE       0     0     0
          gpt/zfs-backup  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 12h12m with 0 errors on Sat Feb 11 03:35:06 2012
config:

        NAME                   STATE     READ WRITE CKSUM
        tank                   ONLINE       0     0     0
          mirror-0             ONLINE       0     0     0
            gpt/WD20EARS       ONLINE       0     0     0
            gpt/WD15EARS       ONLINE       0     0     0
          mirror-2             ONLINE       0     0     0
            gpt/WD5000BPVT     ONLINE       0     0     0
            gpt/Maxtor7Y250M0  ONLINE       0     0     0
        logs
          mirror-1             ONLINE       0     0     0
            gpt/zil0           ONLINE       0     0     0
            gpt/zil1           ONLINE       0     0     0

errors: No known data errors
server# zpool remove tank mirror-1
server# dd if=/dev/zero of=/dev/gpt/zil0 bs=4M count=1024
1024+0 records in
1024+0 records out
4294967296 bytes transferred in 88.818498 secs (48356676 bytes/sec)
They manage around 46 MB/s write speed. Hmm. If I were willing to give up the mirror on the ZIL, I could split the 64 GB SSD into two partitions and use one as the ZIL and the rest as the L2ARC cache.
I'd also free up a couple of SATA ports in the process, which is probably good, because I'm using a 4x SATA PCI card that's probably quite limited on bandwidth anyway. Rather than pushing this platform to its limits, it might make sense to live with less.