I got my hands on an m550 128GB drive (for around $75 with the recent pre-Thanksgiving/pre-Christmas discounts). Here are some comparisons between my old ZIL (the m500 128GB) and the new one:

    ZIL           Throughput (sustained), MB/s
    none          195.6
    m500 128GB    124.0
    m550 128GB    265.9

Here are some snippets of zpool iostat output:

No ZIL
    2014-11-25 22:07:33,635 - 16777216000 bytes written in 85.77223777770996 seconds 195.60193874713124 MB/s
[…]
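The timestamp format in those lines (comma before the milliseconds) is Python's logging default, so the numbers almost certainly come from a small Python script. A minimal sketch of such a test, with the file path, write pattern, and sync behavior as my own assumptions (the original script isn't shown in the post), would be:

    # Hypothetical reconstruction of the throughput test behind the log lines above.
    import logging, os, time

    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')

    path = '/tank/ziltest.bin'          # assumed target file inside the pool
    chunk = os.urandom(1024 * 1024)     # 1 MiB of random data per write
    total = 16000 * 1024 * 1024         # 16777216000 bytes, matching the logs

    start = time.time()
    with open(path, 'wb') as f:
        for _ in range(total // len(chunk)):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())            # include the flush in the timing
    elapsed = time.time() - start
    logging.info('%d bytes written in %s seconds %s MB/s',
                 total, elapsed, total / elapsed / 1e6)

Computing MB/s as bytes / seconds / 10^6 reproduces the decimal-megabyte figures in the logged lines.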
I re-did my benchmarks of my ZFS ZIL using a Crucial m500. The main reason I got this drive is that it has power-loss protection. And its speed is adequate to keep up with my gigabit Ethernet Samba server. Here are the results. My first run was decent, but not stellar:
    2014-04-11 23:32:15,079 - 16777216000 bytes written in 192.07929635047913 seconds 87.345259581685 MB/s
I didn’t do […]
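For context on "adequate": gigabit Ethernet tops out at 1 Gbit/s ÷ 8 = 125 MB/s before protocol overhead, so the m500's 87-124 MB/s is roughly in line with what a single gigabit Samba client can push.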
This is a follow-up on my previous post. I ran iozone both with and without the -e flag. This flag includes a sync/flush in the speed calculations. This flush should tell ZFS to commit the in-process data to disk, thus flushing it to the SLOG device (or to the on-disk ZIL). I ran 4 tests: […]
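The post's actual four test commands are elided above, but as an illustration (size, record size, and path are my placeholders), toggling the flag looks like this:

    # without -e: fsync/fflush time is excluded from the throughput numbers
    iozone -s 4g -r 128k -i 0 -i 1 -f /tank/bench/iozone.tmp

    # with -e: the flush is included, forcing ZFS to commit outstanding data
    # to the SLOG (or the in-pool ZIL) before the clock stops
    iozone -e -s 4g -r 128k -i 0 -i 1 -f /tank/bench/iozone.tmp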
Just rebuilt my ZFS pool using RAIDZ:
    NAME                 STATE   READ WRITE CKSUM
    tank                 ONLINE     0     0     0
      raidz1-0           ONLINE     0     0     0
        gpt/WD15EARS     ONLINE     0     0     0
        gpt/WD20EARS     ONLINE     0     0     0
        gpt/ST1500       ONLINE     0     0     0
    logs
      gpt/tank_zil0      ONLINE     0     0     0
      gpt/tank_zil1      ONLINE     0     0     0
    cache
      gpt/tank_cache0    ONLINE     0     0     0
      gpt/tank_cache1    ONLINE     0     0     0
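For reference, a pool with that exact layout could be created in one command; note that the two log devices are listed flat (striped), not as a mirror, matching the status output:

    zpool create tank raidz1 gpt/WD15EARS gpt/WD20EARS gpt/ST1500 \
        log gpt/tank_zil0 gpt/tank_zil1 \
        cache gpt/tank_cache0 gpt/tank_cache1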
So, time for another bonnie++ benchmark (all interfering services disabled, including powerd): (bonnie++ 1.96 results table: Sequential Output/Input, Random Seeks, and Sequential/Random Create, with K/sec and %CPU columns) […]
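A typical invocation for a run like this (directory, size, and file count are my placeholders, not necessarily what was used) is:

    # bonnie++ won't run as root unless told which user to run as (-u)
    # -s: test file size in MiB (use >= 2x RAM so caching doesn't dominate)
    # -n: number of small files, in multiples of 1024, for the create tests
    bonnie++ -d /tank/bench -s 8192 -n 16 -u nobody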
Summary (bonnie++ 1.96 results table: Sequential Output/Input, Random Seeks, and Sequential/Random Create, with K/sec and %CPU columns) […]
Note that the Samba shares reside in a ZFS pool with dedup turned on. Since the blocks that make up the file being sent are probably already on the pool, it’s not necessarily writing the block data to disk. Using the same method as last time: Whoa. That’s way worse than before. Let’s re-run each: […]
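One way to check whether dedup really is short-circuiting those writes is to watch the pool's dedup ratio across a copy (the dataset name here is assumed):

    zfs get dedup tank/samba     # confirm dedup is on for the share's dataset
    zpool get dedupratio tank    # the ratio climbs as duplicate blocks are elided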
I’m about to upgrade to FreeBSD 9.0. While I csup the latest RELENG-9.0 branch, I’m looking at my Samba performance on 8.2. I’m measuring a copy to the Samba server using the following batch file (taken from here): That’s a 1751164811 byte file, so that’s 21.59 MiB/s. To receive: This is a 2,388,531,200 byte file, […]
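Sanity-checking the arithmetic: 1751164811 bytes ÷ 2^20 ≈ 1670 MiB, so 21.59 MiB/s implies the send took about 1670 ÷ 21.59 ≈ 77 seconds.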
I ran iozone on many different ZFS pool configurations to get an idea of which drives are best for L2ARC (cache) and the ZIL. I also wanted to get an idea of whether using gpart affects performance. The configurations shown in the tables below have the format [cache]_[gpart/gnop]_[zil], where [cache] is the L2ARC type, [gpart/gnop] […]
I recently had a problem with ZFS. I went back to not using glabel, mainly because I wanted to force 4KB sector alignment on my drives and therefore used a gnop trick. About a month after doing so, I shuffled my drives around. I had ada4 and ada5 set up in a mirror configuration. At […]
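For reference, the usual form of that gnop trick (shown here against the ada4/ada5 mirror mentioned above; the pool name is assumed) is to build the pool on a fake 4 KB-sector provider so ZFS records ashift=12:

    gnop create -S 4096 /dev/ada4              # fake 4 KB sectors on one member
    zpool create mpool mirror /dev/ada4.nop /dev/ada5
    zpool export mpool
    gnop destroy /dev/ada4.nop                 # .nop devices vanish on reboot anyway
    zpool import mpool                         # ashift=12 sticks to the bare device

Since ashift is per-vdev and set by the largest sector size among its members, one .nop member is enough for the whole mirror.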