
Crucial m500 120GB as a ZFS slog

12-Apr-14

I re-did my benchmarks of my ZFS ZIL using a Crucial m500. The main reason I got this drive is that it has power-loss protection. And its speeds are adequate to keep up with my gigabit Ethernet Samba server.

Here are the results. My first run was decent, but not stellar:

2014-04-11 23:32:15,079 - 16777216000 bytes written in 192.07929635047913 seconds 87.345259581685 MB/s

I didn’t do a zpool iostat during this run. This roughly matches what I was getting for 4k random writes. My second run was much better:

2014-04-11 23:35:22,221 - 16777216000 bytes written in 136.14709854125977 seconds 123.22859744907173 MB/s

And it matches a typical zpool iostat:

                        capacity     operations    bandwidth
pool                 alloc   free   read  write   read  write
-------------------  -----  -----  -----  -----  -----  -----
tank                 5.17T  2.71T      0  2.96K      0   262M
  raidz1             5.17T  2.71T      0  1.13K      0   137M
    gpt/TOSH-3TB-A       -      -      0    565      0  68.7M
    gpt/TOSH-3TB-B       -      -      0    565      0  68.7M
    gpt/SGT-USB-3TB      -      -      0    563      0  68.7M
logs                     -      -      -      -      -      -
  gpt/tank_log0      1.56G  30.2G      0  1.83K      0   125M
cache                    -      -      -      -      -      -
  gpt/tank_cache0    24.5G   150G      0    632  3.15K  78.5M
-------------------  -----  -----  -----  -----  -----  -----

Overall, I’m pretty happy. I was hoping for at least 125 MB/s, so that my gigabit Ethernet can’t really saturate it.

I can now even try turning on forced sync writes in Samba.
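Forcing sync writes in Samba should just be a couple of share-level settings. Something like the smb.conf fragment below is what I have in mind (the share name and path are placeholders, and I haven’t verified this combination on my version yet):

[tank]
    path = /tank/share
    # Honor client sync/flush requests instead of ignoring them.
    strict sync = yes
    # Go further and fsync after every write, making everything a sync write.
    # (The smb.conf man page notes this only takes effect with strict sync = yes.)
    sync always = yes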


Crucial m500 120GB benchmarks

09-Apr-14

This is the latest in my obsession with SSDs. I jumped on a $70 deal at NewEgg. I bought it to use as a SLOG (ZIL) in my ZFS server, because of its write speeds and its power-loss protection.

I immediately updated the firmware from MU03 to MU05.

This first run may have been over a SATA II (3 Gbps) port and cable:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   268.487 MB/s
          Sequential Write :   144.591 MB/s
         Random Read 512KB :   252.690 MB/s
        Random Write 512KB :   144.619 MB/s
    Random Read 4KB (QD=1) :    23.558 MB/s [  5751.4 IOPS]
   Random Write 4KB (QD=1) :    60.558 MB/s [ 14784.8 IOPS]
   Random Read 4KB (QD=32) :   202.598 MB/s [ 49462.4 IOPS]
  Random Write 4KB (QD=32) :   122.029 MB/s [ 29792.2 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/08 22:45:00
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

Wha? The 4K random write is quite low. Let’s repeat:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   268.109 MB/s
          Sequential Write :   146.449 MB/s
         Random Read 512KB :   248.750 MB/s
        Random Write 512KB :   145.732 MB/s
    Random Read 4KB (QD=1) :    23.962 MB/s [  5850.1 IOPS]
   Random Write 4KB (QD=1) :    76.037 MB/s [ 18563.8 IOPS]
   Random Read 4KB (QD=32) :   202.476 MB/s [ 49432.7 IOPS]
  Random Write 4KB (QD=32) :   119.411 MB/s [ 29153.0 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/08 22:50:44
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

A little better at 76 MB/s for the 4k random write, but not the 120 MB/s I expected.

Let’s try the native SATA III port and a shorter cable:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   477.494 MB/s
          Sequential Write :   143.072 MB/s
         Random Read 512KB :   434.218 MB/s
        Random Write 512KB :   143.168 MB/s
    Random Read 4KB (QD=1) :    24.571 MB/s [  5998.7 IOPS]
   Random Write 4KB (QD=1) :    76.281 MB/s [ 18623.2 IOPS]
   Random Read 4KB (QD=32) :   232.590 MB/s [ 56784.6 IOPS]
  Random Write 4KB (QD=32) :   131.644 MB/s [ 32139.6 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/08 23:08:12
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)
  

Wow! That sequential read is fast. But the 4k random write (QD=1) is still disappointing. What gives?

Post Script

Looks like someone else gets similar results, which don’t line up with what TweakTown measured.

Oh, well. The power-loss protection is the important part, and what really matters is how the drive performs as the ZIL. Hopefully, I’ll see closer to 140 MB/s than 65 MB/s.

Update 2014-04-09

Tried re-running with Intel’s RST driver:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   481.219 MB/s
          Sequential Write :   142.877 MB/s
         Random Read 512KB :   430.158 MB/s
        Random Write 512KB :   144.619 MB/s
    Random Read 4KB (QD=1) :    27.370 MB/s [  6682.2 IOPS]
   Random Write 4KB (QD=1) :    77.415 MB/s [ 18900.1 IOPS]
   Random Read 4KB (QD=32) :   271.697 MB/s [ 66332.2 IOPS]
  Random Write 4KB (QD=32) :   142.077 MB/s [ 34686.8 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/09 22:28:03
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

Slightly better, but still no 120 MB/s. I have a feeling that something changed with the latest firmware. Wish I had run this before I upgraded.


More ZIL/SLOG comparisons

05-Apr-14

Suspicious of my previous tests using iozone, I wrote a small program (that follows) to measure synchronous write speed. I tested this with and without an SLOG device, and then used a really fast Plextor M5 Pro 256GB SSD as the SLOG. I wanted to make sure that something could achieve the fast SLOG speed that I hoped for. (Remember I am doing this in the interest of spending money on a new SSD.)
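The program itself is after the More link; as a rough sketch (not my actual code), measuring synchronous write speed boils down to opening a file with O_SYNC, writing incompressible blocks, and logging the throughput. The path and sizes below are placeholders:

import logging
import os
import time

logging.basicConfig(format="%(asctime)s - %(message)s", level=logging.INFO)

def sync_write_test(path, block_size=128 * 1024, total_bytes=16 * 1024**2 * 1000):
    """Write total_bytes of incompressible data with O_SYNC and log the rate."""
    block = os.urandom(block_size)  # random data, so compressing controllers can't cheat
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    written = 0
    start = time.time()
    try:
        while written < total_bytes:
            written += os.write(fd, block)  # each write must reach stable storage (the ZIL/SLOG)
    finally:
        os.close(fd)
    elapsed = time.time() - start
    logging.info("%d bytes written in %s seconds %s MB/s",
                 written, elapsed, written / elapsed / 1e6)

sync_write_test("/tank/synctest.bin")  # placeholder path on the pool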

Summary

An SLOG device can degrade performance by forcing synchronous writes to wait until they are committed to the SLOG. However, it cannot enhance performance: the maximum (steady-state) throughput is determined by the bandwidth to the main storage.
More…


SLOG tests on a 32GB Kingston SSD with iozone

02-Apr-14

This is a follow-up on my previous post. I ran iozone both with and without the -e flag. This flag includes a sync/flush in the speed calculations. This flush should tell ZFS to commit the in-process data to disk, thus flushing it to the SLOG device (or to the on-disk ZIL).

I ran four tests: one without the -e flag and one with it; I then repeated the pair with the SLOG device removed.

Here are the results: More…


ZFS slog (ZIL) considerations

01-Apr-14

Running List of Alternatives

The Seagate PRO models have power-loss protection, and they seem to do very well on sequential read/writes–better than the Crucial m500.

Original Post

My latest obsession is what drive is best for a SLOG (dedicated ZIL device).

I plan on testing tonight whether sequential writes or random writes are more important. This DDRDrive marketing propaganda states that (at least with multiple producers) random IOPS are more important. But I’m very unlikely to see multiple simultaneous synchronous writers on my ZFS pool. (Maybe a ports build or something, but that’s unlikely.)

I have numerous options. But let’s start with power-loss protection. I found this cable online that has (on each of the 5V and 12V lines) a 2.2 mF capacitor. For those of you who don’t know, that’s really big as far as capacitors go. This should let the SLOG device maintain a charge for quite a bit longer during a power failure.

For example, the SanDisk Extreme II 120GB lists 3.4 W as its power consumption during write. If I assume that all of that comes from the 5.0 V rail (which is the conservative calculation), I get 0.68 A (aka 680 mA) as the current draw during write.

If I then assume that a +/- 10% voltage fluctuation is all that can be tolerated, the capacitor will maintain a voltage for:

ΔT = C·ΔV/I = 2.2 mF × 0.5 V / 680 mA ≈ 1.6 ms

So, the cable maintains a usable voltage for 1.6 ms. This does not seem like much, but if I assume that my producer (Samba) can generate at most 1 Gbps (or 125 MB/s), I can get 125 MB/s × 1.6 ms = 200 kB to disk. (Of course, I’d be lucky to get 100 MB/s with the stock NICs in any of the machines on my LAN, but that’s a different story.)
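Spelled out as a quick script (all the numbers are just the ones quoted above, nothing measured):

# Hold-up time of the capacitor cable, using the figures quoted above.
capacitance_f = 2.2e-3            # 2.2 mF capacitor on the 5 V line
supply_v = 5.0                    # assume all the draw comes from the 5 V rail
droop_v = 0.10 * supply_v         # tolerate a 10% sag before the drive drops out
write_power_w = 3.4               # SanDisk Extreme II quoted write power

current_a = write_power_w / supply_v            # ~0.68 A
holdup_s = capacitance_f * droop_v / current_a  # dT = C*dV/I, ~1.6 ms
producer_bytes_per_s = 125e6                    # 1 Gbps producer, ~125 MB/s
covered_bytes = producer_bytes_per_s * holdup_s # ~200 kB of in-flight data

print("hold-up %.2f ms, covers %.0f kB" % (holdup_s * 1e3, covered_bytes / 1e3))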

But that’s not really the point. The point is that the SSD should remain powered after the producer (the Samba server in this case) dies, so that writes in progress can be completed. In that case, what matters is how quickly the SSD can commit the in-flight data, which of course depends on its write speed.

I’ve looked at 4 main candidates for an SSD:

  • OCZ Vertex 2 (60 GB)
  • Crucial m500 (120 GB)
  • Sandisk Ultra II (120 GB)
  • Kingston V300 (120GB)

The benefits of each are as follows. Note that I’ve picked the smallest capacity available for each architecture.

OCZ Vertex 2

The main benefit is price. This thing can be had on eBay for $40. It’s also speedy for its generation. Its 4k random write is pretty high at 65 MB/s. Its sequential write is 225 MB/s (this is probably with compressible data; see below on the 120 GB model doing 146 MB/s). This means it can keep up with the 125 MB/s of gigabit Ethernet with a fair margin to spare. For this guy, I would probably need the capacitor cable.

Crucial m500

There are two main benefits here. First is speed. Both its sequential write and random write are really high, at 141 MB/s and 121 MB/s, respectively. The second is that this drive has power-loss protection. So, I don’t need a clunky cable with built-in capacitors. Whatever data has made it to the drive will get committed.

SanDisk Ultra II

Once again, speed is a huge benefit. This blows away anything else in both sequential (339 MB/s) and random (133 MB/s) writes. In addition, SanDisk also has a unique nCache non-volatile cache. This allows data to hit SLC (or really MLC operating in SLC mode) before it hits the MLC. This approximates the best of both worlds: the speed and (temporal) reliability of SLC, with the cheaper/denser MLC (for permanent storage).

It does, however, have a DDR cache. I hope that when a flush is issued, this DDR cache gets flushed (at least to the nCache). But to be safe, I’ll probably buy the capacitor cable to pair with it if I go this route.

Kingston V300 (120GB)

This doesn’t have the fancy flash footwork of the SanDisk nCache, nor does it have the power-loss protection of the m500. However, it tends to be cheaper. And the speed is quite sufficient at 133 MB/s random and 172 MB/s sequential. It’s basically an older-generation SSD.

(Lack of) Conclusion

So, which will I go for? Well, before I make a decision, I want to test whether sequential or random write is the key metric. I plan on doing this tonight. I have a 32GB Kingston SV100 drive as SLOG on the system. I’ll use dd or something to issue synchronous writes of various sizes. The 32GB Kingston has a sequential write speed of 68 MB/s, but a dismal random write speed. If sequential writes are the key metric, I should see a throughput of around 68 MB/s. If not, I should see something much lower.

That’s the hypothesis, anyway. I’ll test it tonight.

Thoughts on DDRDrive

Warning: this section is mere conjecture.

The DDRDrive propaganda is interesting. They didn’t get a good result with just one producer, so they pushed it to multiple producers. To be fair, this is probably a good model of an enterprise situation: a database server, for example. But their scenario uses multiple file systems on the same server. Is it likely that one would split up a database between multiple file systems? I’d posit that in an enterprise environment, one would just split up the multiple databases between multiple servers (or virtual servers) and be done.

In addition, they show the write speeds of the disks they measured dropping off after some time. This is probably because the drive starts out with plenty of erased blocks, but further writes need to overwrite existing data (erase + write) rather than perform a simple write. However, this situation is less likely on newer versions of FreeBSD, as ZFS now supports TRIM. TRIM should keep freed sectors erased, so that the erase+write cycle does not occur.

Rejected Options

The Vertex 2 also comes in 120 GB, but its speed isn’t much faster (73 MB/s random and 146 MB/s sequential). I’d take it if it were as cheap as or cheaper than the 60GB.

The OCZ Agility 3 was also an option, but it isn’t any faster with non-compressible data.

 


Kingston SV100S2/32 and SV100S2/64 benchmarks

28-Mar-14

Update 2014-04-06: Tests on the 64GB with a $10 Sabrent USB 3.0 enclosure have been added to the end.

The 64GB SSD has been my solid-state workhorse in my ZFS pool for a while. First, it was the L2ARC, then it was the ZIL.

In fact, I used to have two, but I broke the connector off of one. Which is a different story.

Curiously, I never benchmarked these drives using CrystalDiskMark. (I did benchmark them several times using dd.) I recently bought the 32GB drive on eBay, so I took the opportunity to benchmark both.

First, I benchmarked the 32GB via a USB 3.0 dock:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   159.310 MB/s
          Sequential Write :    73.194 MB/s
         Random Read 512KB :   125.037 MB/s
        Random Write 512KB :    49.578 MB/s
    Random Read 4KB (QD=1) :    11.245 MB/s [  2745.3 IOPS]
   Random Write 4KB (QD=1) :     7.347 MB/s [  1793.7 IOPS]
   Random Read 4KB (QD=32) :    11.552 MB/s [  2820.3 IOPS]
  Random Write 4KB (QD=32) :     7.556 MB/s [  1844.7 IOPS]

  Test : 1000 MB [F: 0.3% (0.1/29.7 GB)] (x5)
  Date : 2014/03/27 7:27:29
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

This wasn’t much different than the native SATA performance:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   158.252 MB/s
          Sequential Write :    68.284 MB/s
         Random Read 512KB :   127.111 MB/s
        Random Write 512KB :    51.429 MB/s
    Random Read 4KB (QD=1) :    11.633 MB/s [  2840.1 IOPS]
   Random Write 4KB (QD=1) :     7.923 MB/s [  1934.3 IOPS]
   Random Read 4KB (QD=32) :    13.769 MB/s [  3361.5 IOPS]
  Random Write 4KB (QD=32) :     7.806 MB/s [  1905.9 IOPS]

  Test : 1000 MB [D: 0.3% (0.1/29.7 GB)] (x5)
  Date : 2014/03/27 18:44:07
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

The USB 3.0 and SATA results are the same, except that the queue depth 32 read gets slightly faster on SATA (presumably due to native command queuing).

And finally, here are the 64 GB numbers (native SATA only):


-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   204.800 MB/s
          Sequential Write :   108.335 MB/s
         Random Read 512KB :   191.600 MB/s
        Random Write 512KB :   104.423 MB/s
    Random Read 4KB (QD=1) :    19.490 MB/s [  4758.3 IOPS]
   Random Write 4KB (QD=1) :    62.145 MB/s [ 15172.2 IOPS]
   Random Read 4KB (QD=32) :    87.811 MB/s [ 21438.2 IOPS]
  Random Write 4KB (QD=32) :    91.051 MB/s [ 22229.2 IOPS]

  Test : 1000 MB [C: 59.3% (66.2/111.7 GB)] (x5)
  Date : 2014/03/27 22:57:44
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

Update 2014-04-06

Tested the 64GB using a $10 Sabrent USB 3.0 bus-powered 2.5″ enclosure. This beats any $30 USB 3.0 flash drive out there, although it is comparatively bulky (enclosure + cable).

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   207.701 MB/s
          Sequential Write :   152.898 MB/s
         Random Read 512KB :   158.479 MB/s
        Random Write 512KB :    96.748 MB/s
    Random Read 4KB (QD=1) :    11.088 MB/s [  2706.9 IOPS]
   Random Write 4KB (QD=1) :    21.789 MB/s [  5319.5 IOPS]
   Random Read 4KB (QD=32) :    11.507 MB/s [  2809.4 IOPS]
  Random Write 4KB (QD=32) :    20.872 MB/s [  5095.8 IOPS]

  Test : 1000 MB [F: 0.2% (0.1/59.6 GB)] (x5)
  Date : 2014/04/06 12:42:16
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)


The quest for the perfect router

11-Mar-14

All I want is a router that supports QoS and IPv6.

I recently upgraded my cable internet to DOCSIS 3.0, and with it got IPv6.

I was using a WD N900 router that I got for cheap at Staples. But I noticed that its QoS rules don’t apply to IPv6. It would (I assume) prioritize VOIP and the like, but IPv6 traffic wasn’t rate-limited in any way. I could tell because IPv4 traffic (via a speed test) would follow the upload/download limits I set for QoS, but IPv6 traffic would not.

So, I decided to reuse the Atom D525 board I had and build a pfSense router. Note that this is the second time I’m using pfSense–the previous time was with an old Dell Inspiron laptop.

I went ahead and bought a refurbished mini-ITX case for $30. (The refurb is no longer available.)

Now, I’ve hit a snag: the case I bought won’t fit an expansion card. And, the USB gigabit Ethernet adapter I have periodically disappears from pfSense. This causes the LAN IPv6 (prefix-delegated) address to disappear, which then means all the computers on the LAN lose IPv6 access.

It would, indeed, be better to figure out why the ASIX chipset driver (ue0 is the device name assigned to the USB Ethernet adapter) loses its mind every now and then. But I can’t spend that kind of time debugging the problem.

Instead, I plan on installing a PCI Ethernet card that I have lying around. The problem is (once again) that the case I bought does not have a slot for a PCI card. So, I’ll probably take the PCI bracket off the card, stick it in the case, and cut a hole in the case so I can get the Ethernet port out.

It probably won’t be pretty, but I don’t want to spend any more time/money on this.

And that’s my dilemma. I’m sort of a perfectionist about what I want (IPv6 + QoS), and that usually means spending either time or money.


Mobile VOIP Calculation (for PlatinumTel / ptel)

30-Jan-14

Update on 2014-01-29

It looks like I was off by a penny for the VOIP.ms fee. It’s really 1¢/minute, not 2¢/minute. This means that the overall calculation ends up being about 3.23¢/minute, not 4.23¢/minute. This is decently below the 5¢/minute for PlatinumTel. No idea if it performs well (in terms of latency and drop-outs).

Below is the calculation I did for myself around September of last year:

I switched to PlatinumTel on a pay-as-you-go plan. They are a T-Mobile MVNO. I like ’em because they are pretty cheap (5 cents per minute, 2 cents per text), and they also have a pay-as-you-go data option (10 cents a megabyte). The pay-go data isn’t cheap, but we’re usually on WiFi, and it’s just nice to be able to get data if you’re in a pinch. (As far as I know, other pay-go operators don’t offer a pay-go data option.)

I’ve been very happy with them. Their coverage is the same as T-Mobile, but (I’m guessing) they don’t support roaming.

From my calculations, at the 10¢/MB data rate, even an efficient vocoder (G.729) only about breaks even with the 5¢/minute voice rate. And the latency isn’t that great. It may end up being cheaper in practice, since I hear (no pun intended) that voice calls have a lot of silence, so you could end up ahead with the VOIP option.
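The shape of the calculation is roughly the following; the packetization and header overhead are illustrative assumptions, not the exact figures from the detailed calculation below, so the result lands near (but not exactly at) the 3.23¢ from the update above:

# Rough cost-per-minute model for VOIP over pay-go data (illustrative numbers).
def voip_cents_per_minute(codec_kbps=8.0,         # G.729 payload bit rate
                          ptime_ms=40.0,           # ms of audio per RTP packet (assumed)
                          header_bytes=40,         # IP + UDP + RTP headers, uncompressed
                          data_cents_per_mb=10.0,  # ptel pay-go data rate
                          termination_cents=1.0):  # VOIP.ms per-minute rate
    packets_per_s = 1000.0 / ptime_ms
    payload_bytes = codec_kbps * 1000 / 8 * ptime_ms / 1000.0
    bytes_per_s = packets_per_s * (payload_bytes + header_bytes) * 2  # both directions
    mb_per_minute = bytes_per_s * 60 / 1e6
    return mb_per_minute * data_cents_per_mb + termination_cents

print("%.2f cents/minute" % voip_cents_per_minute())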

Here are the grueling details of the calculation:
More…


Samba ZFS performance (sequential)

16-Jan-14



Kingston SV100 64 GB (SV100S264G) Sequential Random Write Benchmark

03-Dec-13

This SSD caught my eye as a ZIL. I wonder how it compares to my Kingston SV100 64 GB SSD, which I’ve benchmarked a number of times.

I decided to remove it from the ZIL and try it out. I wanted to make sure I exercised the sequential writes (which is all I really care about for a ZIL, I think) with random data, since some SSD controllers get their speed from compression.

In order to do this test, I first wrote random data to a memory file system (so the source couldn’t be a bottleneck), and then wrote from the memory file system to the SSD.

Poojan@server ~ >!df
df -h -t ufs
Filesystem            Size    Used   Avail Capacity  Mounted on
/dev/gpt/usbrootfs     42G    7.4G     31G    19%    /
/dev/md0              247M    8.0k    227M     0%    /var/tmp
Poojan@server ~ >dd if=/dev/random of=/var/tmp/random bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 1.048121 secs (100043413 bytes/sec)
Poojan@server ~ >dd if=/dev/random of=/var/tmp/random bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 1.048035 secs (100051629 bytes/sec)
Poojan@server ~ >dd if=/var/tmp/random of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.012362 secs (8482249780 bytes/sec)
Poojan@server ~ >dd if=/var/tmp/random of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.012305 secs (8521529347 bytes/sec)
Poojan@server ~ >dd if=/var/tmp/random of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.012351 secs (8489781699 bytes/sec)
Poojan@server ~ >sudo dd if=/var/tmp/random of=/dev/gpt/tank_zil0 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.641571 secs (163438858 bytes/sec)
Poojan@server ~ >sudo dd if=/var/tmp/random of=/dev/gpt/tank_zil0 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.641162 secs (163543088 bytes/sec)
Poojan@server ~ >sudo dd if=/var/tmp/random of=/dev/gpt/tank_zil0 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.641077 secs (163564741 bytes/sec)
Poojan@server ~ >sudo dd if=/var/tmp/random of=/dev/gpt/tank_zil0 bs=10M count=10
10+0 records in
10+0 records out
104857600 bytes transferred in 0.643163 secs (163034263 bytes/sec)

With random data, we’re seeing around 163 MB/s (about 155 MiB/s). This SSD is great for sequential writes.
