
Arch Linux ARM on a Pogoplug Series 4

02-Jul-14

I followed the directions here: Pogoplug Series 4 | Arch Linux ARM.

It took only about 15 minutes, and the iperf scores are outstanding:

[root@alarm ~]# iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 43.8 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.241 port 50914 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   382 MBytes   321 Mbits/sec

The install was on a 2.5″ SSD I had lying around, which explains the fast install. But that’s the point: the Pogoplug 4 has a SATA port, so I can use an SSD.


Making duplicity and ncftpput play nice with vsftpd FTP daemon

25-Jun-14

I’ve been setting up yet another remote backup lately (see the associated problem here). For this purpose (on Unix), the duplicity solution looks ideal. However, I’ve tried it against a couple of lightweight FTP servers (a TL-WDR3600 and a Raspberry Pi) and neither of them works. I keep getting a “Permission Denied” message.

I did quite a bit of investigation and found that it’s related to the way that duplicity’s FTP backend, ncftp/ncftpput, sends files. Or really, to a quirk of vsftpd, where it won’t let you STOR a file using an absolute path (or, more specifically, any path with a directory component). I’ve seen other reports of this on the Internet here and here.
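
For illustration, here is the same behavior reproduced with Python’s ftplib (a minimal sketch; the host, credentials, and file name are hypothetical placeholders): a STOR whose path includes a directory is refused, while changing into the directory first and storing the bare file name succeeds.

from ftplib import FTP, error_perm

ftp = FTP('192.168.1.2')                 # hypothetical vsftpd server
ftp.login('user', 'password')            # hypothetical credentials

with open('backup.vol1.difftar.gpg', 'rb') as fh:
    try:
        # This mirrors what ncftpput does: STOR with a directory in the path.
        ftp.storbinary('STOR folder1/duplicity/Public/backup.vol1.difftar.gpg', fh)
    except error_perm as err:
        print('STOR with a path was refused:', err)    # 550 Permission denied
        fh.seek(0)
        # The workaround: cd into the target directory, then STOR the bare name.
        ftp.cwd('folder1/duplicity/Public')
        ftp.storbinary('STOR backup.vol1.difftar.gpg', fh)

ftp.quit()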

Here’s a sample ncftpput debug log:

2014-06-25 00:56:34  331: Please specify the password.
2014-06-25 00:56:34  Cmd: PASS xxxxxxxx
2014-06-25 00:56:34  230: Login successful.
2014-06-25 00:56:34  Cmd: PWD
2014-06-25 00:56:34  257: "/"
2014-06-25 00:56:34  Logged in to 192.168.1.2 as Poojan.
2014-06-25 00:56:34  Cmd: FEAT
2014-06-25 00:56:34  211: Features:
2014-06-25 00:56:34        EPRT
2014-06-25 00:56:34        EPSV
2014-06-25 00:56:34        MDTM
2014-06-25 00:56:34        PASV
2014-06-25 00:56:34        REST STREAM
2014-06-25 00:56:34        SIZE
2014-06-25 00:56:34        TVFS
2014-06-25 00:56:34        UTF8
2014-06-25 00:56:34       End
2014-06-25 00:56:34  Cmd: PWD
2014-06-25 00:56:34  257: "/"
2014-06-25 00:56:34  Cmd: CWD folder1/duplicity/Public
2014-06-25 00:56:34  250: Directory successfully changed.
2014-06-25 00:56:34  Cmd: CWD /
2014-06-25 00:56:34  250: Directory successfully changed.
2014-06-25 00:56:34  Cmd: TYPE I
2014-06-25 00:56:34  200: Switching to Binary mode.
2014-06-25 00:56:34  Cmd: PASV
2014-06-25 00:56:34  227: Entering Passive Mode (192,168,1,2,225,89).
2014-06-25 00:56:34  Cmd: STOR folder1/duplicity/Public/duplicity-full.20140625T055614Z.vol1.difftar.gpg
2014-06-25 00:56:34  550: Permission denied.
2014-06-25 00:56:34  ncftpput folder1/duplicity/Public/duplicity-full.20140625T055614Z.vol1.difftar.gpg: server said: Permission denied.
2014-06-25 00:56:34  Cmd: QUIT
2014-06-25 00:56:34  221: Goodbye.

However, I ran a few tests and found that if one first changes into the directory (CWD) and then puts (STORs) the file, everything works. My guess is that this is a chroot-style restriction. So, I created a small Python script that parses the arguments duplicity sends and inserts a cd (to the target directory) before uploading the file. This works well. Here’s the script:


#!/usr/local/bin/python3
import argparse
import os
import subprocess
import sys

# duplicity invokes ncftpput something like this:
# 'ncftpput -f /tmp/duplicity-yRftm3-tempdir/mkstemp-PiYFTL-1 -F -t 30 -o useCLNT=0,useHELP_SITE=0  -m -V -C '/tmp/duplicity-yRftm3-tempdir/mktemp-sWc1AA-3' 'folder1/duplicity/Public/duplicity-full.20140625T041147Z.vol1.difftar.gz
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='wrapper for ncftpput command, to allow absolute target paths on vsftpd')
    parser.add_argument('-f', dest='credentials_file')
    parser.add_argument('-F', action='store_true')
    parser.add_argument('-t', dest='timeout')
    parser.add_argument('-o', dest='ftp_options')
    parser.add_argument('-m', action='store_true')
    parser.add_argument('-C', action='store_true')
    parser.add_argument('-V', action='store_true')
    parser.add_argument('source')
    parser.add_argument('dest')
    args = parser.parse_args()

    # print(args)

    # Split the destination so we can cd to the directory and STOR the bare file name.
    dest_dir = os.path.dirname(args.dest)
    dest_file = os.path.basename(args.dest)

    # Pass the original arguments through, prepending a raw 'cd' command (-W) and
    # replacing the full destination path with just the file name.
    #cmd_args = ['/usr/local/bin/ncftpput', '-d', 'ftp-debug.log', '-W', 'cd {0}'.format(dest_dir)] + sys.argv[1:-1] + [dest_file]
    cmd_args = ['/usr/local/bin/ncftpput', '-W', 'cd {0}'.format(dest_dir)] + sys.argv[1:-1] + [dest_file]

    # Wait for ncftpput and propagate its exit status so duplicity can detect failures.
    sys.exit(subprocess.call(cmd_args))

The script is called ncftpput and is placed in a directory called ~/duplicity_bin. I then add this directory to the front of my PATH (so it shadows the real ncftpput) before running duplicity. Here’s a shell script that does that:


#!/bin/sh
export FTP_PASSWORD="XXXYYYXXYXYZY"
export PATH="/home/Poojan/duplicity_bin:$PATH"
duplicity -v 9 --encrypt-key=DEADBEEF full /tank/Users/Public ftp://Poojan@192.168.1.2/folder1/duplicity/Public


Keeping FreeBSD TCP performance in the midst of a highly-buffered connection

21-Jun-14

I was perplexed recently when I began an rsync job to a Raspberry Pi server. I know exactly what limits the bandwidth of this connection: the CPU (or network) on the Raspberry Pi, which cannot accept data fast enough.

So, even though my server is on a 1 Gbit/s interface, and the Raspberry Pi is on a 100 Mbit/s interface, the transfer rate is ~ 10 Mbit/s. Fair enough.

But what really perplexed me is that the presence of this rsync connection severely limited other connections, notably Samba. The Simpsons episode playing in the living room had noticeably stuttering audio.

So, I began to investigate. The same low rate occurred with iperf. It seemed a little better from my basement computer than from the living-room machine. Here is an iperf run from the basement to the FreeBSD server:


C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64155 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  25.1 MBytes  21.0 Mbits/sec

Without the rsync going, it would be around 670 Mbit/s.

I started playing around with buffer sizes. Curiously, reducing net.inet.tcp.sendbuf_max helped:


Poojan@server ~ >sudo sysctl net.inet.tcp.sendbuf_max
net.inet.tcp.sendbuf_max: 262144
Poojan@server ~ >sudo sysctl net.inet.tcp.sendbuf_max=65536
net.inet.tcp.sendbuf_max: 262144 -> 65536

Which yielded:

C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64171 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   788 MBytes   661 Mbits/sec

I posited that maybe there’s some overall limit on buffer memory, and rsync was taking all of it, so making the buffers smaller left more available for iperf. I went hunting for this limit.

I tried doubling kern.ipc.maxsockbuf:


Poojan@server ~ >sudo sysctl -w kern.ipc.maxsockbuf=524288
kern.ipc.maxsockbuf: 262144 -> 524288

which yielded:

C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64216 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  25.0 MBytes  20.9 Mbits/sec

No luck. Note: I realized that the above was with jumbo frames enabled on both server and client; I disabled jumbo frames on the client.

I then did a netstat -m, just in case:


1470/5175/6645 mbufs in use (current/cache/total)
271/2635/2906/10485760 mbuf clusters in use (current/cache/total/max)
271/2635 mbuf+clusters out of packet secondary zone in use (current/cache)
85/335/420/762208 4k (page size) jumbo clusters in use (current/cache/total/max)
1041/361/1402/225839 9k jumbo clusters in use (current/cache/total/max)
0/0/0/127034 16k jumbo clusters in use (current/cache/total/max)
10618K/11152K/21771K bytes allocated to network (current/cache/total)
1106/2171/531 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
361/1345/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile

This didn’t really show any indication that buffers were being over-subscribed, at least not during the tests.

But now, with sendbuf_max at 262144 and maxsockbuf at 524288, my iperf reading went down even further:


C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64438 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.7 sec  2.88 MBytes  2.25 Mbits/sec

From reading this summary of FreeBSD buffers, it seems that kern.ipc.maxsockbuf operates at a different level than net.inet.tcp.sendbuf_max. And, in fact, having both of them large hurts performance. So maybe this is just plain bufferbloat.
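
As a rough sanity check on the bufferbloat idea, here is a quick back-of-the-envelope calculation (a minimal sketch; the LAN round-trip time is an assumed value, not something I measured): a 64 KB window already covers the bandwidth-delay product of this gigabit LAN, while a 256 KB send buffer draining into the ~10 Mbit/s Raspberry Pi link represents a couple hundred milliseconds of queued data.

lan_rate = 1e9 / 8      # 1 Gbit/s LAN, in bytes per second
rtt = 0.5e-3            # assumed LAN round-trip time: 0.5 ms
bdp = lan_rate * rtt    # bandwidth-delay product
print("BDP: {0:.0f} KB".format(bdp / 1024))                   # ~61 KB, so a 64 KB window is enough

pi_rate = 10e6 / 8      # ~10 Mbit/s effective rate to the Raspberry Pi
sendbuf_max = 262144    # the default net.inet.tcp.sendbuf_max above
print("queue drain time: {0:.0f} ms".format(sendbuf_max / pi_rate * 1000))   # ~210 ms of standing queue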

But then I realized that my better results came when the send buffer was under 64 KB. So I disabled RFC 1323 (which, in addition to timestamps, allows TCP windows larger than 64 KB) via the net.inet.tcp.rfc1323 sysctl. And voila!


C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 65203 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   798 MBytes   669 Mbits/sec


worrbase – CrashPlan on FreeBSD 9.0, A HOWTO

15-Jun-14

I’m blogging this as a reminder to myself of what to do. I found it years ago, but that link doesn’t seem to exist anymore. Anyway, this is a good write-up on FreeBSD 9.0:

worrbase – CrashPlan on FreeBSD 9.0, A HOWTO.

The only difference between this and what I did is that I use the Oracle JRE 1.8 port (/usr/ports/java/linux-oracle-jre18/), so my JAVACOMMON looks like:

JAVACOMMON=/usr/local/linux-oracle-jre1.8.0/bin/java


NFSv4 ACL history

28-May-14

A good summary of NFSv4 ACLs (and their history):

Implementing Native NFSv4 ACLs in Linux (by Greg Banks at SGI).


Crucial m500 120GB as a ZFS slog

12-Apr-14

I re-did my benchmarks of my ZFS ZIL using a Crucial m500. The main reason I got this drive is that it has power-loss protection. And its speed is adequate to keep up with my gigabit-Ethernet Samba server.

Here are the results. My first run was decent, but not stellar:

2014-04-11 23:32:15,079 - 16777216000 bytes written in 192.07929635047913 seconds 87.345259581685 MB/s

I didn’t run zpool iostat during this run. This roughly matches what I was getting for 4k random writes. My second run was much better:

2014-04-11 23:35:22,221 - 16777216000 bytes written in 136.14709854125977 seconds 123.22859744907173 MB/s

And it matches a typical zpool iostat:

                        capacity     operations    bandwidth
pool                 alloc   free   read  write   read  write
-------------------  -----  -----  -----  -----  -----  -----
tank                 5.17T  2.71T      0  2.96K      0   262M
  raidz1             5.17T  2.71T      0  1.13K      0   137M
    gpt/TOSH-3TB-A       -      -      0    565      0  68.7M
    gpt/TOSH-3TB-B       -      -      0    565      0  68.7M
    gpt/SGT-USB-3TB      -      -      0    563      0  68.7M
logs                     -      -      -      -      -      -
  gpt/tank_log0      1.56G  30.2G      0  1.83K      0   125M
cache                    -      -      -      -      -      -
  gpt/tank_cache0    24.5G   150G      0    632  3.15K  78.5M
-------------------  -----  -----  -----  -----  -----  -----

Overall, I’m pretty happy. I was hoping for at least 125 MB/s, a rate my gigabit Ethernet can’t really saturate anyway.

I can now even try turning on forced sync writes in Samba.


Crucial m500 120GB benchmarks

09-Apr-14

The latest in my obsession with SSDs: I jumped on a $70 deal at Newegg. I bought it to use as an SLOG (dedicated ZIL) in my ZFS server, because of its write speeds and its power-loss protection.

I immediately updated the firmware from MU03 to MU05.

This first run may have been over a SATA II (3 Gbps) connection:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   268.487 MB/s
          Sequential Write :   144.591 MB/s
         Random Read 512KB :   252.690 MB/s
        Random Write 512KB :   144.619 MB/s
    Random Read 4KB (QD=1) :    23.558 MB/s [  5751.4 IOPS]
   Random Write 4KB (QD=1) :    60.558 MB/s [ 14784.8 IOPS]
   Random Read 4KB (QD=32) :   202.598 MB/s [ 49462.4 IOPS]
  Random Write 4KB (QD=32) :   122.029 MB/s [ 29792.2 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/08 22:45:00
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

Wha? The 4K random write is quite low. Let’s repeat:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   268.109 MB/s
          Sequential Write :   146.449 MB/s
         Random Read 512KB :   248.750 MB/s
        Random Write 512KB :   145.732 MB/s
    Random Read 4KB (QD=1) :    23.962 MB/s [  5850.1 IOPS]
   Random Write 4KB (QD=1) :    76.037 MB/s [ 18563.8 IOPS]
   Random Read 4KB (QD=32) :   202.476 MB/s [ 49432.7 IOPS]
  Random Write 4KB (QD=32) :   119.411 MB/s [ 29153.0 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/08 22:50:44
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

A little better at 76 MB/s for the 4k random write, but not the 120 MB/s that I expected.

Let’s try the native SATA III port and a shorter cable:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   477.494 MB/s
          Sequential Write :   143.072 MB/s
         Random Read 512KB :   434.218 MB/s
        Random Write 512KB :   143.168 MB/s
    Random Read 4KB (QD=1) :    24.571 MB/s [  5998.7 IOPS]
   Random Write 4KB (QD=1) :    76.281 MB/s [ 18623.2 IOPS]
   Random Read 4KB (QD=32) :   232.590 MB/s [ 56784.6 IOPS]
  Random Write 4KB (QD=32) :   131.644 MB/s [ 32139.6 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/08 23:08:12
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)
  

Wow! That sequential read is fast. But the random write (QD=1) is still disappointing. What gives?

Post Script

Looks like someone else gets similar results, which don’t line up with what TweakTown measured.

Oh, well. The power-loss protection is what’s important, and all that really matters is how the drive performs as the ZIL. Hopefully I’ll see something closer to 140 MB/s than to 65 MB/s.

Update 2014-04-09

Tried re-running with Intel’s RST driver:

-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

           Sequential Read :   481.219 MB/s
          Sequential Write :   142.877 MB/s
         Random Read 512KB :   430.158 MB/s
        Random Write 512KB :   144.619 MB/s
    Random Read 4KB (QD=1) :    27.370 MB/s [  6682.2 IOPS]
   Random Write 4KB (QD=1) :    77.415 MB/s [ 18900.1 IOPS]
   Random Read 4KB (QD=32) :   271.697 MB/s [ 66332.2 IOPS]
  Random Write 4KB (QD=32) :   142.077 MB/s [ 34686.8 IOPS]

  Test : 1000 MB [F: 0.1% (0.1/111.7 GB)] (x5)
  Date : 2014/04/09 22:28:03
    OS : Windows 7 Ultimate Edition SP1 [6.1 Build 7601] (x64)

Slightly better, but still no 120 MB/s. I have a feeling that something changed with the latest firmware. I wish I had run this benchmark before I upgraded.


More ZIL/SLOG comparisons

05-Apr-14

Suspicious of my previous tests using iozone, I wrote a small program (that follows) to measure synchronous write speed. I tested this with and without an SLOG device, and then used a really fast Plextor M5 Pro 256GB SSD as the SLOG. I wanted to make sure that something could achieve the fast SLOG speed that I hoped for. (Remember I am doing this in the interest of spending money on a new SSD.)

Summary

An SLOG device can degrade performance by forcing synchronous writes to wait until they are committed to the SLOG; however, it cannot enhance performance. The maximum (steady-state) performance is determined by the bandwidth to the main storage.
More…


SLOG tests on a 32GB Kingston SSD with iozone

02-Apr-14

This is a follow-up to my previous post. I ran iozone both with and without the -e flag. This flag includes a sync/flush in the speed calculations. That flush should tell ZFS to commit the in-progress data to disk, thus flushing it to the SLOG device (or to the on-disk ZIL).
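
To make it concrete, here is a minimal sketch (not iozone itself; the target path is a placeholder) of what including the flush in the timing means: the same buffered write loop is timed twice, once returning as soon as write() completes and once including the final fsync().

import os
import time

PATH = '/tank/test/flush-timing.bin'    # placeholder path on the pool under test
CHUNK = b'\0' * (128 * 1024)            # note: zeros compress; use random data if compression is on
SIZE = 256 * 1024 * 1024                # write 256 MB per pass

def timed_write(include_flush):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(SIZE // len(CHUNK)):
        os.write(fd, CHUNK)
    if include_flush:
        os.fsync(fd)                    # this is the part the -e flag adds to the timing
    elapsed = time.time() - start
    os.close(fd)
    return SIZE / elapsed / 1e6

print('without flush: {0:.1f} MB/s'.format(timed_write(False)))
print('with flush:    {0:.1f} MB/s'.format(timed_write(True)))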

I ran four tests: one without the -e flag and one with it; I then repeated the pair with the SLOG device removed.

Here are the results: More…


ZFS slog (ZIL) considerations

01-Apr-14

Running List of Alternatives

2014-07-07

The Transcend MTS600 line is capable of 310 MB/s sequential writes. It is ideal as an SLOG, as it is also available in a 32GB size. The downside is that it comes in the M.2 (NGFF) form factor. There is also an MTS400 line, which is only capable of 160 MB/s, and an MTS800 line, which has the same 310 MB/s performance as the MTS600.

2014-05-10

The Seagate PRO models have power-loss protection, and they seem to do very well on sequential reads/writes–better than the Crucial m500.

The new ADATA SP920 is the same as the Crucial m550 (except its NAND configuration is like the Crucial m500’s); all of these have power-loss protection. The ADATA performs considerably better than the (older-generation) Crucial m500.

And of course, the Crucial m550 itself is faster than the ADATA SP920, due to its more parallel NAND configuration.

Update on 2014-05-09

This rundown of SSDs indicates that the power-loss protection must last for seconds, which rules out using a cable to provide it. Also, it’s not just the data in transit that needs protection; it’s the mapping tables, too. (In fact, it seems that you can brick your SSD by cutting power to it, and that might explain why one of my SSDs doesn’t work.)

Original Post

My latest obsession is which drive is best for an SLOG (dedicated ZIL device).

I plan on testing tonight whether sequential writes or random writes are more important. This DDRDrive marketing propaganda states that (at least for multiple producers) random IOPS are more important. But, I’m very unlikely to see multiple synchronous writes to my ZFS pool. (Maybe a ports build or something, but that’s unlikely).

I have numerous options. But let’s start with the power-loss protection. I found this cable online that has a 2.2 mF capacitor on each of the 5V and 12V lines. For those of you who don’t know, that’s really big as far as capacitors go. This should let the SLOG device maintain a charge for quite a bit longer during a power failure.

For example, the SanDisk Extreme II 120GB lists 3.4 W as its power consumption during write. If I assume that all of that comes from the 5.0 V rail (which is the conservative calculation), I get 0.68 A (aka 680 mA) as the current draw during write.

If I then assume that a +/- 10% voltage fluctuation is all that can be tolerated, the capacitor will maintain a voltage for:

ΔT = C*ΔV/I = 2.2 mF × 0.5V / 680 mA = 1.6 ms

So, the cable maintains the voltage for 1.6 ms. That does not seem like much, but if I assume that my producer (Samba) can generate at most 1 Gbps (or 125 MB/s), I can get 125 MB/s × 1.6 ms = 200 kB to disk. (Of course, I’d be lucky to get 100 MB/s with the stock NICs in any of the machines on my LAN, but that’s a different story.)
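
The same arithmetic, spelled out as a few lines of Python (same assumptions as above):

C = 2.2e-3            # capacitor on the 5 V line, in farads
dV = 0.5              # tolerable droop: 10% of 5 V
P = 3.4               # SanDisk Extreme II write power, in watts
I = P / 5.0           # assume it all comes from the 5 V rail -> 0.68 A

dt = C * dV / I       # hold-up time, ~1.6 ms
data = 125e6 * dt     # at 125 MB/s from Samba, ~200 kB in flight
print("hold-up: {0:.1f} ms, data in that window: {1:.0f} kB".format(dt * 1e3, data / 1e3))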

But that’s not really the point. The point is that the SSD should remain powered after the producer (the Samba server in this case) dies, so that writes already in progress can be completed. In that case, what really matters is how quickly the SSD can commit data to flash, which of course depends on the write speed of the SSD.

I’ve looked at 4 main candidates for an SSD:

  • OCZ Vertex 2 (60 GB)
  • Crucial m500 (120 GB)
  • Sandisk Ultra II (120 GB)
  • Kingston V300 (120GB)

The benefits of each are as follows. Note that I’ve picked the smallest capacity available for each architecture.

OCZ Vertex 2

The main benefit is price. This thing can be had on eBay for $40. It’s also speedy for its generation. Its random (4k) write is pretty high at 65 MB/s. Its sequential write is 225 MB/s (this is probably for compressible data; see below on the 120 GB being 146 MB/s). This means it can keep up with the 125 MB/s Ethernet, with a fair margin above it. For this one, I would probably need the capacitor cable.

Crucial m500

There are two main benefits here. First is speed: both its sequential and random writes are really high, at 141 MB/s and 121 MB/s, respectively. Second, this drive has power-loss protection, so I don’t need a clunky cable with built-in capacitors. Whatever data has made it to the drive will get committed.

SanDisk Ultra II

Once again, speed is a huge benefit. This blows away anything else in both sequential (339 MB/s) and random (133 MB/s) writes. In addition, SanDisk has its unique nCache non-volatile cache, which allows data to hit SLC (really, MLC operating in SLC mode) before it hits the MLC. This approximates the best of both worlds: the speed and (temporal) reliability of SLC, with the cheaper/denser MLC for permanent storage.

It does, however, have a DDR cache. I hope that when the controller issues a flush, this DDR gets flushed (at least to the nCache). But, to be safe, I’ll probably buy the capacitor cable to pair with it if I go this route.

Kingston V300 (120GB)

This doesn’t have the fancy flash footwork of the SanDisk nCache, nor does it have the power-loss protection of the m500. However, it tends to be cheaper, and its speed is quite sufficient at 133 MB/s random and 172 MB/s sequential. It’s basically an older-generation SSD.

(Lack of) Conclusion

So, which will I go for? Well, before I make a decision, I want to test whether sequential or random write speed is the key metric. I plan on doing this tonight. I have a 32GB Kingston SV100 drive as the SLOG on the system, and I’ll use dd or something to issue synchronous writes of various sizes. The 32GB Kingston has a sequential write speed of 68 MB/s, but a dismal random write speed. If sequential writes are the key metric, I should see a throughput of around 68 MB/s; if not, I should see something much lower.

That’s the hypothesis, anyway. I’ll test it tonight.
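
For reference, here is a minimal sketch of the kind of test I have in mind (not the final program; the target path is a placeholder): open a file on the pool with O_SYNC so that every write is synchronous, then time writes at a few different block sizes.

import os
import time

PATH = '/tank/test/syncwrite.bin'       # placeholder path on the pool under test
TOTAL = 1024 * 1024 * 1024              # write 1 GB per block size

for bs in (4 * 1024, 128 * 1024, 1024 * 1024):
    buf = os.urandom(bs)                # random data, so compression doesn't flatter the result
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC)
    start = time.time()
    for _ in range(TOTAL // bs):
        os.write(fd, buf)               # each write must be committed, so it exercises the SLOG/ZIL
    elapsed = time.time() - start
    os.close(fd)
    print('{0:8d}-byte writes: {1:.1f} MB/s'.format(bs, TOTAL / elapsed / 1e6))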

Thoughts on DDRDrive

Warning: this section is mere conjecture.

The DDRDrive propaganda is interesting. They didn’t get a good result with just one producer, so they pushed it to multiple producers. To be fair, this is probably a good model of an enterprise situation–a database server, for example. But their scenario uses multiple file systems on the same server. Is it likely that one would split a database across multiple file systems? I’d posit that in an enterprise environment, one would just split the multiple databases among multiple servers (or virtual servers) and be done.

In addition, they show the write speeds of the disks they measured dropping off after some time. This is probably because the flash starts out empty (pre-erased), but later writes must overwrite existing data (an erase + write) rather than doing a simple write. However, this situation is less likely on newer versions of FreeBSD, as ZFS now supports TRIM, which erases freed sectors so that the erase+write cycle does not occur.

Rejected Options

The Vertex 2 also comes in 120 GB, but its speed isn’t much faster (73 MB/s random and 146 MB/s sequential). I’d take it if it were as cheap as or cheaper than the 60 GB.

The OCZ Agility 3 was also an option, but it isn’t any faster with non-compressible data.

 
