I’ve just set up my new file server for Windows shares. Unfortunately, OpenSolaris doesn’t run on my PortWell machine, and I’ve decided that trying to run Windows within a VirtualBox VM (hosted by OpenSolaris) is a bit too flaky.
Here’s the setup:

Partitioning
First, I had to configure my drives to support a raidz configuration. I have a 1 TB hard drive (new) and a 250 GB hard drive (old). This was the tricky part: I eventually want a good raidz configuration, but right now I don’t have the money to buy 2 extra hard drives. So I partitioned the 1 TB hard drive into (roughly) 250 GB slices and set up a raidz configuration with those slices and the separate 250 GB hard drive. The tricky bit is that the FreeBSD fdisk command wants to partition on cylinder boundaries, and it thinks the hard drive is built up of 121601 cylinders / 255 heads / 63 sectors. Partitioning on cylinder boundaries, I want each partition to hold 121601/4 ≈ 30400 cylinders, which comes out to 488376000 sectors. That’s all good, except fdisk throws a wrench in there by adding an unused partition at the beginning of the disk (I’m guessing for the partition table). So I had to make the first partition a little larger to account for it: I added one cylinder’s worth, 255*63 = 16065 sectors, requesting 488392065 sectors for the first slice. In the end, my partitions looked like this:
Disk name: da0                                    FDISK Partition Editor
DISK Geometry: 121601 cyls/255 heads/63 sectors = 1953520065 sectors (953867MB)

    Offset    Size(ST)         End   Name  PType    Desc  Subtype  Flags
         0          63          62      -     12  unused        0
        63   488392002   488392064  da0s1      4  ext2fs      131
 488392065   488376000   976768064  da0s2      4  ext2fs      131
 976768065   488376000  1465144064  da0s3      4  ext2fs      131
1465144065   488376000  1953520064  da0s4      4  ext2fs      131
1953520065        5103  1953525167      -     12  unused        0
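The slice sizes in that table come straight out of the geometry arithmetic described above. A quick sketch of that arithmetic, using the geometry fdisk reported:

```shell
# Recompute the slice sizes from the fdisk-reported geometry.
cyls=121601; heads=255; secs=63
per_cyl=$((heads * secs))          # 16065 sectors per cylinder
slice=$(((cyls / 4) * per_cyl))    # 30400 cylinders -> 488376000 sectors
first=$((slice + per_cyl))         # first slice gets one extra cylinder
echo "$per_cyl $slice $first"      # prints: 16065 488376000 488392065
```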
ZFS RaidZ
Originally, I did this:
zpool create tank raidz2 da0s1 da0s2 da0s3 da0s4 da1s1
However, I realized a couple of things. First, that creates a raidz2 configuration, so I’d need at least 3 hard drives for it to be of any use. And even with 3 separate 1 TB hard drives, I’d only get 1 TB of usable space (the other two drives’ worth goes to redundant parity). That’s a bit expensive. So I decided to throttle down to raidz1: if a hard drive fails, I should have enough time to replace it before a second one fails. I did the following instead:
zpool create tank raidz da0s1 da0s2 da1s1
Yup: I’m not using da0s3 or da0s4, so roughly 500 GB of space is going unused. That isn’t a big deal, because by the time I need the space, it will probably be time to buy an extra hard drive anyway. Also, at the rate my family consumes data, it could be a year before we miss it. (In retrospect, I probably should have bought two 500 GB hard drives rather than the 1 TB drive in the first place.)
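For the record, raidz1 usable capacity is (members − 1) × member size, so the back-of-the-envelope for this 3-member pool (using my rough 250 GB member size) is:

```shell
# raidz1 usable space: (n - 1) * member size.
n=3               # da0s1, da0s2, da1s1
member_gb=250     # each member is roughly 250 GB
usable=$(((n - 1) * member_gb))
echo "${usable} GB usable"    # prints: 500 GB usable
```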
User Storage Areas
I created a place to house the Windows user directories (note that zfs create takes a dataset name, not a path, so no leading slash):
zfs create tank/Users
I then used the following script to create per-user ZFS filesystems under tank/Users, with compression turned on for each user’s Documents directory:
#!/bin/sh
# Create a ZFS filesystem tree for one user: tank/Users/<user>/{Documents,...}
u="$1"
zfs create tank/Users/"$u"
chown "$u":"$u" /tank/Users/"$u"
for d in Documents Music Videos Pictures; do
  zfs create tank/Users/"$u"/"$d"
  chown "$u":"$u" /tank/Users/"$u"/"$d"
done
zfs set compression=gzip tank/Users/"$u"/Documents
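To see what the script actually builds, here’s the dataset-name loop on its own, echoing instead of running zfs (“alice” is just a hypothetical example user):

```shell
# Echo the dataset names the script would create for one user.
u="alice"
datasets="tank/Users/$u"
for d in Documents Music Videos Pictures; do
  datasets="$datasets tank/Users/$u/$d"
done
echo "$datasets"
# prints: tank/Users/alice tank/Users/alice/Documents tank/Users/alice/Music tank/Users/alice/Videos tank/Users/alice/Pictures
```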
Samba
I set up the Samba shares to use this /tank/Users directory rather than the default home directories, by editing the following in /usr/local/etc/smb.conf:
[homes]
comment = Home Directories
browseable = no
writable = yes
path = /tank/Users/%u/
After that, a few adduser and smbpasswd -a commands and things were all set. I’m now using Allway Sync to synchronize my Windows machines to the file server.
Kernel Tuning
ZFS requires a lot more memory from the kernel than the FreeBSD defaults provide. The tuning guide says that I need to rebuild the kernel with KVA_PAGES=512. Perhaps. For right now, though, I’m going to try to get by with the following additions to loader.conf, from http://www.daemonforums.org/showthread.php?t=4200:
newsystem# cat >> /boot/loader.conf << __EOF__
vfs.zfs.prefetch_disable=0 # enable prefetch
vfs.zfs.arc_max=134217728 # 128 MB
vm.kmem_size=536870912 # 512 MB
vm.kmem_size_max=536870912 # 512 MB
vfs.zfs.vdev.cache.size=8388608 # 8 MB
__EOF__
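The byte values in those tunables are just the commented megabyte figures times 2^20; a quick sanity check:

```shell
# Confirm the loader.conf byte values match their MB comments.
mb() { echo $(($1 * 1024 * 1024)); }
echo "$(mb 128)"   # 134217728  (vfs.zfs.arc_max)
echo "$(mb 512)"   # 536870912  (vm.kmem_size, vm.kmem_size_max)
echo "$(mb 8)"     # 8388608    (vfs.zfs.vdev.cache.size)
```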