Anyway,
I've been working with Solaris 10 u6 x86 and ZFS. I started dabbling in it about two years ago, but I wasn't too interested at the time. I guess I just needed another couple of years' worth of pain wrestling with competing volume managers like Sun Volume Manager, LVM, etc.
We have several clients with real interest in a Solaris/ZFS solution for fileservers between branch offices and for DR planning. To prepare for this, I've been testing Solaris 10 u6 on a couple of ASUS barebones: 4GB RAM, 4x 320GB 7,200RPM 16MB-cache 2.5" SATA drives, and a dual-core Celeron clocked at 2GHz. The boxes also have an Intel PCI-E desktop NIC. Not terrible, but nothing like the "Thumpers" that all the blogging Sun engineers are using (128GB RAM!).
The testing has been going fairly well. I know the system could use more RAM (I'm working on getting a used Dell PE and adding 16GB RAM), but this will be good for demos. The snapshotting alone should be able to hook quite a few people. I recently purchased an OCZ Vertex 30GB SSD to test out dedicated ZILs. For those who don't know already, the ZIL (ZFS Intent Log) is the log for a given pool's synchronous writes: it lets fsync-style writes be acknowledged immediately and replayed if the box dies before the data makes it into the main pool, and by default it's stored across the drives in the pool itself. If you do use a separate device for the ZIL, it would be safest to use ZIL devices in pairs as mirrors, since an unmirrored log device puts the most recently logged writes at risk.
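Attaching a separate log device is a one-liner with zpool add. The device names below are made up, but the shape of the command is (the mirrored form being the safer one):

    # add a single SSD as a dedicated log (ZIL) device
    zpool add mypool01 log c2t0d0

    # or, with two SSDs, add them as a mirrored log
    zpool add mypool01 log mirror c2t0d0 c2t1d0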
Anyway, I added it to my little pool as a dedicated ZIL device. I noticed right away that it had zero impact on iSCSI performance (I guess I should have realized that the ZIL only sees synchronous writes, and this iSCSI workload apparently wasn't generating any). I was getting near full line speed from a gigabit Windows Vista client using the Windows iSCSI initiator, a ZFS-backed iSCSI LUN, and the ATTO disk benchmark.
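For anyone wanting to reproduce the iSCSI side: on 10u6 the simplest way to get a ZFS-backed LUN is a zvol with the shareiscsi property (the volume name and size below are only examples):

    # create a 50GB zvol and share it via the legacy Solaris iSCSI target daemon
    zfs create -V 50g mypool01/vols/iscsi1
    zfs set shareiscsi=on mypool01/vols/iscsi1
    svcadm enable iscsitgt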
I decided to create a regular ZFS filesystem and share it over NFS (zfs create -o sharenfs=root=my.esxi.box's.ip-address mypool01/vols/nfs1) so I could mount it in ESXi as a datastore. I set up an ESXi install on a USB stick and booted another machine with a Q9400 Intel processor (4 cores at 2.5GHz) and 4GB RAM, with only the USB stick for a hard disk. I then added the NFS share as a datastore and proceeded to install FreeBSD 7.1 i386 in a new VM.
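Spelled out a bit more, here's a slightly more locked-down version of that share plus the ESXi end of it (the addresses are placeholders and the nfs1 datastore label is arbitrary):

    # on the Solaris box: create the filesystem and export it to the ESXi host;
    # ESXi mounts NFS as root, so it wants both rw and root access
    zfs create -o sharenfs=rw=esxi-host-ip,root=esxi-host-ip mypool01/vols/nfs1

    # on the ESXi side (the VI client works too; this is the console equivalent)
    esxcfg-nas -a -o zfs-server-ip -s /mypool01/vols/nfs1 nfs1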
The NFS workload changed everything with regard to the ZIL. Where I had been seeing zero activity on the log SSD before, I was now seeing heavy activity on the ZIL, with stretches of up to 20 seconds of no activity at all on the four main disks. That makes sense: ESXi issues its NFS writes synchronously, which is exactly the traffic a dedicated ZIL is there to absorb. I was seeing between 2,000 and 5,000 KB/s on the ZIL, plus around 600 transactions per second. The transactions figure was the interesting part, since I was used to seeing about 30-40 tps on each of the regular drives during the iSCSI testing.
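If you want to watch this sort of thing yourself, zpool iostat with -v breaks bandwidth and operations out per device, including the log device; a short interval makes the bursts obvious:

    # per-device stats for the pool, refreshed every 5 seconds
    zpool iostat -v mypool01 5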
I later compiled cvsup in the VM and noticed that there was much less activity. Bonnie++ produced heavier throughput on the ZIL (~12,000 KB/s) but fewer transactions per second (around 350). The main drives were writing every 10 to 15 seconds and sustaining about 12,000 KB/s.
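For the curious, a typical bonnie++ run inside the VM would look something like the line below; the path and working-set size are just examples, and the -s size should be a couple of times the guest's RAM so the results aren't just measuring cache:

    # 8192MB working set; -u is required because bonnie++ won't run as root otherwise
    bonnie++ -d /tmp/bench -s 8192 -u nobody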
I'll probably do a make buildworld in the VM later to get a better feel for the performance... As you can tell, this testing isn't even remotely scientific or thorough.