Saturday, July 14, 2012
RSA Appliance: Administrative Tasks
I've decided to post a bunch of quick tips about navigating around the appliance - mainly because I don't do it that often and quickly forget. I'll add more as time permits.
To change the network settings of the appliance (DNS, gateway, etc.):
- go to https://yourappliance:7072/operations-console
- go to Administration -> Networking -> Configure Network Settings
Thursday, July 12, 2012
FreeBSD NAS/SAN with ZFS (part 1)
Part 1: Initial Setup
I had to set up a NAS/SAN recently using a Dell PowerEdge 1900. The ultimate goal is to have CIFS (through Samba 3.x), NFS, and iSCSI support. With so few drives, performance will not be amazing and 8GB of RAM is a bit low for ZFS, but I needed cheap, fairly reliable storage. I can recover from a system crash fairly easily, and I can restore the data from backups, if necessary. I generally advise against using large drives for RAID devices because of the long rebuild times in the event of failure. However, this system is designed to provide cheap storage, so RAIDZ2 will have to do.
The pros of this system:
- Cheap
- Easy to set up
- Fairly reliable (though there are single points of failure) - RAIDZ2 tolerates two drive failures before data loss
- Reasonable performance for a fileserver
- Flexible - the system has the entire ports tree available
The cons of this system:
- At least two single points of failure: a single system drive (albeit an SSD) and a single power supply, though I will be rectifying the power supply issue shortly
- FreeBSD claims the system drive does not support TRIM. If I had chosen a traditional platter-based drive, this wouldn't matter. It means I might see some performance degradation on the system drive over time, though since I won't be doing much on the system drive, it may not be much of an issue
- Performance will be mediocre with so few spindles (I'd love to have 15 or more in the pool, but I don't have an external JBOD array available)
- Rebuild (resilver) time will be long, due to the 2TB drives
- No dedicated hot spare (no free ports on the SAS controller)
It's worth noting that Nexenta Community Edition is an option, as is FreeNAS (I ruled out OpenFiler because I've had so many issues with it over the years). However, both had drawbacks:
- Nexenta CE is not to be used for production systems
- FreeNAS is on an older revision of ZFS than FreeBSD proper
The system is configured like so (it's not the greatest in terms of high availability):
- 1x Intel Xeon E5335 processor (quad core @2GHz)
- 8GB RAM (FB-DIMMs)
- 1x Dell SAS 5/i card (LSI-based)
- 7x Hitachi 2TB 7,200RPM SATA drives
- 1x OCZ Vertex 30GB SSD
- 2x Intel Gigabit server NICs
It's important to note that the chassis really only supports six 3.5" drives, so I had to use drive brackets to mount the remaining 2TB drive and the SSD. It would be best to use a pair of SSDs (the SSD holds the base OS), but I didn't have any free ports left on the SAS controller. I highly recommend using a pair of drives for redundancy.
Alternatively, I could use the onboard SATA, but I'm actually okay with the system functioning like an appliance (i.e., I'll back up the base configuration (/etc and /usr/local/etc) and the ZFS pools). You can, of course, use a USB stick for the base OS, if you'd like. Ultimately, I'd probably want to use SSDs for the caches, but I ran out of drive bays and controller ports.
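To give a sense of what that appliance-style backup looks like in practice, here's a rough sketch - the paths, dataset, and snapshot names are just placeholders, not what I actually ended up using:
# archive the base configuration (destination path is illustrative)
sudo tar -czf /dpool1/backups/config-$(date +%Y%m%d).tar.gz /etc /usr/local/etc
# snapshot a dataset and stream it off-box (dataset and host are illustrative)
sudo zfs snapshot dpool1/data@nightly
sudo zfs send dpool1/data@nightly | ssh backuphost "cat > /backups/dpool1-data-nightly.zfs"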
The steps were:
1. Download FreeBSD 9.0 AMD64 and burn it to a DVD
2. Boot the machine from the DVD
3. Install the system. I prefer to partition Unix systems when possible; I really dislike having an errant log file fill up a single filesystem. It's certainly convenient not to have to carve out space, but if I go that route, I prefer a properly managed filesystem like ZFS so I can create new volumes and set quotas on them:
Filesystem   Size    Used   Avail Capacity  Mounted on
/dev/da0a    2G      356M   1.5G      19%   /
devfs        1.0k    1.0k     0B     100%   /dev
/dev/da0d    503M    4.1M   459M       1%   /tmp
/dev/da0e    4.9G    303M   4.2G       7%   /var
/dev/da0f    7.9G    2.5G   4.7G      35%   /usr
/dev/da0g    2G       16M   1.8G       1%   /home
Your needs may vary. I enabled TRIM support on all the filesystems, but FreeBSD complains about that and claims that the drive does not support it. I'm pretty sure it does. I'll likely have to address this issue later.
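When I do get around to it, I'll probably start by checking whether the filesystems actually have the TRIM flag set and whether the SAS 5/i is simply not passing TRIM/UNMAP through to the SSD (my current suspicion). Roughly, assuming /dev/da0a is the root filesystem:
# print current UFS tuning parameters, including the TRIM flag
sudo tunefs -p /dev/da0a
# toggle TRIM on a UFS filesystem (the filesystem must not be mounted read-write)
sudo tunefs -t enable /dev/da0a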
After partitioning and installing the OS, I set up user accounts, used freebsd-update fetch and freebsd-update install to apply security patches, and finally used portsnap to create and update the ports tree.
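For reference, that boils down to something like the following (the first portsnap run needs extract; later refreshes use update):
# apply binary security/errata updates to the base system
sudo freebsd-update fetch
sudo freebsd-update install
# create the ports tree on the first run...
sudo portsnap fetch extract
# ...and keep it current afterwards
sudo portsnap fetch update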
3a. Set up the necessary components in /etc/rc.conf:
zfs_enable="YES"
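Since the end goal is CIFS, NFS, and iSCSI, this file will grow as those services get configured in later parts. As a rough sketch of where it's headed - the exact knobs depend on which ports you install (I'm assuming samba36 and istgt here), so treat these as assumptions:
# planned service knobs (assumes the samba36 and istgt ports)
samba_enable="YES"
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
istgt_enable="YES"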
4. I set up all the necessary networking components I wanted, such as ntpd and NIC teaming/bonding (failover only).
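The lagg setup gets its own post, but for completeness, the rc.conf side of this step looks roughly like the following - interface names and the address are illustrative (the Intel NICs typically show up as em0/em1):
ntpd_enable="YES"
# failover lagg across the two Intel NICs (names and address are illustrative)
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.168.1.10 netmask 255.255.255.0"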
5. Zpool creation:
These 2TB drives have 4K sectors but emulate 512-byte sectors for maximum OS compatibility. I wanted to align the ZFS pool to the 4K sectors for optimal performance. I found numerous discussions online, but this page was the most straightforward, at least for my purposes:
http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html
Alternatively, there is this howto:
http://www.aisecure.net/2012/01/16/rootzfs/
Basically, you create GNOP providers (see gnop(8)) with 4096-byte sector sizes, build the pool on the .nop devices, export the ZFS pool, destroy the gnop providers, and re-import the pool.
Here's the list of drives:
sudo camcontrol devlist
Password:
<...>  at scbus0 target 0 lun 0 (da0,pass0)
<...>  at scbus0 target 1 lun 0 (da1,pass1)
<...>  at scbus0 target 2 lun 0 (da2,pass2)
<...>  at scbus0 target 3 lun 0 (da3,pass3)
<...>  at scbus0 target 4 lun 0 (da4,pass4)
<...>  at scbus0 target 5 lun 0 (da5,pass5)
<...>  at scbus0 target 6 lun 0 (da6,pass6)
<...>  at scbus0 target 7 lun 0 (da7,pass7)
<...>  at scbus2 target 0 lun 0 (cd0,pass8)
I created my pool like so:
# If I were provisioning many more than 7 drives, I'd probably just use a for loop
sudo gnop create -S 4096 /dev/da1
sudo gnop create -S 4096 /dev/da2
sudo gnop create -S 4096 /dev/da3
sudo gnop create -S 4096 /dev/da4
sudo gnop create -S 4096 /dev/da5
sudo gnop create -S 4096 /dev/da6
sudo gnop create -S 4096 /dev/da7
sudo zpool create dpool1 raidz2 /dev/da1.nop /dev/da2.nop /dev/da3.nop \
    /dev/da4.nop /dev/da5.nop /dev/da6.nop /dev/da7.nop
sudo zpool export dpool1
sudo gnop destroy /dev/da1.nop /dev/da2.nop /dev/da3.nop \
    /dev/da4.nop /dev/da5.nop /dev/da6.nop /dev/da7.nop
sudo zpool import dpool1
sudo zdb -C dpool1 | grep ashift should return 12 (2^12 = 4096 bytes).
I have a working pool:
> sudo zpool status dpool1
  pool: dpool1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat Jun 30 11:32:09 2012
config:

        NAME        STATE     READ WRITE CKSUM
        dpool1      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0

errors: No known data errors
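With the pool online, carving out space is just a matter of creating datasets. For example (the dataset names, quota, and compression choice are arbitrary):
# one dataset per use case, capped with a quota
sudo zfs create -o quota=500G dpool1/cifs
sudo zfs create -o quota=500G dpool1/nfs
# lzjb compression is cheap on CPU and usually worthwhile on a fileserver
sudo zfs set compression=lzjb dpool1/cifs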
I did some benchmarks, and performance was acceptable for my purposes. I'd really want to get the spindle count much higher if I were using this for something like a DB server or a large hypervisor.
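Nothing fancy on the benchmark side - a sequential dd plus bonnie++ from ports (benchmarks/bonnie++) gives a rough picture. The sizes below are arbitrary; the main thing is to use well over the 8GB of RAM so the ARC doesn't skew the numbers too much:
# rough sequential write/read test (32GB is ~4x RAM)
sudo dd if=/dev/zero of=/dpool1/testfile bs=1M count=32768
sudo dd if=/dpool1/testfile of=/dev/null bs=1M
# bonnie++ for a more rounded picture; run as a user that can write to /dpool1
bonnie++ -d /dpool1 -s 32g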
The next section in this series will cover the teaming/bonding in FreeBSD.