Sunday, November 18, 2012

Coping with the "the directory service was unable to allocate a relative identifier" Error in Windows

The error was:

 the directory service was unable to allocate a relative identifier


I had a client with this problem recently. He was trying to add an XP machine to his Windows 2003 based Active Directory domain.

Since he was unable to get a relative identifier from the RID master on the domain, his computer account could not be created. I tested and verified that no objects could be created, including new user accounts.

The corresponding Microsoft KB article is:

KB 822053

There were several AD related errors in the event logs on the domain controllers, including event id 2042:


It has been too long since this machine last replicated with the 
named source machine. The time between replications with this source 
has exceeded the tombstone lifetime. Replication has been stopped 
with this source. 
The corresponding technet article is:
Event ID 2042

Looking at the domain controllers, it appeared that one of them (call it domain controller B) could replicate to the other (domain controller A), but not vice versa. The client said that they had discovered a firewall running on one of the domain controllers (DC B) and had turned it off. But replication had not worked from A to B (A was the RID master) in over a year.

I checked replication with repadmin and did not find any lingering objects. After several false starts, I used the procedure at the bottom of the technet article (and I backed up the system state on both domain controllers, just to be safe):
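For reference, the repadmin checks I mean look something like this (the DC names, GUID placeholder, and domain below are illustrative, not the client's actual values; /advisory_mode only reports lingering objects, it does not delete them):

repadmin /replsummary
repadmin /showrepl DC-A
repadmin /removelingeringobjects DC-B <GUID of DC-A's NTDS Settings object> DC=example,DC=com /advisory_mode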


To restart replication following event ID 2042

  1. Click Start, click Run, type regedit, and then click OK.
  2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
  3. In the details pane, create or edit the registry entry as follows:
    If the registry entry exists in the details pane, modify the entry as follows:
    1. In the details pane, right-click Allow Replication With Divergent and Corrupt Partner, and then click Modify.
    2. In the Value data box, type 1, and then click OK.
    If the registry entry does not exist, create the entry as follows:
    1. Right-click Parameters, click New, and then click DWORD Value.
    2. Type the name Allow Replication With Divergent and Corrupt Partner, and then press ENTER.
    3. Double-click the entry. In the Value data box, type 1, and then click OK.

Reset the Registry to Protect Against Outdated Replication

When you are satisfied that lingering objects have been removed and replication has occurred successfully from the source domain controller, edit the registry to return the value in Allow Replication With Divergent and Corrupt Partner to 0.
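If you prefer the command line, the same registry value can be set (and later reset) with reg add; this is just the equivalent of the GUI steps above:

reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters /v "Allow Replication With Divergent and Corrupt Partner" /t REG_DWORD /d 1 /f

rem once replication is healthy again, set it back to 0
reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters /v "Allow Replication With Divergent and Corrupt Partner" /t REG_DWORD /d 0 /f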


After restarting the ntfrs service on both domain controllers and forcing replication, replication between A and B started working correctly and I was able to join the machine to the domain.
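For what it's worth, the restart and the forced replication boil down to something like this on each DC (the /syncall flags shown are the ones I typically use, not necessarily the only choice):

net stop ntfrs
net start ntfrs
repadmin /syncall /AdeP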




Monday, November 12, 2012

VMware Workstation 9 and Windows 8 - Hyper-V issues

I recently installed Windows 8 on a laptop, as well as VMware Workstation 9. Later on, I added the Hyper-V role. When I tried to power Workstation back up, I got the following error message:

VMware Workstation and Hyper-V are not compatible. Remove the Hyper-V role from the system before running VMware Workstation.


Removing the Hyper-V role entirely seems to suffice. I tried removing the Hyper-V role and shutting down the Hyper-V management service without rebooting, but that did not help. After rebooting, the Hyper-V management service was gone and VMware Workstation could launch VMs properly again.
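For reference, the role can also be removed from an elevated command prompt with DISM (this is the stock Windows 8 feature name, not anything VMware specific; a reboot is still required afterwards):

dism /online /disable-feature /featurename:Microsoft-Hyper-V-All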


Monday, October 8, 2012

Updating Firmware on HP C7000 Blade Chassis

I recently had to update an HP C7000 series blade chassis. The onboard administrators were both on 1.3 firmware, and a new G7 blade required firmware in the late 2.x line. I had seen discussions of whether it was safe to upgrade only the OA firmware and not the blades. I contacted HP support, and they said that, while not ideal, you could safely update just the OA firmware.

I first ran the HP Proliant Service Pack ISO: http://h18004.www1.hp.com/products/servers/management/spp/index.html

The HP SUM utility was not able to upgrade the firmware, due to a web API error. I tried several times. The fix that worked for me was to download the last of the Smart Update ISO images, here:


http://h18004.www1.hp.com/products/blades/bladesystemupdate.html

I then used that version of HP SUM to update the C7000 OA to version 9.3. From there, I was able to update the firmware using the Proliant SP disk I linked above. I also used the SP disk to update the iLOs on the blades. I didn't update the firmware on the interconnect devices (in my case, a Cisco switch and a Brocade fibre switch), nor did I update the BIOS of the blades, although that was certainly a possibility.

Saturday, July 14, 2012

RSA Appliance: Administrative Tasks

I've decided to post a bunch of quick tips about navigating around the appliance - mainly because I don't do it that often and quickly forget. I'll add more as time permits.


  • To change the network settings of the appliance (DNS, gateway, etc.):
    • Go to https://yourappliance:7072/operations-console
      • Go to Administration -> Networking -> Configure Network Settings

Thursday, July 12, 2012

FreeBSD NAS/SAN with ZFS (part 1)

Part 1: Initial Setup

I had to set up a NAS/SAN recently using a Dell PowerEdge 1900. The ultimate goal is to have CIFS (through Samba 3.x), NFS, and iSCSI support. With so few drives, performance will not be amazing and 8GB of RAM is a bit low for ZFS, but I needed cheap, fairly reliable storage. I can recover from a system crash fairly easily, and I can restore the data from backups, if necessary. I generally advise against using large drives for RAID devices because of the long rebuild times in the event of failure. However, this system is designed to provide cheap storage, so RAIDZ2 will have to do.

The pros of this system:

  • Cheap
  • Easy to set up
  • fairly reliable (though there are single points of failure) - RAIDZ2 will permit two failures before data loss
  • reasonable performance for a fileserver
  • flexible - the system has the entire ports tree
The cons:
  • at least two single points of failure (a single system drive, albeit an SSD, and a single power supply, though I will be rectifying the power supply issue shortly)
  • FreeBSD claims the system drive does not support TRIM. If I had chosen a traditional platter-based drive, this wouldn't matter. This means I might see some performance degradation on the system drive after a certain amount of time. Since I won't be doing much on the system drive, this may not be much of an issue.
  • Performance will be mediocre with so few spindles (I'd love to have 15 or more in the pool, but I don't have an external JBOD array available)
  • rebuild time will be long, due to the 2TB drives
  •  no dedicated hot spare (no free ports on the SAS controller)

It's important to note that Nexenta community edition is an option, as is FreeNAS (I've ruled out OpenFiler because I've had so many issues with it over the years.) However, I had the following issues:

  1. Nexenta CE is not to be used for production systems
  2. FreeNAS is on an older revision of ZFS than FreeBSD proper

The system is configured like so (it's not the greatest in terms of high availability):

  • 1x Intel Xeon E5335 processor (quad core @2GHz)
  • 8GB RAM (FB-DIMMs)
  • 1 Dell SAS 5i card (LSI based)
  • 7x Hitachi 2TB 7,200RPM SATA drives
  • 1x OCZ Vertex 30GB SSD 
  • 2x Intel Gigabit server NICs

It's important to note that the chassis really only supports six 3.5" drives, so I had to use drive brackets to mount the remaining 2TB drive and the SSD. It would be best to use a pair of SSDs (the SSD is for the base OS), but I didn't have any free ports left on the SAS controller. I highly recommend a mirrored pair of drives for redundancy if you can manage it.

Alternatively, I could use the onboard SATA, but I'm actually okay with the system functioning like an appliance (i.e., I'll back up the base configuration (/etc, /usr/local/etc) and the ZFS pools). You can, of course, use a USB stick for the base OS, if you'd like. Ultimately, I'd probably want to use SSDs for the caches, but I ran out of drive bays.

The steps were:

  1. Download FreeBSD 9.0 AMD64 and burn to a DVD
    1. DVD 1 ISO
    2. Or... use the memory stick version
    3. or... use the boot only version and do a net install
  2. Boot the machine  from the DVD
  3. Install the system. I prefer to partition Unix systems if possible; I really dislike having an errant log file fill up a single root filesystem. It's certainly convenient to not have to carve out space, but if I go that route, I prefer to have a properly managed filesystem like ZFS so I can create new volumes and set quotas on them.
 I partitioned it like so:


Filesystem           Size    Used   Avail Capacity  Mounted on
/dev/da0a              2G    356M    1.5G    19%    /
devfs                1.0k    1.0k      0B   100%    /dev
/dev/da0d            503M    4.1M    459M     1%    /tmp
/dev/da0e            4.9G    303M    4.2G     7%    /var
/dev/da0f            7.9G    2.5G    4.7G    35%    /usr
/dev/da0g              2G     16M    1.8G     1%    /home


Your needs may vary. I enabled TRIM support on all the filesystems, but FreeBSD complains about that and claims that the drive does not support it. I'm pretty sure it does. I'll likely have to address this issue later.

After partitioning and installing the OS, I set up user accounts, used freebsd-update fetch and freebsd-update install to apply security patches, and finally used portsnap to create and update the ports tree.
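Those update and ports commands boil down to (run as root):

freebsd-update fetch
freebsd-update install
portsnap fetch extract   # first run; later, use 'portsnap fetch update'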

3a. Set up the necessary components in /etc/rc.conf:



zfs_enable="YES"




 4. I set up all the necessary networking components I wanted, such as ntpd and NIC teaming/bonding (failover only), as sketched below.
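As a rough sketch, the relevant /etc/rc.conf entries look something like this (em0/em1 and the IP address are placeholders for whatever your NICs and addressing actually are; the lagg details are covered in the next part of this series):

ntpd_enable="YES"
cloned_interfaces="lagg0"
ifconfig_em0="up"
ifconfig_em1="up"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.168.1.10 netmask 255.255.255.0"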

5. Zpool creation:

These 2TB drives have 4K sectors, but emulate 512-byte sectors for maximum OS compatibility. I wanted to align the ZFS pool to match the 4K sectors for optimal performance. I found numerous discussions online, but this page was the most straightforward, at least for my purposes.

http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html

alternatively, there is this howto:

http://www.aisecure.net/2012/01/16/rootzfs/

basically, you create GNOP providers (man page) with 4096-byte sector sizes, create the pool on top of the .nop devices, export the ZFS pool, destroy the gnop providers, and re-import the pool.

Here's the list of drives:




sudo camcontrol devlist
Password:
<...>  at scbus0 target 0 lun 0 (da0,pass0)
<...>  at scbus0 target 1 lun 0 (da1,pass1)
<...>  at scbus0 target 2 lun 0 (da2,pass2)
<...>  at scbus0 target 3 lun 0 (da3,pass3)
<...>  at scbus0 target 4 lun 0 (da4,pass4)
<...>  at scbus0 target 5 lun 0 (da5,pass5)
<...>  at scbus0 target 6 lun 0 (da6,pass6)
<...>  at scbus0 target 7 lun 0 (da7,pass7)
<...>  at scbus2 target 0 lun 0 (cd0,pass8)


 I created my pool like so:




# use a for loop, if you'd like
# If I were provisioning many more than 7 drives, I'd probably just do a for loop, too
sudo gnop create -S 4096 /dev/da1
sudo gnop create -S 4096 /dev/da2
sudo gnop create -S 4096 /dev/da3
sudo gnop create -S 4096 /dev/da4
sudo gnop create -S 4096 /dev/da5
sudo gnop create -S 4096 /dev/da6
sudo gnop create -S 4096 /dev/da7
sudo zpool create dpool1 raidz2 /dev/da1.nop /dev/da2.nop /dev/da3.nop \
  /dev/da4.nop /dev/da5.nop /dev/da6.nop /dev/da7.nop
sudo zpool export dpool1
sudo gnop destroy /dev/da1.nop /dev/da2.nop /dev/da3.nop \
  /dev/da4.nop /dev/da5.nop /dev/da6.nop /dev/da7.nop
sudo zpool import dpool1




sudo zdb -C dpool1 | grep ashift
should return ashift: 12 (2^12 = 4096 bytes)...

I have a working pool:



> sudo zpool status dpool1
  pool: dpool1
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Sat Jun 30 11:32:09 2012
config:

        NAME        STATE     READ WRITE CKSUM
        dpool1      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0

errors: No known data errors


I did some benchmarks. Performance was acceptable for my purposes. I'd really want to get the spindle count much higher if I were using it for something like a DB server or a large hypervisor.
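If you just want a crude sequential throughput sanity check of a pool like this (this is not the benchmark I ran, and dd numbers should be taken with a grain of salt), something along these lines works, assuming the pool is mounted at /dpool1:

# rough sequential write, then read back (10GB of zeroes), then clean up
sudo dd if=/dev/zero of=/dpool1/ddtest bs=1M count=10240
sudo dd if=/dpool1/ddtest of=/dev/null bs=1M
sudo rm /dpool1/ddtest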

The next section in this series will cover the teaming/bonding in FreeBSD.




Wednesday, March 28, 2012

Simple Pentaho Setup

I had to set up Pentaho reporting recently. I downloaded the following components (all community edition)

  • Pentaho Administration Console (PAC)
  • Pentaho BI Server
  • Pentaho Reporting (editor)
  • Pentaho Data Integration
In the end, I didn't use Data Integration. The workflow was something like:

  1. Set up Pentaho reporting with data connections
  2. Create the report in Pentaho Reporting and publish it to the BI Server
  3. Set up a global schedule using the PAC
  4. Connect to the BI server and schedule the job I created in step 2.
This worked fairly well. I gave the report an email recipient. The only gotcha I ran into was that I didn't realize you need to create a publish password. This is set in:

biserver-ce/pentaho-solutions/system/publisher_config.xml
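If I recall correctly, the file is just a small XML snippet; you put your chosen publish password in the publisher-password element, roughly like so (the password shown is obviously a placeholder):

<publisher-config>
    <publisher-password>yourPublishPassword</publisher-password>
</publisher-config>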


Tuesday, February 7, 2012

CIFS Automount in Linux

Assuming that you have autofs installed and running, add an appropriate entry in /etc/auto.master like so:
/mnt/smb /etc/auto.smbfs --timeout=120

Inside /etc/auto.smbfs (make it executable: chmod +x /etc/auto.smbfs):
* -fstype=autofs,-Dhost=& file:/etc/auto.smb.sub

And inside /etc/auto.smb.sub, you have a few choices:

If you want to set the uid and gid of the credentials accessing the CIFS share (this defaults to read only):

* -fstype=cifs,credentials=/etc/cifssecret.txt,uid=101,gid=101 ://${host}/& 

otherwise:

* -fstype=cifs,credentials=/etc/cifssecret.txt,rw ://${host}/&
This will let you connect to any Samba or Windows server via a mount point of the form
/mnt/smb/servername/share_name




Of course, if you want to hard code the CIFS server name and share:

* -fstype=cifs,credentials=/etc/cifssecret.txt,rw ://mycifsserver/the_share_you_want
 
Inside /etc/cifssecret.txt:

username=yoursambaOrCIFSuser
password=yourcifspassword

chown root /etc/cifssecret.txt
chmod 400 /etc/cifssecret.txt

restart autofs:

sudo /sbin/service autofs restart

And do a listing of a share on one of your servers under that mount point:

ls /mnt/smb/myWindowsServer/share1


Saturday, January 28, 2012

Patching Changes with ESXi 5.x

updated 12/14/2014

VMware has been pushing people to buy their Update Manager for a while, but they allow command-line updates through a Perl utility in the vSphere CLI bundle. To use that procedure on a 4.x ESXi server, look at this post:
Upgrading to ESXi 4.1 from 4.0

To update ESXi 5.x, download the patches from the VMware Patch Portal and upload the patch or patches to a datastore on the ESXi 5.x server. You'll also need to update your VMware CLI tools. The easiest way to download them is to browse to the web server on the management interface of your ESXi 5.x server (https://ip_address_or_hostname_of_your_esxi5_server)

You'll see it on the upper right:

vSphere Remote Command Line


You'll also need to enable the ESXi Shell on the ESXi server (it's in the console in the same location where you enable SSH), as well as put the server into maintenance mode.
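If you'd rather not toggle maintenance mode from the vSphere Client, esxcli can do it too (local syntax shown; add --server and --username when running it remotely):

esxcli system maintenanceMode set --enable true
esxcli system maintenanceMode get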

You can then run the esxcli utility.

You need to know the name of the patch bundle and its location on the datastore. For my server, I was able to list the patch contents of the bundle like so:

G:\Program Files (x86)\VMware\VMware vSphere CLI>bin\esxcli.exe --server=my_server_ip --username=root software sources vib list --depot=/vmfs/volumes/datastore1/patches//ESXi500-201112001.zip


This returned a large list of the patches contained in the bundle.

And I installed it:


G:\Program Files (x86)\VMware\VMware vSphere CLI>bin\esxcli.exe --server=my_server_address --username=root software vib update --depot=/vmfs/volumes/datastore1/patches/ESXi500-201112001.zip
Enter password:
Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true

It also printed a list of the patches it installed. Of course, after that you need to reboot and exit maintenance mode.

vSphere local patching over SSH

There is no need to even install the vSphere CLI package. You can simply:

  1. Copy the zip file over to a datastore
  2. Enable SSH (Security Profile - just start the SSH service)
  3. Run esxcli after connecting via SSH as an admin user

The command line is a little different: you have no need for a username or a server name, nor do you need .exe on the command, since you're running a native binary on the host.

i.e.,

esxcli software vib update --depot=/vmfs/volumes/datastore1/patches/ESXi550-201410001.zip
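Afterwards, a quick way to confirm what's installed (same caveat about adding --server and --username if you're running it remotely rather than over SSH):

esxcli software vib list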