The latest admin trick I learned…

Although LOPSA is platform-agnostic, my current job has me solidly as a Solaris and EMC storage admin. I have at least limited experience on just about every other platform (DEC UNIX, SCO, HP-UX, Windows NT 4.0, 2000, and 2003, Linux (mainly RedHat), FreeBSD, OSX, etc…). However, most of the new tricks I learn these days are going to be on the Solaris platform.

The two latest things I had to learn were how to handle shared memory allocations under Solaris 10, and the ‘supported’ way to get Veritas filesystems mounted in a zone.

Shared memory was interesting. I mainly googled and asked around until I got a couple of command lines. Then, when that didn’t work quite as expected in a zone, I borrowed a copy of the new Solaris Internals books to see if I was doing something wrong. Turns out I wasn’t, really; I just seemed to be running the box completely out of memory.

Basically, you use projects (Solaris 10 resource management containers) to give specific users the right to have more than the normal maximum amount of shared memory. These are stored in /etc/project. The main commands I used were projadd, projmod, and projdel. You can also use these to set specific values on zones, and I imagine on other resource objects as well (zones, users, and groups are as far as I got).
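
For reference, an /etc/project entry is just a colon-delimited line: project name, numeric ID, comment, user list, group list, and then the attributes. The entry for the oracle example below comes out looking roughly like this (my own approximation – projadd picks the numeric ID itself and may normalize how the size value gets written) –

user.oracle:100::oracle::project.max-shm-memory=(priv,2147483648,deny)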

In my mind this is a huge improvement over jumping through hoops in the /etc/system file and having to reboot the OS in order to make a change. Sure, I’d gotten to know the /etc/system IPC parameters fairly well, but I’d also gotten annoyed by how they were (sometimes conflictingly) documented in scattered places across the Internet, and by the fact that Sun’s documentation doesn’t seem to include every possible parameter. Instead of all that, you just tell it to give oracle 2GB of shared memory –

projadd -U oracle -K "project.max-shm-memory=(priv,2048MB,deny)" user.oracle

And then if you have another user that needs more shared memory, give it to them. In my case, the user is named voyager –

projadd -U voyager -K "project.max-shm-memory=(priv,4096MB,deny)" user.voyager
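
A couple of quick ways to sanity-check that the settings actually took (just the standard tools, nothing exotic) –

projects -l user.oracle
su - oracle -c "id -p"
prctl -n project.max-shm-memory -i project user.oracle

projects -l dumps the entry and its attributes, id -p confirms the user really lands in that project at login, and prctl reports the live resource control (it needs at least one process running in the project to have something to look at).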

The trick to doing it in a zone is that there is no trick. I did run into some strangeness when I had the same user rules in the global zone and the local zone, but that might also have been because I was literally running the system out of physical and swap memory.
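
(For what it’s worth, confirming that sort of memory exhaustion doesn’t take anything fancy –

swap -s
vmstat 5 3
prstat -Z

swap -s gives the swap reservation picture, vmstat shows free memory and the scan rate, and prstat -Z adds a per-zone summary.)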

Now, Veritas filesystems (hereafter referred to as vxfs) in zones. This was really more a case of being a PITA and RTFMing than a ‘cool trick’.

Problem was, I had tried my ‘first guess, brute force’ method of just having the global zone mount everything up via entries in /etc/vfstab. So, if the root directory of the zone named “myzone” was at /export/home/zone/myzone, then I would mount the filesystem at /export/home/zone/myzone/root/m1. The vfstab entry would look like this –
/dev/vx/dsk/dg01/lvol1 /dev/vx/rdsk/dg01/lvol1 /export/home/zone/myzone/root/m1 vxfs 3 yes -

Problem was, if I did it this way, the mount point would only appear in ‘df’ output in the local zone unless I unmounted and remounted it with the zone up – and then it wasn’t visible in the zone properly anymore (at least not every time). So I was looking at a weird dependency between the filesystems and the zone coming up (I have about 11 vxfs filesystems that need to be mounted in the zone).

So, some googling and Sun RTFMing later, I found out that the only supported way to do vxfs filesystems in zones is to mount them in the global zone (not directly underneath the zone), and then configure a lofs in the zone that points at the real mount point as its device.
Relevant links were –
http://www.opensolaris.org/jive/thread.jspa?messageID=53016
http://www.mail-archive.com/zones-discuss@opensolaris.org/msg00577.html

So, in this case I made a localmounts hierarchy so each zone could have its mounts broken out –
/localmounts/myzone/m1
/localmounts/otherzone/mountpoint

And made the line in /etc/vfstab look like this –
/dev/vx/dsk/dg01/lvol1 /dev/vx/rdsk/dg01/lvol1 /localmounts/myzone/m1 vxfs 3 yes -

And then configured the zone. I know it’s possible to just edit the XML file (/etc/zones/*.xml), but I did it through the command line, just in case there was something I missed.

zoneadm -z myzone halt

zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/m1
zonecfg:myzone:fs> set special=/localmounts/myzone/m1
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit

(I then did this 11 more times… ugh, I really wished I was just editing the xml file.)
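
(In hindsight, zonecfg can also read its subcommands from a file with -f, so an untested sketch like the one below – with the dir/special paths swapped out per filesystem – would have cut down the repetition –

cat > /tmp/myzone-fs.cfg <<EOF
add fs
set dir=/m1
set special=/localmounts/myzone/m1
set type=lofs
end
EOF
zonecfg -z myzone -f /tmp/myzone-fs.cfg

...with one add fs/end stanza in the file for each mount.)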

zoneadm -z myzone boot

And voila, the filesystem appears in the zone at its own mountpoint, gets mounted when the zone comes up, and gets unmounted when the zone comes down. This solved my original problem of having the box and zones come back up completely automagically after an unexpected “init 6”.
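
A quick way to confirm that from the global zone, without even logging into the zone interactively –

zlogin myzone df -k /m1

If the lofs came up with the zone, df inside the zone reports /m1 backed by /localmounts/myzone/m1.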

I also tested growing and shrinking the filesystem from the global zone to see if the change was immediately reflected in the local zone through the lofs (using vxresize -g dg01 lvol1 (newsize) while I was doing reads and writes to the filesystem in the zone), and it was. It makes administering things a little trickier, as you work in the global zone to make changes to the local zones, but I’m finding that’s how the paradigm works with zones.
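
In case it helps, a grow along those lines boils down to something like this (the +1g is just an illustrative size, and the zlogin is simply a convenient way to eyeball the result from the global side) –

vxresize -g dg01 lvol1 +1g
zlogin myzone df -k /m1

vxresize handles both the volume and the vxfs filesystem in one pass, which is why nothing extra has to happen inside the zone.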

And for my next trick, Oracle 9i tablespace migrations…

-b