Thursday, October 05, 2006

Terrible tales of NIS, NFS, and automounting - II

Maps, Matey!

In the last installment we began setting up the NIS server for Cannelloni LLC, a performance kitcar manufacturing company. Now that we have the domain name defined and a home for the NIS maps we are going to use, how about creating some maps? We will first work on the NIS server, creating and exporting the maps, and then read them from the client.

Server setup

Ok, we need to create the NIS maps, but what are those maps anyway? Well, maps are the files NIS uses to keep the information it needs and passes around. Think of them as plain text databases where each entry is a key/value pair: the first column is the key, and it is defined by the remaining columns. I guess the best way to explain them is to show how they compare to some of the files used by Linux/Unix:

Maps                          Equivalent Unix file   Comments
hosts.byname, hosts.byaddr    /etc/hosts             Maps IP addresses to host names
passwd.byname, passwd.byuid   /etc/passwd            Maps UIDs to usernames (and passwords)
group.byname, group.bygid     /etc/group             Maps group IDs to group names

So, our /var/yp/src/auto.home would look something like this:

bob            -nosuid,intr   obelix:/export/home/bob
thetick        -nosuid,intr   obelix:/export/home/thetick
heathcliff     -nosuid,intr   obelix:/export/home/heathcliff
mccoy          -nosuid,intr   obelix:/export/home/mccoy

and so on.
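For the automounter to actually consult this map, it also needs an entry in the master map. A minimal sketch of what that could look like, assuming the map is distributed as auto.home and the homes are mounted under /home:

```
# /etc/auto.master (Linux) -- hypothetical entry: anything accessed
# under /home is looked up in the auto.home map (served via NIS)
/home   auto.home
```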

The netgroup map, which we chose (when we edited the /var/yp/Makefile, remember?) to be stored in /var/yp/src, is like the groups file but can be used to group not only users but any combination of users, domains, and hosts. We have two printers, falbala and bonemine, so we create a group for them which will be called printers. So far, our /var/yp/src/netgroup file looks like this:

openwheel (assurancetourix,,) (alambix,,) (caiousbonus,,) (petisuix,,)
printers  (falbala,,) (bonemine,,)
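With the source files in place, the maps themselves do not exist until you build them. A sketch of the usual step, assuming the master server has already been initialized once with ypinit -m:

```
dalek@idefix-> su -
# cd /var/yp
# make
```

make reads the Makefile we edited earlier, regenerates any map whose source file changed, and pushes the result to any slave servers.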

Remember that once you add a Linux box as a NIS client you should run /usr/sbin/gdm-restart so the login window knows about the changes and the new maps. For some reason, ssh and the text-based login screen have no problems being updated, but gdm does. Perhaps it is caching the user data.
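A quick way to check from a client that the maps actually made it across is ypwhich and ypcat. A sketch of what that session might look like (your output depends on your own maps, of course):

```
dalek@client-> ypwhich
idefix
dalek@client-> ypcat -k netgroup
printers  (falbala,,) (bonemine,,)
openwheel (assurancetourix,,) (alambix,,) (caiousbonus,,) (petisuix,,)
```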

For Linux, create an /etc/exports file that looks like this:
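(The listing below is my sketch: it mirrors the Solaris share a few lines down, and the netgroup name developers is made up for illustration.)

```
# /etc/exports -- hypothetical; @developers is a made-up netgroup name
/home/sunpci/linux  @developers(rw,no_root_squash)
```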


For Solaris, set up your /etc/dfs/dfstab this way:

share -F nfs -o rw=@,root=@ /home/sunpci/linux

Once you have finished with /etc/exports (Linux) or /etc/dfs/dfstab (Solaris), you need to make the changes take effect. On Linux, type

# exportfs -a

and on Solaris,

# shareall

Tuesday, September 19, 2006

Terrible tales of NIS, NFS, and automounting

NFS and NIS have been around for a while, way before someone decided to network two Windows boxes. They have a lot of neat features.

The Network Information Service (NIS) is a directory service protocol created by Sun. It is not as elegant as, say, LDAP with Kerberos, but can get the work done if due care is taken to keep it as safe as possible.

I am going to present the steps necessary to set up an NFS/NIS system that will serve a bunch of users and the Unix boxes they connect to. Originally I was going to fit this into one single post, but I realized it would (hopefully) be easier if I broke it down into installments and dealt with just one aspect at a time. I will also create a fake company so it has that professional look to it. Sounds like a plan? Great! Let's get busy then. In this example, we work at Cannelloni LLC, a performance kitcar manufacturing company. It is primarily a Linux shop, all the way to the desktops. Recently it has grown enough to need a centralized directory service and file sharing system. Since we are talking about NIS, Cannelloni chose to use NIS for now. Later in the show we will talk about NIS's limitations.


I will start this mess with the main/master NIS server because I want to get the authentication side of the business out of the way. First of all, we need a NIS domain name. This does not need to have anything to do with the DNS domain, but it should make sense to you. Think of it as a logical group or unit. You talked to your boss, and after a few beers it was decided to divide the current network mess into the following groups:

  • Management
  • Office
  • Accounting
  • HR
  • Development
  • Production

Probably you could have come up with better names, but that is what you get for trying to work drunk. You can always change them later, if nothing else just to piss off accounting. Since doing every single group would bore me to death, we will assume that development decides to take the lead. If it works there, the same concept will be generalized across the entire company. So, development chooses idefix (it is not very powerful, but that really does not matter) as its NIS server; another machine, obelix, which has a nice hot-swappable RAID 5, will be the fileserver and will export file shares through NFS.

First, we start by finding out a bit about the company's network and which part of it belongs to development. Careful research indicates the entire company is behind a router, so it has only a handful of public IPs (webserver, mail, and so on) while the LAN uses a private network (it is a small company). Development has been assigned its own range of IPs, and not all of them have been handed out yet. This is very important to know because we can limit which machines can see the NIS maps. How do we do that? Well, we are getting a bit ahead of ourselves; let's first create a place to keep all the configuration files we will be creating.

We Need a Home

NIS stores a lot of important stuff, in general, in /var/yp. Go take a look at it; it should look kinda like this:

dalek@idefix-> ls /var/yp
binding  Makefile  nicknames

Kinda boring, I know, but we are just starting with it. The Makefile you see there is used to generate the NIS maps. By default it will use /etc/passwd, /etc/shadow, /etc/group, and a lot of other files that are in /etc. I honestly do not like that. /etc for me is kind of an important directory and I would rather not have its contents passed all over the universe. Instead, I prefer to feed NIS my own passwd, group, and any other map I want to share. Not only does that make it a bit safer, it is also easier to manage/move around, as everything is contained in a single location you can simply tar and move to the next machine. So, we need to do some editing in Makefile. First, I create two directories: /var/yp/src and /var/yp/src/pwd. Then, I edit the Makefile as follows (you will need to search within that file for those definitions):

  • Linux
    # YPSRCDIR = /etc
    YPSRCDIR = /var/yp/src
    # YPPWDDIR = /etc
    YPPWDDIR = /var/yp/src/pwd
  • Solaris
    # DIR =/etc
    DIR =/var/yp/src
    # PWDIR =/etc
    PWDIR =/var/yp/src/pwd

The next step is to make sure those directories can only be read/accessed by root, especially /var/yp/src/pwd, as it will host the password file that will be shared through NIS. Next we create a file called securenets in /var/yp, which tells ypserv which machines are allowed to see these maps:

dalek@idefix-> cat /var/yp/securenets
# /var/yp/securenets
# Restrict access to the NIS maps to the machines defined in this file
# allow connections from local host -- necessary
# same as
# allow connections from any host on the development network
host  # asterix
host  # obelix
host  # idefix
host  # panoramix
host  # abraracourcix
host  # bonemine
host  # agecanonix
host  # assurancetourix
host  # cetautomatix
host  # ordralfabetix
host  # lelosubmarine
host  # falbala
host  # aplusbegalix
host  # amerix
host  # caiusbonus
host  # caiusmalosinus
host  # tragicomix
host  # alambix
host  # petisuix
host  # jolitorax
host  # beaufix
host  # barberouge

Do notice we chose to specify each host individually in /var/yp/securenets. Since the number of machines in this list is not a power of 2 (and the addresses do not form a contiguous, aligned block), we could not use a single network/netmask pair to cover them all. Also, spelling out every machine we plan on using allows us to comment out later the ones we do not need.
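For contrast, if the development machines had occupied a contiguous, properly aligned block, a single netmask/network line would have done the job. A hypothetical securenets line (the addresses are made up) covering a 32-address block:

```
# covers through -- 32 addresses
```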

A Domain by Any Name

Now that we have that taken care of, we need to come up with a name for our NIS domain. Since this name does not need to be remotely related to our DNS domain, we will call it development as it is the NIS domain for the development group. I know, I know, I am very original...

Ok, you ask, now how do we let NIS know which NIS domain name we chose? Well, the domain name is stored in /etc/defaultdomain (for Solaris) or /etc/domainname (for Linux). If you write the domain name you want to use in that file (i.e. the file contains one single line and all it has is your domain name, in this case development) and reboot, idefix will then know the name of the domain:

dalek@idefix>cat /etc/defaultdomain
development

Now, before you go rebooting the machine, let's see if we can change its domain name without rebooting, shall we? In both Solaris and Linux, you can set the runtime domain name to development by saying

dalek@idefix>domainname development

How about checking it?

dalek@idefix>domainname
development
Of course, since we had already defined the domain name in /etc/defaultdomain (Solaris) or /etc/domainname (Linux), we could have said

  • Solaris
    dalek@idefix>domainname `cat /etc/defaultdomain`
  • Linux
    dalek@idefix>domainname `cat /etc/domainname`

Do note the back quotes; they are rather important.
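If the back quotes look like magic: they run the command inside them first, and the shell pastes that command's output into the outer command line. A small demo you can try anywhere (the scratch file name is made up):

```shell
# Write a fake domain name to a scratch file, then let the shell
# substitute the file's contents into another command line.
echo development > /tmp/defaultdomain.demo
echo "domainname `cat /tmp/defaultdomain.demo`"
rm /tmp/defaultdomain.demo
```

The second echo prints "domainname development": exactly what typing the command by hand would have looked like.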

Ok, I am going to take a break for now. Next time we will talk about the wonderful world of maps. Stay tuned!

Sunday, August 27, 2006

Backing up

It has been said that a business will not survive if it does not have a good backup system. I personally agree with that. I have also noticed that many companies only consider backup solutions after the lack of one bit them in their behinds, resulting in data and productivity losses, credibility issues, and, above all, losses where it counts the most: the wallet. So, why would they avoid it like most of us, excluding characters like a certain guy from Little Shop of Horrors, would avoid a visit to the dentist? Well, sometimes it has to do with perceived cost. Since backup does not directly translate into making a profit, it is seen as a waste of good money which could be used in more productive manners. Kinda like boats.

Then, we have the old adage: do not change a winning team. In other words, if the system is working, do not mess with it by adding this newfangled and unproven backup thingamajig. That also implies that a properly configured system should not need backup. So, if the system has a problem -- something that is to be expected, as no system is perfect (yes, I know, there are systems that are less perfect than others) and hardware will go boink before tea-time -- someone did not do his/hers/its share. As a result, heads in the IT group will roll.

Anywhoo, the point is that files will be lost (either by malice or by mistake), machines will crash, hard drives will fail, and other things will happen to make your original copy of the data unusable. At that point, if you have backup, you only lose a few hours' worth of work, if that. If you do not, well, you may have lost years of data. It happened to me and I can tell you it is not a nice feeling.

All this is nice and boring. How hard is it to have a backup? And how expensive can it be? It depends on what you want to do, how much effort you are willing to put into it, and how nice you want it to be. You have price, speed, and quality, but can only pick two of them. That is fine. How complex a backup system do you need? Sometimes not that much, if your needs are small. Let me present an example based on a real story. I was asked by a company I consult for to come up with a simple, hands-off backup of their most important application (in this case, a program they use to keep track of what happens in their shop). One of the most important requirements is that they want to be able to continue working in case their application server goes on holiday. And they run Windows XP. Yes, I know this blog is supposed to be about Unix, but the principle is what matters. I promise I will show a Unix example later.

My approach was first to understand the program. Because it is a DOS-based program (that is a choice made by the developer, who goes to great lengths to justify it; it really boils down to: it works nicely and does not need many resources to run), it is placed in a directory which is then exported to the machines that are authorized to mount it (sounds familiar? Can you say NFS + NIS/LDAP? I bet you can). Interestingly enough, for it to work, the entire system disk of the server machine that hosts the program must be exported, with permissions set so that everybody who accesses the program can read and write to the disk. Why? I do not know, but I do think that is a security issue. But, I digress...

When the client said he wanted to be able to continue working in case of a system crash, what he really meant was that if the server crashed, he would be able to go to another PC in his network and run off the backup copy. So, it was decided the most convenient backup medium was a USB hard drive formatted with one NTFS partition. To keep this quite Windows-centric, we chose robocopy, which is found on the Microsoft website, as the underlying program, and we created a little script (or batch file, for those of you who are anal about terminology) that would be called to do our daily backups.

:: Backs up a single directory from one location to another.  Since we are using robocopy,
:: the locations can be local or network drives.
@echo off

:: What to copy and where to copy to
set dirname=shopmanager
set sourcepath=z:\
set targetpath=f:\
set sourcedir=%sourcepath%%dirname%
set targetdir=%targetpath%%dirname%

:: Just to be on the safe side, I am defining where robocopy has been 
:: installed. BTW, this is the default path.
set COPYPATH=C:\Program Files\Windows Resource Kits\Tools\robocopy
set COPYARGS=/e /copyall

:: Add today's day number to targetdir so each day of the month gets its
:: own copy. Note that %date:~7,2% depends on the system's date format.
set today=%date:~7,2%
set targetdir=%targetpath%%dirname%_%today%

:: COPYPATH contains spaces, so it must be quoted
"%COPYPATH%" %sourcedir% %targetdir% %COPYARGS%
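Since I promised a Unix example, here is a minimal sketch of the same idea using rsync. All the paths are made up for illustration, and a real script would want some error checking:

```shell
#!/bin/sh
# Hypothetical Unix counterpart of the batch file above: mirror one
# directory into a day-stamped copy on the backup drive.
dirname=shopmanager
sourcedir=/mnt/server/$dirname
targetbase=/mnt/usbdisk/$dirname

# day of the month, so the copies rotate monthly (same idea as the
# %date:~7,2% trick in the batch file)
today=$(date +%d)
targetdir=${targetbase}_$today

mkdir -p "$targetdir"
# -a preserves permissions, times, and so on; the trailing slash on the
# source means "copy the contents of this directory"
rsync -a "$sourcedir/" "$targetdir/"
```

Drop it into cron and you get the same hands-off daily backup.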


Wednesday, July 12, 2006

On Windows

Some people might think that I am going to write another post bashing Microsoft. Actually, I am not. To understand why Microsoft does things the way they do, one must understand their motives. As I am not Mr. Spock and live far from Redmond, I can only conjecture. Here is my take: Microsoft is a company. As any good company in a capitalist society, its objective is to make money. Nothing wrong with that; I myself like the good ol' little green pieces of paper. But what does that mean? Well, they have to make choices based on cost. Take bug tracking. I would imagine they have a list of bug reports this big. But which bugs get taken care of first and which ones stay on the pile? It has to do with the perceived cost and benefit of solving that specific problem. Sometimes the bug is major but it would cost a lot of man-hours to solve and is currently not causing enough costly problems to be put on the top of the list, so it bubbles down the pile until that changes.

Outraged? Other industries do the very same thing. Take automakers, just to pick one. How do you think recalls come about? Many recalls cost a lot, so the company compares how much it costs to do the recall vs. how much it would cost to take care of any litigation caused by the problem. The same goes for computer companies. If a given security hole is not seen as causing enough damage, financially speaking (bad rep is indirectly related to financial concerns, so...), it may take a bit to get solved. Remember those companies do not have an unlimited budget or number of programmers they can throw at a problem. So, they have to make a choice. Microsoft's Bill Hilf said that his employer designs Windows to reach a broad range of customers. As a result, it may not cover specialized markets with specialized issues. So, that bug you are so concerned about may not be seen as having enough of a broad reach to be placed on the top of Redmond's todo list. They have to pick their battles and priorities.

Open source software is a different bag of cats altogether: since you have the source code, you have the opportunity to take care of that bug yourself and then submit the source code changes. Of course that could lead to some source control issues, but the point is that if open source code is not doing exactly what you want, you can do something about it instead of complaining that the program sucks.

Wednesday, May 10, 2006

The case for /export/home

Sun really wants us to place user home directories in /export/home instead of /home, which is the traditional location that even Linux adopts. I know I am sometimes biased towards Solaris, but this time I think they actually make sense. Here is the reason: let's say you are going to export the homespace to other machines, say, using NFS. Now, most Unix/Linux installations and programs assume the user account is in /home/user (I know, I know, you can get the path if you ask the OS nicely, but if you are lazy that is still a good assumption). If you are automounting to a Linux box, you can set your auto.home to put it right where it is expected. But what if someone wants to log in to the server machine itself? Easy: have it automount the shares it is exporting, using its auto_home (yes, Solaris chose to rename that file; do not ask me why, I guess they wanted to be cute), right back to /home! This way, the experience as far as the user is concerned will be the same (the path in the prompt will be the same, and any code that unfortunately assumes the absolute path is /home/user will work).
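A sketch of how that could look on the Solaris server, assuming the homes live in /export/home on a server named obelix (the names follow the earlier posts; the wildcard/ampersand pair is the standard automounter idiom):

```
# /etc/auto_home (Solaris) -- '*' matches any user name, and '&' is
# replaced by the matched key, so /home/bob maps to obelix:/export/home/bob
*   obelix:/export/home/&
```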

Tuesday, May 02, 2006


One of the main problems I will have with setting up a zone for mail is that postfix wants to install aliases in /sbin. So, I cannot make a zone that inherits parts of the global zone's OS (a sparse zone). It is a bit easier to create an empty (whole-root) zone and install all the packages I need in it. What I have in mind, then, as my first test, is to put the entire mail zone on an external drive. For now that means a 33GB external USB drive I have. I will probably set it up as a LOFS mount because that preserves the filesystem namespace, and to the local zone the path will start at the zone's root. Ok, I am not making sense yet. I will lay my thoughts out a bit more clearly later on.
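To make the LOFS idea a bit more concrete, here is a hypothetical zonecfg session adding a loopback mount to a zone; the zone name and both paths are made up:

```
# zonecfg -z mailzone
zonecfg:mailzone> add fs
zonecfg:mailzone:fs> set type=lofs
zonecfg:mailzone:fs> set special=/extdrive/mail
zonecfg:mailzone:fs> set dir=/mail
zonecfg:mailzone:fs> end
zonecfg:mailzone> commit
zonecfg:mailzone> exit
```

From inside the zone, the external drive's /extdrive/mail then simply shows up as /mail, which is the name-space preservation mentioned above.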