In the June 2006 issue of Sys Admin magazine, I discussed some commonly used as well as a few more advanced features of the automounter. One of these advanced features is the "autofs" file system that makes the automounter possible. I illustrated the dynamic use of the "autofs" file system in conjunction with the -fstype mount option. This combination can support VOB automounting by using cascading indirect automount maps.
The article in the June issue provided a foundation
and reference for this technique and included excellent diagrams to
illustrate how cascading works. I recommend starting there if you are not
familiar with this concept. In this follow-up article, I will dig deeper
into the unsupported but useful marriage of ClearCase and the automounter,
building on more than 2 years' experience using this method in a global
environment. In this article, I will provide all the information needed to
implement the VOB (Versioned Object Base) automounter solution.
Terminology
I use the term "autofs" many times in this
article, and I've made every attempt to make it clear which
"autofs" I am speaking of. First, there is the Linux
automounter package that supports a Sun-like indirect-map syntax. Second,
there is the file system type called "autofs". The
"autofs" file system is what makes the automounter possible.
When I am speaking of the file system, I will address it as such.
I also use terminology specific to ClearCase. I
provide no definitions and am confident that ClearCase administrators will
know what I am writing about. When I refer to Linux, I am specifically referring to Red Hat Linux distributions from
about 7.2 through RHEL3-U6 (Enterprise Linux) with which I have experience.
The information should apply to any Linux distribution that uses the Linux
"autofs" package (versions 3 and 4).
Overview
I have divided the VOB automounting topic into five
logical sections that facilitate quick reference to specific points a
reader might be looking for. Here is a brief description of each section:
AUTOFS + MVFS Review -- I review the
indirect map syntax including the usage of the -fstype=autofs and -fstype=mvfs options and how this produces
cascading automounter maps. You will see how these features together can
support any array of VOB tags.
Use Models -- Your environment and
use model of ClearCase play an important role in its configuration. The
configuration of an automounter solution should be constructed with these
in mind. I discuss typical environments in generic terms of use models and
suggest pros and cons of the potential configurations.
Automounter Behaviors -- The
"autofs" and automounter of each operating system produce a
unique set of problems. Each behaves
differently while mounting an MVFS file system (ClearCase VOB). I point out
the trouble spots and possible solutions and
workarounds.
Cascading Map Creation -- In the
June article, I suggested a few tips about producing the required
cascading indirect maps for mounting a significant number of VOBs. When the
solution requires flexibility, the number of maps dramatically increases,
and an automated map creation tool is required. In this section, I include
further details for the configuration and setup of this solution.
Resources -- Web, man pages,
documentation, further reading, and research.
Autofs and MVFS Review
As a ClearCase administrator, you already know that
ClearCase is made possible by adding a new file system type to your system.
This new file system is named "mvfs". The "mvfs"
file system, like other file systems, requires actions such as mount and
unmount to take place. In this section, I will quickly review how to
configure "autofs" and "mvfs" to work together.
While the technique shown is not the only one, I think it is the most
flexible and reliable of those I have tested.
Disclaimer: IBM/Rational does not support the
automounter. However, I have used this solution for more than 2 years. It
has not solved every issue but has helped to overcome the resource,
performance, and limit problems of mounting all VOBs. I have also heard
through the support grapevine that IBM/Rational has been working on
"mvfs" improvements that will increase its interoperability
with the Linux "autofs" and/or the automounter. This is all
unofficial, of course, but it's a good sign that they are serious
about their customer requirements. One of the motivations for writing this
article is to share my experience not just with you but with IBM/Rational
as well. Perhaps it will add to their motivation to officially support the
automounter.
Next, I'll introduce a simple ClearCase site
example with only a few "mvfs" file systems. Each VOB or
"mvfs" file-system path (mount point) is defined by a
"tag". I have used tags in this example with varying directory
depths to show how one would use the "autofs" file system,
indirect maps, and "mvfs" together. These examples are legal
and common to Unix ClearCase environments.
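For the sake of illustration, assume a small site with the following six
VOB tags and matching global paths (all names here are hypothetical
stand-ins chosen to show varying directory depths):

/vobs/admin            /clearcase/dsk1/admin.vbs
/vobs/src/libs         /clearcase/dsk1/libs.vbs
/vobs/src/tools        /clearcase/dsk2/tools.vbs
/vobs/hw/shared        /clearcase/dsk2/shared.vbs
/vobs/hw/chipA/rtl     /clearcase/dsk3/rtl.vbs
/vobs/hw/chipA/verif   /clearcase/dsk3/verif.vbs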
In this example, I will use syntax specific to Linux,
but the Solaris format should be easy to extract from this information. I
cover this in more detail later in the installation section, including
using the output of "cleartool lsvob" to collect the required
data used to build the map set.
Using the above set of VOB tags and matching global
paths produces the following set of indirect map files (which could be
uploaded into NIS or LDAP, etc.). This syntax allows "autofs" and "mvfs" to work together.
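The sketch below shows the resulting maps for the hypothetical tag set
above. The map file names and the "file:" location form for nested autofs
mounts are assumptions that may need adjusting for your automounter
version; the Solaris equivalents differ slightly:

# /etc/automap/auto_vobs -- attached at /vobs in auto_master
admin   -fstype=mvfs    :/vobs/admin
src     -fstype=autofs  file:/etc/automap/auto_vobs_src
hw      -fstype=autofs  file:/etc/automap/auto_vobs_hw

# /etc/automap/auto_vobs_src
libs    -fstype=mvfs    :/vobs/src/libs
tools   -fstype=mvfs    :/vobs/src/tools

# /etc/automap/auto_vobs_hw
shared  -fstype=mvfs    :/vobs/hw/shared
chipA   -fstype=autofs  file:/etc/automap/auto_vobs_hw_chipA

# /etc/automap/auto_vobs_hw_chipA
rtl     -fstype=mvfs    :/vobs/hw/chipA/rtl
verif   -fstype=mvfs    :/vobs/hw/chipA/verif

Notice that one map exists for each unique directory in the tag set: four
maps serve these six VOBs.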
Once the set of indirect maps has been configured, it
will resemble Figure 1 while in operation. The VOBs will mount on demand
at their named mount points.
A quick analysis of this solution shows that the number of required
automounter maps can become unmanageable. This is driven by the number
of indirect maps needed when you
have many hundreds of diverse VOB tags. VOB tags should have local naming
standards placed on them to minimize the number of maps or meet other local
requirements; however, much flexibility in the name and number of
sub-directories in each tag should be allowed and supported. This
flexibility is what adds to the total number of indirect maps required. You
will quickly determine that one map is needed for each unique directory of
your complete VOB tag set. The most manageable solution is to write a
simple program (a good task for Perl) to convert your list of VOB tags
(output of "cleartool lsvob") into all of the required indirect
maps needed for your site. These maps can easily be scripted for automatic
and dynamic creation and updating into your service of choice.
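As a concrete sketch of such a generator, the ksh fragment below walks
each tag and appends the cascading entries. The map directory, the
file-naming scheme, the assumption that all tags live under /vobs, and
the nested-map location form are all illustrative and should be adapted
to your site (the author suggests Perl; any language will do):

#!/bin/ksh
# Sketch: convert "cleartool lsvob -short" output into cascading
# indirect automounter maps. Adjust MAPDIR, the naming scheme, and
# the nested-map location syntax for your platform.
MAPDIR=/etc/automap
rm -f ${MAPDIR}/auto_vobs*            # rebuild the map set from scratch

cleartool lsvob -short | while read TAG ; do
    REL=${TAG#/vobs/}                 # path components below /vobs
    MAP=${MAPDIR}/auto_vobs
    DIR=/vobs
    # Each intermediate directory cascades into another indirect map.
    while [ "${REL#*/}" != "${REL}" ] ; do
        KEY=${REL%%/*} ; REL=${REL#*/}
        grep -q "^${KEY} " ${MAP} 2>/dev/null ||
            echo "${KEY} -fstype=autofs file:${MAP}_${KEY}" >> ${MAP}
        MAP=${MAP}_${KEY} ; DIR=${DIR}/${KEY}
    done
    # The last component mounts the VOB itself via MVFS.
    echo "${REL} -fstype=mvfs :${DIR}/${REL}" >> ${MAP}
done

Run against the six-tag example above, this produces the four maps shown
earlier.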
The installation section of this article covers how to
use this information to complete the setup of this automounter solution.
The Use Model's Role
In this section, I will be referring to Linux and
Solaris systems. HP systems can be included in this set as well. ClearCase
is widely used on WinTEL systems; however, an automounter in conjunction
with "mvfs" is of little use on WinTEL. The default ClearCase mode of
operation on WinTEL is to mount only the VOBs that were last mounted, and
then only if the user requests this action.
This is a very nice behavior. WinTEL also is most likely to serve only one
user at a time when ClearCase is involved. Linux and Solaris systems are
used as workstations and in multi-user environments. When ClearCase is used
in these environments, system resources are stressed. In large ClearCase
environments, available system resources and performance buckle under the
load of design requirements, users, and the default ClearCase configuration
that mounts all VOBs.
The design environment, whether for software, hardware, or both, runs on
systems that fall into two broad categories: those that allow direct
login, and those accessible only via the job-dispatching software used to
manage pools or clusters of systems. These
systems may also be classified as standalone (such as desktop workstations)
or shared servers (where groups of users or teams login to work on common
projects). In your environment, you can divide systems into similar
classes. In each case, the deployed solution should make distinct classes
of systems look the same and produce the same reliable service.
Dedicated Systems
Systems such as designer-dedicated workstations
provide or allow the most flexibility. Often the solution chosen can be
selected by the designer. Keeping the designer in the decision process
helps empower him or her to be more productive. Each designer's use
model, tools, preferences, and desired system behavior should be taken into
account for each system. Typically, there are three configurations to
deploy.
The three configurations are described as:
1. Mounting of all "mvfs" file-systems at
boot is disabled, and the VOB automounter is installed.
2. Mounting of all "mvfs" file-systems at
boot is disabled but no VOB automounter is configured. Users may find this
solution useful. The user would simply run a small script to mount the
small VOB set, in some cases only one.
3. A ClearCase region is created for a class or subset of systems, with or
without the automounter solution. This option, however, creates additional
ClearCase administration and slows performance of the Registry server,
because tag counts for objects in multiple regions enlarge the Registry
server database.
Note: Some ClearCase commands download the entire
Registry DB (or nearly all of it), and the client does the filtering
before presentation -- perhaps only a few bytes are kept for presentation
while the remainder is tossed. In some cases this happens more than once
for a command. Using additional regions to segment clients only serves to
turn a performance issue into a real problem for the largest of sites. This
issue alone can bring the Registry server to its knees.
Shared Resources -- Compute Farms
Large and platform-diverse compute farms are
mandatory. Many design problems are solved by these large compute
environments. When ClearCase is involved, making the environment as sleek
and productive as possible is essential. I have indicated more than once
that the ClearCase "mvfs" file system can produce resource
strain and performance problems indirectly. One form of indirect
performance loss is caused by EDA tools (more than one behaves this way)
that stat every mounted file system, including ClearCase, even when the
task taking place is not using ClearCase at all. In this
environment, the mounting of "mvfs" file systems should be
limited only to those currently or recently in use. The use of an
"mvfs" automounter solution in these environments is very
attractive. Of all the environments, the cookie-cutter environment of the
compute farm, with its automated server setup (Kickstart, Jumpstart,
post-installation, and configuration scripts, etc.), makes installation
and setup of the automounter solution the simplest.
Software vs. Hardware Design
Design environments produce intellectual property
(IP) and, where computer systems are used to create this IP, it normally
comes in some form of an electronic file format. Often we refer to these IP
data files as software (source/code) or hardware (circuit descriptions,
verification, etc.). What makes each file type
special is its size, the applications used to produce it, and the resources
needed to execute the design flow.
Digital hardware designers typically require the
largest compute and file resources for long-running simulations and circuit
description databases, while analog designers have many shorter-running
jobs, such as SPICE runs. Software designers use the smallest files and run
builds on source code. Each has different needs for shared systems and
system behaviors. Each set of tools and use model will interact with
ClearCase and the automounter solution differently. Each tool and flow
should be tested to determine whether automounting "mvfs" is
compatible. I realize I have over-simplified these classes of designers,
but the point is that each class of designer has a unique set of
requirements that interacts with the environment differently. It is very
important to understand these differences when introducing the VOB
automounter technique I propose here.
Recap of Configuration Points
You may find that on some systems the
best option is to just turn off the mounting of all "mvfs" file
systems and leave it to the designer to mount a small required VOB set.
This option works well on dedicated desktop workstations.
When designers do not want to manually
mount a VOB set, the "mvfs" automounter solution is ideal.
Another time that it's useful is when tools and flows expect specific
resources to be available and the exact VOB set may not be known to the
designer. This is often true of systems in compute farms where the
application being run is somewhat random based on assignment from a queue
of batched jobs from a diverse set of designers.
Nearly all
shared systems such as compute-farm resources will need the
"mvfs" automounter solution. You may find additional
automounter behaviors that help to segment systems into classes based on
the type of design work required.
In the next section, I discuss the behaviors of
automounting "mvfs" file systems. Consider these behaviors as
well as the points made in this section when determining the right solution
for a group of designers and their systems. For instance, special attention
should be given to the behavior of "clearmake" while using an
"mvfs" VOB automount solution.
Behaviors -- Differences between Linux and Solaris
The Linux automounter, in conjunction with the rest of the environment,
can be frustrating to support. Sites that have any
significant experience with clusters (large numbers of Linux blade servers)
know that the combination of Linux and EDA tools is a mixed bag of
blessings. We employ this type of system resource for two primary reasons
-- cheaper hardware and faster performance. A number of EDA tools are
resource intensive and regularly cause system failures. This resource
intensity is inherent in the nature of today's hardware design
environment and only grows with each passing month of design complexity.
Adding ClearCase "mvfs" to this mix only
adds to the challenge. Large multiple-thousand Linux node server farms may
expect as many as half a dozen systems a day to hang or crash depending on
the EDA tool types, frequency of use, and other system demands. Typical
causes are remote resources (including NFS and ClearCase "mvfs"), the EDA
tool itself, or kernel resource depletion. In some cases the system has
not hung, but portions of the automounter quit responding and no longer
service new mount requests, potentially making the system
useless to new jobs. I am not convinced that any one piece of this
environment is at fault. Often it is simply how a designer used a tool and
exploited a combination of weaknesses.
To Browse or Not to Browse
By default, the Solaris automounter provides
"autofs" mount directory browsing. Most designers find this
useful on systems that provide direct login. Often the designer finds the
desired "mvfs" resource by changing directory within the
directory hierarchy produced by the cascading set of indirect automount
maps. The Linux "autofs" automounter did not support browsing
until recently, and browsing is turned off by default. I have not tested its
compatibility with the VOB automounter solution in our environment; I
prefer to use its default behavior.
When browsing is turned off, designers may find it
difficult to find what they are looking for. You should be prepared to
explain the system more than once as a designer may forget and ask,
"Why can't I see my VOB in the directory?" Batch resource
servers within server farms are less of an issue, and turning off browsing
is likely to improve reliability and performance over time. In this case,
jobs are batched and paths are predefined and not needed for
"interactive looking".
Intermittent Mounting
With VOB automounting enabled, a small number of your compute-farm
designers who use ClearCase may see jobs fail because their VOB did not
mount when the job started. This happens on Solaris when the VOB was not
already mounted before the job starts. The desired behavior is to have
the VOB mount the moment the job attempts to change directory into the
VOB. Issues arise when the batch script changes directory not to the top
level of a VOB but immediately into a sub-directory below the top of the
VOB. The change of directory fails, causing the job to fail; the mount of
the VOB does take place, however, and a second attempt at the same
directory will work.
Two solutions have been provided to our designers.
The first technique is annoyingly simple, but it works. Changing directory
to the top-level VOB directory (tag path) first and then into any
sub-directory provides satisfactory results. The second solution involves
using your compute-farm management software. This software should provide
for pre-execution scripts that verify impending job resources are
available, including VOBs, and that mount them as needed.
This system of pre-checking also fixes other
system issues and diverts jobs to working systems.
Pre-checking is the preferred solution and is deployed at our
sites where possible.
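A minimal pre-execution hook might look like the sketch below; the
argument convention (the VOB tags a job needs) and the way your queueing
system invokes the hook are site-specific assumptions:

#!/bin/ksh
# Pre-execution check (sketch): make sure each VOB tag given as an
# argument is mounted before the job body runs. A cd to the top-level
# tag triggers the automounter; an explicit mount is the fallback.
for TAG in "$@" ; do
    if cd "${TAG}" 2>/dev/null ; then
        cd /                          # the cd above triggered the mount
    else
        cleartool mount "${TAG}" || exit 1
    fi
done
exit 0                                # all resources present; run the job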
Linux has not produced this specific behavior. This
may be due to its hyper-sensitive ability to detect possible mount
requirements. However, this hyper-sensitivity can cause a different set of
problems as described next.
Hyper-Sensitive Automounter
The Linux automounter can be a bit over-sensitive.
Occasionally, the act of doing a simple "list directory long" (ls -l) on
an "autofs" file system causes the map keys to magically appear
and mount all the "mvfs" file systems referenced in the
attached map. This same behavior can cause all VOBs to mount when using at
least one ClearCase command: cleartool lsvob, which lists available
"mvfs" file systems.
While this command lists available "mvfs" file systems by VOB tag, it
checks whether each VOB is mounted using a stat call; a mounted VOB is
indicated by an asterisk preceding its tag in the listing. On Linux,
however, the stat alone is sufficient to trigger a mount of the VOB and
thus defeats the automounter by once again mounting "all" public "mvfs"
file systems placed under its control.
We work through this issue with a multi-pronged
approach. To begin, we ask users not to use the cleartool lsvob sub-command without
using the -short or -long options. The short option lists only the tags,
runs much faster, and provides what is typically the only information the
user cares about. The long option is slow and lists lots of information,
but it does not stat each VOB, so mounts are not
triggered. This is normally not an issue on server farms where batch jobs
have no need to run cleartool lsvob without the -short option.
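In practice, we steer users toward these two invocations:

% cleartool lsvob -short          # tags only; fast, no per-VOB stat
% cleartool lsvob -long           # verbose, but also avoids the stat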
Auto Unmount or No Unmount
In a typical environment users would mount or unmount
"mvfs" file systems using cleartool sub-commands. This is still possible when the VOB
automounter is installed and manages these "mvfs" file systems.
This is very useful. The Linux automounter is quick to mount a VOB, but it
is not quick to unmount it. By default, a mounted "mvfs" file
system always looks busy to the Linux "autofs" automounter.
Over time, the number of mounted "mvfs" file systems will grow.
On Linux systems, we use a scheduled script (run on a per-client random
schedule) to find and unmount unused "mvfs" file systems. This
tool should run during off hours if there is such a time in your shop. It
is also important that this job not run on every client at once; it
should be randomly staggered.
The behavior of not unmounting VOBs may sound
incompatible with your requirements, but I recommend that you defer your
decision until you read about "clearmake". Solaris does not
produce this behavior by default. When a VOB is truly not busy on Solaris,
where "not busy" is defined as no process sitting within a
given VOB directory, the automounter can and will unmount the VOB
successfully. Be sure to read about "clearmake" and issues with
automounted VOBs.
When you consider developing a script to clear unused
"mvfs" file systems from Linux systems, make sure you test your
solution thoroughly. At a glance, you could create a simple script that
runs the cleartool umount -all command as root to make sure the job gets done.
However, this action could produce undesirable results. When run as root,
some or all versions of ClearCase will also unmount the /view
"mvfs" file system, thus rendering ClearCase useless or broken once it
succeeds in unmounting all "mvfs" file systems.
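The sketch below shows the shape of a safer per-client cleanup. The
one-hour stagger window and the reliance on /proc/mounts are assumptions
for a Linux client; cleartool will refuse to unmount any VOB that is
genuinely busy:

#!/bin/ksh
# Scheduled VOB cleanup for Linux clients (sketch). Unmounts VOBs one
# tag at a time rather than using "cleartool umount -all" as root,
# which can also unmount /view.
sleep $(( RANDOM % 3600 ))            # stagger clients across an hour

# Pick mounted MVFS file systems out of the kernel mount table,
# skipping the /view viewroot, and unmount each by its tag path.
awk '$3 == "mvfs" && $2 != "/view" { print $2 }' /proc/mounts |
while read TAG ; do
    cleartool umount "${TAG}"
done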
Source Building with "clearmake"
Software designers using ClearCase may use the
"clearmake" tool. It is an IBM/Rational tool and is part of
ClearCase. In short, it provides some ClearCase performance advantages
during compiling and the management of derived objects (things like .o
files). It does this in a number of ways. One of clearmake's actions
is important to this discussion. During software builds it is common to use
more than one VOB. This can be an issue on a Solaris system while using the
automounter solution.
At the start of a build, the user has normally checked and confirmed that
all required VOB resources are mounted. For performance, the clearmake
command can and does bypass certain file system access methods and
communicate with ClearCase servers directly. Such activity does not
constitute a "BUSY" VOB. If the user's shell or some
other process is not sitting atop a VOB that is otherwise in use by
"clearmake", the VOB can become unavailable when the
automounter attempts to unmount the VOB and succeeds. The clearmake command will then
summarily fail further access to the VOB. Clearly, this is not a desirable
behavior. In this case the Linux automounter has some advantage. Later I
provide a solution to make a Solaris system behave the same as Linux and
always make a VOB "look" busy. In both cases, the cleartool umount command can be
used to unmount a VOB even when the automounter cannot. If a VOB is truly
busy, cleartool umount will continue to work correctly and not unmount the
requested VOB.
Additional Installation Notes
The installation of the VOB automounter solution
requires more than just maps. Below, I explore a few installation details
as well as the art of map making.
Disabling Mounting of All VOBs
Part of the VOB automounter solution requires the
disabling of all VOB mounting at boot time. The normal behavior of the
ClearCase startup script is to mount all "public" VOBs. This
script must be edited to change the default behavior. To begin, find the mount -all command and
disable it. The startup script is located at
"/opt/rational/clearcase/etc/clearcase".
When the VOB automounter is enabled and correctly
configured, a user needs to enter into a ClearCase view (set into a view)
to use a VOB. Once one is set into a view, it is only a matter of changing
directory to one of the defined VOB tags to automatically trigger its
mounting. The example I illustrated previously
is rather simple and, if you have only six VOBs, there may be no reason for using the automounter. However, I know
from experience that it is not hard to have many hundreds of such
"mvfs" file systems in a development environment, and the automounter quickly becomes necessary or at least
helps make the problem smaller.
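For example (the view name here is hypothetical):

% cleartool setview my_view        # set into a view first
% cd /vobs/src/libs                # the first reference triggers the VOB mount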
Blocking Automounter Unmount Requests of "mvfs"
Inhibiting the automatic unmounting of the
"mvfs" file system on Solaris is a simple task. Each supported
file system has a directory that includes some of the binaries that provide
support for the matching file-system type. This is also true of
"mvfs" once "mvfs" support is installed as part of
a ClearCase installation. The "mvfs" file system support
directory on Solaris is "/usr/lib/fs/mvfs/". The typical
support commands found in such a directory are "mount" and
"umount" and would be used to take these actions on the
matching file-system type. In the case of "mvfs", however, no
"umount" is provided in this directory. ClearCase does not require
an additional "umount" support command for normal operation.
The behavior of unmounting can be changed by adding a
custom "umount" command. One behavior we might control is the
automounter's request to "umount" a VOB. The automounter's request can be
blocked while a user-requested unmount is granted. This is done by
testing the current working directory and either returning EBUSY or
executing the "umount" to complete the request. The version I
wrote uses the current working directory to know where the request came
from. It is also fewer than 30 lines of code and should be reproducible by
a good systems programmer. Such a program would be installed as
/usr/lib/fs/mvfs/umount on Solaris. If the source to my version is
released, it can be pulled from the Sys Admin Web site.
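The ksh sketch below captures the idea. The test (automountd appears to
invoke helpers with a working directory of "/") is an assumption about
this environment rather than documented behavior, and the grant path
defers to cleartool; if unmount requests on your release route back
through this helper, the grant path must issue the unmount directly
instead:

#!/bin/ksh
# /usr/lib/fs/mvfs/umount (sketch): refuse automounter unmount
# requests; grant user-requested ones.
if [ "$(pwd)" = "/" ] ; then
    exit 16          # look "busy"; the automounter leaves the VOB alone
fi
# A user request: let ClearCase perform the real unmount. The mount
# point is assumed to be the last argument, as with other fs helpers.
eval TAG=\${$#}
exec /opt/rational/clearcase/bin/cleartool umount "${TAG}"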
Is the number of VOBs at your site growing unchecked?
I believe a modified version of the "umount" blocker could be used to log
data to a DB. Such data could help identify unused "mvfs" file systems
and might drive VOB archival if local policy allows such an action.
Enabling "mvfs" Automounting on Solaris
The installation of the automounter solution will not
work by default on Solaris. Creating a set of automounter maps and
configuring the automounter is not enough. Attempting to trigger a VOB
mount using the automounter or explicitly trying to mount the VOB using the
"cleartool mount" sub-command will return an ugly error
message. The Solaris automounter passes a mount option that is incompatible
with the "mvfs" mount command. The MVFS mount command does not
drop or ignore the bad option but rather fails the mount request with an
error code. This can be corrected using the following script and
installation steps. Perform these steps as root:
1. Change directory:
# cd /usr/lib/fs/mvfs
2. Remove the link named "mount", but
remember where it pointed to:
# rm -f mount
3. Copy the mount_mvfs command locally (the file the link was pointed to
before):
# cp /opt/rational/clearcase/etc/mount_mvfs ./
4. Install the mount wrapper script:
# vi mount
...
# chmod 0555 mount
The mount wrapper script is shown in Listing 1. I would like to give
credit to Chris Barrera of Texas Instruments, the original author of this
script, who developed an automounter solution using a single direct-map
style that worked on Solaris. It was a very good solution but did not
support other platforms. I have included this code with his permission.
Improved Reliable Mounting
While testing and observing the Solaris mount helper
script, I used a version that logged every invocation and its arguments. I
found that there was much inconsistency in the number and type of arguments
passed to the mount command. I also observed that the frequency of certain
options differed between Linux and Solaris. Most of this is not
particularly interesting, except that one option that regularly did not
show was the VOB UUID. By experimentation, I found that the only argument
that could be used by itself was the VOB tag.
The VOB tag alone apparently works because the mount
command can query the Registry server for the remaining needed information
including the global path. I speculate that the mounting process might be
improved if the automounter maps were populated with the UUID information
as a mount option and this data were always passed to the mount command.
None of this has been tested; it is only an assumption. To the naked eye,
mounting appears faster
when the UUID is supplied. It is not clear why the number or types of
options that are passed to the mount command are so irregular. This would
be a good topic to take up with IBM/Rational support. We will explore this
more as we look for ways to further improve performance in our environment.
Map Creation and Maintenance
The creation of the needed indirect automounter maps
should not be a manual task subject to errors during hand editing. At the
time of this writing, my site employs nearly 100 dynamically created,
destroyed, and maintained indirect maps that support the VOB automounting
solution. Sounds ugly, does it not? This is just at my site. Other sites
have more or fewer maps, but the process is all automated.
As I suggested in my previous article, there are two
schools of thought on how to distribute these maps. The first is
to use the local service of choice. Any resource the automounter supports
is suitable. With a little work, all of the maps could be placed in NIS,
NIS+, or LDAP, just to name the most common. This should be tested with a
significant number of clients to ensure the added tables and clients will
not overload your servers. A second issue is how to deal with the
auto-creation of new maps as VOB tags demand; this could be manual or
automatic. A tool for uploading would be custom to your site and is
therefore left as an exercise.
We have sites that have chosen to upload all required
maps into their local service, such as NIS. They use a tool called DMAPD
(Dynamic MAP Daemon). This tool will read input from one or more sources
(e.g., files, NIS, NIS+, LDAP, exec-map) and produce a set of indirect maps
suitable for the host on which it is executed. You will recall that the
indirect map syntax on Linux has minor differences. These differences are
understood and compiled into the tool when it is built for each platform.
After map creation, these sites upload the maps into the local
service NIS, LDAP, etc. I think they have automated the process of adding
new maps into the local service; if not, it is a manual process. The DMAPD
tool only creates the maps; it cannot upload
them, because uploading is site-specific.
The second school of thought of map delivery uses a
single automounter map. This map is installed in a file, NIS, NIS+, or
LDAP, etc. The DMAPD used above is installed as a daemon (service) on every client. At boot time, the daemon is started and periodically maintains a set of local
map files (only those needed for mounting "mvfs") from the
configured source. The auto_master, regardless of delivery method, includes
one entry for each of the top-level cascading indirect maps. The best way
to understand this is to read the June 2006 article and the review at the
beginning of this article. An example of an auto_master entry would
resemble the following:
/vobs /etc/automap/auto_vobs
The source map is never directly used by the
automounter and is only placed in a service like NIS as a delivery method
for the DMAPD installed on every client system. The format of the source
map would only be useful on Solaris (not important because we will not use
it this way). The DMAPD converts this file into as many maps as required
for use by the local system. This source map should include one entry for
every PUBLIC VOB tag in the client's default ClearCase region. I have
provided an example of one such entry. The entry is made up of the VOB tag
followed by the defined "global path". Do not forget the
leading colon ":" before the "global path". It is
stored in the source map as a key and value:
/vobs/sub1/myVob :/clearcase/dskX/grpY/projZ.vbs
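From that single entry, the conversion emits one cascading entry per
directory level plus the MVFS leaf, along these lines (the map file names
are illustrative):

# in /etc/automap/auto_vobs (attached at /vobs):
sub1    -fstype=autofs  file:/etc/automap/auto_vobs_sub1

# in /etc/automap/auto_vobs_sub1:
myVob   -fstype=mvfs    :/vobs/sub1/myVob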
Anyone handy with Perl could make light work of this
map conversion. The advantage of DMAPD is that it has built-in support
for the typical sources the automounter itself can read. The code set
is not huge, but it is not small either, and there is also a great deal
more to say about its installation and configuration. It may be useful to
save this topic for a dedicated article.
Resources and Self-Help
The autofs source RPM is an excellent source of
information. I found that reading the source provided the definitive guide
to how autofs really works on Linux. Here are a couple of URLs to the source:
Here is the list of the autofs versions that came with
specific older Red Hat releases:
Red Hat 7.2 autofs-3.1.7-21
Red Hat 7.3 autofs-3.1.7-28
Red Hat 8.0 autofs-3.1.7-33
Red Hat 9.0 autofs-3.1.7-36
Autofs-4.xxx was included starting with RHEL3-U3.
Table 1 shows the location of specific autofs files when installed. A
fresh list of autofs files on your system may be obtained using the rpm
command:
using the rpm command:
rpm -q -l autofs
Table 2 provides the location of Solaris automounter files and other
useful resources.
Wrap-Up
I hope you find this information helpful. I have seen
many references to the use of autofs within indirect maps on the Internet.
Even so, I believe it is often overlooked as the useful and powerful
feature that it is. I also think that most administrators do not know that
this technique exists. I hope I have been instrumental in creating more
awareness of the use of autofs.
Victor served 4+ years in the USAF servicing
electronic equipment for the Airborne Command Post (missile launching and
satellite communications systems). During his 21+ years of employment at
Texas Instruments, he has been an ASIC Designer, Programmer, and Unix
Network Administrator. He has also been involved as a BSA (Boy Scouts of
America) leader and Merit Badge counselor for more than 18 years. Victor
thanks his wonderful wife and six children for their support in all that he
does. Victor can be reached at: [email protected].
Figure 1 autofs and mvfs working together
Listing 1 Mount helper
#!/bin/ksh
# Strip the automounter-supplied "-q" option, which the MVFS mount
# command rejects, and pass the remaining arguments through to the
# copy of mount_mvfs saved in this directory (see step 3 above).
NEWARGS=''
for i in "$@" ; do
    if [ "${i}" != '-q' ] ; then
        NEWARGS="${NEWARGS} ${i}"
    fi
done
# NEWARGS is intentionally unquoted so it re-splits into words.
exec /usr/lib/fs/mvfs/mount_mvfs ${NEWARGS}
#END
Table 1 Location of autofs files when installed
Table 2 Location of Solaris Automounter files and other useful resources