IBM/Rational ClearCase VOB Automounting
Victor Burns
In the June 2006 issue of Sys Admin magazine, I discussed some commonly used features of the automounter as well as a few more advanced ones. One of these advanced features is the "autofs" file system, which makes the automounter possible. I illustrated the dynamic use of the "autofs" file system in conjunction with the -fstype mount option. This combination can support VOB automounting by using cascading indirect automount maps.
The article in the June issue provided a foundation and reference for this technique and included excellent diagrams to illustrate how cascading works. I recommend starting there if you are not familiar with this concept. In this follow-up article, I will dig deeper into the unsupported but useful marriage of ClearCase and the automounter, building on more than 2 years' experience using this method in a global environment. I will provide all the information needed to implement the VOB (Versioned Object Base) automounter solution.
Terminology
I use the term "autofs" many times in this article, and I've made every attempt to make it clear which "autofs" I am speaking of. First, there is the Linux automounter package, itself named "autofs", which supports a Sun-like indirect-map syntax. Second, there is the file system type called "autofs". The "autofs" file system is what makes the automounter possible. When I am speaking of the file system, I will address it as such.
I also use terminology specific to ClearCase. I provide no definitions and am confident that ClearCase administrators will know what I am writing about. When I refer to Linux, I am specifically referring to the Red Hat Linux distributions, from about 7.2 through RHEL3-U6 (Enterprise Linux), with which I have experience. Users of any Linux distribution that uses the Linux "autofs" package (versions 3 and 4) should find the information applicable.
Overview
I have divided the VOB automounting topic into five logical sections that facilitate quick reference to specific points a reader might be looking for. Here is a brief description of each section:
- AUTOFS + MVFS Review -- I review the indirect map syntax, including the use of the -fstype=autofs and -fstype=mvfs options and how this produces cascading automounter maps. You will see how, together, these features can support any array of VOB tags.
- Use Models -- Your environment and ClearCase use model play an important role in its configuration, and an automounter solution should be constructed with these in mind. I discuss typical environments in generic terms of use models and suggest pros and cons of the potential configurations.
- Automounter Behaviors -- The "autofs" file system and automounter of each operating system present a unique set of problems, and each behaves differently while mounting an MVFS file system (a ClearCase VOB). I point out the trouble spots and possible solutions and workarounds.
- Cascading Map Creation -- In the June article, I offered a few tips on producing the required cascading indirect maps for mounting a significant number of VOBs. When the solution requires flexibility, the number of maps increases dramatically, and an automated map-creation tool is required. In this section, I include further details on the configuration and setup of this solution.
- Resources -- Web, man pages, documentation, further reading, and research.
AUTOFS + MVFS Review
As a ClearCase administrator, you already know that ClearCase is made possible by adding a new file system type to your system. This new file system is named "mvfs". The "mvfs" file system, like other file systems, requires actions such as mount and unmount to take place. In this section, I will quickly review how to configure "autofs" and "mvfs" to work together. While the technique shown is not the only one, I think it is the most flexible and reliable I have tested.
Disclaimer: IBM/Rational does not support the automounter. However, I have used this solution for more than 2 years. It has not solved every issue but has helped to overcome the resource, performance, and limit problems of mounting all VOBs. I have also heard through the support grapevine that IBM/Rational has been working on "mvfs" improvements that will increase its interoperability with the Linux "autofs" and/or the automounter. This is all unofficial, of course, but it's a good sign that they are serious about their customer requirements. One of the motivations for writing this article is to share my experience not just with you but with IBM/Rational as well. Perhaps it will add to their motivation to officially support the automounter.
Next, I'll introduce a simple ClearCase site example with only a few "mvfs" file systems. Each VOB or "mvfs" file-system path (mount point) is defined by a "tag". I have used tags in this example with varying directory depths to show how one would use the "autofs" file system, indirect maps, and "mvfs" together. These examples are legal and common to Unix ClearCase environments.
In this example, I will use syntax specific to Linux, but the Solaris format should be easy to extract from this information. I cover this in more detail later in the installation section, including using the output of "cleartool lsvob" to collect the required data used to build the map set.
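For reference, each line of "cleartool lsvob" output supplies a VOB tag and its global path, which is exactly the data the maps need; a leading asterisk marks a VOB that is currently mounted. The output resembles the following (these lines reuse the example tags below and are illustrative only):

* /vtop/vob1       /vob_storage/vob1.vbs public
  /vtop/sub1/vob3  /vob_storage/vob3.vbs public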
A Set of VOB Tags
/vtop/vob1           :/vob_storage/vob1.vbs
/vtop/vob2           :/vob_storage/vob2.vbs
/vtop/sub1/vob3      :/vob_storage/vob3.vbs
/vtop/sub1/vob4      :/vob_storage/vob4.vbs
/vtop/sub1/sub2/vob5 :/vob_storage/vob5.vbs
/vtop/sub1/sub2/vob6 :/vob_storage/vob6.vbs

Using the above set of VOB tags and matching global paths produces the following set of indirect map files (which could be uploaded into NIS, LDAP, etc.). This syntax allows "autofs" and "mvfs" to work together:
# auto.master
/vtop file:/etc/auto_vtop

# /etc/auto_vtop
vob1 -fstype=mvfs :/vob_storage/vob1.vbs
vob2 -fstype=mvfs :/vob_storage/vob2.vbs
sub1 -fstype=autofs file:/etc/auto_vtop_sub1

# /etc/auto_vtop_sub1
vob3 -fstype=mvfs :/vob_storage/vob3.vbs
vob4 -fstype=mvfs :/vob_storage/vob4.vbs
sub2 -fstype=autofs file:/etc/auto_vtop_sub1_sub2

# /etc/auto_vtop_sub1_sub2
vob5 -fstype=mvfs :/vob_storage/vob5.vbs
vob6 -fstype=mvfs :/vob_storage/vob6.vbs

Once the set of indirect maps has been configured, it will resemble Figure 1 while in operation. The VOBs will mount on demand at the mount points so named. A quick analysis of this solution leads to the conclusion that the number of required automounter maps can become unmanageable, driven by the number of indirect maps needed when you have many hundreds of diverse VOB tags. Local naming standards should be placed on VOB tags to minimize the number of maps or to meet other local requirements; however, much flexibility in the name and number of sub-directories in each tag should be allowed and supported. This flexibility is what adds to the total number of indirect maps required. You will quickly determine that one map is needed for each unique directory in your complete VOB tag set. The most manageable solution is to write a simple program (a good task for Perl) to convert your list of VOB tags (the output of "cleartool lsvob") into all of the required indirect maps for your site; a sketch of such a generator appears in the "Map Creation and Maintenance" section below. These maps can easily be scripted for automatic and dynamic creation and updating into your service of choice. The installation section of this article covers how to use this information to complete the setup of this automounter solution.

The Use Model's Role

In this section, I will be referring to Linux and Solaris systems; HP systems can be included in this set as well. ClearCase is widely used on WinTEL systems; however, an automounter in conjunction with "mvfs" on WinTEL is not believed to be useful. The default ClearCase mode of operation on WinTEL is to mount only the VOBs that were last mounted, and then only if the user requested this action. This is a very nice behavior. A WinTEL system is also most likely to serve only one user at a time when ClearCase is involved.

Linux and Solaris systems are used as workstations and in multi-user environments. When ClearCase is used in these environments, system resources are stressed. In large ClearCase environments, available system resources and performance fail under the load of design requirements, users, and the default ClearCase configuration that mounts all VOBs.

Design work, whether for software, hardware, or both, is accomplished on systems that are divisible into two broad categories: systems that allow direct login, and systems that are accessible only via the job-dispatching software used to manage pools or clusters of machines. These systems may also be classified as standalone (such as desktop workstations) or shared servers (where groups of users or teams log in to work on common projects). In your environment, you can divide systems into similar classes. In each case, a solution should be deployed that makes distinct classes of systems all look the same and all produce the same reliable service.

Dedicated Systems

Systems such as designer-dedicated workstations provide the most flexibility. Often the solution can be selected by the designer. Keeping the designer in the decision process helps empower him or her to be more productive.
Each designer's use model, tools, preferences, and desired system behavior should be taken into account. Typically, there are three configurations to deploy:
1. Mounting of all "mvfs" file systems at boot is disabled, and the VOB automounter is installed.

2. Mounting of all "mvfs" file systems at boot is disabled, but no VOB automounter is configured. Users may find this solution useful. The user would simply run a small script to mount the small VOB set, in some cases only one VOB (a minimal sketch of such a script appears after the recap list below).

3. Creating a ClearCase Region for a class or subset of systems and using the automounter solution, or perhaps no automounter solution. This approach, however, creates additional ClearCase administration and slows performance of the Registry server, because the Registry server database grows with the tag counts for objects in multiple regions. Note: Some ClearCase commands download the entire Registry database (or nearly all of it), and the client does the filtering before presentation; perhaps only a few bytes are left to present while the remainder is tossed. In some cases, this happens more than once for a single command. Using additional regions to segment clients only serves to turn a performance issue into a real problem for the largest of sites. This issue alone can bring the Registry server to its knees.

Shared Resources -- Compute Farms

Large and platform-diverse compute farms are mandatory; many design problems are solved only by these large compute environments. When ClearCase is involved, making the environment as sleek and productive as possible is essential. I have indicated more than once that the ClearCase "mvfs" file system can produce resource strain and performance problems indirectly. One form of indirect performance loss is caused by EDA tools that stat every mounted file system, including ClearCase, even when the task taking place is not using ClearCase directly or indirectly. In this environment, the mounting of "mvfs" file systems should be limited to only those currently or recently in use. The use of an "mvfs" automounter solution in these environments is very attractive. Of all the environments, the cookie-cutter environment of the compute farm and its automated server setup (Kickstart, Jumpstart, post-installation and configuration scripts, etc.) makes the installation and setup of the automounter solution simple.

Software vs. Hardware Design

Design environments produce intellectual property (IP), and, where computer systems are used to create this IP, it normally comes in some form of electronic file format. Often we refer to these IP data files as software (source/code) or hardware (circuit descriptions, verification, etc.). What makes each file type special is its size, the applications used to produce it, and the resources needed to execute the design flow. Digital hardware designers typically require the largest compute and file resources for long-running simulations and circuit description databases, while analog designers have many shorter-running jobs, such as spice. Software designers use the smallest files and run builds on source code. Each has different needs for shared systems and system behaviors. Each set of tools and use model will interact with ClearCase and the automounter solution differently, and each tool and flow should be tested to determine whether automounting "mvfs" is compatible; a quick compatibility check is sketched below. I realize I have over-simplified these classes of designers, but the point is that each class of designer has a unique set of requirements that interacts with the environment differently. It is very important to understand these differences when introducing the VOB automounter technique I propose here.
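To make such a test concrete, here is a minimal Perl sketch, assuming a Linux client and a hypothetical VOB tag (neither comes from ClearCase itself): it triggers the automounter with a simple stat() and then confirms that an "mvfs" entry appeared in /proc/mounts.

#!/usr/bin/perl
# check_vob_automount.pl -- illustrative sketch only: trigger an automount
# of a VOB tag and verify that an mvfs mount appeared (Linux /proc/mounts).
use strict;
use warnings;

my $tag = shift || "/vtop/vob1";   # hypothetical VOB tag; pass your own

# A simple lookup on the mount point is enough to fire the automounter.
stat($tag) or die "cannot stat $tag: $!\n";

# Confirm the kernel now lists an mvfs mount at that path.
open my $fh, "<", "/proc/mounts" or die "/proc/mounts: $!\n";
my $mounted = grep { m{ \Q$tag\E mvfs } } <$fh>;
close $fh;
print $mounted ? "$tag is mounted via mvfs\n" : "$tag did not mount\n";

Running a check like this from a farm job slot, under the same environment the tools see, is a cheap way to catch automounter trouble before a long-running job does.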
Recap of Configuration Points
- You may find that on some systems the best option is to just turn off the mounting of all "mvfs" file systems and leave it to the designer to mount a small required VOB set. This option works well on dedicated desktop workstations.
- When designers do not want to manually mount a VOB set, the "mvfs" automounter solution is ideal. It is also useful when tools and flows expect specific resources to be available and the exact VOB set is not known to the designer. This is often true of systems in compute farms, where the application being run is somewhat random, based on assignment from a queue of batched jobs submitted by a diverse set of designers.
- Nearly all shared systems such as compute-farm resources will need the "mvfs" automounter solution. You may find additional automounter behaviors that help to segment systems into classes based on the type of design work required.
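Configuration 2 above mentions a small script for mounting a designer's personal VOB set. Here is a minimal sketch, assuming only the standard "cleartool mount" command; the tags listed are hypothetical placeholders:

#!/usr/bin/perl
# mount_myvobs.pl -- illustrative sketch: mount a designer's personal VOB
# set with cleartool instead of mounting every VOB at boot.
use strict;
use warnings;

# Hypothetical personal VOB set; substitute the tags your work requires.
my @vobs = qw(/vtop/vob1 /vtop/sub1/vob3);

for my $tag (@vobs) {
    system("cleartool", "mount", $tag) == 0
        or warn "mount of $tag failed: $?\n";
}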
Installation

On Solaris, the "mvfs" mount helper is replaced with a wrapper script:

1. Change to the "mvfs" mount helper directory:

# cd /usr/lib/fs/mvfs

2. Remove the link named "mount", but remember where it pointed to:
# rm -f mount

3. Copy the mount_mvfs command locally (the file the link pointed to before):
# cp /opt/rational/clearcase/etc/mount_mvfs ./

4. Install the mount wrapper script:
# vi mount
...
# chmod 0555 mount

The mount wrapper script is shown in Listing 1. I would like to give credit to Chris Barrera of Texas Instruments. He was the original author of this script when he developed an automounter solution using a single direct-map style that worked on Solaris. It was a very good solution, but it did not support other platforms. I have included this code with his permission.

Improved Reliable Mounting

While testing and observing the Solaris mount helper script, I used a version that logged every invocation and its arguments. I found that there was much inconsistency in the number and type of arguments passed to the mount command. I also observed that the frequency of certain options differed between Linux and Solaris. Most of this is not particularly interesting, except that one option that regularly did not show up was the VOB UUID. By experimentation, I found that the only argument that could be used by itself was the VOB tag. The VOB tag alone apparently works because the mount command can query the Registry server for the remaining needed information, including the global path. I have speculated that it may be ideal, and may improve the mounting process, if the automounter maps could be populated with the UUID information as a mount option. If this data could always be passed to the mount command, the solution would be improved. None of this has been tested; it is only conjecture, although to the naked eye mounting appears faster when the UUID is supplied. It is not clear why the number and types of options passed to the mount command are so irregular. This would be a good topic to take up with IBM/Rational support. We will explore this more as we look for ways to further improve performance in our environment.

Map Creation and Maintenance

The creation of the needed indirect automounter maps should not be a manual task subject to errors during hand editing. At the time of this writing, my site employs nearly 100 dynamically created, destroyed, and maintained indirect maps that support the VOB automounting solution. Sounds ugly, does it not? This is just at my site; other sites have more or fewer maps, but the process is all automated. As I suggested in my previous article, two schools of thought come to mind for distributing these maps.

The first is to use the local service of choice. Any resource the automounter supports is suitable. With a little work, all of the maps could be placed in NIS, NIS+, or LDAP, to name the most common. This should be tested with a significant number of clients to ensure the added tables and clients will not overload your servers. A second issue is how to deal with the auto-creation of new maps as VOB tags demand; this could be manual or automatic, and a tool for uploading would be custom to your site and is therefore left as an exercise. We have sites that have chosen to upload all required maps into their local service, such as NIS. They use a tool called DMAPD (Dynamic MAP Daemon). This tool will read input from one or more sources (e.g., files, NIS, NIS+, LDAP, exec-map) and produce a set of indirect maps suitable for the host on which it is executed. You will recall that the indirect map syntax on Linux has minor differences; these are understood and compiled into the tool when it is built for each platform. After map creation, these sites upload the maps into the local service (NIS, LDAP, etc.). I think they have automated the process of adding new maps into the local service; if not, it is a manual process. A minimal sketch of such a map generator follows.
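This Perl sketch is not DMAPD; the map directory, the public-VOB filter, and the parsing of "cleartool lsvob" output are assumptions you would adapt to your site. For every unique directory in the tag set, it emits one indirect map, with -fstype=autofs entries cascading into child maps and -fstype=mvfs entries mounting the VOBs themselves, mirroring the /vtop example maps shown earlier:

#!/usr/bin/perl
# gen_vobmaps.pl -- illustrative sketch only (not DMAPD): convert
# "cleartool lsvob" output into cascading indirect automounter maps.
use strict;
use warnings;

my $mapdir = "/etc/automap";   # hypothetical destination for generated maps
my %maps;                      # map-file name => { key => map entry }

for my $line (`cleartool lsvob`) {
    # A typical line: "* /vtop/sub1/vob3 /vob_storage/vob3.vbs public"
    next unless $line =~ m{^\*?\s*(/\S+)\s+(\S+)\s+public};
    my ($tag, $gpath) = ($1, $2);
    my @dirs = split m{/}, $tag;   # e.g. ("", "vtop", "sub1", "vob3")
    shift @dirs;                   # drop the empty leading field
    my $leaf = pop @dirs;          # final component: the mvfs entry key
    # Every intermediate directory cascades into the next map via autofs.
    for my $i (1 .. $#dirs) {
        my $parent = "auto_" . join("_", @dirs[0 .. $i - 1]);
        my $child  = "auto_" . join("_", @dirs[0 .. $i]);
        $maps{$parent}{ $dirs[$i] } = "-fstype=autofs file:$mapdir/$child";
    }
    # The leaf entry mounts the VOB itself.
    $maps{ "auto_" . join("_", @dirs) }{$leaf} = "-fstype=mvfs :$gpath";
}

for my $map (sort keys %maps) {
    open my $fh, ">", "$mapdir/$map" or die "$mapdir/$map: $!\n";
    print $fh "$_ $maps{$map}{$_}\n" for sort keys %{ $maps{$map} };
    close $fh;
}

The top-level auto.master entries (for example, /vtop file:/etc/automap/auto_vtop) would still be added separately, one per top-level tag directory.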
The DMAPD tool only creates the maps; it cannot upload them, because uploading is site specific.

The second school of thought on map delivery uses a single automounter map. This map is installed in a file, NIS, NIS+, LDAP, etc. The DMAPD tool described above is installed as a daemon (service) on every client. At boot time, the daemon is started, and it periodically maintains a set of local map files (only those needed for mounting "mvfs") from the configured source. The auto_master, regardless of delivery method, includes one entry for each of the top-level cascading indirect maps. The best way to understand this is to read the June 2006 article and the review at the beginning of this article. An auto_master entry would resemble the following:
/vobs /etc/automap/auto_vobs

The source map is never directly used by the automounter; it is only placed in a service like NIS as a delivery method for the DMAPD installed on every client system. The format of the source map would only be useful on Solaris (not important, because we will not use it this way). The DMAPD converts this file into as many maps as are required for use by the local system. This source map should include one entry for every PUBLIC VOB tag in the client's default ClearCase region. I have provided an example of one such entry. The entry is made up of the VOB tag followed by the defined "global path". Do not forget the leading colon ":" before the "global path". It is stored in the source map as a key and value:
/vobs/sub1/myVob :/clearcase/dskX/grpY/projZ.vbs

Anyone handy with Perl could make light work of this map conversion. The advantage of DMAPD is that it has built-in support for the typical sources the automounter supports. The code set is not huge, but it is not small either, and there is a great deal more to say about its installation and configuration. It may be useful to save this topic for a dedicated article.

Resources and Self-Help

The autofs source RPM is an excellent source of information. I found that reading the source provided the definitive guide to how autofs really works on Linux. Here are a couple of URLs to the source:
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/4/en/os/i386/SRPMS/autofs-4.1.3-67.src.rpm
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/4/en/os/x86_64/SRPMS/autofs-4.1.3-67.src.rpm

Here is the list of the autofs versions that came with specific older Red Hat releases:
- Red Hat 7.2 autofs-3.1.7-21
- Red Hat 7.3 autofs-3.1.7-28
- Red Hat 8.0 autofs-3.1.7-33
- Red Hat 9.0 autofs-3.1.7-36
To list the files installed by the autofs package on a given system, query the RPM database:

rpm -q -l autofs

Table 2 provides the location of Solaris automounter files and other useful resources.
Wrap-Up
I hope you find this information helpful. I have seen many references to the use of autofs within indirect maps on the Internet. Even so, I believe it is often overlooked as the useful and powerful feature that it is. I also think that most administrators do not know that this technique exists. I hope I have been instrumental in creating more awareness of the use of autofs.
Victor served 4+ years in the USAF servicing electronic equipment for the Airborne Command Post (missile launching and satellite communications systems). During his 21+ years of employment at Texas Instruments, he has been an ASIC Designer, Programmer, and Unix Network Administrator. He has also been involved as a BSA (Boy Scouts of America) leader and Merit Badge counselor for more than 18 years. Victor thanks his wonderful wife and six children for their support in all that he does. Victor can be reached at: [email protected].