
07 April 2010

Ubuntu --- Setting Up an NFS Server



The Network File System (NFS) is the de facto standard for sharing directories between Unix-like systems over a network. NFS is simple, lightweight, and fast, and its implementation has been freely available for any Unix-like system since the early 1980s.


NFS

Sharing groups of files that multiple people need access to is standard operating procedure in business today and, thanks to home networking, is getting to be SOP for home use as well. Providing centralized access to a collection of audio materials that you’ve extracted from your CD collection or the vacation photos from your most recent trips is just as important to the home user as providing centralized access to your procedure manuals and software source repository is to the business user or SOHO developer. Luckily, Linux systems provide several ways of sharing directories over a network, some oriented primarily toward Linux and other Unix-like systems, including Apple’s Mac OS X, and others oriented more toward Microsoft Windows systems (which Linux and Mac OS X systems can also access, of course). This text discusses how to set up one of your Ubuntu Linux systems so that other systems can access its directories over the network using NFS (Network File System), which is widely used on Linux and Unix-like systems. (For information on setting up your Ubuntu system to share directories with Microsoft Windows systems, see “Setting Up a Samba Server.”)
    Sun Microsystems’ Network File System, better known simply as NFS, is the most common networked filesystem in use today, largely because it comes preinstalled and for free with almost every Unix and Unix-like system. NFS clients and servers are also available for almost every type of modern computer system, including those running Microsoft Windows and Apple’s Mac OS X.
    Here I provide an overview of NFS, discuss the different versions of NFS and their capabilities, and discuss the various applications associated with NFS. Beyond this background material, this text focuses on explaining how to set up your Ubuntu system to be an NFS file server; how to access NFS file servers from other systems is also discussed. I conclude by discussing NIS (Network Information System), a distributed authentication mechanism that is commonly used in conjunction with NFS.


Overview of the Network File System
NFS is a network filesystem that provides transparent access to files residing on remote disks. Network filesystems are commonly referred to as distributed filesystems, because the files and directories that they provide access to may be physically located on many different computer systems that are distributed throughout your home, academic environment, or business. Developed at Sun Microsystems in the early 1980s, the NFS protocol has been revised and enhanced several times between then and now, and is available on all Linux, Unix, and Unix-like systems and even for Windows systems from many third-party software vendors. The specifications for NFS have been publicly available since shortly after it was first released, making NFS a de facto standard for distributed filesystems.
    NFS is the most common distributed filesystem in use today, largely because it is free and available for almost every type of modern computer system. NFS enables file servers to export centralized sets of files and directories to multiple client systems. Good examples of files and directories that you may want to store in a centralized location but make simultaneously available to multiple computer systems are users’ home directories, site-wide sets of software development tools, and centralized data resources such as mail queues and the directories used to store Internet news bulletin boards. The following are some common usage scenarios for using NFS:
  • Sharing common sets of data files: Sharing files that everyone on your network wants to access, whether they are audio files, business data, or the source code for tomorrow’s killer app, is the most common use of any type of networked filesystem.
  • Explicitly sharing home directories: Suppose that the home directories for all of your users are stored in the directory /export on your NFS file server, which is automatically mounted on all of your computer systems at boot time. The password file for each of your systems would list your users’ home directories as /export/user-name. Users can then log in on any NFS client system and instantly see their home directory, which is transparently made available to them over the network.
An alternative to the previous bullet is to automatically mount networked home directories using an exported NFS directory that is managed by an NFS automount daemon. Whenever access to a directory managed by an automount daemon is requested by a client, the daemon automatically mounts that directory on the client system. Automounting simplifies the contents of your server’s /etc/exports file by enabling you to export only the parent directory of all home directories on the server, and letting the automounter manage that directory (and therefore its subdirectories) on each client.
 See the quote at the end of this text for general information on automounting, a complete discussion of which is outside the scope of this text.
  • Sharing specific sets of binaries across systems: Suppose that you want to make a specific set of GNU tools available on all of the systems in your computing environment, but also wanted to centralize them on an NFS server for ease of maintenance and updating. To ensure that configuration files were portable across all of your systems, you might want to make these binaries available in the directory /usr/gnu regardless of the type of system that you were using. You could simply build binaries for each type of system that you support, configuring them to be found as /usr/gnu but actually storing them in directories with names such as /export/gnu/ubuntu, /export/gnu/solaris8, and so on. You would then configure each client of a specified type to mount the appropriate exported directory for that system type as /usr/gnu. For example, /export/gnu/ubuntu would be mounted as /usr/gnu on Ubuntu systems, /export/gnu/solaris8 would be mounted as /usr/gnu on Solaris systems, and so on. You could then simply put /usr/gnu/bin in your path and the legendary “right thing” would happen regardless of the type of system that you logged in on.
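The client side of this mapping can be captured in the filesystem table. The following is only a sketch: the server name nfsserver is hypothetical, and non-Linux clients such as Solaris use a different file (/etc/vfstab) with a slightly different format:

```
# /etc/fstab entry on an Ubuntu client: mount the Ubuntu build of the
# GNU tools from the NFS server as /usr/gnu, read-only
nfsserver:/export/gnu/ubuntu  /usr/gnu  nfs  ro  0  0
```

A Solaris client would carry an equivalent entry pointing at /export/gnu/solaris8, so that /usr/gnu always holds the right binaries for the local system type.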
As you’ll see shortly, NFS is easy to install, easy to configure, and provides a flexible networked filesystem that any Ubuntu, other Linux, Unix, or Unix-like system can quickly and easily access. In some cases, it’s easy to trip over a few administrative gotchas, but Ubuntu provides powerful and easy-to-use tools that simplify configuring NFS file servers to “do the right thing.”


Understanding how NFS Works
If you simply want to use NFS and aren’t too concerned about what’s going on under the hood, you can skip this section. However, this section provides the details of many internal NFS operations because some enquiring minds do indeed want to know and because, frankly, it’s just plain interesting to see some of the hoops that NFS clients and servers have to jump through to successfully communicate between different types of computer systems, often with different types of processors. So, if you’re interested, read on, McDuff!
    The underlying network communication method used by NFS is known as Remote Procedure Calls (RPCs), which can use either the lower-level User Datagram Protocol (UDP) as their network transport mechanism (NFS version 2) or TCP (NFS version 3). For this reason, both UDP and TCP entries for port 2049, the port used by the NFS daemon, are present in the Linux /etc/services file. UDP minimizes transmission delays because it does not attempt to do sequencing or flow control, and does not provide delivery guarantees — it simply sends packets to a specific port on a given host, where some other process is waiting for input.
    The design and implementation of RPCs make NFS platform-independent, interoperable between different computer systems, and easily ported to many computing architectures and operating systems.
RPCs are a client/server communication method that involves issuing RPC calls with various parameters on client systems, which are actually executed on the server. 
The client doesn’t need to know whether the procedure call is being executed locally or remotely — it receives the results of an RPC in exactly the same way that it would receive the results of a local procedure call.
    The way in which RPCs are implemented is extremely clever. RPCs work by using a technique known as marshalling, which essentially means packaging up all of the arguments to the remote procedure call on the client into a mutually agreed-upon format. This mutually agreed-upon format is known as eXternal Data Representation (XDR), and provides a sort of computer Esperanto that enables systems with different architectures and byte-orders to safely exchange data with each other. The client’s RPC subsystem then ships the resulting, system-independent packet to the appropriate server. The server’s RPC subsystem receives the packet, and unmarshalls it to extract the arguments to the procedure call in its native format. The RPC subsystem executes the procedure call locally, marshalls the results into a return packet, and sends this packet back to the client. When this packet is received by the client, its RPC subsystem unmarshalls the packet and sends the results to the program that invoked the RPC, returning this data in exactly the same fashion as any local procedure call. Marshalling and unmarshalling, plus the use of the common XDR data representation, make it possible for different types of systems to transparently communicate and execute functions on each other. RPC communications are used for all NFS-related communications, including:
  1. communications related to the authentication services used by NFS (NIS or NIS+)
  2. managing file locks
  3. managing NFS mount requests
  4. providing status information, and 
  5. requests made to the NFS automount daemon
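The marshalling process described above is easiest to see with a concrete sketch. The following Python snippet hand-packs two hypothetical RPC arguments using XDR’s basic encoding rules (4-byte big-endian integers; length-prefixed strings zero-padded to a 4-byte boundary). This illustrates the encoding only, and is not NFS’s actual RPC implementation:

```python
import struct

def xdr_pack_uint(n):
    # XDR encodes unsigned integers as 4 bytes in big-endian (network) order
    return struct.pack(">I", n)

def xdr_pack_string(s):
    # XDR strings are a 4-byte length followed by the bytes,
    # zero-padded out to a multiple of 4 bytes
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return xdr_pack_uint(len(data)) + data + b"\x00" * pad

# "Marshal" two hypothetical RPC arguments into one system-independent packet
packet = xdr_pack_uint(42) + xdr_pack_string("notes.txt")
print(packet.hex())  # prints 0000002a000000096e6f7465732e747874000000
```

Because every architecture agrees on this wire format, the server’s RPC subsystem can unmarshal such a packet into its own native representation regardless of its processor’s byte order.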
To enable applications to contact so many different services without requiring that each communicate through a specific, well-known port, NFS lets those services dynamically bind to any available port as long as they register with its central coordination service, the portmapper daemon. The portmapper always runs on port 111 of any host that supports RPC communications, and serves as an electronic version of directory assistance. Servers register RPC-related services with the portmapper, identifying the port that the service is actually listening on. Clients then contact the portmapper at its well-known port to determine the port that is actually being used by the service that they are looking for.
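You can watch this directory-assistance process with the rpcinfo utility. Running rpcinfo -p queries the local portmapper and lists the registered RPC services; on an NFS server the output looks something like the following (the program versions shown, and the port used by mountd, will vary from system to system):

```
$ rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100005    1   udp    892  mountd
```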
    Communication failures occur with any networked communication mechanism, and RPCs are no exception. As mentioned at the beginning of this section, UDP does not provide delivery guarantees or packet sequencing. Therefore, when a response to an RPC call is not received within a specific period of time, systems will resend RPC packets. This introduces the possibility that a remote system may execute a specific function twice, based on the same input data. Because this can happen, all NFS operations are idempotent, which means that they can be executed any number of times and still return the same result — an NFS operation cannot change any of the data that it depends upon.
    Even though NFS version 3 uses TCP as its network transport mechanism, the idea of idempotent requests is still part of the NFS protocol to guarantee compatibility with NFS version 2 implementations. As another way of dealing with potential communication and system failures, NFS servers are stateless, meaning that they do not retain information about their clients across system restarts.
If a server crashes while a client is attempting to make an RPC to it, the client continues to retry the RPC until the server comes back up or until the number of retries exceeds its configured limit, at which time the operation aborts. 
Stateless operation makes the NFS protocol much simpler, because it does not have to worry about maintaining consistency between client and server data. The client is always right, even after rebooting, because it does not maintain any data at that point.
    Although stateless operation simplifies things, it is also extremely noisy, inefficient, and slow. When data from a client is saved back to a server, the server must write it synchronously, not returning control to the client until all of the data has been saved to the server’s disk. As described in the next section, “Comparing Different Versions of NFS,” newer versions of NFS do some limited write caching on clients to return control to the client applications as quickly as possible. This caching is done by the client’s rpciod process (RPC IO Daemon), which stores pending writes to NFS servers in the hopes that it can bundle groups of them together and thus optimize the client’s use of the network. In the current standard version of NFS (NFS version 3), cached client writes are still essentially dangerous because they are only stored in memory, and will therefore be lost if the client crashes before the write completes.
    In a totally stateless environment, a server crash would make it difficult to save data that was being modified on a client back to the server once it is available again. The server would have no way of knowing what file the modified data belonged to because it had no persistent information about its clients. To resolve the problem, NFS clients obtain file handles from a server whenever they open a file. File handles are data structures that identify both the server and the file that they are associated with. If a server crashes, clients retry their write operations until the server is available again or their timeout periods are exceeded. If the server comes back up in time, it receives the modified data and the file handle from the client, and can use the file handle to figure out which file the modified data should be written to.
    The lack of client-side caching also has a long-term operational impact because it limits the type of dependencies that NFS clients can have on NFS servers. Because clients do not cache data from the server, they must re-retrieve any information that they need after any reboot. This can definitely slow the reboot process for any client that must execute binaries located on an NFS server as part of the reboot process. If the server is unavailable, the client cannot boot. For this reason, most NFS clients must contain a full set of system binaries, and typically only share user-oriented binaries and data via NFS.


Comparing Different Versions of NFS
NFS has been around almost since the beginning of Unix workstation time, appearing on early Sun Microsystems workstations in the early 1980s. This section provides an overview of the differences between the four versions of NFS, both for historical reasons and to illustrate that NFS is by no means a done deal. NFS 4 resolves the biggest limitations of NFS 3, most notably adding real client-side data caching that survives reboots. The most common version of NFS used on systems today is NFS version 3, which is the version that I focus on here. The following list identifies the four versions of NFS and highlights the primary features of each:
  • Version 1: The original NFS protocol specification was used only internally at Sun during the development of NFS, and I have never been able to find any documentation on the original specification. This would only be of historical interest.
  • Version 2: NFS version 2 was the first version of the NFS protocol that was released for public consumption. Version 2 used UDP exclusively as its transport mechanism, and defined the 18 basic RPCs that made up the original public NFS protocol. Version 2 was a 32-bit implementation of the protocol, and therefore imposed a maximum file size limitation of 2GB on files in NFS and used a 32-byte file handle. NFS version 2 also limited data transfer sizes to 8KB.
  • Version 3: NFS version 3 addressed many of the shortcomings and ambiguities present in the NFS version 2 specification, and took advantage of many of the technological advances in the 10+ years between the version 2 and 3 specifications. Version 3 added TCP as a network transport mechanism, making it the default if both the client and server support it; increased the maximum data transfer size between client and server to 64KB; and was a full 64-bit implementation, thereby effectively removing file size limitations. All of these were made possible by improvements in networking technology and system architecture since NFS version 2 was released. Version 3 also added a few new RPCs to those in the original version 2 specification, and removed two that had never been used (or implemented in any NFS version that I’ve ever seen). To improve performance by decreasing network traffic, version 3 introduced the notion of bundling writes from the client to the server, and also automatically returned file attributes with each RPC call, rather than requiring a separate request for this information as version 2 NFS had done.
  • Version 4: Much of the NFS version 4 protocol is designed to position NFS for use in Internet and World Wide Web environments by increasing persistence, performance, and security. Version 4 adds persistent, client-side caching to aid in recovery from system reboots with minimal network traffic, and adds support for ACLs and extended file attributes in NFS filesystems. Version 4 also adds an improved, standard API for increased security through a general RPC security mechanism known as RPCSEC_GSS (Remote Procedure Call Security - Generic Security Services, specified in RFC 2203). This mandates the use of the Generic Security Services Application Programming Interface (GSS-API) to select between available security mechanisms provided by clients and servers.


Installing an NFS Server and Related Packages

To install the packages required to run and monitor an NFS server on your Ubuntu system, start the Synaptic Package Manager from the System ➪ Administration menu, and click Search to display the search dialog. Make sure that Names and Descriptions are the selected items to look in, enter nfs as the string to search for, and click Search. After the search completes, scroll down until you see the nfs-common and nfs-kernel-server packages, then right-click each of these packages and select Mark for Installation from the pop-up menu.
As you can see, the Ubuntu repositories provide two NFS servers: one that runs in the Linux kernel and another that runs in user space.
  1. The kernel-based NFS server is slightly faster, and provides command-line utilities, such as exportfs, that you can use to explicitly share directories via NFS (known as exporting directories in NFS-speak) and to monitor the status of directories that you share using NFS.
  2. The user-space NFS server is slightly easier to debug and control manually.
This text explains how to install and use the kernel-based NFS server — if you have problems sharing directories using NFS, you may want to subsequently install the user-space NFS server to help with debugging those problems.
    Depending on what software you have previously installed on your Ubuntu system and what you select in Synaptic, a dialog may display that lists other packages that must also be installed, and ask for confirmation. When you see this dialog, click Mark to accept these related (and required) packages. Next, click Apply in the Synaptic toolbar to install the kernel-space NFS server and friends on your system.
    Once the installation completes, you’re ready to share data on your system with any system that supports NFS.
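If you prefer the command line to Synaptic, you can install the same two packages from a terminal with apt-get, which pulls in any required dependencies automatically:

```
$ sudo apt-get install nfs-common nfs-kernel-server
```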



Using the Shared Folder Tool to Export Directories
At this point, it should come as no surprise that Ubuntu Linux provides an easy-to-use graphical tool (shares-admin) that simplifies the process of defining and configuring the directories that you want to export via NFS from your Ubuntu system. To start this tool, select System ➪ Administration ➪ Shared Folders. After supplying your password in the administrative authentication dialog that displays, the Shared Folder tool starts. To define a directory that you want to share via NFS, click Add to display the dialog.
    To export a directory using NFS, click the Share with item and select NFS as the sharing protocol that you are working with. This displays the settings that are relevant for NFS. As you can see, the default exported/shared directory that is initially selected when you start the Shared Folder admin tool is your home directory. In this example, I’m going to share the directory that contains my online audio collection. To specify another directory for sharing, click the Path item and select Other from the drop-down menu to display the directory selection dialog.
    To select a directory somewhere on your system, click root and navigate through the directory tree on your system to select the directory that you want to export, which in this example is my /opt2 directory. Click Open to select that directory (or whatever directory you want to export) and return to the dialog  which now displays the name of the newly selected directory in the Path field.
    Next, you’ll need to identify the hosts that you want to be able to access (i.e., mount) this directory over the network. To define these, click the Add host button to display the dialog.  This dialog provides several ways to identify the hosts that can mount and access the directory that you are sharing. The Allowed hosts drop-down menu provides four choices:
  • Hosts in the eth0 network: Enables anyone who can reach your machine via your system’s eth0 network interface to mount and access the shared directory.
  • Specify hostname: Enables you to identify the name of a specific host that can mount and access the shared directory. Selecting this item displays an additional field on the basic dialog in which you can enter the fully-qualified or local hostname of a machine that you want to be able to mount and access the shared directory.
  • Specify IP address: Enables you to identify the IP address of a specific host that can mount and access the shared directory. Selecting this item displays an additional field on the basic dialog in which you can enter the IP address of a machine that you want to be able to mount and access the shared directory.
  • Specify network: Enables you to identify the IP specification for a subnet that can mount and access the shared directory. All hosts with IP addresses that are on this subnet will be able to mount and access the shared directory. Selecting this item displays two additional fields on the basic dialog in which you can enter the subnet and netmask of the network whose hosts that you want to be able to mount and access the shared directory.
If you are identifying the authorized hosts that can mount and access your shared directory by hostname, IP address, or subnet, you can explicitly allow multiple hosts to mount and access the shared directory by using the Add host button multiple times to define a specific set of hosts.
    In this example, I’ll give all hosts on the 192.168.0.0 subnet access to my shared directory. Note that this dialog enables you to grant read-only access to a shared directory by selecting the Read only checkbox. This provides a convenient way to give others access to shared data while preventing them from modifying anything in the shared directory. There is also slightly less overhead in exporting a directory to other systems as a read-only directory, so you may want to consider doing this if others need access to the shared data but you’re sure that they’ll never want to change anything there (or you don’t want them to change anything there).
    Clicking OK returns you to the previous dialog, which is now updated to show the /opt2 directory that I am sharing in this example. To continue, click OK to return to the original dialog, which now contains the settings for our newly defined NFS shared directory.
    Almost done! To subsequently modify or update the settings for any shared directory, you can right-click its name in the Shared Folder tool and click Properties to display the specific settings for that shared folder. To begin sharing the folder, click OK to start the specified type of file sharing and close the Shared Folder tool.


Verifying NFS Operations
The kernel NFS server package includes a utility called exportfs that you can use to list the directories that an NFS server is currently exporting from your system and reexport any new directories that you have just added to your system’s NFS configuration, which is stored in the file /etc/exports. After you follow the instructions in the previous section, the contents of the /etc/exports file on your Ubuntu system are the following:
# /etc/exports: the access control list for filesystems which may
#                        be exported to NFS clients. See exports(5).
/opt2                    192.168.0.0/255.255.0.0(rw)
Any line in this file that does not begin with a hash mark is an entry that defines a directory that is being exported by NFS, and is commonly referred to as an export specification. To verify that the /opt2 directory is being exported from your system (and to reexport it if necessary), you can use the exportfs -av command, which exports all available directories in a verbose fashion as shown in the following example:
$ sudo exportfs -av
exportfs: /etc/exports [3]: No 'sync' or 'async' option specified \
for export "192.168.0.0/255.255.0.0:/opt2".
Assuming default behavior ('sync').
NOTE: this default has changed from previous versions
exporting 192.168.0.0/255.255.0.0:/opt2

This output demonstrates that the directory /opt2 is being exported to all hosts whose IP addresses match the 192.168.0.0/255.255.0.0 specification.
                  NFS Users and Authentication
NFS uses the user ID (UID) and group ID (GID) of each user from a system’s password file (/etc/passwd) to determine who can write to and access exported files and directories, based on the UID and GID that owns those directories on the file server. This means that all of your users should have the same user ID and group ID on all systems to which NFS directories such as home directories are exported.
  • In small networks, it is often sufficient to make sure that you create the same users and groups on all of your systems, or to make sure that the password and group files on your file server contain the correct entries for all of the users and groups that will access any directory that it exports.
  • In larger networks, this is impractical, so you may want to consider network-oriented authentication mechanisms, such as the Network Information System (NIS), which was developed by Sun Microsystems specifically for use with NFS. Unfortunately, discussing NIS installation and setup is outside the scope of this text, but you can find a variety of excellent information about it online in documents such as the NIS HOWTO, which is available in several languages.
You’ll note that exportfs also complains about a missing option in the export specification for the /opt2 directory. In the /etc/exports file shown earlier in this section, you’ll notice that the last entry in the /opt2 export specification ends with “(rw)”. This final section of an export specification specifies any options associated with a specific exported directory. In this case, the only option specified is rw, which means that the directory is being exported as read/write, so that authorized users can write to that directory as well as read from it. (See the quote “NFS Users and Authentication” earlier in this section for more information about how NFS identifies users.)
    The warning message displayed by the exportfs command has to do with whether changes to files in a read/write directory are immediately written to the remote file server (sync, for synchronous operation), or are written lazily, whenever possible (async, for asynchronous operation). Synchronous operation  is slower, because your system has to wait for writes to the remote file server to complete, but is safer because you know that your changes have been written to the file server (unless the network connection goes down, in which case all bets are off). Older versions of NFS simply assumed synchronous operation, but nowadays, NFS likes you to explicitly specify which option you want to use. To eliminate this error message, you can therefore edit the /etc/exports file directly to change rw to rw,async, which I generally recommend because it is faster than synchronous operation. After you make this change, the /etc/exports file looks like the following:
# /etc/exports: the access control list for filesystems which may be exported
#                        to NFS clients. See exports(5).
/opt2                    192.168.0.0/255.255.0.0(rw,async)
You can now reexport this directory for asynchronous updates, and the exportfs utility is much happier, as in the following example:
       $ sudo exportfs -av
       exporting 192.168.0.0/255.255.0.0:/opt2
The nfs-common package provides a utility called showmount, which you can also run on an NFS server to display the list of directories exported by that file server, but which will not reexport them or change them in any way. Using the showmount command with its -e option (to show the list of exported directories on the test system used here) provides output like the following:
$ sudo showmount -e
 Export list for ulaptop:
 /opt2 192.168.0.0/255.255.0.0
For complete information about the exportfs and showmount utilities, see their online reference information, which is available by typing man exportfs or man showmount from any Ubuntu command line, such as an xterm or the GNOME Terminal application.
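On the client side, once a directory has been exported you can mount it manually or at boot time. Continuing the example above, a client could mount the /opt2 directory exported by the server ulaptop as follows (the mount point /mnt/opt2 is an arbitrary choice, and must already exist):

```
$ sudo mkdir -p /mnt/opt2
$ sudo mount -t nfs ulaptop:/opt2 /mnt/opt2
```

To mount the directory automatically at boot time, you could instead add a line such as "ulaptop:/opt2  /mnt/opt2  nfs  rw  0  0" to the client’s /etc/fstab file.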


Manually Exporting Directories in /etc/exports
Although everyone loves graphical tools, it’s sometimes nice to simply edit the underlying files that these tools manipulate — it can be much faster, and can be done from any device on which you can start a text editor.
    As mentioned in the previous section, the file that contains exported directory information for NFS file servers is /etc/exports. Entries in this file have the following form:
full-path-name-of-exported-directory hosts(mount-options)
Each such entry in the /etc/exports file is referred to as an export specification. Hosts can be listed by IP address, hostname, or subnet to state that only those hosts can access a specific directory exported by NFS. An entry such as 192.168.6.61 would limit access to a specific NFS directory to that host, while entries such as 192.168.6.* or 192.168.6.0/24 would limit access to a specific NFS directory to hosts on that subnet. By default, all hosts that can reach an NFS server have access to all exported directories (which is represented by a * preceding the mount options).
    As you’d expect, many mount options are available. Some of the more commonly used mount options are the following:
  • all_squash: Maps all NFS read or write requests to a specific user, usually “anonymous.” This option is often used for public resources such as directories of USENET news, public FTP and download areas, and so on. All files written to an NFS directory that is exported with the all_squash mount option will be assigned the UID and GID of the user anonymous or some other UID and GID specified using the anonuid and anongid mount options. The default is no_all_squash, which preserves all UIDs and GIDs.
  • insecure: Enables access to NFS directories by NFS clients that are running on non-standard NFS network ports. By default, this option is off, and NFS requests must originate from privileged ports (port numbers less than 1024). The insecure option may be necessary to enable access from random PC and Macintosh NFS clients. If you need to use this option, you should limit the machines that use it to a home network or secure corporate intranet. You should not use this option on any machine that is accessible over the Internet, because it introduces potential security problems.
  • no_root_squash: Lets root users on client workstations have the same privileges as the root user on the NFS file server. This option is off by default.
  • ro: Exports the directory read-only, so that users cannot write to it. The default is rw, which enables read/write access.
  • sync: Forces writes to the NFS server to be done synchronously, where the client waits for the writes to complete before returning control to the user. This is the default — as explained in the previous section, you can also specify asynchronous operation (async), which is slightly faster.
See the man page for /etc/exports (by using the man 5 exports command) for complete information on the options that are available in this file. Once you have created an entry for a new exported directory in your /etc/exports file, you can export that directory by rerunning the exportfs command with the -r option, which tells the NFS server to reread the /etc/exports file and make any necessary changes to the list of directories that are exported by that NFS server.
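Putting this together, a typical editing session might look like the following (run as root; the output of these commands will vary by system):

```
sudo exportfs -r         # re-read /etc/exports and apply any changes
sudo exportfs -v         # list current exports with their active options
showmount -e localhost   # confirm what the server is advertising to clients
```

Running exportfs -v after exportfs -r is a quick sanity check that your new entry was parsed the way you intended, since exportfs silently fills in default options for anything you did not specify.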

Automounting NFS Home Directories
Automounting is the process of automatically mounting NFS filesystems in response to requests for access to those filesystems. Automounting is controlled by an automount daemon that runs on the client system.
    In addition to automatically mounting filesystems in response to requests for access to them, an automount daemon can also automatically unmount volumes once they have not been used for a specified period of time.
    Using an automount daemon prevents you from having to mount shared NFS directories that you are not actually using at the moment. Mounting all NFS directories on all clients at all times generates a significant amount of network traffic, much of which is extraneous if those directories are not actually being used. Using the NFS automount daemon helps keep NFS-related network traffic to a minimum.
    At the moment, two different automount daemons are available for Linux.
  1. The amd automount daemon runs in user space on client workstations and works much like the original SunOS automounter. The amd automounter is configured through the file /etc/amd.conf. For more information about amd, see the home page for the Automount Utilities.
  2. The other automount daemon is called autofs and is implemented in Linux kernel versions 2.2 and greater. The kernel automounter starts one user-space automount process for each top-level automounted directory. The autofs automounter daemon is configured through the file /etc/auto.master or through NIS maps with the same name. Because the autofs daemon is part of the kernel, it is faster and is the automounter of choice for many Linux distributions (such as Ubuntu).
Both of these packages are available in the Ubuntu repositories, but discussing all of the nuances of automounting is outside the scope of this text.
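To give a feel for what an autofs setup looks like, here is a hypothetical configuration for automounting NFS home directories (the server name nfsserver.example.com and the paths are illustrative assumptions, not defaults):

```
# /etc/auto.master: manage mounts under /home via the map file
# /etc/auto.home, unmounting after 300 seconds of inactivity
/home   /etc/auto.home  --timeout=300

# /etc/auto.home: the wildcard (*) matches the directory name being
# accessed, and the ampersand (&) substitutes it into the server path,
# so /home/jane maps to nfsserver.example.com:/export/home/jane
*   -rw,soft,intr   nfsserver.example.com:/export/home/&
```

With this in place, accessing /home/jane on the client triggers an NFS mount of that user's directory on demand, and the mount goes away again after the timeout expires.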


Getting More Information About NFS and Related Software
Not surprisingly, the Web provides an excellent source of additional information about NFS and NIS. For more information, consult the man pages mentioned throughout this text and the online documentation for the NFS, autofs, and NIS projects.
