This chapter describes how to use SGI InfiniteStorage Appliance Manager to configure the various components of your system and perform general system administration:
“Network Interface Configuration” describes how to configure and modify the network interfaces for the system
“Storage Configuration” describes how to configure filesystems, filesystem snapshots, and iSCSI targets
“DMF Configuration” describes the Data Migration Facility (DMF) tasks that you can perform
“User and Group Configuration” describes how to configure a name service client, local users, local groups, and user and group quotas
“NFS Configuration” describes how to configure NFS to share filesystems
“CIFS Configuration” describes how to configure CIFS to share filesystems
“CXFS Configuration” describes how to configure CXFS client-only nodes and manage the CXFS cluster
“NDMP Configuration” describes how to configure Network Data Management Protocol (NDMP) for backups
“SNMP Configuration” describes how to configure basic Simple Network Management Protocol (SNMP)
“Global Configuration” describes how to perform various general administration functions
“Operations” describes how to save changes to the configuration files and restore them, how to gather support and performance data, and shut down the system
Figure 3-1 shows the top level Management screen.
You can use Appliance Manager to configure and modify the network interfaces for the system. When configuring the system, you must consider the difference between the management interface and the remainder of the interfaces in the system.
The management interface is the first interface in the machine (eth0), which is dedicated for use by Appliance Manager. On a NAS system, the remainder of the interfaces in the system are used for fileserving. On a SAN system, the remainder of the interfaces are preconfigured for the CXFS private network and connection to the Fibre Channel switch.
Caution: Changing the network interface configuration for a SAN system can leave the CXFS cluster inoperative. If you are required to change the configuration, you must do so carefully by using the cxfs_admin command or the CXFS GUI. For more information, see Appendix B, “How Appliance Manager Configures the CXFS Cluster” and CXFS 5 Administration Guide for SGI InfiniteStorage.
You can configure these ports as individual standalone ports, or you can group them together into a bonded network interface.
Bonding interfaces together gives multiple clients the aggregated bandwidth of all of the interfaces that constitute the bond. For most systems, this can significantly increase performance compared with configuring all of the interfaces as individual network ports.
For further information, see:
Caution: Ensure that the hardware settings are correct before you configure the network interfaces. For information on hardware settings, see the Quick Start Guide for your system.
When the system is shipped from the factory, the management interface has a preconfigured IP address. When using the Setup Wizard, you connect a laptop to that interface in order to perform the initial setup tasks. For information on the Setup Wizard, see Chapter 2, “Initial System Setup”.
The management interface is always configured as an individual network interface and cannot be part of a bonded interface.
You can modify the management interface by selecting eth0 from the following screen:
Management -> Resources -> Network Interfaces -> Modify
For information on the network configuration parameters you can modify, see “Ethernet Network Interfaces”.
Caution: If you configure an incorrect IP address for the management interface, you can make Appliance Manager inaccessible.
To see the available Ethernet network interfaces and change their parameters, select the following:
Management -> Resources -> Network Interfaces -> Modify
To modify an interface, select it. You can change the following fields:
To see the available InfiniBand network interfaces and change their parameters, select the following:
Management -> Resources -> Network Interfaces -> Modify
To modify an interface, select it. You can change the following fields:
Enabled | Enables the interface.
Automatic discovery by DHCP | Specifies that dynamic host configuration protocol (DHCP) will be used to configure the Ethernet interface. (Another system must be the DHCP server.)
Static | Specifies that a particular IP address is required for the network interface. If you select this, you must provide the IP address and subnet mask.
Dedicated | Specifies the local and remote IP addresses for a dedicated network connection between the storage server and another host, such as a dedicated VLAN or a single point-to-point network cable. A dedicated network interface uses a point-to-point connection with a single remote host: all network traffic to and from that host travels through the local dedicated interface, and no other traffic appears on that interface. Dedicated interfaces are useful when a large amount of network traffic to a specific host might otherwise interfere with traffic to other hosts.
A bonded interface is a virtual network interface that consists of real interfaces working in tandem. You use bonded interfaces on NAS systems to increase bandwidth to NFS and CIFS clients. (It does not apply to CXFS clients because they are connected via Fibre Channel.)
A virtual interface can provide the aggregated bandwidth of all of the interfaces that you used to create it.
Note: Any single client can achieve the bandwidth of only a single interface at a time. A bonded interface increases the aggregate bandwidth for multiple clients.
For example, if you have three interfaces each with a bandwidth of 10, the aggregate bandwidth is 30. For an individual client, however, the maximum bandwidth remains 10. When additional clients access the bonded interface, the clients are assigned to the subinterfaces, and up to three clients can use a bandwidth of 10 at the same time. Thus multiple clients accessing the system increase the aggregate bandwidth, improving the performance to a maximum bandwidth of 30.
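The bandwidth arithmetic above can be sketched in a short simulation. The round-robin assignment is an assumption for illustration (the actual assignment depends on the input load balancing mode), and `bonded_throughput` is a hypothetical helper, not Appliance Manager code.

```python
def bonded_throughput(link_bw, n_links, n_clients):
    """Model a bonded interface: each client is pinned to one
    subinterface (round-robin here), so no client can exceed a single
    link's bandwidth, while clients sharing a link split it."""
    clients_per_link = [0] * n_links
    for i in range(n_clients):
        clients_per_link[i % n_links] += 1
    per_client = []
    for c in clients_per_link:
        if c:
            per_client.extend([link_bw / c] * c)
    return sum(per_client), max(per_client)

# Three links of bandwidth 10: one client still sees only 10,
# but three clients together see the aggregate of 30.
print(bonded_throughput(10, 3, 1))   # (10.0, 10.0)
print(bonded_throughput(10, 3, 3))   # (30.0, 10.0)
```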
For example, Figure 3-2 shows a configuration in which all clients connect to a single IP address (192.168.0.3). The switch is responsible for sharing the load across the four subinterfaces of the bond (eth1 through eth4). Therefore, four times as many clients can communicate with the same server without a loss in overall performance.
Output load balancing controls how the server chooses the subinterface on which to send replies. Input load balancing controls how clients are assigned to subinterfaces, and how and when clients are moved from one subinterface to another. Load balancing happens on a per-packet basis. When a client sends a packet, it traverses a switch, which determines the subinterface at which the packet arrives. Input load balancing ensures that clients are distributed across the subinterfaces. The clients see only one interface because the balancing is done by the system.
In addition to configuring a bonded interface in Appliance Manager, you must configure the ports on the switch so that they use either static trunking or 802.3ad dynamic trunking. For more information, refer to the user manual for your switch.
To create a bonded interface, select the following:
Management -> Resources -> Network Interfaces -> Create a bonded interface
The available interfaces are displayed for selection.
When you configure a bonded interface, you specify the following:
Available interfaces | Specifies the interfaces to be used.
Bonding mode | Selects a bonding mode that governs the relation of the subinterfaces to a switch and defines the protocol that is used for assigning network switch ports to a bonded interface:
Your choice depends upon what your switch supports:
Output Load Balancing | Specifies how the server chooses the subinterface on which to send replies:
IP address | Specifies the IP address of the new bonded interface. The IP address for a bonded interface must be configured statically; Appliance Manager does not support DHCP or dedicated IP addresses for bonded interfaces.
Subnet mask | Specifies the subnet mask of the new bonded interface. All configured network interfaces should be on different subnets.
Click Apply Changes to create the bond.
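The rule that all configured interfaces should be on different subnets can be checked mechanically. This sketch uses Python's standard ipaddress module; the addresses are examples only, not factory defaults.

```python
import ipaddress

def on_distinct_subnets(interfaces):
    """interfaces: iterable of (ip, netmask) string pairs.
    Returns True if every interface lies on a different subnet."""
    nets = [ipaddress.ip_network(f"{ip}/{mask}", strict=False)
            for ip, mask in interfaces]
    return len(set(nets)) == len(nets)

# Different /24 subnets: satisfies the guideline.
print(on_distinct_subnets([("192.168.0.3", "255.255.255.0"),
                           ("192.168.1.3", "255.255.255.0")]))  # True
# Two interfaces on the same subnet: violates the guideline.
print(on_distinct_subnets([("192.168.0.3", "255.255.255.0"),
                           ("192.168.0.4", "255.255.255.0")]))  # False
```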
You can use Appliance Manager to configure the following:
XFS filesystems (CIFS/NFS)
CXFS clustered filesystems (license required)
iSCSI targets
XVM filesystem snapshots (license required)
These features are available under the following menu selection:
Management -> Resources -> Storage
The following sections describe these features:
This section describes the following:
For background information about how Appliance Manager works, see Appendix A, “How Appliance Manager Configures Filesystems”.
To display a brief description of the RAID to which Appliance Manager is connected, use the List option:
Management -> Resources -> Storage -> Filesystems -> List
This includes the worldwide name (WWN) of the RAID device and an indication of the RAID status, which will be ONLINE unless a hardware or software failure mode has prevented communication between Appliance Manager and the array firmware (such as if the array is powered down or a cable has been pulled out).
Appliance Manager will list filesystems under the following categories, depending on their current state:
Configured Filesystems | Filesystems created by Appliance Manager and filesystems that are able to be managed by Appliance Manager
Unconfigured Filesystems | Filesystems that are able to be managed by Appliance Manager but are not currently fully configured
Unmanaged Filesystems | Filesystems that are not manageable by Appliance Manager, such as manually created filesystems
The Type field on this screen indicates whether the listing is a filesystem, a snapshot repository, iSCSI storage, or available space.
Note: Unconfigured filesystems and unmanaged filesystems will show an approximate capacity (indicated by the ~ character) if they are not currently mounted.
If you have created a snapshot repository but have not scheduled any snapshots to be taken and stored on that repository, its size will appear as 0 on this display.
To discover unconfigured filesystems, click the Reconfigure Unconfigured Filesystems link on this page. See “Discovering Filesystems”.
Note: To create a filesystem, all the storage arrays chosen to contain the filesystem must be supported by Appliance Manager. For best results, SGI recommends that the arrays be symmetrical with respect to the number of drives and trays installed as well as the type of drives installed -- such as Serial Attached SCSI (SAS), Serial ATA (SATA), or Fibre Channel (FC) -- and the speed/size of the drives.
The Create option steps you through a filesystem creation wizard. The steps that the wizard will take are listed in a box to the left of the screen, with the current step highlighted.
The filesystem creation procedure is mostly automatic. You provide the name, size, and general characteristics of the filesystem to create and Appliance Manager determines the underlying layout of the filesystem on the disk devices. For information on how Appliance Manager calculates the allocation of disk resources, see Appendix A, “How Appliance Manager Configures Filesystems”.
There is a limit to the number of filesystems on a particular array because each filesystem consumes 2 or 3 LUN numbers per tray of disks in the array. The number of filesystems and repositories that you can create therefore depends on the make and model of the storage arrays that are connected: some arrays support up to 254 LUN numbers, but others support only 31 or fewer. For a 4-tray array, the limit is fewer than 30 filesystems, and it can be smaller on larger arrays. SGI recommends that you create as few filesystems as possible, both to save LUN numbers (which can later be used to grow the filesystem) and because the storage subsystem performs better with fewer filesystems configured.
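The LUN arithmetic above can be made concrete with a rough estimate. The figures (a 254-LUN limit, 4 trays, 2 to 3 LUNs per tray per filesystem) come from the text; the helper itself is illustrative, not a product formula.

```python
def max_filesystems(array_lun_limit, trays, luns_per_tray_per_fs=3):
    """Rough upper bound on filesystems per array, assuming each
    filesystem consumes luns_per_tray_per_fs LUNs for each tray."""
    return array_lun_limit // (luns_per_tray_per_fs * trays)

# A 4-tray array with a 254-LUN limit supports roughly 21 to 31
# filesystems, depending on how many LUNs each filesystem consumes.
print(max_filesystems(254, 4, 3))   # 21
print(max_filesystems(254, 4, 2))   # 31
```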
Note: When you create the filesystem, the system detects whether the disk configuration is supported and issues a warning if it is not. You can continue to create the filesystem under these circumstances, but the filesystem will not be an efficient one.
You can grow an XFS filesystem after you have created it, by whatever size you choose. It is most efficient, however, if you create a filesystem that fills the disk array and add additional disks if you need to grow the filesystem, filling those disks when you do.
Perform the following steps to create a filesystem:
Select the Create option:
Management -> Resources -> Storage -> Filesystems -> Create
Appliance Manager searches for the RAID arrays on the system and displays them on the Arrays screen. If you have more than one storage array, a list of arrays will be presented and you can choose on which arrays the filesystem should be created. Selecting more than one array will result in a filesystem that spans the selected arrays. Spanning filesystems across multiple arrays is possible only for external storage arrays (the SGI InfiniteStorage series). Click Next.
The Options screen displays the filesystem configuration options. These are based on the devices that are available to the system and include the following categories:
Drive type | Specifies the drive type: Serial Attached SCSI (SAS), Serial ATA (SATA), or Fibre Channel (FC). You cannot create a filesystem that spans multiple types of disks.
Goal | Specifies the goal of the filesystem optimization. You can select a filesystem optimized for performance or capacity (if appropriate for your system). If you select capacity, Appliance Manager will use all the available disk space to create the filesystem, although this usually comes at the cost of slower performance.
Workload | Selects the workload type. You can select a filesystem optimized for bandwidth or for I/O per second (IOPS). Select Bandwidth when you will have a small set of files and must perform streaming reads and writes as fast as possible. Select IOPS when you will be performing random reads and writes to different sets of files. Normally, IOPS is the better choice. If you are optimizing for IOPS, it is best to build one large filesystem; in general, there is a cost to having multiple filesystems.
Available Space | Displays the available space in gigabytes (GiB, 1024 megabytes).
Click Next.
On the Purpose screen, select whether the filesystem will be a clustered CXFS filesystem or a local XFS filesystem, and whether it requires DMF support. The Purpose screen appears only if Appliance Manager is managing a SAN (CXFS) system or if DMF is installed; depending on the existence of CXFS and DMF, you will be asked whether to create a clustered CXFS filesystem or a local XFS filesystem, with or without DMF support. The DMF filesystem option creates the filesystem with 512-byte inodes and with the dmapi and mtpt mount options required for DMF support. (It does not add the filesystem to the DMF configuration file; you must do this manually later.) For more information about DMF, see “DMF Configuration”.
On the Name & Size screen, enter the following:
Filesystem mount point (must begin with /mnt/ as shown).
Filesystem size in gigabytes. The default filesystem size is the size of a filesystem that will completely fill the disk devices. If you choose less than this maximum size, the filesystem will be divided up among the disks. For example, if you create a filesystem that is 20% of the maximum size, it will be spread out among the first 20% of each disk. If you create a second filesystem that is also 20% of the maximum size, it will be spread out among the second 20% of each disk.
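The proportional layout described above can be sketched as simple arithmetic; the 500 GiB disk size is an assumed example, and the helper is illustrative, not Appliance Manager code.

```python
def extent_on_each_disk(disk_size_gib, fraction, start_fraction=0.0):
    """Return the (start, end) region, in GiB, that a filesystem
    occupying `fraction` of the maximum size uses on each disk."""
    start = disk_size_gib * start_fraction
    return (start, start + disk_size_gib * fraction)

# A filesystem at 20% of the maximum size uses the first 20% of each
# 500 GiB disk; a second 20% filesystem uses the next 20%.
print(extent_on_each_disk(500, 0.20))        # (0.0, 100.0)
print(extent_on_each_disk(500, 0.20, 0.20))  # (100.0, 200.0)
```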
Note: If you plan to use the XVM snapshot feature, ensure that the filesystem capacity entered leaves enough free capacity to create a snapshot repository. For further information, see “XVM Snapshots” in Chapter 1. XVM snapshots are not available on DMF or CXFS filesystems.
Optional snapshot repository size. The size of the repository that you will need depends on several factors:
The size of the filesystem for which you are creating a snapshot. A repository that is approximately 10% of this size is a reasonable starting estimate.
The volatility of the data in the volume. The more of the data that changes, the more room you will need in the repository volume.
The snapshot frequency. (More frequent snapshots result in smaller individual snapshots.)
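The ~10% starting estimate above translates into trivial arithmetic; the change-rate parameter is an assumption you would tune to your data's volatility, and the helper is illustrative only.

```python
def repository_estimate_gib(fs_size_gib, change_fraction=0.10):
    """Rough initial repository size: ~10% of the filesystem size,
    scaled by how much of the data changes between snapshots."""
    return fs_size_gib * change_fraction

print(repository_estimate_gib(1000))        # 100.0 GiB starting point
print(repository_estimate_gib(1000, 0.25))  # 250.0 for more volatile data
```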
Click Next.
The Confirmation screen summarizes the filesystem options you have selected. Click Next to confirm your choices and create the filesystem.
The Create filesystem screen displays a "please wait" message and transitional status during the filesystem creation process. Click Next after the operation is finished and the completion message displays.
The Create repository screen (if you have chosen to create a snapshot repository) displays a "please wait" message and transitional status during the filesystem creation process. Click Next after the operation is finished and the completion message displays.
The NFS and CIFS screen lets you configure the filesystem so that it can be exported with the NFS or CIFS network protocols. (If you NFS export and/or CIFS share a CXFS filesystem, it will be exported/shared only from the CXFS metadata server, not from CXFS clients.) For information, see “NFS Configuration” and “CIFS Configuration”. Click Next.
The Finished screen indicates that the filesystem has been created. Click Done.
Note: You cannot use Appliance Manager to grow a CXFS filesystem. |
You can use a filesystem normally as you grow it. (You do not need to disable access or unmount it, or take any other special actions before growing the filesystem.)
To increase the size of an existing XFS filesystem, do the following:
Select the Grow option:
Management -> Resources -> Storage -> Filesystems -> Grow
The Filesystem screen lists the current filesystems along with their usage and size. Select the filesystem you want to grow and click Next.
The Size screen lets you enter the size in gigabytes by which the filesystem should be grown. Click Next.
The Confirmation screen displays the current size of the filesystem and the amount to grow the filesystem. Click Next.
The Growing screen displays a "please wait" message during the growing process. Click Next after the operation is finished and the completion message displays.
The Finished screen indicates that the larger filesystem is available. Select Done.
To delete a filesystem, do the following:
Select Destroy:
Management -> Resources -> Storage -> Filesystems -> Destroy
This screen displays a list of the existing filesystems.
Select a filesystem from the list. A message indicates that all data on the specified filesystem will be destroyed.
Confirm that you want to destroy the filesystem and select Yes, destroy the filesystem.
On completion, a SUCCEEDED message appears.
To discover lost or unconfigured filesystems, select Discover:
Management -> Resources -> Storage -> Filesystems -> Discover
The disk names of configured filesystems are shown in italics.
To reconfigure an unconfigured filesystem, select its check box from the list of detected volumes and click Configure Selected.
After the discovery process has completed, configuration results are displayed for each filesystem configured. Newly discovered filesystems that were successfully configured are now available for use.
Internet Small Computer Systems Interface (iSCSI) is a protocol that is used to transport SCSI commands across a TCP/IP network. This allows a system to access storage across a network just as if the system were accessing a local physical disk. To a client accessing the iSCSI storage, the storage appears as a disk drive would appear if the storage were local.
In an iSCSI network, the client accessing the storage is called the initiator and runs iSCSI Initiator software. The remote storage that the client accesses is called the target, which is what appears to the initiator as a disk drive.
A common application of an iSCSI network is to configure an Exchange Server as an iSCSI initiator that uses an iSCSI target as its mail store.
Figure 3-3 illustrates iSCSI storage. Each client (initiator) is configured to connect to a specific iSCSI target (an area allocated in the RAID iSCSI storage pool), and views this target as if it were a local disk. The lines in Figure 3-3 indicate data flow.
You can use Appliance Manager to create iSCSI targets on the RAID storage. An iSCSI initiator will be able to connect to the system and access those targets, format them, and use the targets as it would use a disk drive.
You cannot configure Appliance Manager itself as an initiator, and you cannot re-export iSCSI targets with NFS, CIFS, or CXFS. In addition, you cannot export existing filesystems that you have created with Appliance Manager as iSCSI targets; you can create filesystems and configure them to be exported by NFS, CIFS, or CXFS, but you must configure iSCSI targets separately on the RAID device.
Note: Due to the nature of iSCSI as a block-level protocol (as distinct from file-level protocols such as NFS and CIFS), particular care must be taken in the event of a system crash, power failure, or extended network outage. See “Power Outage and iSCSI” in Chapter 5.
This section discusses the following:
You create iSCSI targets with a creation wizard, just as you create filesystems.
Perform the following steps to create an iSCSI target:
Select the Create Target option:
Management -> Resources -> Storage -> iSCSI -> Create Target
If this is the first target, the system will display a message indicating that you must create the iSCSI storage pool before you can create a target.
Note: Although you can grow this storage pool at a later time when you create additional targets, SGI recommends that you create a storage pool that is large enough to contain all of the targets that you will need. Creating the iSCSI storage pool can be a slow process, but once you have created the pool, creating the targets themselves is a fast process.
If you have previously created iSCSI storage, you can grow the storage at this time; in this case, the screen displays how much storage you have available.
To create or grow iSCSI storage, click Next and proceed to step 3. If you do not need to create or grow iSCSI storage, select Skip this step and proceed to step 8.
Appliance Manager searches for the RAID arrays on the system and displays them on the Arrays screen. Click Next.
The Options screen displays the iSCSI storage configuration options. For information, see “Creating Filesystems”.
In the Size screen, enter the size in gigabytes for the iSCSI storage pool. Click Next.
The Confirmation screen summarizes the options you have selected. Click Next to confirm your choices and create the pool.
The Creating screen displays a "please wait" message during the target creation process. Click Next after the operation is finished and the completion message displays.
The Target Name screen lets you specify the target information. Enter the domain and optional identifier for the iSCSI name and the size of the target in the following fields:
Domain | Specifies an iSCSI qualified name, which is a unique name that starts with iqn, followed by a year and month and then an internet domain name in reverse order. A default name appears based on the current system configuration. If in doubt, leave this field as is.
Identifier | Specifies a string that will be used to uniquely identify the target. If you create only one target, this is optional. If you create more than one target, each must have a unique identifier. By default, a unique target identifier is provided for you.
Target Size (GiB) | Specifies the size of the target in gigabytes (GiB).
Click Next.
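The Domain and Identifier fields combine into an iSCSI qualified name of the form defined by the iSCSI standard: iqn., a year-month, the reversed domain, and an optional suffix. This sketch shows the construction; the domain and identifier are made-up examples, and the helper is not part of Appliance Manager.

```python
def make_iqn(domain, year, month, identifier=None):
    """Build an iSCSI qualified name: iqn.YYYY-MM.reversed.domain,
    optionally followed by :identifier."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    iqn = f"iqn.{year:04d}-{month:02d}.{reversed_domain}"
    return f"{iqn}:{identifier}" if identifier else iqn

print(make_iqn("acme.com", 2006, 7, "mailstore"))
# iqn.2006-07.com.acme:mailstore
```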
The Target Options screen defines access to the target. You must specify at least one authentication option:
Note: If more than one initiator writes to the same target at the same time, there is a high risk of data loss. By using one or more authentication options, you ensure that only one client (initiator) can access an individual target at a time.
Authentication:
Initiator IP Address | Specifies the IP addresses of the initiators that will be allowed access to this target |
Challenge Handshake Authentication Protocol (CHAP) authentication, in which the initiator will supply the following information to the target:
Target Username | Specifies the username that the initiator must supply to connect to the target using CHAP authentication. (This is not the username with which you logged in to Appliance Manager; it is specific to the iSCSI target that you are defining.)
Target CHAP Secret | Specifies the password that the initiator must supply to connect to the target using CHAP authentication. It must be from 12 through 16 characters long. (This is not the password with which you logged in to Appliance Manager; it is specific to the iSCSI target you are defining.)
Re-enter Target CHAP Secret | Verifies the CHAP secret.
Mutual CHAP authentication, in which the target will supply the following information to the initiator:
Mutual Username | Specifies the target username for mutual CHAP authentication. With mutual CHAP authentication, after the initiator supplies a username, the target must supply a username and password back to the initiator. If you leave the Mutual Username field blank, it defaults to the target username. The mutual name is usually ignored by initiators, which care only about the mutual secret. When the client connects to a target, the iSCSI initiator software verifies that the mutual secret specified in Appliance Manager matches the secret specified in the initiator.
Mutual CHAP Secret | Specifies the mutual CHAP secret.
Re-enter Mutual CHAP Secret | Verifies the mutual CHAP secret.
You must enter the CHAP username and secret specified on this screen in the iSCSI initiator software on the client in order for the initiator to be able to authenticate with and connect to the target. For a Windows client, this is the username and secret you enter in Microsoft's iSCSI Initiator program.
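A client-side sanity check of the CHAP secret rule above (12 through 16 characters) is straightforward; valid_chap_secret is a hypothetical helper for illustration, not part of any initiator software.

```python
def valid_chap_secret(secret):
    """Return True if the secret satisfies the 12-16 character rule."""
    return 12 <= len(secret) <= 16

print(valid_chap_secret("s3cretpassw0rd"))   # True  (14 characters)
print(valid_chap_secret("tooshort"))         # False (8 characters)
print(valid_chap_secret("x" * 17))           # False (too long)
```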
The Confirm screen summarizes the target options you have selected. Click Next to confirm your choices and create the iSCSI target.
The Finished screen indicates that the iSCSI target has been created. Select Done.
After you have created iSCSI targets, select the following to see what initiators are connected to what targets:
Monitoring -> Clients -> iSCSI
Appliance Manager lets you configure iSCSI targets for use by an iSCSI initiator, such as the Microsoft iSCSI Software Initiator or the iSCSI initiator included with various Linux and UNIX distributions.
After you have created an iSCSI target, you must configure the initiator on the client system that will connect to the target. You must specify the following:
Hostname of the storage server
Target identifier
Any CHAP authentication details you configured when creating the target (for specific instructions, see the documentation supplied with your iSCSI initiator)
After the iSCSI initiator has connected to the target, the target will appear as a disk drive on the client system and can then be formatted using the tools supplied with the client operating system.
The following is an example of configuring a Windows client (it assumes that you have already created a target or targets):
Download the iSCSI Initiator from Microsoft's web site (http://www.microsoft.com/) and install it on the Windows client.
Open the iSCSI Initiator Control Panel applet.
Add the storage server to the list of Target Portals.
Select the iSCSI target to connect to from the Targets list and click Log On.
Specify CHAP authentication details in the Advanced settings.
Use the following tool to partition and format the target and assign a drive letter:
Start Menu -> Administrative Tools -> Computer Management -> Disk Management
The iSCSI menu also provides the following management options:
List Targets | Lists the existing iSCSI targets.
Modify Target | Modifies the authentication settings you defined on the Target Options screen when you created an iSCSI target.
Destroy Target | Destroys an individual iSCSI target.
Destroy Storage Pool | Destroys the iSCSI storage pool on the RAID device and all existing targets.
Stop/Start | Stops or starts the iSCSI service. If you are backing up the system, taking iSCSI services offline ensures that the data is in a consistent state.
This section discusses the following:
To schedule how often the system will create a snapshot of a filesystem, do the following:
Select the Schedule Snapshots menu:
Management -> Resources -> Storage -> Snapshots -> Schedule Snapshots
Select the filesystem for which you want to schedule snapshots.
Specify the following options:
Scheduled? | Specifies that a snapshot will take place for the filesystem.
Scheduled Snapshot Times | Specifies the hours at which a snapshot should take place. You can select multiple boxes.
Custom Time Specification | Specifies the times and frequency at which a snapshot should take place (the minimum interval is 30 minutes). You can specify this value using one of the following forms:
Maximum number of snapshots | Specifies the maximum number of snapshots that will be retained in the repository before the oldest snapshot is deleted when a new snapshot is taken. By default, the system retains 32 snapshots; the maximum number is 256. SGI recommends that you use the default.
Click Schedule snapshots to apply your settings.
Verify that you want to update the snapshot schedule by clicking Yes. (To return to the previous screen, click No.)
Note: The system will delete the oldest snapshot if it determines that repository space is running low.
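The retention behavior described above (keep at most N snapshots, dropping the oldest when a new one is taken) can be sketched as a simple rotation; retain is an illustrative model, not product code.

```python
def retain(snapshots, new_snapshot, maximum=32):
    """Append a snapshot and drop the oldest if the count would
    exceed the maximum (32 by default, as in Appliance Manager)."""
    snapshots = snapshots + [new_snapshot]
    if len(snapshots) > maximum:
        snapshots = snapshots[1:]   # the oldest entry is deleted
    return snapshots

names = []
for hour in range(40):               # take 40 snapshots with a limit of 32
    names = retain(names, f"snap_{hour:02d}")
print(len(names), names[0], names[-1])   # 32 snap_08 snap_39
```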
Snapshots are made available in the /SNAPSHOTS directory of the base filesystem. They are named according to the date and time at which they are taken. For example, a snapshot might be named as follows:
/mnt/data/SNAPSHOTS/2006_07_30_113557_Sun
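The snapshot directory name above follows a date/time/day pattern that can be reproduced with a strftime format string. The format is inferred from the single example shown, so treat it as an assumption rather than a documented naming API.

```python
import datetime

def snapshot_name(ts):
    """Format a timestamp the way the example snapshot is named."""
    return ts.strftime("%Y_%m_%d_%H%M%S_%a")

print(snapshot_name(datetime.datetime(2006, 7, 30, 11, 35, 57)))
# 2006_07_30_113557_Sun
```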
Windows clients can access snapshots using the Windows Shadow Copy Client. This feature allows a Windows client to right-click a file or directory, select Properties, and access previous snapshot versions of the file. Windows 2000 and Windows XP users should download and install the ShadowCopyClient.msi installer, which is discussed at:
http://support.microsoft.com/kb/832217
Users with Windows 2003, Windows Vista, or later will already have this software installed on their systems.
To take a snapshot, do the following:
Select the Take Snapshot menu:
Management -> Resources -> Storage -> Snapshots -> Take Snapshot
Click on the filesystem name.
Confirm that you want to take the snapshot.
To display whether or not snapshots have been enabled for a given filesystem and the number currently available, select the List Snapshots menu:
Management -> Resources -> Storage -> Snapshots -> List Snapshots
To list all of the snapshots for a given filesystem, click on the filesystem name.
The DMF Resources screens let you do the following:
Stop/start DMF and tape daemons
Enable/disable tape drives
Import/export volumes from an OpenVault library (but not the Tape Migration Facility, TMF)
Empty a lost or damaged DMF tape
Alter DMF configuration parameters
Audit the databases
This section discusses the following:
Appliance Manager supports most common DMF configurations. There are some limitations to this support, however. Specifically, the following are assumed to be true:
The OpenVault mounting service is preferred. Ejection and injection of tape volumes from and into a tape library is disabled if TMF is in use, but the other functions are supported for both OpenVault and TMF.
All tapes that are ejected and injected using Appliance Manager are for use by a DMF volume group or allocation group. Other tapes may reside in the library, but they cannot be managed by Appliance Manager.
Each DMF library server manages only a single tape library. Appliance Manager refers to the library by using the name of the library server. Use of more than one tape library is not supported.
Each DMF drive group is associated with an OpenVault drive group or a TMF device group of the same name.
The Empty Tape Volume screen uses the herr, hvfy, and hlock DMF database flags to record the progress of the emptying procedure. If you use the dmvoladm(8) command to inspect the database entry for a tape while it is being emptied, you may see unexpected settings of these flags. Appliance Manager's use of these flags does not interfere with DMF's.
Appliance Manager does not make any use of the VOL database flags reserved for site use, although the Import and Export screens do allow you to manipulate them.
The Empty Tape Volume screen's Empty Volume, Remove Volume, and Reuse Volume options cannot remove soft-deleted files from a tape volume, unlike the Merge Volume button. You must wait until they have been hard-deleted by the scheduled run_hard_deletes.sh task or by the dmhdelete(8) command.
Also, these three buttons may need access to the output file from the previous run of the scheduled run_filesystem_scan.sh task or the dmscanfs(8) command. If it cannot be found or is older than the files remaining on the tape, some files may be misreported in the Alerts screen as soft-deleted and remain on the tape as described above. Trying again after the next run of run_filesystem_scan.sh is likely to succeed in this case.
For more information, see the dmemptytape(8) man page.[5]
You can use the DMF Configuration screens to inspect and modify various DMF parameters.
For initial configuration of DMF, use the Edit link:
Management -> Resources -> DMF -> Configuration -> Edit
This link allows you to directly modify the configuration file or import another configuration file.
Caution: You must ensure that the changes you make are safe. For more information, see the dmf.conf(5) man page and the DMF 4 Administrator's Guide for SGI InfiniteStorage. |
The Check link allows you to perform syntax and sanity checks on the current configuration of DMF:
Management -> Resources -> DMF -> Configuration -> Check
SGI recommends that you use the Check link after making any modification to ensure that the changes are safe.
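If you have shell access to the server, the same sanity check can also be run from the command line; this sketch assumes the standard DMF tool set is installed:

```
# Runs syntax and sanity checks against the DMF configuration file
# (/etc/dmf/dmf.conf) and reports any problems found; assumes DMF's
# dmcheck(8) is on the PATH.
dmcheck
```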
The Global link displays parameters for all of DMF:
Management -> Resources -> DMF -> Configuration -> Global
If you click Switch to Expert Mode on the Global page, Appliance Manager presents more parameters. You should use expert mode with care. To return to normal mode, click Switch to Normal Mode. Excluded from both modes are parameters that are:
Deprecated
Specific to the Resource Scheduler or Resource Watcher stanzas
To work around these restrictions, the Edit link allows you to edit the DMF configuration file directly.
The other links provide quick access to commonly altered parameters of already-configured features. You should make changes with care. Parameters that can be dangerous to change are displayed but may not be altered; this includes those parameters that control the search order of volume groups and media-specific processes (MSPs) when recalling files.
Note: On the DMF Configuration screens, disk sizes use multipliers that are powers of 1000, such as kB, MB, and GB. This is for consistency with the DMF documentation and log files. However, the rest of Appliance Manager, including the DMF Monitoring screens, use multipliers that are powers of 1024, such as kiB, MiB, and GiB. |
Appliance Manager lets you configure local users, local groups, and user and group quotas:
Appliance Manager can create and add local user and group accounts to access the storage server locally. This is a local database only; these users and groups do not interact with the users and groups provided by the name server. If you search the site directory and do not find the user or group data you are looking for, the system searches this local database. The local user accounts will be used for authentication for CIFS shares if you are not using LDAP or Active Directory authentication.
Caution: If you create a local user and subsequently add that user in the sitewide directory, access problems may result. For example, if you create local user Fred with a UID of 26, Fred will be able to create local files. But if you subsequently add a user Fred on a sitewide name services directory with a different UID, user Fred will be unable to access those local files because the system will use the sitewide name and UID first. |
If you are using LDAP or Active Directory as a name service client, a user must be present in LDAP or Active Directory; you will not be able to authenticate local users and groups. In this case, adding local users and groups may be useful for ID mapping, but authentication does not use the local password files.
When you select the Import option for either Local Users or Local Groups, you can choose among the following actions:
Add the new users and groups. If there is an existing user or group with one of the names you are adding, keep the existing user or group.
Add the new users and groups. If there is an existing user or group with one of the names you are adding, replace the existing user or group with the new user or group.
Replace all current unrestricted users or groups with the new users or groups.
Accounts with a UID or GID of less than 1000 are considered restricted and are not imported or replaced.
If you use a shadow file (a file that stores the encrypted passwords and is protected from all access by non-root users), you can use the Import Users screen to import this file as well as the password file itself.
Appliance Manager will create new filesystems with both user and group quotas enabled by default.
This section discusses the following:
You can use the following screen to specify the user for whom you want to modify quotas:
Management -> Resources -> Users & Groups -> User Quotas
Enter the name of the user and click Submit. (To modify the default for user quotas, leave the field blank.) The following screen displays the current amount of disk space that can be used (disk limits, in KiB) and the number of files that can be owned (file limits):
The soft limit is the number of 1-KiB blocks or the number of files that the user is expected to remain below. If a user hits the soft limit, a grace period of 7 days will begin. If the user still exceeds the soft limit after the grace period expires, the user will not be able to write to that filesystem until he or she removes files in order to reduce usage.
The hard limit is the number of 1-KiB blocks or the number of files that the user cannot exceed. If a user's usage reaches the hard limit, he or she will be immediately unable to write any more data.
Note: The administrator can set quotas for the root user. However, instead of enforcing these quotas against the root user specifically, they will apply to all users that do not have their own quotas set. In other words, setting quotas for the root user will set the default quotas for all normal users and groups. (The actual root user is exempt from quota limits.) |
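Because disk limits are expressed in 1-KiB blocks, converting a limit stated in MiB is a simple multiplication. The sketch below shows the arithmetic, with a commented-out xfs_quota(8) invocation as a hypothetical command-line equivalent (the user name and sizes are illustrative only):

```shell
# A 500 MiB soft limit and a 600 MiB hard limit expressed as 1-KiB blocks
soft_blocks=$((500 * 1024))
hard_blocks=$((600 * 1024))
echo "soft=$soft_blocks hard=$hard_blocks"   # soft=512000 hard=614400
# On an XFS filesystem the same limits could be set directly with, e.g.:
#   xfs_quota -x -c 'limit bsoft=500m bhard=600m fred' /mnt/data
```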
You can use the following screen to specify the group for which you want to modify quotas:
Management -> Resources -> Users & Groups -> Group Quotas
Enter the name of the group and click Submit. (To modify the default for group quotas, leave the field blank.) The following screen displays the current amount of disk space that can be used (disk limits, in KiB) and the number of files that can be owned (file limits):
The soft limit is the number of 1-KiB blocks or the number of files that the group is expected to remain below. If any user in that group hits the soft limit, a grace period of 7 days will begin. If the user still exceeds the soft limit after the grace period expires, the user will not be able to write to that filesystem until he or she removes files in order to reduce usage.
The hard limit is the number of 1-KiB blocks or the number of files that the group cannot exceed. If the usage for a user in that group reaches the hard limit, he or she will be immediately unable to write any more data.
Note: The administrator can set quotas for the root group. However, instead of enforcing these quotas against the root group specifically, they will apply to all groups that do not have their own quotas set. In other words, setting quotas for the root group will set the default quotas for all normal groups. (The actual root user is exempt from quota limits.) |
If you want to apply quotas to filesystems created with earlier versions of Appliance Manager, do the following:
Use the ssh command to log in to the system.
Edit the /etc/fstab file.
For example, suppose you originally have the following:
/dev/lxvm/data /mnt/data xfs rw,logbufs=8,logbsize=64K 0 0 |
You would change it to the following:
/dev/lxvm/data /mnt/data xfs rw,uquota,gquota,logbufs=8,logbsize=64K 0 0 |
Reboot the system to apply your changes.
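The edit can also be scripted. This sketch applies the same change to a copy of the line, so you can verify the result before touching the real /etc/fstab:

```shell
# Insert the uquota,gquota mount options after "rw," in an fstab line;
# this operates on a string copy rather than /etc/fstab itself.
line='/dev/lxvm/data /mnt/data xfs rw,logbufs=8,logbsize=64K 0 0'
new=$(printf '%s\n' "$line" | sed 's/ rw,/ rw,uquota,gquota,/')
echo "$new"
```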
To configure filesystems so that they are available for network clients by means of the NFS network protocol, select the following:
Management -> Services -> NFS
This screen displays a link for Global Options and all of the filesystems that have been created with Appliance Manager, whether or not they have been enabled for export.
To specify NFSv4 options, select Global Options. To change the export options, select an individual filesystem name or All Filesystems.
Note: Reverse lookup for NFS clients must be properly configured in the DNS server. |
The Global Options screen lets you specify the following:
Enable NFSv4 | Specifies whether NFSv4 is enabled (checked) or not. If enabled, an NFS exported filesystem will be accessible via both NFSv3 and NFSv4. The following fields are only relevant if you have enabled NFSv4. | |||
NFS serving domain | Specifies the serving domain. If NFSv4 is enabled, the mapping of user/group IDs between the client and server requires both to belong to the same NFS serving domain. | |||
Enable Kerberos | Specifies whether Kerberos is enabled (checked) or not. Enabling Kerberos forces encrypted authentication between the NFS client and server. Furthermore, the NFS exported filesystems will only be accessible to a Kerberos-enabled client via NFSv4. The following fields are only relevant if you have enabled Kerberos.
| |||
Realm | Specifies the Kerberos realm in which the NFSv4 server operates. | |||
Domain | Specifies the DNS domain name that corresponds to the realm. | |||
KDC | Specifies the key distribution center (KDC). In most cases, the KDC will be the same system as the Kerberos admin server. However, if the admin server in your Kerberos environment is not used for granting tickets, then set the KDC to the system that grants tickets. | |||
Admin Server | Specifies the server containing the master copy of the realm database. | |||
Keep Existing Keytab | Select this radio button to keep the existing keytab without changes. | |||
Update Keytab | Select this radio button to change the principal user and password for the existing keytab. | |||
Principal | Specifies a user that belongs to the Kerberos server with sufficient privileges to generate a keytab for the NFS server. | |||
Password | Specifies the principal's password. | |||
Upload Keytab | Copies the selected file to /etc/krb5.keytab on the NFS server. Click Browse to see a list of available files. | |||
Verify Keytab | Specifies that the keytab should be verified. This is not supported by Active Directory. |
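From the client side, a Kerberos-protected export is typically mounted over NFSv4 with the sec=krb5 option. A hypothetical example (the server name and mount point are placeholders):

```
# Mount the server's NFSv4 root with Kerberos authentication;
# sec=krb5i could be substituted for integrity protection.
mount -t nfs4 -o sec=krb5 server.example.com:/ /mnt/data
```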
You can choose to export or not export a filesystem by clicking the check box. When you enable a filesystem for export, you can do one of the following:
After specifying the configuration parameters, click Apply changes.
If you select Use export options, you must specify the following:
Read-only | Specifies that the client has access to the filesystem but cannot modify files or create new files. | |||||||
Asynchronous writes | Specifies whether or not to use asynchronous writes. Data that is written by the client can be buffered on the server before it is written to disk. This allows the client to continue to do other work as the server continues to write the data to the disk. By default, writes are performed synchronously, which ensures that activity on the client is suspended when a write occurs until all outstanding data has been safely stored onto stable storage. | |||||||
Allow access from unprivileged ports | Allows access for Mac OS X clients or other NFS clients that initiate mounts from port numbers greater than 1024. If there are no such clients on your network, leave this option unchecked. | |||||||
All hosts | Allows connections from anywhere on a network. | |||||||
Local subnet | Allows connections from the indicated subnet. You can select any subnet from those that have been defined for the network interfaces. | |||||||
Kerberos aware clients (krb5) | Allows connections only from those systems that are Kerberos aware (if Kerberos is enabled in “Global Options”) over NFSv4. | |||||||
Kerberos with Integrity support aware clients (krb5i) | Allows connections only from those systems that are Kerberos with Integrity support aware (if Kerberos is enabled in “Global Options”) over NFSv4.
Restrict to hosts | Specifies the set of hosts that are permitted to access the NFS filesystem. You can specify the hosts by hostname or IP address; separate values with a space or tab. For example, you could restrict access to only the hosts on a Class C subnet by specifying something like the following:
To allow hosts of IP address 150.203.5.* and myhost.mynet.edu.au, specify the following:
You can also specify hosts by network/subnet mask pairs and by netgroup names if the system supports netgroups. To allow hosts that match the network/subnet mask of 150.203.15.0/255.255.255.0 , you would specify the following:
To allow two hosts, hostA and hostB, specify the following:
|
If you select Use custom definition, you can enter any NFS export options that are supported in the Linux /etc/exports file.
For example, the following entry gives 192.168.10.1 read-write access, but read-only access to all other IP addresses:
192.168.10.1(rw) *(ro) |
Note: There cannot be a space between the IP address and the export option. |
For information on the /etc/exports file, see the exports(5) man page. [6]
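As a fuller illustration of the custom-definition syntax, any option from exports(5) can appear in the parentheses. For example, the custom definition above could be extended with explicit write behavior (the address is a placeholder):

```
# Read-write with synchronous writes for one client, read-only for all
# others; note again that there is no space between a host and its options.
192.168.10.1(rw,sync) *(ro,sync)
```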
To configure filesystems so that they are available for network clients by means of the CIFS network protocol, select the following:
Management -> Services -> CIFS
All of the filesystems created with Appliance Manager are displayed on this screen, whether or not they have been enabled for sharing. To share a filesystem, select it and click the Shared? box.
Specify the following Share Options:
Share name | Specifies the name under which the filesystem will appear to a Windows client, as displayed in its Network Neighborhood. | ||||||||
Comment | Specifies an arbitrary string to describe the share. | ||||||||
Read-only | Specifies that the client has access to the filesystem but cannot modify files or create new files. | ||||||||
Allow guest users | Specifies that users can gain access to the CIFS filesystem without authenticating. Uncheck this option to allow connections only to valid users. By default, the CIFS protocol requires a password for authentication. If you are configured as an Active Directory client, then the authentication is distributed. See “Active Directory”. | ||||||||
Always synchronize writes | Ensures that write activity on the client is suspended when a write occurs until all outstanding data has been safely stored onto stable storage. If you do not check this box, data that is written by the client can be buffered on the server before it is written to disk. This allows the client to continue to do other writing as the server continues to write the data to the disk. This is the faster write option and is recommended. | ||||||||
Allow symbolic linking outside of the share | Specifies that symbolic links made by NFS users that point outside of the Samba share will be followed.
| ||||||||
All hosts | Allows connections from anywhere on a network. | ||||||||
Local subnets | Allows connections from the indicated subnet. You can select one subnet in this field and you must choose it from the available interfaces as set in the Network Interfaces screen. | ||||||||
Restrict to hosts | Specifies the set of hosts that are permitted to access the CIFS filesystem. You can specify the hosts by name or IP number; separate values by a space or tab. For example, you could restrict access to only the hosts on a Class C subnet by specifying something like the following:
To allow hosts of IP address 150.203.5.* and myhost.mynet.edu.au, specify the following:
You can also specify hosts by network/subnet mask pairs and by netgroup names if the system supports netgroups. You can use the EXCEPT keyword to limit a wildcard list. For example, to allow all IP address in 150.203.*.* except one address (150.203.6.66), you would specify the following:
To allow hosts that match the network/subnet mask of 150.203.15.0/255.255.255.0 , you would specify the following:
To allow two hosts, hostA and hostB , specify the following:
|
After specifying the configuration parameters, select Apply changes.
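The host patterns in the Restrict to hosts field follow Samba's hosts allow syntax. Expressed as smb.conf entries, the cases described above would look roughly like this (all values are illustrative):

```
# A Class C subnet plus one named host:
hosts allow = 150.203.5. myhost.mynet.edu.au
# A wildcard list limited with EXCEPT:
hosts allow = 150.203. EXCEPT 150.203.6.66
# A network/subnet mask pair:
hosts allow = 150.203.15.0/255.255.255.0
# Two specific hosts:
hosts allow = hostA, hostB
```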
To manage a CXFS cluster, select the following:
Management -> Services -> CXFS
This lets you choose the following options:
Cluster Nodes | Adds, enables, disables, and deletes client-only nodes and displays node status. To add a client-only node, you must specify the node's hostname, CXFS private network IP address, and operating system:
For the specific operating system release levels supported, see the CXFS release notes. When you add a new node, it is automatically enabled and able to mount all CXFS filesystems. However, if you had to install software on the client, you must first reboot it. For example, for a Linux client:
| ||||||||||
Switches | Displays Fibre Channel switches. To fence/unfence ports on a switch, select the switch's IP address then select the ports to fence/unfence. | ||||||||||
Stop/Start | Displays the status of CXFS cluster daemon and lets you start, restart, or stop all of the CXFS daemons. | ||||||||||
Client Packages | Provides access to CXFS client packages for each client platform, which may be downloaded to the clients via Appliance Manager. |
To create a CXFS filesystem, see “Creating Filesystems”.
The storage server administered by Appliance Manager acts as a Network Data Management Protocol (NDMP) server; that is, it processes requests sent to it from a data management application (DMA) in order to transfer data to/from a remote NDMP tape/data server.
In order to perform backups of user data on the storage server using NDMP, you will need a DMA (such as Legato Networker) and a separate NDMP tape server.
The NDMP configuration screen in Appliance Manager allows you to configure your system such that it will communicate with your DMA and your NDMP tape server. For information on initiating backup/restore operations, refer to the documentation that came with your DMA software.
To administer NDMP for backups, select the following:
Management -> Services -> NDMP
The NDMP screen lets you configure the following parameters:
Protocol | Specifies the NDMP version. (Protocol version 4 is the default. Protocol version 3 is provided for backward compatibility. If in doubt, use version 4.) | |
New Sessions | Specifies whether new NDMP sessions are allowed or disallowed, which lets you stop backup clients from connecting to the NDMP server or allow the connection. With Allowed, authorized backup clients may connect and initiate backup sessions. With Disallowed, no new client sessions may be established (existing sessions will not be affected). | |
Interfaces | Specifies the individual interfaces where the NDMP server will listen for connections. To use all interfaces, leave all interfaces unselected. |
Authorized Clients | Specifies the IP address of those clients that are authorized to access NDMP. If you want all clients to have access, leave this field blank. | |
Username | Specifies the username that NDMP clients will use to establish sessions with the NDMP server. | |
New Password | Sets the password for the username. | |
Confirm New Password | Confirms the password for the username. |
Note: When performing a full filesystem backup (as opposed to an incremental backup), the quota and mkfs information will be backed up into a tar file in the root directory of the backup. The file will be named:
For example, the following file was backed up on August 6th 2007 at 2:45 PM:
This file will be placed in the root directory of the filesystem if it is restored. However, the quotas and mkfs options will not be applied on restoration; the administrator may choose to apply them if desired. |
Appliance Manager lets you configure basic SNMP monitoring support on your storage server. In order to query the SNMP service and receive SNMP traps, you will require an external management station with appropriately configured monitoring software.
To configure the SNMP service, select the following:
Management -> Services -> SNMP
The SNMP screen lets you configure the following parameters:
Enable SNMP | Enables or disables the SNMP service. | |
Allow SNMP access from | Specifies the IP address of the Network Monitoring Station (NMS) or the network segment that is allowed to access the SNMP service. | |
Trap destination | Specifies the IP address of your NMS for receiving default SNMP traps and RAID hardware traps for supported storage subsystems. | |
Community string | Specifies the SNMP community string to use when sending SNMP traps and when querying the SNMP service. The default is public. | |
System name | Specifies the system name. This field is automatically set by Appliance Manager to the hostname of the server. However, you may change this to something more appropriate to your environment. | |
System location | Specifies the physical location of the storage server (optional). | |
System contact | Specifies the contact details (such as the name and email address) of one or more persons responsible for administration of the server (optional). | |
System description | Provides additional descriptive information for identifying the server (optional). |
The following option will enable the RAID management software to emit SNMP traps for RAID hardware events:
Enable hardware-level SNMP traps | Enables SNMP traps for hardware monitoring events. |
For Altix XE systems, the following options allow configuration of the network interface on the IPMI device:
IP address | Specifies the IPMI network interface IP address. | |
Subnet mask | Specifies the IPMI network interface subnet mask. | |
Gateway address | Specifies the IPMI network interface gateway address. |
After applying your configuration changes to the SNMP service, you should receive start/stop SNMP v2 traps notifying you that the SNMP service has been restarted.
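Once the service is enabled, you can confirm that it responds from your management station using the net-snmp tools; a sketch (the hostname and community string are placeholders):

```
# Walk the standard "system" subtree, which includes the system name,
# location, and contact configured on this screen.
snmpwalk -v2c -c public storageserver.example.com system
```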
The following sections describe the following aspects of system administration that you can perform with Appliance Manager:
Use the System Name screen to set the following system components:
System name | Specifies the fully qualified domain name (FQDN) for this storage server. The default hostname is sgiserver. (You cannot change the default hostname for a SAN Server.)
| |||
Workgroup | Specifies the NetBIOS workgroup to which the machine should belong. The default is WORKGROUP. If you are not using CIFS, you can ignore this setting. | |||
Default network gateway | Specifies the IP address of the router that this system should use to communicate with machines that are outside of its subnet. | |||
Management IP address | Specifies the IP address of the management interface. | |||
Subnet mask | Specifies the subnet mask of the management interface. | |||
Use DHCP | Specifies whether or not to use dynamic host configuration protocol (DHCP). |
You can also use the Network Interfaces screen for eth0 to configure or modify the management interface. For information on these options, see “Ethernet Network Interfaces”.
The Name Service Client screen lets you specify a name service (or directory service) for the system. A name service is the application that manages the information associated with the network users. For example, it maps user names with user IDs and group names with group IDs. It allows for centralized administration of these management tasks.
You can specify whether you are using local files (if you have no sitewide protocol and names and IDs are kept locally on the server), Active Directory services, lightweight directory access protocol (LDAP), or the sitewide network information service (NIS).
Note: When specifying servers on the Name Service Client screen, you must use IP addresses rather than hostnames, because the system may require a name service client to determine the IP address from the hostname. |
The Local Files Only selection specifies that an external name server will not be used. All user and group name to ID mapping will be done using local users and groups. See “Local Users and Groups”.
Active Directory is a directory service that implements LDAP in a Windows environment. It provides a hierarchical structure for organizing access to data. CIFS authentication will automatically use the Active Directory service.
Note: The Active Directory section is disabled if there are no Active Directory DNS servers specified. See “DNS and Hostnames”. |
The following Active Directory components appear on the Name Service Client screen:
Active Directory domain | Specifies the full domain name of the Active Directory.
| |||||
Domain Controller | Specifies a domain controller. | |||||
Administrative user | Specifies the user with administrator privileges. | |||||
Allow this user to remotely manage CIFS share permissions | Specifies whether or not the Administrative user specified will be able to use the Windows MMC Computer Management GUI to manipulate CIFS share permissions remotely when you join the Active Directory domain. | |||||
Password | Specifies the password for the administrator user. For security reasons, the Active Directory password cannot contain the following characters:
| |||||
Re-enter password | Verifies the password for the administrator user. | |||||
UID/GID Mapping | Lets you manage UNIX user ID (UID) and group ID (GID) mapping on the Active Directory server, using one of the following:
|
Caution: Depending on your environment, making changes to the UID/GID mapping may result in ownership changes of user files. |
Lightweight directory access protocol (LDAP) is a networking protocol that organizes access to data in a directory tree structure. Each entry in the tree has a unique identifier called the distinguished name.
The default LDAP server IP address is the local host. You will probably need to specify a different IP address.
Specify the following fields:
LDAP server | Specifies the IP address of the LDAP server. | ||
Base | Specifies the distinguished name of the base of the subtree you will be searching. | ||
Root binddn | Specifies the distinguished name of the user to whom you are assigning root privileges for administration. This is expressed as a node in the directory tree that refers to a user account. | ||
Password | Specifies the password that will be required to authenticate against the LDAP server. For security reasons, the LDAP password cannot contain the following characters:
| ||
Re-enter password | Verifies the password that will be required to authenticate against the LDAP server. |
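Before applying the settings, it can be worth confirming the server, base, and root binddn from a shell with OpenLDAP's ldapsearch; a hypothetical example (all values are placeholders):

```
# Bind as the root binddn (-W prompts for the password) and search the
# base subtree for POSIX user accounts.
ldapsearch -x -H ldap://192.168.0.10 \
    -D 'cn=Manager,dc=example,dc=com' -W \
    -b 'dc=example,dc=com' '(objectClass=posixAccount)'
```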
To use LDAP for CIFS authentication, you must configure the LDAP server to use the RFC2307bis or NIS schema to supply POSIX account information. In addition, you must add a Samba schema to the LDAP database. These schemas specify how the user and group data is organized in the database. The database must be organized using these particular schemas so that the CIFS authentication mechanism is able to extract the data it needs.
For a description of how to add the Samba schema to a Fedora Directory Server, see:
http://directory.fedora.redhat.com/wiki/Howto:Samba |
For a description of how to add the samba schema to an OpenLDAP Server, see:
http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/passdb.html#id327194 |
The following website provides another description of an OpenLDAP configuration:
http://www.unav.es/cti/ldap-smb/ldap-smb-3-howto.html |
For other LDAP servers (such as the Sun Directory Server, Novell's eDirectory, and IBM's Tivoli Directory Server), the above information may be useful; however, refer to the relevant documentation for your server product for more information.
Network information service (NIS) is a network lookup service that provides a centralized database of information about the network to systems participating in the service. The NIS database is fully replicated on selected systems and can be queried by participating systems on an as-needed basis. Maintenance of the database is performed on a central system.
Specify the following:
Domain name | Specifies the NIS domain name for this system. | |
NIS server IP address | Specifies the IP address of the NIS server. If the NIS server is on the same subnet as Appliance Manager, Appliance Manager finds the NIS server IP address and provides it as a default. If you are not on the same subnet, you must enter the address in this field. |
Click Apply changes. You will then be presented with a confirmation screen that allows you to verify whether or not you want to commit the changes.
You can use the DNS and Hostnames screen to specify how to map hostnames to IP addresses for the system. Click Edit local hosts table to access the Hosts screen, where you can edit the /etc/hosts file that contains local mappings or import the contents of a file you specify. For information on the /etc/hosts file, see the hosts(5) man page. [8]
You can also specify the DNS servers to map hostnames to IP addresses and to resolve hostnames that are incomplete.
Domain Search |
Specifies the domain name or names that the system uses to complete hostnames when providing hostname-to-IP-address translation. If you have multiple domains, list them in the order you want to use for lookup; this establishes the lookup priority in cases where two machines have the same name, each in a different domain. |
Nameserver # | You can specify up to three IP addresses for the DNS name servers to use. If an address you specify is down, the system will use the next one. |
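These two fields correspond to the search and nameserver directives in the resolver configuration; an illustrative /etc/resolv.conf equivalent (the domains and addresses are placeholders):

```
search mydomain.example.com otherdomain.example.com
nameserver 192.168.0.1
nameserver 192.168.0.2
```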
Note: If you specify one or more servers for DNS, all name resolution will be provided by the specified DNS servers (plus the contents of /etc/hosts). If you do not specify a server, only .local names will be resolvable via multicast DNS (plus the contents of /etc/hosts). You cannot use both DNS to resolve names and multicast DNS to resolve .local domain names.
If you specify one or more DNS servers, SGI InfiniteStorage Appliance Manager adds mdns off to the /etc/host.conf file in order to force resolution of .local names to go to the DNS server rather than using multicast DNS. If you later remove the DNS servers, the value of mdns off in /etc/host.conf remains the same. If you manually edit /etc/host.conf to force mdns on, Appliance Manager will not change this setting provided that you do not specify DNS servers via “DNS and Hostnames”. |
Use the Time and Date screen to set the following:
Time zone: Sets the local time zone for Appliance Manager. You can choose a time zone from a drop-down list of options, or you can set a custom time zone. A custom specification names the time zone for both standard and daylight saving periods and defines when the change-over occurs from daylight to standard time and back again (for example, going from standard to daylight time in the 10th month on the 5th Sunday, and back again in the 4th month on the first Sunday). For more information about the custom time-zone format, see the tzfile man page.[9]
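As an illustration of that custom format, the following is a hypothetical POSIX TZ string matching the change-over just described; the zone names "CST"/"CDT" and the UTC+9:30 offset are assumptions for the example, not values from the product:

```shell
# Hypothetical custom time zone: standard zone "CST" at UTC+9:30 (assumed),
# daylight zone "CDT"; daylight starts on the 5th (last) Sunday of
# month 10 and ends on the first Sunday of month 4.
export TZ='CST-9:30CDT,M10.5.0,M4.1.0'
date    # shows the current time interpreted in this custom zone
```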
NTP Time Synchronisation: Enables automatic time synchronization with the Network Time Protocol (NTP), which is used to synchronize clocks on computer systems over a network. Select Apply NTP changes to keep the system's time in synchronization with an NTP server, or select Set time from NTP server to synchronize the time once, immediately. If the server has Internet access, see the following website for information about using the public NTP timeserver:
Set Current Time and Date: Sets the system date (in the format year/month/day) and time directly instead of using NTP time synchronization.
Appliance Manager is shipped with temporary licenses. The Licenses screen provides the information required to request licenses and a text box in which you can type or paste permanent licenses obtained from SGI. Some licenses, such as the license for XVM snapshot, will not take effect until you reboot the system.
The following sections describe other operations you can perform with Appliance Manager:
The Save/Restore Configuration screen lets you save the current Appliance Manager configuration or restore a previously saved version. The saved configuration information includes how the interfaces are configured and which filesystems should be mounted. You may find this useful if you have made an error in the present configuration and wish to return to a previously configured state.
Caution: This procedure does not provide a system backup and specifically does not save or restore user data; it provides a snapshot record of the configuration.
This screen lists previously saved configurations, labeled by date. After restoring a configuration, you should restart the system.
If there is a problem with the system, SGI Call Center Support may request support data in order to find and resolve the problem. The Gather Support Data screen lets you generate an archive containing copies of the storage server's software and hardware configuration and log files.
To collect the data, select Yes, gather information. This process can take more than 30 seconds on large RAID configurations and requires at least 200 MB of free space in /tmp.
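Assuming shell access to the storage server, a quick way to confirm that /tmp has the required headroom before starting the gather is a POSIX-format df query (this is a convenience sketch, not a step the product requires):

```shell
# The gather needs at least 200 MB free in /tmp; report the free space
# in megabytes (POSIX -P output keeps the field positions stable):
df -Pm /tmp | awk 'NR==2 { print $4 " MB free on " $6 }'
```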
This screen lets you capture and download archives of performance data from the server on which Appliance Manager is running. SGI may request such an archive for performance-analysis purposes, but please be aware that it may contain potentially sensitive information such as network traces.
Note: The Performance Data screen in Appliance Manager is only available if you have installed the oprofile and ethereal packages.
[1] GiB, 1024 MiB
[2] Metadata is information that describes a file, such as the file's name, size, location, and permissions. The metadata server is the node that coordinates the updating of metadata on behalf of all nodes in a cluster.
[3] GiB, 1024 MiB
[4] GiB, 1024 MiB
[5] You can access man pages and books from the SGI Technical Publications Library at http://docs.sgi.com.
[6] You can access man pages from the SGI Technical Publications Library at http://docs.sgi.com.
[7] Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES).
[8] You can access man pages from the SGI Technical Publications Library at http://docs.sgi.com.
[9] You can access man pages from the SGI Technical Publications Library at http://docs.sgi.com.