Enterprise Volume Management System
Copyright © 2005 IBM

Special Notices

The following terms are registered trademarks of International Business Machines Corporation in the United States and/or other countries: AIX, OS/2, System/390. A full list of U.S. trademarks owned by IBM may be found at http://www.ibm.com/legal/copytrade.shtml. Intel is a trademark or registered trademark of Intel Corporation in the United States, other countries, or both. Windows is a trademark of Microsoft Corporation in the United States, other countries, or both. Linux is a trademark of Linus Torvalds. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, and service names may be trademarks or service marks of others.

This document is provided "AS IS," with no express or implied warranties. Use the information in this document at your own risk.

License Information

This document may be reproduced or distributed in any form without prior permission provided the copyright notice is retained on all copies. Modified versions of this document may be freely distributed provided that they are clearly identified as such, and this copyright is included intact.

January 18, 2005
This guide tells how to configure and manage Enterprise Volume Management System (EVMS). EVMS is a storage management program that provides a single framework for managing and administering your system's storage. This guide is intended for Linux system administrators and users who are responsible for setting up and maintaining EVMS. For additional information about EVMS or to ask questions specific to your distribution, refer to the EVMS mailing lists. You can view the list archives or subscribe to the lists from the EVMS Project web site. The following table shows how this guide is organized: Table 1. Organization of the EVMS User Guide
EVMS brings a new model of volume management to Linux®. EVMS integrates all aspects of volume management, such as disk partitioning, Linux logical volume manager (LVM) and multi-disk (MD) management, and file system operations, into a single cohesive package. With EVMS, various volume management technologies are accessible through one interface, and new technologies can be added as plug-ins as they are developed. EVMS lets you manage storage space in a way that is more intuitive and flexible than many other Linux volume management systems. Practical tasks, such as migrating disks or adding new disks to your Linux system, become more manageable with EVMS because EVMS can recognize and read from different volume types and file systems. EVMS provides additional safety controls by not allowing commands that are unsafe. These controls help maintain the integrity of the data stored on the system. You can use EVMS to create and manage data storage. With EVMS, you can use multiple volume management technologies under one framework while ensuring your system still interacts correctly with stored data. With EVMS, you can use drive linking, shrink and expand volumes, create snapshots of your volumes, and set up RAID (redundant array of independent disks) features for your system. You can also use many types of file systems and manipulate these storage pieces in ways that best meet the needs of your particular work environment. EVMS also provides the capability to manage data on storage that is physically shared by nodes in a cluster. This shared storage allows data to be highly available from different nodes in the cluster. There are currently three user interfaces available for EVMS: graphical (GUI), text mode (Ncurses), and the Command Line Interpreter (CLI). Additionally, you can use the EVMS Application Programming Interface to implement your own customized user interface. Table 1.1 tells more about each of the EVMS user interfaces. Table 1.1. EVMS user interfaces
To avoid confusion with other terms that describe volume management in general, EVMS uses a specific set of terms. These terms are listed, from most fundamental to most comprehensive, as follows:
There are numerous drivers in the Linux kernel, such as Device Mapper and MD (software RAID), that implement volume management schemes. EVMS is built on top of these drivers to provide one framework for combining and accessing their capabilities. The EVMS Engine handles the creation, configuration, and management of volumes, segments, and disks. The EVMS Engine is the programmatic interface to the EVMS system. User interfaces and programs that use EVMS must go through the Engine. EVMS supports plug-in modules to the Engine that allow EVMS to perform specialized tasks without altering the core code. These plug-in modules make EVMS more extensible and customizable than other volume management systems. EVMS defines a layered architecture in which plug-ins in each layer create abstractions of the layer or layers below. EVMS also allows most plug-ins to create abstractions of objects within the same layer. The following list defines these layers from the bottom up.
This chapter explains how to use the EVMS GUI, Ncurses, and CLI interfaces. This chapter also includes information about basic navigation and commands available through the CLI. The EVMS GUI is a flexible and easy-to-use interface for administering volumes and storage objects. Many users find the EVMS GUI easy to use because it displays which storage objects, actions, and plug-ins are acceptable for a particular task. The EVMS GUI lets you accomplish most tasks in one of two ways: context sensitive menus or the Actions menu. Context sensitive menus are available from any of the main "views." Each view corresponds to a page in a notebook widget located on the EVMS GUI main window. These views are made up of different trees or lists that visually represent the organization of different object types, including volumes, feature objects, regions, containers, segments, or disks. You can view the context sensitive menu for an object by right-clicking on that object. The actions that are available for that object display on the screen. The GUI will only present actions that are acceptable for the selected object at that point in the process. These actions are not always a complete set. To use the Actions menu, choose Action-><the action you want to accomplish>-><options>. The Actions menu provides a more guided path for completing a task than do context sensitive menus. The Actions option is similar to the wizard or druid approach used by many GUI applications. All of the operations you need to perform as an administrator are available through the Actions menu. All of the changes that you make while in the EVMS GUI are only in memory until you save the changes. In order to make your changes permanent, you must save all changes before exiting. If you forget to save the changes and decide to exit or close the EVMS GUI, you are reminded to save any pending changes. To explicitly save all the changes you made, select Action->Save, and click the Save button. 
The Refresh button updates the view and allows you to see changes, like mount points, that might have changed outside of the GUI. Along the left hand side of the panel views in the GUI is a "+" that resides beside each item. When you click the "+," the objects that are included in the item are displayed. If any of the objects that display also have a "+" beside them, you can expand them further by clicking on the "+" next to each object name. You can avoid using a mouse for navigating the EVMS GUI by using a series of key strokes, or "accelerator keys," instead. The following sections tell how to use accelerator keys in the EVMS Main Window, the Selection Window, and the Configuration Options Window. In the Main Window view, use the following keys to navigate: Table 2.1. Accelerator keys in the Main Window
While in a view, use the following keys to navigate: Table 2.2. Accelerator keys in the views
To access the action bar menu, press Alt and then the underlined accelerator key for the menu choice (for example, "A" for the Actions dropdown menu). In a dropdown menu, you can use the up and down arrows to navigate. You could also just type the accelerator key for the menu item, which is the character with the underscore. For example, to initiate a command to delete a container, type Alt + "A" + "D" + "C." Ctrl-S is a shortcut to initiate saving changes. Ctrl-Q is a shortcut to initiate quitting the EVMS GUI. A selection window typically contains a selection list, plus four to five buttons below it. Use the following keys to navigate in the selection window: Table 2.3. Accelerator keys in the selection window
Use the following keys to navigate in the configuration options window: Table 2.4. Accelerator keys in the configuration options window
For widgets, use the following keys to navigate: Table 2.5. Widget navigation keys in the configuration options window
The widget navigation, selection, and activation is the same in all dialog windows. The EVMS Ncurses (evmsn) user interface is a menu-driven interface with characteristics similar to those of the EVMS GUI. Like the EVMS GUI, evmsn can accommodate new plug-ins and features without requiring any code changes. The EVMS Ncurses user interface allows you to manage volumes on systems that do not have the X and GTK+ libraries that are required by the EVMS GUI. The EVMS Ncurses user interface initially displays a list of logical volumes similar to the logical volumes view in the EVMS GUI. Ncurses also provides a menu bar similar to the menu bar in the EVMS GUI. A general guide to navigating through the layout of the Ncurses window is listed below:
Dialog windows are similar in design to the EVMS GUI dialogs, which allow a user to navigate forward and backward through a series of dialogs using Next and Previous. A general guide to dialog windows is listed below:
The EVMS Ncurses user interface, like the EVMS GUI, provides context menus for actions that are available only to the selected object in a view. Ncurses also provides context menus for items that are available from the Actions menu. These context menus present a list of commands available for a certain object. All changes you make while in the EVMS Ncurses are only in memory until you save the changes. In order to make the changes permanent, save all changes before exiting. If you forget to save the changes and decide to exit the EVMS Ncurses interface, you will be reminded of the unsaved changes and be given the chance to save or discard the changes before exiting. To explicitly save all changes, press A + S and confirm that you want to save changes. The EVMS Command Line Interpreter (EVMS CLI) provides a command-driven user interface for EVMS. The EVMS CLI helps automate volume management tasks and provides an interactive mode in situations where the EVMS GUI is not available. Because the EVMS CLI is an interpreter, it operates differently than command line utilities for the operating system. The options you specify on the EVMS CLI command line to invoke the EVMS CLI control how the EVMS CLI operates. For example, the command line options tell the CLI where to go for commands to interpret and how often the EVMS CLI must save changes to disk. When invoked, the EVMS CLI prompts for commands. The volume management commands the EVMS CLI understands are specified in the /usr/src/evms-x.y.z/engine2/ui/cli/grammar.txt file that accompanies the EVMS package. These commands are described in detail in the EVMS man page, and help on these commands is available from within the EVMS CLI. Use the evms command to start the EVMS CLI. If you do not enter an option with evms, the EVMS CLI starts in interactive mode. In interactive mode, the EVMS CLI prompts you for commands. The result of each command is immediately saved to disk. The EVMS CLI exits when you type exit. 
You can modify this behavior by using the following options with evms:
NOTE: Information on less commonly used options is available in the EVMS man page. The EVMS CLI allows multiple commands to be specified on a command line. When you specify multiple commands on a single command line, separate the commands with a colon ( : ). This is important for command files because the EVMS CLI sees a command file as a single long command line. The EVMS CLI has no concept of lines in the file and ignores spaces. These features allow a command in a command file to span several lines and use whatever indentation or margins are convenient. The only requirement is that the command separator (the colon) be present between commands. The EVMS CLI ignores spaces unless they occur within quotation marks. Place in quotation marks a name that contains spaces or other non-printable or control characters. If the name contains a quotation mark as part of the name, the quotation mark must be "doubled," as shown in the following example:
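For instance, a query on a volume whose name contains an embedded quotation mark might be written as follows. This is a sketch: the volume name is purely illustrative, and the volume= filter form is an assumption here — the point is the doubled quotation mark inside the quoted name.

```
query: volumes, volume="The ""Sample"" Volume"
```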
EVMS CLI keywords are not case sensitive, but EVMS names are case sensitive. Sizes can be input in any units with a unit label, such as KB, MB, GB, or TB. Finally, C programming language style comments are supported by the EVMS CLI. Comments can begin and end anywhere except within a quoted string, as shown in the following example:
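Because comments can appear anywhere outside a quoted string, and commands may span lines, a command-file entry might look like the following sketch (the command, segment name, and size value are illustrative):

```
create: segment,  /* allocate a new segment from the freespace segment */
        sde_freespace1, size=100MB
```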
This chapter discusses the EVMS information and error log file and the various logging levels. It also explains how to change the logging level. The EVMS Engine creates a log file called /var/log/evmsEngine.log every time the Engine is opened. The Engine also saves copies of up to nine previous Engine sessions in the files /var/log/evmsEngine.n.log, where n is the number of the session between 1 and 9. There are several possible logging levels that you can choose to be collected in /var/log/evmsEngine.log. The "lowest" logging level, critical, collects only messages about serious system problems, whereas the "highest" level, everything, collects all logging related messages. When you specify a particular logging level, the Engine collects messages for that level and all the levels below it. The following table lists the allowable log levels and the information they provide: Table 3.1. EVMS logging levels
By default, when any of the EVMS interfaces is opened, the Engine logs the Default level of messages into the /var/log/evmsEngine.log file. However, if your system is having problems and you want to see more of what is happening, you can change the logging level to be higher; if you want fewer logging messages, you can change the logging level to be lower. To change the logging level, specify the -d parameter and the log level on the interface open call. The following examples show how to open the various interfaces with the highest logging level (everything):
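For example, each interface can be opened with the everything level as follows, using the -d parameter described above (this assumes the evmsgui, evmsn, and evms commands are on your PATH):

```
evmsgui -d everything     # GUI
evmsn -d everything       # Ncurses
evms -d everything        # Command Line Interpreter
```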
NOTE: If you use the EVMS mailing list for help with a problem, providing the log file that is created when you open one of the interfaces (as shown in the previous commands) makes it easier for us to help you. The EVMS GUI lets you change the logging level during an Engine session. To do so, follow these steps:
The CLI command, probe, opens and closes the Engine, which causes a new log to start. The log that existed before the probe command was issued is renamed /var/log/evmsEngine.1.log and the new log is named /var/log/evmsEngine.log. If you will frequently be using a different log level than the default, you can specify the default logging level in /etc/evms.conf rather than having to use the -d option when starting the user interface. The "debug_level" option in the "engine" section sets the default logging level for when the Engine is opened. Using the -d option during the command invocation overrides the setting in /etc/evms.conf.

Migrating to EVMS allows you to have the flexibility of EVMS without losing the integrity of your existing data. EVMS discovers existing volume management volumes as compatibility volumes. After you have installed EVMS, you can view your existing volumes with the interface of your choice. If you are using the EVMS GUI as your preferred interface, you can view your migrated volumes by typing evmsgui at the command prompt. The following window opens, listing your migrated volumes. If you are using the Ncurses interface, you can view your migrated volumes by typing evmsn at the command prompt. The following window opens, listing your migrated volumes. If you are using the Command Line Interpreter (CLI) interface, you can view your migrated volumes by following these steps:
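As a sketch, the relevant fragment of /etc/evms.conf might look like the following. The "engine" section and "debug_level" option names come from the text above; the brace-style layout is an assumption, so check the comments in your installed /etc/evms.conf for the exact syntax.

```
# /etc/evms.conf (fragment) -- layout is an assumption
engine {
        # Default logging level used when the Engine is opened;
        # the -d command line option overrides this setting.
        debug_level = everything
}
```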
The EVMS interfaces let you view more detailed information about an EVMS object than what is readily available from the main views of the EVMS user interfaces. The type and extent of additional information available is dependent on the interface you use. For example, the EVMS GUI provides more in-depth information than does the CLI. The following sections show how to find detailed information on the region lvm/Sample Container/Sample Region, which is part of volume /dev/evms/Sample Volume (created in section 10.2). With the EVMS GUI, it is only possible to display additional details on an object through the Context Sensitive Menus, as shown in the following steps:
Follow these steps to display additional details on an object with Ncurses:
Use the query command (abbreviated q) with filters to display details about EVMS objects. There are two filters that are especially helpful for navigating within the command line: list options (abbreviated lo) and extended info (abbreviated ei). The list options command tells you what can currently be done and what options you can specify. To use this command, first build a traditional query command starting with the command name query, followed by a colon (:), and then the type of object you want to query (for example, volumes, objects, plug-ins). Then, you can use filters to narrow the search to only the area you are interested in. For example, to determine the acceptable actions at the current time on lvm/Sample Container/Sample Region, enter the following command:
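Following that description — query, a colon, the object type, then filters — one plausible form of the command is shown below. The region= filter spelling is an assumption; the region name is quoted because it contains spaces.

```
query: regions, region="lvm/Sample Container/Sample Region", list options
```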
The extended info filter is the equivalent of Display Details in the EVMS GUI and Ncurses interfaces. The command takes the following form: query, followed by a colon (:), the filter (extended info), a comma (,), and the object you want more information about. The command returns a list containing the field names, titles, descriptions and values for each field defined for the object. For example, to obtain details on lvm/Sample Container/Sample Region, enter the following command:
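Following the form just described, the command would look like this sketch (the name is quoted because it contains spaces):

```
query: extended info, "lvm/Sample Container/Sample Region"
```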
Many of the field names that are returned by the extended info filter can be expanded further by specifying the field name or names at the end of the command, separated by commas. For example, if you wanted additional information about logical extents, the query would look like the following:
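For example, assuming the field reported for logical extents is named Extents (the field name here is an assumption; use the field names actually returned by extended info on your system):

```
query: extended info, "lvm/Sample Container/Sample Region", Extents
```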
This chapter discusses when to use a segment manager, what the different types of segment managers are, how to add a segment manager to a disk, and how to remove a segment manager. Adding a segment manager to a disk allows the disk to be subdivided into smaller storage objects called disk segments. The add command causes a segment manager to create appropriate metadata and expose freespace that the segment manager finds on the disk. You need to add segment managers when you have a new disk or when you are switching from one partitioning scheme to another. EVMS displays disk segments as the following types:
There are seven types of segment managers in EVMS: DOS, GPT, S/390, Cluster, BSD, MAC, and BBR. The most commonly used segment manager is the DOS Segment Manager. This plug-in provides support for traditional DOS disk partitioning. The DOS Segment Manager also recognizes and supports the following variations of the DOS partitioning scheme:
The GUID Partitioning Table (GPT) Segment Manager handles the new GPT partitioning scheme on IA-64 machines. The Intel Extensible Firmware Interface Specification requires that firmware be able to discover partitions and produce logical devices that correspond to disk partitions. The partitioning scheme described in the specification is called GPT due to the extensive use of Globally Unique Identifier (GUID) tagging. A GUID is a 128-bit identifier, also referred to as a Universally Unique Identifier (UUID). As described in the Intel Wired For Management Baseline Specification, a GUID is a combination of time and space fields that produce an identifier that is unique across an entire UUID space. These identifiers are used extensively on GPT partitioned disks for tagging entire disks and individual partitions. GPT partitioned disks serve several functions, such as:
The GPT Segment Manager scales better to large disks. It provides more redundancy with added reliability and uses unique names. However, the GPT Segment Manager is not compatible with DOS, OS/2, or Windows®. The S/390 Segment Manager is used exclusively on System/390 mainframes. The S/390 Segment Manager has the ability to recognize various disk layouts found on an S/390 machine and provide disk segment support for this architecture. The two most common disk layouts are Linux Disk Layout (LDL) and Common Disk Layout (CDL). The principal difference between LDL and CDL is that an LDL disk cannot be further subdivided. An LDL disk will produce a single metadata disk segment and a single data disk segment. There is no freespace on an LDL disk, and you cannot delete or resize the data segment. A CDL disk can be subdivided into multiple data disk segments because it contains metadata that is missing from an LDL disk, specifically the Volume Table of Contents (vtoc) information. The S/390 Segment Manager is the only segment manager plug-in capable of understanding the unique S/390 disk layouts. The S/390 Segment Manager cannot be added to or removed from a disk. The cluster segment manager (CSM) supports high availability clusters. When the CSM is added to a shared storage disk, it writes metadata on the disk that:
This metadata allows the CSM to build containers for supporting failover situations. It does so by constructing an EVMS container object that consumes all shared disks discovered by the CSM and belonging to the same container. These shared storage disks are consumed by the container and a single data segment is produced by the container for each consumed disk. A failover of the EVMS resource is accomplished by simply reassigning the CSM container to the standby cluster node and having that node re-run its discovery process. Adding disks to CSM containers implies that only disk storage objects are acceptable to the CSM. This is an important aspect of the CSM. Other segment managers can be embedded within storage objects and used to further subdivide them. However, the CSM cannot add any other kind of storage object to a CSM container because the container is meant to be a disk group and the entire disk group is reassigned during a failover. So, the CSM only accepts disks when constructing containers. This is important to remember when adding the CSM to a disk. If you choose Add and the CSM does not appear in the list of selectable plug-ins when you know you have a disk, you should look at the Volume list and see if the disk has already been listed as a compatibility volume. If you simply delete the volume, the disk will become an available object and the CSM will then appear in the list of plug-ins because it now has an available disk that it can add to a container. BSD refers to the Berkeley Software Distribution UNIX® operating system. The EVMS BSD segment manager is responsible for recognizing and producing EVMS segment storage objects that map BSD partitions. A BSD disk may have a slice table in the very first sector on the disk for compatibility purposes with other operating systems. For example, a DOS slice table might be found in the usual MBR sector. The BSD disk would then be found within a disk slice that is located using the compatibility slice table. 
However, BSD has no need for the slice table and can fully dedicate the disk to itself by placing the disk label in the very first sector. This is called a "fully dedicated disk" because BSD uses the entire disk and does not provide a compatibility slice table. The BSD segment manager recognizes such "fully dedicated disks" and provides mappings for the BSD partitions. Apple-partitioned disks use a disk label that is recognized by the MAC segment manager. The MAC segment manager recognizes the disk label during discovery and creates EVMS segments to map the MacOS disk partitions. The bad block replacement (BBR) segment manager enhances the reliability of a disk by remapping bad storage blocks. When BBR is added to a disk, it writes metadata on the disk that:
Bad blocks occur when an I/O error is detected for a write operation. When this happens, I/O normally fails and the failure code is returned to the calling program code. BBR detects failed write operations and remaps the I/O to a reserved block on the disk. Afterward, BBR restarts the I/O using the reserve block. Every block of storage has an address, called a logical block address, or LBA. When BBR is added to a disk, it provides two critical functions: remap and recovery. When an I/O operation is sent to disk, BBR inspects the LBA in the I/O command to see if the LBA has been remapped to a reserve block due to some earlier I/O error. If BBR finds a mapping between the LBA and a reserve block, it updates the I/O command with the LBA of the reserve block before sending it on to the disk. Recovery occurs when BBR detects an I/O error and remaps the bad block to a reserve block. The new LBA mapping is saved in BBR metadata so that subsequent I/O to the LBA can be remapped. When you add a segment manager to a disk, the segment manager needs to change the basic layout of the disk. This change means that some sectors are reserved for metadata and the remaining sectors are made available for creating data disk segments. Metadata sectors are written to disk to save information needed by the segment manager; previous information found on the disk is lost. Before adding a segment manager to an existing disk, you must remove any existing volume management structures, including any previous segment manager. When a new disk is added to a system, the disk usually contains no data and has not been partitioned. If this is the case, the disk shows up in EVMS as a compatibility volume because EVMS cannot tell if the disk is being used as a volume. To add a segment manager to the disk so that it can be subdivided into smaller disk segment objects, tell EVMS that the disk is not a compatibility volume by deleting the volume information. 
If the new disk was moved from another system, chances are good that the disk already contains metadata. If the disk does contain metadata, the disk shows up in EVMS with storage objects that were produced from the existing metadata. Deleting these objects allows you to add a different segment manager to the disk, but any old data will be lost. This section shows how to add a segment manager with EVMS. EVMS initially displays the physical disks it sees as volumes. Assume that you have added a new disk to the system that EVMS sees as sde. This disk contains no data and has not been subdivided (no partitions). EVMS assumes that this disk is a compatibility volume known as /dev/evms/sde. NOTE: In the following example, the DOS Segment Manager creates two segments on the disk: a metadata segment known as sde_mbr, and a segment to represent the available space on the drive, sde_freespace1. This freespace segment (sde_freespace1) can be divided into other segments because it represents space on the drive that is not in use. To add the DOS Segment Manager to sde, first remove the volume, /dev/evms/sde:
Alternatively, you can remove the volume through the GUI context sensitive menu:
After the volume is removed, add the DOS Segment Manager:
To add the DOS Segment Manager to sde, first remove the volume /dev/evms/sde:
Alternatively, you can remove the volume through the context sensitive menu:
After the volume is removed, add the DOS Segment Manager:
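With the CLI, the same sequence can be sketched as follows. The plug-in name DosSegMgr is an assumption; list the segment manager plug-ins available on your system with a plug-in query before relying on it.

```
delete: /dev/evms/sde      /* remove the compatibility volume */
add: DosSegMgr={}, sde     /* add the DOS Segment Manager to the disk */
```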
When a segment manager is removed from a disk, the disk can be reused by other plug-ins. The remove command causes the segment manager to remove its partition or slice table from the disk, leaving the raw disk storage object, which then becomes an available EVMS storage object. As an available storage object, the disk is free to be used by any plug-in when storage objects are created or expanded. You can also add any of the segment managers to the available disk storage object to subdivide the disk into segments. Most segment manager plug-ins check to determine whether any of the segments are still in use by other plug-ins or are still part of volumes. If a segment manager determines that there are no disks from which it can safely remove itself, it will not be listed when you use the remove command. In this case, you should delete the volume or storage object that is consuming segments from the disk you want to reuse. This section shows how to remove a segment manager with EVMS. NOTE: In the following example, the DOS Segment Manager has one primary partition on disk sda. The segment is a compatibility volume known as /dev/evms/sda1. Follow these steps to remove a segment manager with the GUI context sensitive menu:
Follow these steps to remove a segment manager with the Ncurses interface:
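A CLI sketch of the same removal is shown below. The argument forms are an assumption; see the grammar file and the EVMS man page for the exact syntax.

```
delete: /dev/evms/sda1     /* delete the compatibility volume on the segment */
remove: sda                /* remove the segment manager from disk sda */
```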
This chapter discusses when to use segments and how to create them using different EVMS interfaces. A disk can be subdivided into smaller storage objects called disk segments. A segment manager plug-in provides this capability. One reason for creating disk segments is to maintain compatibility on a dual-boot system where the other operating system requires disk partitions. Before creating a disk segment, you must choose a segment manager plug-in to manage the disk and assign the segment manager to the disk. An explanation of when and how to assign segment managers can be found in Chapter 6, "Adding and removing a segment manager". This section provides a detailed explanation of how to create a segment with EVMS by providing instructions to help you complete the following task. To create a segment using the GUI, follow the steps below:
Alternatively, you can perform some of the steps to create a segment from the GUI context sensitive menu:
To create a segment using Ncurses, follow these steps:
Alternatively, you can perform some of the steps to create a segment from the context sensitive menu:
To create a data segment from a freespace segment, use the Create command. The arguments the Create command accepts vary depending on what is being created. The first argument to the Create command indicates what is to be created, which in the above example is a segment. The remaining arguments are the freespace segment to allocate from and a list of options to pass to the segment manager. The command to accomplish this is:
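Using the freespace segment from the earlier example, the command might look like the following sketch (the segment name and size value are illustrative; other options take their defaults):

```
create: segment, sde_freespace1, size=100MB
```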
NOTE: The Allocate command also works to create a segment. The previous example accepts the default values for all options you don't specify. To see the options for this command, type:
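A sketch of such a query, assuming the DOS Segment Manager (the plug-in name DosSegMgr and the plugin= filter form are assumptions):

```
query: plugins, plugin=DosSegMgr, list options
```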
This chapter discusses when and how to create a container. Segments and disks can be combined to form a container. Containers allow you to combine storage objects and then subdivide those combined storage objects into new storage objects. You can combine storage objects to implement the volume group concept as found in the AIX and Linux logical volume managers. Containers are the beginning of more flexible volume management. You might want to create a container in order to account for flexibility in your future storage needs. For example, you might need to add additional disks when your applications or users need more storage. This section provides a detailed explanation of how to create a container with EVMS by providing instructions to help you complete the following task. To create a container using the EVMS GUI, follow these steps:
To create a container using the Ncurses interface, follow these steps:
The Create command creates containers. The first argument in the Create command is the type of object to produce, in this case a container. The Create command then accepts the following arguments: the region manager to use along with any parameters it might need, and the segments or disks to create the container from. The command to complete the previous example is:
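A sketch of such a command, assuming the LVM region manager's CLI name is LvmRegMgr and the container is built from disks sda and sdb (the container name "Sample Container" follows the examples used later in this guide):

```
create: container, LvmRegMgr={name="Sample Container"}, sda, sdb
```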
The previous example accepts the default values for all options you don't specify. To see the options for this command, type:
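For example, the options supported by the LVM region manager could be listed with a query of this form (the plug-in name LvmRegMgr is an assumption):

```
query: plugins, plugin=LvmRegMgr, list options
```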
Regions can be created from containers, but they can also be created from other regions, segments, or disks. Most region managers that support containers create one or more freespace regions to represent the freespace within the container. This function is analogous to the way a segment manager creates a freespace segment to represent unused disk space. You can create regions because you want the features provided by a certain region manager. You can also create regions to be compatible with other volume management technologies, such as MD or LVM. For example, if you wanted to make a volume that is compatible with Linux LVM, you would create a region out of a Linux LVM container and then a compatibility volume from that region. This section tells how to create a region with EVMS by providing instructions to help you complete the following task. To create a region, follow these steps:
Alternatively, you can perform some of the steps for creating a region with the GUI context sensitive menu:
To create a region, follow these steps:
Alternatively, you can perform some of the steps for creating a region with the context sensitive menu:
Create regions with the Create command. Arguments to the Create command are the following: the keyword Region, the name of the region manager to use, the region manager's options, and the objects to consume. The form of this command is:
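A hedged sketch of this form, assuming the LVM region manager (CLI name LvmRegMgr) and a freespace region inside the container from the earlier example (the region name "Sample Region" follows the examples used later in this guide):

```
create: region, LvmRegMgr={name="Sample Region"}, "lvm/Sample Container/Freespace"
```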
The LVM Region Manager supports many options for creating regions. To see the available options for creating regions and containers, use the following Query:
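One plausible form of that query (the CLI plug-in name LvmRegMgr is an assumption):

```
query: plugins, plugin=LvmRegMgr, list options
```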
This chapter discusses the EVMS drive linking feature, which is implemented by the drive link plug-in, and tells how to create, expand, shrink, and delete a drive link. Drive linking linearly concatenates objects, allowing you to create larger storage objects and volumes from smaller individual pieces. For example, say you need a 1 GB volume but do not have contiguous space available of that length. Drive linking lets you link two or more objects together to form the 1 GB volume. The types of objects that can be drive linked include disks, segments, regions, and other feature objects. Any resizing of an existing drive link, whether to grow it or shrink it, must be coordinated with the appropriate file system operations. EVMS handles these file system operations automatically. Because drive linking is an EVMS-specific feature that contains EVMS metadata, it is not backward compatible with other volume-management schemes. The drive link plug-in consumes storage objects, called link objects, which produce a larger drive link object whose address space spans the link objects. The drive link plug-in knows how to assemble the link objects so as to create the exact same address space every time. The information required to do this is kept on each link child as persistent drive-link metadata. During discovery, the drive link plug-in inspects each known storage object for this metadata. The presence of this metadata identifies the storage object as a link object. The information contained in the metadata is sufficient to:
If any link objects are missing at the conclusion of the discovery process, the drive link storage object contains gaps where the missing link objects occur. In such cases, the drive link plug-in attempts to fill in the gap with a substitute link object and construct the drive link storage object in read-only mode, which allows for recovery action. The missing object might reside on removable storage that has been removed, or perhaps a lower-layer plug-in failed to produce the missing object. Whatever the reason, a read-only drive link storage object, together with logging errors, helps you take the appropriate actions to recover the drive link. The drive link plug-in provides a list of acceptable objects from which it can create a drive-link object. When you create an EVMS storage object and then choose the drive link plug-in, a list of acceptable objects is provided that you can choose from. The ordering of the drive link is implied by the order in which you pick objects from the provided list. After you provide a name for the new drive-link object, the identified link objects are consumed and the new drive-link object is produced. The name for the new object is the only option when creating a drive link. Only the last object in a drive link can be expanded, shrunk, or removed. Additionally, a new object can be added to the end of an existing drive link only if the file system (if one exists) permits. Any resizing of a drive link, whether to grow it or shrink it, must be coordinated with the appropriate file system operations. EVMS handles these file system operations automatically. This section shows how to create a drive link with EVMS: To create the drive link using the GUI, follow these steps:
Alternatively, you can perform some of the steps to create a drive link with the GUI context sensitive menu:
To create the drive link, follow these steps:
Alternatively, you can perform some of the steps to create a drive link with the context sensitive menu:
Use the create command to create a drive link through the CLI. You pass the "object" keyword to the create command, followed by the plug-in and its options, and finally the objects. To determine the options for the plug-in you are going to use, issue the following command:
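Assuming the drive link plug-in's CLI name is DriveLink, the query might look like:

```
query: plugins, plugin=DriveLink, list options
```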
Now construct the create command, as follows:
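A sketch, in which the drive-link name dl and the link objects sda5 and sdb5 are illustrative assumptions (recall that the name is the only creation option):

```
create: object, DriveLink={Name=dl}, sda5, sdb5
```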
A drive link is an aggregating storage object that is built by combining a number of storage objects into a larger resulting object. A drive link consumes link objects in order to produce a larger storage object. The ordering of the link objects as well as the number of sectors they each contribute is described by drive link metadata. The metadata allows the drive link plug-in to recreate the drive link, spanning the link objects in a consistent manner. Allowing any of these link objects to expand would corrupt the size and ordering of link objects; the ordering of link objects is vital to the correct operation of the drive link. However, expanding a drive link can be controlled by only allowing sectors to be added at the end of the drive link storage object. This does not disturb the ordering of link objects in any manner and, because sectors are only added at the end of the drive link, existing sectors have the same address (logical sector number) as before the expansion. Therefore, a drive link can be expanded by adding additional sectors in two different ways:
If the expansion point is the drive link storage object, you can perform the expansion by adding an additional storage object to the drive link. This is done by choosing from a list of acceptable objects during the expand operation. Multiple objects can be selected and added to the drive link. If the expansion point is the last storage object in the drive link, then you expand the drive link by interacting with the plug-in that produced the object. For example, if the link was a segment, then the segment manager plug-in that produced the storage object expands the link object. Afterward, the drive link plug-in notices the size difference and updates the drive link metadata to reflect the resize of the child object. There are no expand options. Shrinking a drive link has the same restrictions as expanding a drive link. A drive link object can only be shrunk by removing sectors from the end of the drive link. This can be done in the following ways:
The drive link plug-in attempts to orchestrate the shrinking of a drive-link storage object by only listing the last link object. If you select this object, the drive link plug-in then lists the next-to-last link object, and so forth, moving backward through the link objects to satisfy the shrink command. If the shrink point is the last storage object in the drive link, then you shrink the drive link by interacting with the plug-in that produced the object. There are no shrink options.

This chapter discusses snapshotting and tells how to create a snapshot. A snapshot represents a frozen image of a volume. The source of a snapshot is called an "original." When a snapshot is created, it looks exactly like the original at that point in time. As changes are made to the original, the snapshot remains the same and looks exactly like the original at the time the snapshot was created. Snapshotting allows you to keep a volume online while a backup is created. This method is much more convenient than a data backup where a volume must be taken offline to perform a consistent backup. When snapshotting, a snapshot of the volume is created and the backup is taken from the snapshot, while the original remains in active use. You can create a snapshot object from any unused storage object in EVMS (disks, segments, regions, or feature objects). The size of this consumed object is the size available to the snapshot object. The snapshot object can be smaller or larger than the original volume. If the object is smaller, the snapshot volume could fill up as data is copied from the original to the snapshot, given sufficient activity on the original. In this situation, the snapshot is deactivated and additional I/O to the snapshot fails. Base the size of the snapshot object on the amount of activity that is likely to take place on the original during the lifetime of the snapshot.
The more changes that occur on the original and the longer the snapshot is expected to remain active, the larger the snapshot object should be. Making this calculation is not simple and requires trial and error to determine the correct snapshot object size for a particular situation. The goal is to create a snapshot object large enough to prevent the snapshot from being deactivated if it fills up, yet small enough to not waste disk space. If the snapshot object is the same size as the original volume, or a little larger, to account for the snapshot mapping tables, the snapshot is never deactivated. After you've created the snapshot object and saved the changes, the snapshot will be activated (as long as the snapshot child object is already active). This is a change from snapshots in EVMS 2.3.x and earlier, where the snapshot would not be activated until the object was made into an EVMS volume. If you wish to have an inactive snapshot, please add the name of the snapshot object to the "activate.exclude" line in the EVMS configuration file (see section about selective-activation for more details). If at any point you decide to deactivate a snapshot object while the original volume is still active, the snapshot will be reset. The next time that the snapshot object is activated, it will reflect the state of the original volume at that point in time, just as if the snapshot had just been created. In order to mount the snapshot, the snapshot object must still be made into an EVMS volume. The name of this volume can be the same as or different than the name of the snapshot object. This section shows how to create a snapshot with EVMS: To create the snapshot using the GUI, follow these steps:
Alternatively, you can perform some of the steps to create a snapshot with the GUI context sensitive menu:
To create the snapshot, follow these steps:
Alternatively, you can perform some of the steps to create a snapshot with the context sensitive menu:
Use the create command to create a snapshot through the CLI. You pass the "Object" keyword to the create command, followed by the plug-in and its options, and finally the objects. To determine the options for the plug-in you are going to use, issue the following command:
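Assuming the snapshot plug-in's CLI name is Snapshot, the query might look like:

```
query: plugins, plugin=Snapshot, list options
```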
Now construct the create command, as follows:
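A hedged sketch, using the object name "snap" and the region lvm/Sample Container/Sample Region from the sections that follow; the option names original, snapshot, and writeable are assumptions about the snapshot plug-in's option list, as is the original volume name /dev/evms/vol1:

```
create: object, Snapshot={original=/dev/evms/vol1, snapshot=snap, writeable=TRUE}, "lvm/Sample Container/Sample Region"
```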
Snapshots can be reinitialized. Reinitializing causes all of the saved data to be erased and starts the snapshot from the current point in time. A reinitialized snapshot has the same original, chunk size, and writeable flags as the original snapshot. To reinitialize a snapshot, use the Reset command on the snapshot object (not the snapshot volume). This command reinitializes the snapshot without requiring you to manually deactivate and reactivate the volume. The snapshot must be active but unmounted for it to be reinitialized. This section continues the example from the previous section, where a snapshot object and volume were created. The snapshot object is called "snap" and the volume is called "/dev/evms/snap." To reinitialize a snapshot, follow these steps:
Alternatively, you can perform these same steps with the context sensitive menus:
As mentioned in Section 11.2, "Creating snapshot objects", as data is copied from the original volume to the snapshot, the space available for the snapshot might fill up, causing the snapshot to be invalidated. This situation might cause your data backup to end prematurely, as the snapshot volume begins returning I/O errors after it is invalidated. To solve this problem, EVMS now has the ability to expand the storage space for a snapshot object while the snapshot volume is active and mounted. This feature allows you to initially create a small snapshot object and expand the object as necessary as the space begins to fill up. In order to expand the snapshot object, the underlying object must be expandable. Continuing the example from the previous sections, the object "snap" is built on the LVM region lvm/Sample Container/Sample Region. When we refer to expanding the "snap" object, the region lvm/Sample Container/Sample Region is the object that actually gets expanded, and the object "snap" simply makes use of the new space on that region. Thus, to have expandable snapshots, you will usually want to build your snapshot objects on top of LVM regions that have extra freespace available in their LVM container. DriveLink objects and some disk segments also work in certain situations. One notable quirk about expanding snapshots is that the snapshot object and volume do not actually appear to expand after the operation is complete. Because the snapshot volume is supposed to be a frozen image of the original volume, the snapshot volume always has the same size as the original, even if the snapshot has been expanded. However, you can verify that the snapshot object is using the additional space by displaying the details for the snapshot object and comparing the percent-full field before and after the expand operation. To expand the snapshot using the GUI or Ncurses, follow these steps:
Alternatively, you can perform the same steps using the context sensitive menus.
The CLI expands volumes by targeting the object to be expanded. The CLI automatically handles expanding the volume and other objects above it in the volume stack. As with a regular expand operation, the options are determined by the plug-in that owns the object being expanded. Issue the following command to determine the expand options for the region lvm/Sample Container/Sample Region:
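That query might look like the following (the region name comes from the running example; the exact query form for per-object options is an assumption):

```
query: objects, object="lvm/Sample Container/Sample Region", list options
```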
The option to use for expanding this region is called "add_size." Issue the following command to expand the snapshot by 100 MB:
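Using the add_size option named above, the expand command can be sketched as:

```
expand: "lvm/Sample Container/Sample Region", add_size=100MB
```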
When a snapshot is no longer needed, you can remove it by deleting the EVMS volume from the snapshot object, and then deleting the snapshot object. Because the snapshot saved the initial state of the original volume (and not the changed state), the original is always up-to-date and does not need any modifications when a snapshot is deleted. No options are available for deleting snapshots. Situations can arise where a user wants to restore the original volume to the saved state of the snapshot. This action is called a rollback. One such scenario is if the data on the original is lost or corrupted. Snapshot rollback acts as a quick backup and restore mechanism, and allows the user to avoid a more lengthy restore operation from tapes or other archives. Another situation where rollback can be particularly useful is when you are testing new software. Before you install a new software package, create a writeable snapshot of the target volume. You can then install the software to the snapshot volume, instead of to the original, and then test and verify the new software on the snapshot. If the testing is successful, you can then roll back the snapshot to the original and effectively install the software on the regular system. If there is a problem during the testing, you can simply delete the snapshot without harming the original volume. You can perform a rollback when the following conditions are met:
No options are available for rolling back snapshots. Follow these steps to roll back a snapshot with the EVMS GUI or Ncurses:
Alternatively, you can perform these same steps with the context-sensitive menus:
This chapter discusses when and how to create volumes. EVMS treats volumes and storage objects separately. A storage object does not automatically become a volume; it must be made into a volume. Volumes are created from storage objects. Volumes are either EVMS native volumes or compatibility volumes. Compatibility volumes are intended to be compatible with a volume manager other than EVMS, such as the Linux LVM, MD, OS/2, or AIX. Compatibility volumes might have restrictions on what EVMS can do with them. EVMS native volumes have no such restrictions, but they can be used only by an EVMS-equipped system. Volumes are mountable and can contain file systems. EVMS native volumes contain EVMS-specific information to identify the volume name. After this volume information is applied, the volume is no longer fully backward compatible with existing volume types. Instead of adding EVMS metadata to an existing object, you can tell EVMS to make an object directly available as a volume. This type of volume is known as a compatibility volume. Using this method, the final product is fully backward-compatible with the desired system. This section provides a detailed explanation of how to create an EVMS native volume with EVMS by providing instructions to help you complete the following task.
Follow these instructions to create an EVMS volume:
Alternatively, you can perform some of the steps to create an EVMS volume from the GUI context sensitive menu:
To create a volume, follow these steps:
Alternatively, you can perform some of the steps to create an EVMS volume from the context sensitive menu:
To create a volume, use the Create command. The arguments the Create command accepts vary depending on what is being created. In the case of the example, the first argument is the keyword volume, which specifies what is being created. The second argument is the object being made into a volume, in this case lvm/Sample Container/Sample Region. The third argument is type-specific for an EVMS volume: Name=, followed by what you want to call the volume, in this case Sample Volume. The following command creates the volume from the example.
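Putting those arguments together, the command can be sketched as (the quoting of names that contain spaces is an assumption):

```
create: volume, "lvm/Sample Container/Sample Region", Name="Sample Volume"
```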
This section provides a detailed explanation of how to create a compatibility volume with EVMS by providing instructions to help you complete the following task.
To create a compatibility volume, follow these steps:
Alternatively, you can perform some of the steps to create a compatibility volume from the GUI context sensitive menu:
To create a compatibility volume, follow these steps:
Alternatively, you can perform some of the steps to create a compatibility volume from the context sensitive menu:
To create a volume, use the Create command. The arguments the Create command accepts vary depending on what is being created. In the case of the example, the first argument is the keyword volume, which specifies what is being created. The second argument is the object being made into a volume, in this case lvm/Sample Container/Sample Region. The third argument, compatibility, indicates that this is a compatibility volume and should be named as such.
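The command can be sketched as:

```
create: volume, "lvm/Sample Container/Sample Region", compatibility
```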
This chapter discusses the seven File System Interface Modules (FSIMs) shipped with EVMS, and then provides examples of adding file systems and coordinating file system checks with the FSIMs. EVMS currently ships with seven FSIMs. These file system modules allow EVMS to interact with file system utilities such as mkfs and fsck. Additionally, the FSIMs ensure that EVMS safely performs operations, such as expanding and shrinking file systems, by coordinating these actions with the file system. You can invoke operations such as mkfs and fsck through the various EVMS user interfaces. Any actions you initiate through an FSIM are not saved to disk until the changes are saved in the user interface. Later in this chapter we provide examples of creating a new file system and coordinating file system checks through the EVMS GUI, Ncurses, and command-line interfaces. The FSIMs supported by EVMS are:
The JFS module supports the IBM journaling file system (JFS). Current support includes mkfs, unmkfs, fsck, and online file system expansion. You must have at least version 1.0.9 of the JFS utilities for your system to work with this EVMS FSIM. You can download the latest utilities from the JFS for Linux site. For more information on the JFS FSIM, refer to Appendix F: "JFS file system interface module". The XFS FSIM supports the XFS file system from SGI. Command support includes mkfs, unmkfs, fsck, and online expansion. Use version 1.2 or higher, which you can download from the SGI open source FTP directory. For more information on the XFS FSIM, refer to Appendix G: "XFS file system interface module". The ReiserFS module supports the ReiserFS journaling file system. This module supports mkfs, unmkfs, fsck, online and offline expansion, and offline shrinkage. You need version 3.x.1a or higher of the ReiserFS utilities for use with the EVMS FSIM modules. You can download the ReiserFS utilities from The Naming System Venture (Namesys) Web site. For more information on the ReiserFS FSIM, refer to Appendix H: "ReiserFS file system interface module". The EXT2/EXT3 FSIM supports both the ext2 and ext3 file system formats. The FSIM supports mkfs, unmkfs, fsck, and offline shrinkage and expansion. For more information on the Ext2/3 FSIM, refer to Appendix I: "Ext-2/3 file system interface module". The SWAPFS FSIM supports Linux swap devices. The FSIM lets you create and delete swap devices, and supports mkfs, unmkfs, shrinkage, and expansion. Currently, you are responsible for issuing the swapon and swapoff commands either in the startup scripts or manually. You can resize a swap device with the SWAPFS FSIM as long as the device is not in use. The OpenGFS module supports the OpenGFS clustered journaling file system. This module supports mkfs, unmkfs, fsck, and online expansion. You need the OpenGFS utilities for use with the EVMS FSIM module.
You can download the OpenGFS utilities from the OpenGFS project on SourceForge. For more information on the OpenGFS FSIM, refer to Appendix J: "OpenGFS file system interface module". The NTFS FSIM supports the NTFS file system format. The FSIM supports mkfs, unmkfs, and offline shrinkage and expansion. It also supports running the ntfsfix and ntfsclone utilities from ntfsprogs. You can download the ntfsprogs utilities from the Linux NTFS project web site. For more information on the NTFS FSIM, refer to Appendix K: "NTFS file system interface module". After you have made an EVMS or compatibility volume, add a file system to the volume before mounting it. You can add a file system to a volume through the EVMS interface of your choice. Follow these steps to create a JFS file system with the EVMS GUI:
Alternatively, you can perform some of the steps to create a file system with the GUI context sensitive menu:
Follow these steps to create a JFS file system with Ncurses:
Alternatively, you can perform some of the steps to create a file system with the context sensitive menu:
Use the mkfs command to create the new file system. The arguments to mkfs include the FSIM type (in our example, JFS), followed by any option pairs, and then the volume name. The command to accomplish this is:
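A sketch of that mkfs command, using the JFS FSIM and the volume name from the earlier volume-creation example; option pairs, if any, would follow the FSIM type, and omitting them accepts the FSIM's defaults:

```
mkfs: JFS, "/dev/evms/Sample Volume"
```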
The command is completed upon saving. If you are interested in other options that mkfs can use, look at the results of the following query:
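That query might look like the following, assuming the JFS FSIM's CLI plug-in name is JFS:

```
query: plugins, plugin=JFS, list options
```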
You can also coordinate file system checks from the EVMS user interfaces. Follow these steps to check a JFS file system with the EVMS GUI:
Alternatively, you can perform some of the steps to check a file system with the GUI context sensitive menu:
Follow these steps to check a JFS file system with Ncurses:
Alternatively, you can perform some of the steps to check a file system with the context sensitive menu:
This chapter discusses how to configure cluster storage containers (referred to throughout this chapter as "cluster containers"), a feature provided by the EVMS Cluster Segment Manager (CSM). Disks that are physically accessible from all of the nodes of the cluster can be grouped together as a single manageable entity. EVMS storage objects can then be created using storage from these containers. Ownership is assigned to a container to make the container either private or shared. A container that is owned by any one node of the cluster is called a private container. EVMS storage objects and storage volumes created using space from a private container are accessible from only the owning node. A container that is owned by all the nodes in a cluster is called a shared container. EVMS storage objects and storage volumes created using space from a shared container are accessible from all nodes of the cluster simultaneously. EVMS provides the tools to convert a private container to a shared container, and a shared container to a private container. EVMS also provides the flexibility to change the ownership of a private container from one cluster node to another cluster node. Note the following rules and limitations for creating cluster containers:
This section tells how to create a sample private container and provides instructions for completing the following task: To create a container with the EVMS GUI, follow these steps:
To create the private container with the Ncurses interface, follow these steps:
An operation to create a private cluster container with the CLI takes three parameters: the name of the container, the type of the container, and the nodeid to which the container belongs. On the CLI, type the following command to create the private container Priv1:
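A sketch of that command, in which the disks sdd and sde, the node ID node1, and the CSM plug-in's CLI name are assumptions; the container name Priv1 and the private type come from the example:

```
create: container, CSM={name=Priv1, type=private, nodeid=node1}, sdd, sde
```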
This section tells how to create a sample shared container and provides instructions to help you complete the following task: To create a shared cluster container with the EVMS GUI, follow these steps:
To create a shared cluster container with the Ncurses interface, follow these steps:
This section tells how to convert a sample private container to a shared container and provides instructions for completing the following task: CAUTION: Ensure that no application is using the volumes on the container on any node of the cluster. Follow these steps to convert a private cluster container to a shared cluster container with the EVMS GUI:
Follow these steps to convert a private cluster container to a shared cluster container with the Ncurses interface:
This section tells how to convert a sample shared container to a private container and provides instructions for completing the following task: CAUTION: Ensure that no application is using the volumes on the container on any node in the cluster. Follow these steps to convert a shared cluster container to a private cluster container with the EVMS GUI:
Follow these steps to convert a shared cluster container to a private cluster container with the Ncurses interface:
When a container is deported, the node disowns the container and deletes all the objects created in memory that belong to that container. No node in the cluster can discover objects residing on a deported container or create objects for a deported container. This section explains how to deport a private or shared container. To deport a container with the EVMS GUI, follow these steps:
To deport a container with Ncurses, follow these steps:
The procedure for deleting a cluster container is the same as for deleting any container. See Section 21.2, "Example: perform a delete recursive operation". EVMS supports the Linux-HA cluster manager in EVMS V2.0 and later. Support for the RSCT cluster manager is also available as of EVMS V2.1, but is not as widely tested. NOTE: Ensure that evms_activate is called in one of the startup scripts before the heartbeat startup script is called. If evms_activate is not called, failover might not work correctly. Follow these steps to set up failover and failback of a private container:
EVMS supports the administration of cluster nodes by any node in the cluster. For example, storage on remote cluster node node1 can be administered from cluster node node2. The following sections show how to set up remote administration through the various EVMS user interfaces. To designate node2 as the node to administer from the GUI, follow these steps:
The GUI gathers information about the objects, containers, and volumes on the other node. The status bar displays the message "Now administering node node2," which indicates that the GUI is switched over to node node2. To designate node2 as the node to administer from Ncurses, follow these steps:
A private container and its objects are made active on a node if:
Similarly, a shared container and its objects are made active on a node if the node is in a cluster that currently has quorum. However, the administrator can force the activation of private and shared containers by overriding these rules. NOTE: Use extreme caution when performing this operation by ensuring that the node on which the cluster container resides is the only active node in the cluster. Otherwise, the data in volumes on shared and private containers on the node can get corrupted.
This chapter discusses converting compatibility volumes to EVMS volumes and converting EVMS volumes to compatibility volumes. For a discussion of the differences between compatibility and EVMS volumes, see Chapter 12, "Creating volumes". There are several different scenarios that might help you determine what type of volumes you need. For example, if you wanted persistent names or to make full use of EVMS features, such as Drive Linking or Snapshotting, you would convert your compatibility volumes to EVMS volumes. In another situation, you might decide that a volume needs to be read by a system that understands the underlying volume management scheme. In this case, you would convert your EVMS volume to a compatibility volume. A volume can only be converted when it is offline. This means the volume must be unmounted and otherwise not in use. The volume must be unmounted because the conversion operation changes both the name and the device number of the volume. Once the volume is converted, you can remount it using its new name. A compatibility volume can be converted to an EVMS volume in the following situations:
This section provides a detailed explanation of how to convert compatibility volumes to EVMS volumes and provides instructions to help you complete the following task. Follow these steps to convert a compatibility volume with the EVMS GUI:
Alternatively, you can perform some of the steps to convert the volume from the GUI context sensitive menu:
Follow these instructions to convert a compatibility volume to an EVMS volume with the Ncurses interface:
Alternatively, you can perform some of the steps to convert the volume from the context sensitive menu:
To convert a volume, use the Convert command. The Convert command takes the name of the volume as its first argument and name=, followed by the name you want to give the new EVMS volume, as its second argument. To complete the example and convert a volume, type the following command at the EVMS: prompt:
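A sketch of the command, assuming a compatibility volume named /dev/evms/hda3 that is to become an EVMS volume called my_volume (both names are hypothetical; substitute your own):

```
convert: /dev/evms/hda3, name=my_volume
```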
An EVMS volume can be converted to a compatibility volume only if the volume does not have EVMS features on it. This section provides a detailed explanation of how to convert EVMS volumes to compatibility volumes by providing instructions to help you complete the following task. Follow these instructions to convert an EVMS volume to a compatibility volume with the EVMS GUI:
Alternatively, you can perform some of the steps to convert the volume through the GUI context sensitive menu:
Follow these instructions to convert an EVMS volume to a compatibility volume with the Ncurses interface:
Alternatively, you can perform some of the steps to convert the volume through the context sensitive menu:
To convert a volume use the Convert command. The Convert command takes the name of a volume as its first argument, and the keyword compatibility to indicate a change to a compatibility volume as the second argument. To complete the example and convert a volume, type the following command at the EVMS: prompt:
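A sketch of the command, assuming an EVMS volume named /dev/evms/my_volume (a hypothetical name):

```
convert: /dev/evms/my_volume, compatibility
```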
This chapter tells how to expand and shrink EVMS volumes with the EVMS GUI, Ncurses, and CLI interfaces. Note that you can also expand and shrink compatibility volumes and EVMS objects. Expanding and shrinking volumes are common volume operations on most systems. For example, it might be necessary to shrink a particular volume to create free space for another volume to expand into or to create a new volume. EVMS simplifies the process for expanding and shrinking volumes, and protects the integrity of your data, by coordinating expand and shrink operations with the volume's file system. For example, when shrinking a volume, EVMS first shrinks the underlying file system appropriately to protect the data. When expanding a volume, EVMS expands the file system automatically when new space becomes available. Not all file system interface modules (FSIM) types supported by EVMS allow shrink and expand operations, and some only perform the operations when the file system is mounted ("online"). The following table details the shrink and expand options available for each type of FSIM. Table 16.1. FSIM support for expand and shrink operations
You can perform all of the supported shrink and expand operations with each of the EVMS user interfaces. This section tells how to shrink a compatibility volume by 500 MB. Follow these steps to shrink the volume with the EVMS GUI:
Alternatively, you can perform some of the steps to shrink the volume with the GUI context sensitive menu:
Follow these steps to shrink a volume with Ncurses:
Alternatively, you can perform some of the steps to shrink the volume with the context sensitive menu:
The shrink command takes a shrink point followed by an optional name-value pair or an optional shrink object. To find the shrink point, use the query command with the shrink points filter on the object or volume you plan to shrink. For example:
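A sketch of such a query, assuming a volume named /dev/evms/vol (a hypothetical name):

```
query: shrink points, /dev/evms/vol
```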
Use a list options filter on the object of the shrink point to determine the name-value pair to use, as follows:
With the option information that is returned, you can construct the command, as follows:
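A sketch of the resulting command, assuming the shrink point is an LVM region named "lvm/Sample Container/Sample Region" and that the list options query reported a remove_size option (both the object name and the option name are illustrative; use the names your own queries return):

```
shrink: "lvm/Sample Container/Sample Region", remove_size=500MB
```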
This section tells how to expand a compatibility volume by 500 MB. Follow these steps to expand the volume with the EVMS GUI:
Alternatively, you can perform some of the steps to expand the volume with the GUI context sensitive menu:
Follow these steps to expand a volume with Ncurses:
Alternatively, you can perform some of the steps to expand the volume with the context sensitive menu:
The expand command takes an expand point followed by an optional name-value pair and an expandable object. To find the expand point, use the query command with the expand points filter on the object or volume you plan to expand. For example:
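A sketch of such a query, assuming a volume named /dev/evms/vol (a hypothetical name):

```
query: expand points, /dev/evms/vol
```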
Use a list options filter on the object of the expand point to determine the name-value pair to use, as follows:
The free space in your container is the container name plus /Freespace. With the option information that is returned, you can construct the command, as follows:
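A sketch of the resulting command, assuming the expand point is an LVM region named "lvm/Sample Container/Sample Region", that the list options query reported an add_size option, and that the container's free space object is the container name plus /Freespace as described above (the object and option names are illustrative):

```
expand: "lvm/Sample Container/Sample Region", add_size=500MB, "lvm/Sample Container/Freespace"
```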
This chapter tells how to add additional EVMS features to an already existing EVMS volume. EVMS lets you add features such as drive linking to a volume that already exists. By adding features, you avoid having to potentially destroy the volume and recreate it from scratch. For example, take the scenario of a volume that contains important data but is almost full. If you wanted to add more data to that volume but no free space existed on the disk immediately after the segment, you could add a drive link to the volume. The drive link concatenates another object to the end of the volume and continues seamlessly. The following example shows how to add drive linking to a volume with the EVMS GUI, Ncurses, and CLI interfaces.
Follow these steps to add a drive link to the volume with the EVMS GUI:
Alternatively, you can perform some of the steps to add a drive link with the GUI context sensitive menu:
Follow these steps to add a drive link to a volume with Ncurses:
Alternatively, you can perform some of the steps to add a drive link with the context sensitive menu:
Use the add feature command to add a feature to an existing volume. Specify the command name followed by a colon, followed by any options and the volume to operate on. To determine the options for a given feature, use the following query:
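A sketch of such a query, assuming the drive-linking feature plug-in is named DriveLinking (verify the plug-in name with a plain plugins query if unsure):

```
query: plugins, plugin=DriveLinking, list options
```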
The option names and descriptions are listed to help you construct your command. For our example, the command would look like the following:
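A sketch of the command for the drive-linking example, assuming the plug-in is named DriveLinking, that it takes a Name option for the new link object, and a volume named /dev/evms/vol (all three names are illustrative; use the names and options your query reported):

```
add feature: DriveLinking={Name=link1}, /dev/evms/vol
```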
This chapter discusses selective activation and deactivation of EVMS volumes and objects. There is a section in the EVMS configuration file, /etc/evms.conf, named "activate." This section has two entries: "include" and "exclude." The "include" entry lists the volumes and objects that should be activated. The "exclude" entry lists the volumes and objects that should not be activated. Names in either entry can be specified using "*", "?", and "[...]" notation. For example, the following entry will activate all the volumes:
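A sketch of such an include entry, assuming the bracketed-list syntax used in /etc/evms.conf:

```
include = [*]
```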
The next entry specifies that objects sda5 and sda7 not be activated:
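A sketch of such an exclude entry, assuming the bracketed-list syntax used in /etc/evms.conf:

```
exclude = [sda5 sda7]
```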
When EVMS is started, it first reads the include entry and builds a list of the volumes and objects that it should activate. It then reads the exclude entry and removes from the list any names found in the exclude list. For example, an activation section that activates all of the volumes except /dev/evms/temp looks like this:
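Assuming the same syntax, an activate section matching that description might look like:

```
activate {
        include = [*]
        exclude = [/dev/evms/temp]
}
```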
If /etc/evms.conf does not contain an activate section, the default behavior is to activate everything. This behavior is consistent with versions of EVMS prior to 2.4. Initial activation via /etc/evms.conf does not deactivate any volumes or objects; it only determines which ones should be active. The EVMS user interfaces offer the ability to activate or deactivate a particular volume or object. The volume or object will be activated or deactivated when the changes are saved. You can activate inactive volumes and objects using the various EVMS user interfaces. Note: EVMS does not currently update the EVMS configuration file (/etc/evms.conf) when volumes and objects are activated. If you activate a volume or object that is not initially activated and do not make the corresponding change in /etc/evms.conf, the volume or object will not be activated the next time the system is booted and you run evms_activate or one of the user interfaces. To activate volumes or objects with the GUI, follow these steps:
To activate with the GUI context-sensitive menu, follow these steps:
To activate a volume or object with Ncurses, follow these steps:
To enable activation on a volume or object with the Ncurses context-sensitive menu, follow these steps:
You can deactivate active volumes and objects using the various EVMS user interfaces. Note: EVMS does not currently update the EVMS configuration file (/etc/evms.conf) when a volume or object is deactivated. If you deactivate a volume or object that is initially activated and do not make the corresponding change in /etc/evms.conf, then the volume or object will be activated the next time you run evms_activate or one of the user interfaces. To deactivate a volume or object with the GUI, follow these steps:
To deactivate a volume or object with the GUI context-sensitive menu, follow these steps:
To deactivate a volume or object with Ncurses, follow these steps:
To deactivate a volume or object with the Ncurses context-sensitive menu, follow these steps:
In order for a volume or object to be active, all of its children must be active. When you activate a volume or object, EVMS activates all the objects from which that volume or object is built. Similarly, in order for an object to be inactive, none of its parents can be active. When you deactivate an object, EVMS deactivates all of the objects and volumes that are built from that object. As discussed in Section 18.1, "Initial activation using /etc/evms.conf", when EVMS starts, it builds an initial list of volumes and objects whose names match the "include" entry in the activation section of /etc/evms.conf. Because those volumes and objects cannot be active unless the objects they are built from are active, EVMS then adds to the list all the objects that make up the volumes and objects found in the initial match. EVMS then removes from the list the volumes and objects whose names match the "exclude" entry in the activation section of /etc/evms.conf. Because any volumes or objects that are built from the excluded ones cannot be active, EVMS removes them from the list as well. The enforcement of these dependencies can result in behavior that is not immediately apparent. Let's say, for example, that segment hda7 is made into volume /dev/evms/home, and the activation section in /etc/evms.conf looks like this:
When EVMS builds the list of volumes and objects to activate, everything is included. EVMS next removes all objects whose names start with "hda." hda7 will be removed from the list. Next, because volume /dev/evms/home is built from hda7, it will also be removed from the list and will not be activated. So, although volume /dev/evms/home is not explicitly in the exclude list, it is not activated because it depends on an object that will not be activated. Compatibility volumes are made directly from the volume's object. That is, the device node for the volume points directly to the device for the volume's object. Because a compatibility volume is inseparable from its object, a compatibility volume itself cannot be deactivated. To deactivate a compatibility volume you must deactivate the volume's object. Similarly, if a compatibility volume and its object are not active and you activate the volume's object, the compatibility volume will be active as well. Some volume operations, such as expanding and shrinking, may require that the volume be mounted or unmounted before you can perform the operation. EVMS lets you mount and unmount volumes from within EVMS without having to go to a separate terminal session. EVMS performs the mount and unmount operations immediately. It does not wait until the changes are saved. This section tells how to mount a volume through the various EVMS user interfaces. Follow these steps to mount a volume with the EVMS GUI:
Alternatively, you can mount a volume from the EVMS GUI context sensitive menu:
Follow these steps to mount a volume with Ncurses:
Alternatively, you can mount a volume with the Ncurses context-sensitive menu:
To mount a volume with the CLI, use the following command:
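The general form of the command, with the parameters explained next, is:

```
mount: <volume>, <mount point> [, <mount options>]
```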
<volume> is the name of the volume to be mounted. <mount point> is the name of the directory on which to mount the volume. <mount options> is a string of options to be passed to the mount command. This section tells how to unmount a volume through the various EVMS user interfaces. Follow these steps to unmount a volume with the EVMS GUI:
Alternatively, you can unmount a volume from the EVMS GUI context sensitive menu:
Follow these steps to unmount a volume with Ncurses:
Alternatively, you can unmount a volume with the Ncurses context-sensitive menu:
A volume with the SWAPFS file system is not mounted or unmounted. Rather, swapping is turned on for the volume using the /sbin/swapon command and turned off using the /sbin/swapoff command. EVMS lets you turn swapping on or off for a volume from within EVMS without having to go to a separate terminal session. As with mounting and unmounting, EVMS performs the swapon and swapoff operations immediately. It does not wait until the changes are saved. This section tells how to turn swap on using the various EVMS user interfaces. Follow these steps to turn swap on with the EVMS GUI:
Alternatively, you can turn swap on from the EVMS GUI context-sensitive menu:
Follow these steps to turn swap on with Ncurses:
Alternatively, you can turn swap on with the Ncurses context-sensitive menu:
This section tells how to turn swap off using the various EVMS user interfaces. Follow these steps to turn swap off with the EVMS GUI:
Alternatively, you can turn swap off from the EVMS GUI context-sensitive menu:
Follow these steps to turn swap off with Ncurses:
Alternatively, you can turn swap off with the Ncurses context-sensitive menu:
This chapter discusses plug-in operations tasks and shows how to complete a plug-in task with the EVMS GUI, Ncurses, and CLI interfaces. Plug-in tasks are functions that are available only within the context of a particular plug-in. These functions are not common to all plug-ins. For example, tasks to add spare disks to a RAID array make sense only in the context of the MD plug-in, and tasks to reset a snapshot make sense only in the context of the Snapshot plug-in. This section shows how to complete a plug-in operations task with the EVMS GUI, Ncurses, and CLI interfaces. Follow these steps to add sde to /dev/evms/md/md0 with the EVMS GUI:
Alternatively, you could use context-sensitive menus to complete the task, as follows:
Follow these steps to add sde to /dev/evms/md/md0 with Ncurses:
Alternatively, you can use the context sensitive menu to complete the task:
With the EVMS CLI, all plug-in tasks must be accomplished with the task command. Follow these steps to add sde to /dev/evms/md/md0 with the CLI:
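A sketch of the task invocation, assuming the MD plug-in names this task addspare (check the tasks available on the region with a query if unsure):

```
task: addspare, /dev/evms/md/md0, sde
```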
This chapter tells how to delete EVMS objects through the delete and delete recursive operations. There are two ways in EVMS that you can destroy objects that you no longer want: Delete and Delete Recursive. The Delete option destroys only the specific object you specify. The Delete Recursive option destroys the object you specify and its underlying objects, down to the container, if one exists, or else down to the disk. In order for a volume to be deleted, it must not be mounted. EVMS verifies that the volume you are attempting to delete is not mounted and does not perform the deletion if the volume is mounted. The following example shows how to destroy a volume and the objects below it with the EVMS GUI, Ncurses, and CLI interfaces.
Follow these steps to delete the volume and the container with the EVMS GUI:
Alternatively, you can perform some of the volume deletion steps with the GUI context sensitive menu:
Follow these steps to delete the volume and the container with Ncurses:
Alternatively, you can perform some of the volume deletion steps with the context sensitive menu:
Use the delete and delete recursive commands to destroy EVMS objects. Specify the command name followed by a colon, and then specify the volume, object, or container name. For example:
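A sketch of both forms, assuming a volume named /dev/evms/vol (a hypothetical name):

```
delete: /dev/evms/vol
delete recursive: /dev/evms/vol
```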
This chapter discusses how to replace objects. Occasionally, you might wish to change the configuration of a volume or storage object. For instance, you might wish to replace one of the disks in a drive-link or RAID-0 object with a newer, faster disk. As another example, you might have an EVMS volume created from a simple disk segment, and want to switch that segment for a RAID-1 region to provide extra data redundancy. Object-replace accomplishes such tasks. Object-replace gives you the ability to swap one object for another object. The new object is added while the original object is still in place. The data is then copied from the original object to the new object. When this is complete, the original object is removed. This process can be performed while the volume is mounted and in use. For this example, we will start with a drive-link object named link1, which is composed of two disk segments named sda1 and sdb1. The goal is to replace sdb1 with another segment named sdc1. NoteThe drive-linking plug-in allows the target object (sdc1 in this example) to be the same size or larger than the source object. If the target is larger, the extra space will be unused. Other plug-ins have different restrictions and might require that both objects be the same size. Follow these steps to replace sdb1 with sdc1:
Alternatively, you can perform these same steps with the context sensitive menus:
When you save changes, EVMS begins to copy the data from sdb1 to sdc1. The status bar at the bottom of the UI will reflect the percent-complete of the copy operation. The UI must remain open until the copy is finished. At that time, the object sdb1 will be moved to the "Available Objects" panel. This chapter discusses how and why to move segments. A segment move relocates a data segment to another location on the underlying storage object. The new location of the segment cannot overlap with the current segment location. Segments are moved for a variety of reasons. The most compelling among them is to make better use of disk freespace. Disk freespace is an unused contiguous extent of sectors on a disk that has been identified by EVMS as a freespace segment. A data segment can only be expanded by adding sectors to the end of the segment, moving the end of the data segment up into the freespace that immediately follows it. However, what if there is no freespace following the data segment? A segment or segments could be moved around to put freespace after the segment that is to be expanded. For example:
The following segment manager plug-ins support the move function:
This section shows how to move a DOS segment. Note: In the following example, the DOS segment manager has a single primary partition on disk sda that is located at the very end of the disk. We want to move it to the front of the drive because we want to expand the segment but there is currently no freespace following it. To move the DOS segment through the GUI context sensitive menu, follow these steps:
To move the DOS segment, follow these steps:
Segments, containers, regions, EVMS objects, and volumes are each defined by some sort of metadata on the disk(s). For example, the DOS segment manager writes a partition table on the disk; the LVM region manager writes metadata that define the containers and regions; the MD region manager writes a "superblock" on each of the objects that make up a RAID array; the EVMS Engine writes metadata that define an EVMS volume. It is the combination of these metadata that define the system's configuration of the volumes. EVMS allows you to back up the metadata and restore it later in case the metadata get corrupted. The following sections tell how to back up metadata in the following ways:
Follow these steps to back up the metadata with the EVMS GUI:
The metadata will be saved to the file evms-metadata-yyyy-mm-dd-hh.mm.ss, where yyyy-mm-dd-hh.mm.ss is the system time of the backup. Note: EVMS cannot back up the metadata when there are outstanding configuration changes to the metadata. The configuration changes must first be saved before they can be backed up. Follow these steps to back up the metadata with Ncurses:
The metadata will be saved to the file evms-metadata-yyyy-mm-dd-hh.mm.ss, where yyyy-mm-dd-hh.mm.ss is the system time of the backup. Note: EVMS cannot back up the metadata when there are outstanding configuration changes to the metadata. The configuration changes must first be saved before they can be backed up. There are two configuration options in the engine section of /etc/evms.conf for controlling the behavior of metadata backups. metadata_backup_dir sets the default directory for storing the metadata backup files. If metadata_backup_dir is not specified, the default location is /var/evms/metadata_backups. auto_metadata_backup indicates that EVMS should automatically save a backup of the metadata each time a change to the metadata configuration is saved. If auto_metadata_backup is set to "yes," every time you use the EVMS CLI, EVMS Ncurses, or the EVMS GUI to change the configuration of the metadata and then save the changes, EVMS will automatically save a backup of the new metadata. If auto_metadata_backup is not specified, the default is not to automatically back up the metadata. EVMS provides a standalone utility, evms_metadata_backup, for backing up the metadata. Its syntax is: evms_metadata_backup [options] [directory]
evms_metadata_backup has an optional directory parameter. directory is the directory in which to save the metadata backup file. If directory is not specified, EVMS will use the directory given for metadata_backup_dir in the engine section of /etc/evms.conf. If metadata_backup_dir is not specified in /etc/evms.conf, the default directory is /var/evms/metadata_backups. The metadata are saved in a file named evms-metadata-yyyy-mm-dd-hh.mm.ss, where yyyy-mm-dd-hh.mm.ss is the system time of the backup. You restore metadata by using the evms_metadata_restore utility. The syntax is: evms_metadata_restore [options] [thing_name ...] The following options can be used with evms_metadata_restore:
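Summarized from the sections that follow, the options are:

```
-L          print the contents of a metadata backup file in human-readable form
-a          restore all the metadata in the backup file
-f <file>   use the specified metadata backup file instead of the latest one
-D <date>   use the backup file closest to the given date
-s          restore only the named thing, without recursively restoring its children
-p          restore the metadata written on the named thing (its parents' metadata)
```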
evms_metadata_restore accepts optional parameters, which are the names of the things (segments, regions, containers, EVMS objects, volumes) for which the metadata are to be restored. The following sections give examples of how to use the various options and parameters for evms_metadata_restore. Use the -L option to have evms_metadata_restore print the contents of the metadata backup file in a human-readable form. The output is sent to standard output. For example:
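For example, to print the latest backup in the default directory:

```
evms_metadata_restore -L
```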
evms_metadata_restore will find the latest metadata backup file in the default metadata backup directory and print its contents. The output would look something like:
The "Offset" and "Length" are in units of 512-byte sectors. Use the -a option to restore all the metadata in the backup file. You must use the -a option if you do not specify any thing names as command-line parameters. For example:
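For example, to restore everything from the most recent backup in the default directory:

```
evms_metadata_restore -a
```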
evms_metadata_restore will restore all the metadata from the latest backup file in the default metadata backup directory. Use the -f option to specify a metadata backup file other than the latest file in the default metadata backup directory. For example:
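For instance, pointing at a backup file stored under /root/backups/evms:

```
evms_metadata_restore -a -f /root/backups/evms/2004-06-05
```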
evms_metadata_restore will restore all the metadata from file /root/backups/evms/2004-06-05. Use the -D option to use the metadata backup file that is most recent to but not older than a given date. For example, let's say there are three metadata backup files in the default metadata backup directory:
The following command will restore all the metadata from evms-metadata-2004-04-10-08.28.12.

evms_metadata_restore -a -D 2004/04/10

The following command will restore all the metadata from file evms-metadata-2004-07-07.18.33.

evms_metadata_restore -a -D "2004/04/10 06:00:00"

To restore the metadata for a particular thing (object, container, or volume), specify its name on the command line when you run evms_metadata_restore. For example, let's say that hda5_bbr is built from segment hda5, which comes from disk hda. The following command, by default, recursively restores the metadata for the things on which hda5_bbr is built, and then restores the metadata needed to build hda5_bbr:

evms_metadata_restore hda5_bbr

In this example, evms_metadata_restore will read the latest metadata backup file. It will first restore onto hda the metadata necessary to build hda5. Then it will restore onto hda5 the metadata that are needed to build hda5_bbr. evms_metadata_restore can handle multiple names of things on the command line. So, for example, you could run:

evms_metadata_restore hda5_bbr hda6_bbr hda7 md/md0

Use the -s option to restore the metadata to build a particular thing (object, container, volume) without recursively restoring the metadata to build the things on which that thing depends. Using the example from the previous section, the following command will restore only the metadata needed to build hda5_bbr:

evms_metadata_restore -s hda5_bbr

In this example, evms_metadata_restore will read the latest metadata backup file and then restore onto hda5 the metadata that are needed to build hda5_bbr. Use the -p option to restore the metadata that were written to a particular thing rather than the metadata to build the thing itself. The -p option effectively restores the metadata to build the parents of the given thing. Note that in order to restore the metadata to the given thing, the thing must already exist so that evms_metadata_restore can write the metadata to it.
For example, let's say that volume /dev/evms/Data is built from hda5_bbr, which is built from segment hda5, which comes from disk hda. The following command will get the latest metadata backup file and restore the metadata for all of the things that are built from object hda5:

evms_metadata_restore -p hda5

In this example, evms_metadata_restore will restore onto hda5 the metadata needed to build hda5_bbr. It will then restore onto hda5_bbr the metadata needed to build volume /dev/evms/Data. Use the combination of the -p and -s options to restore only the metadata that were written to a particular thing without recursively restoring the metadata to the thing's parents. Note that in order to restore the metadata to the given thing, the thing must already exist so that evms_metadata_restore can write the metadata to it. Using the example from the previous section, the following command will get the latest metadata file and restore the metadata for the things that are built directly from object hda5, that is, hda5_bbr:

evms_metadata_restore -p -s hda5

The command will not restore the metadata to build volume /dev/evms/Data. The following sections describe some cautions you need to be aware of when backing up and restoring EVMS metadata. EVMS saves the chunks of metadata in the backup file in entries that say "this chunk of metadata to build object named 'abc' is written to object named 'xyz' at offset n for a length of m." The metadata are not saved with physical offsets on the disk. Nor are they saved positionally, such as "this is the metadata for the third partition on the disk." Because the metadata saved in the backup file are name based, when evms_metadata_restore writes metadata to an object, it looks up the object by name. If the object names have changed from the names that were saved in the metadata backup file, you run the risk of evms_metadata_restore not being able to find the object or, worse yet, writing the metadata to the wrong object.
Objects can change names for several reasons, including:
Make sure you are restoring the metadata to objects with the correct names.

As mentioned in the previous section, the metadata entries in the backup file record the offset of the metadata from the beginning of the object. If the size of the object onto which the metadata are to be written has changed, the metadata can end up being written to the wrong location. If the object is smaller than it was when the metadata were backed up, evms_metadata_restore can fail to write the metadata because the offset and length of the metadata can go beyond the end of the object. If the object is larger than it was when the metadata were backed up, evms_metadata_restore can end up putting the metadata in the wrong location on the object.

Several volume management schemes write their metadata at or near the end of an object: Multi-Disk (MD), the EVMS Cluster Segment Manager, the EVMS Drive Link feature, the EVMS Snapshot feature, and EVMS volumes. Those plug-ins look for their metadata at the end of the object. evms_metadata_restore will write the metadata to the offset specified in the entry in the backup file. If the object is larger, the metadata will not be written at the end of the object. The plug-ins will not find their metadata at the end of the object and will therefore not produce the region/feature/volume that should be made from the object. To avoid problems with the metadata ending up in the wrong location, restore the metadata to objects of the same size as when the metadata were backed up.

For most situations, it is better to use the features of MD to restore a RAID array rather than using evms_metadata_restore. For example, if one of the disks in a RAID5 array fails, it is better to replace the failed disk with a new disk and then add the new disk as a spare to the array so that it can get synced into the array. This has two advantages.
One is that MD will place the "superblock," which defines the disk as a member of the array, on the correct location of the new disk, which is near the end. If the new disk is bigger than the one it is replacing, MD will put the superblock at its correct location near the end of the new disk. If instead you used evms_metadata_restore, the superblock will be written to the location for the old disk and its size, not to the correct location for the new disk and its size, as discussed in the previous section. When MD then tries to discover the new disk, it will not find the superblock and will not treat the new disk as a member of the array. Another advantage is that when MD syncs a new disk into an array, it syncs all the data on the disk. evms_metadata_restore only restores metadata; it will not restore the data on the disk. The DOS plug-in is the most commonly used EVMS segment manager plug-in. The DOS plug-in supports DOS disk partitioning as well as:
The DOS plug-in reads metadata and constructs segment storage objects that provide mappings to disk partitions. The DOS plug-in provides compatibility with DOS partition tables. The plug-in produces EVMS segment storage objects that map primary partitions described by the MBR partition table and logical partitions described by EBR partition tables. DOS partitions have names that are constructed from two pieces of information:
Take, for example, partition name hda1, which describes a partition that is found on device hda in the MBR partition table. DOS partition tables can hold four entries. Partition numbers 1-4 refer to MBR partition records. Therefore, our example is telling us that partition hda1 is described by the very first partition record entry in the MBR partition table. Logical partitions, however, are different than primary partitions. EBR partition tables are scattered across a disk but are linked together in a chain that is first located using an extended partition record found in the MBR partition table. Each EBR partition table contains a partition record that describes a logical partition on the disk. The name of the logical partition reflects its position in the EBR chain. Because the MBR partition table reserves numerical names 1-4, the very first logical partition is always named 5. The next logical partition, found by following the EBR chain, is called 6, and so forth. So, the partition hda5 is a logical partition that is described by a partition record in the very first EBR partition table. While discovering DOS partitions, the DOS plug-in also looks for OS/2 DLAT metadata to further determine if the disk is an OS/2 disk. An OS/2 disk has additional metadata and the metadata is validated during recovery. This information is important for the DOS plug-in to know because an OS/2 disk must maintain additional partition information. (This is why the DOS plug-in asks, when being assigned to a disk, if the disk is a Linux disk or an OS/2 disk.) The DOS plug-in needs to know how much information must be kept on the disk and what kind of questions it should ask the user when obtaining the information. An OS/2 disk can contain compatibility volumes as well as logical volumes. A compatibility volume is a single partition with an assigned drive letter that can be mounted. 
An OS/2 logical volume is a drive link of one or more partitions that have software bad-block relocation at the partition level. Embedded partitions, like those found on a SolarisX86 disk or a BSD compatibility disk, are found within a primary partition. Therefore, the DOS plug-in inspects primary partitions that it has just discovered to further determine if any embedded partitions exist. Primary partitions that hold embedded partition tables have partition type fields that indicate this. For example, a primary partition of type 0xA9 probably has a BSD partition table that subdivides the primary partition into BSD partitions. The DOS plug-in looks for a BSD disk label and BSD data partitions in the primary partition. If the DOS plug-in finds a BSD disk label, it exports the BSD partitions. Because this primary partition is actually just a container that holds the BSD partitions, and not a data partition itself, it is not exported by the DOS plug-in. Embedded partitions are named after the primary partition they were discovered within. As an example, hda3.1 is the name of the first embedded partition found within primary partition hda3. Assigning a segment manager to a disk means that you want the plug-in to manage partitions on the disk. In order to assign a segment manager to a disk, the plug-in needs to create and maintain the appropriate metadata, which is accomplished through the "disk type" option. When you specify the "disk type" option and choose Linux or OS/2, the plug-in knows what sort of metadata it needs to keep and what sort of questions it should ask when creating partitions. An additional OS/2 option is the "disk name" option, by which you can provide a name for the disk that will be saved in OS/2 metadata and that will be persistent across reboots. There are two basic DOS partition types:
Every partition table has room for four partition records; however, there are a few rules that impose limits on this. An MBR partition table can hold four primary partition records unless you also have logical partitions. In this case, one partition record is used to describe an extended partition and the start of the EBR chain that in turn describes logical partitions. Because all logical partitions must reside in the extended partition, you cannot allocate room for a primary partition within the extended partition and you cannot allocate room for a logical partition outside or adjacent to this area. Lastly, an EBR partition table performs two functions:
EBR partition tables use at most two entries. When creating a DOS partition, the options you are presented with depend on the kind of disk you are working with. However, both OS/2 disks and Linux disks require that you choose a freespace segment on the disk within which to create the new data segment. The create options are:
Additional OS/2 options are the following:
A partition is a physically contiguous run of sectors on a disk. You can expand a partition by adding unallocated sectors to the initial run of sectors on the disk. Because the partition must remain physically contiguous, a partition can only be expanded by growing into an unused area on the disk. These unused areas are exposed by the DOS plug-in as freespace segments. Therefore, a data segment is only expandable if a freespace segment immediately follows it. Lastly, because a DOS partition must end on a cylinder boundary, DOS segments are expanded in cylinder size increments. This means that if the DOS segment you want to expand is followed by a freespace segment, you might be unable to expand the DOS segment if the freespace segment is less than a cylinder in size. There is one expand option, as follows:
A partition is shrunk when sectors are removed from the end of the partition. Because a partition must end on a cylinder boundary, a partition is shrunk by removing cylinder amounts from the end of the segment. There is one shrink option, as follows:
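The cylinder-rounding rules for expand and shrink can be sketched as follows (hypothetical helper functions, not part of EVMS; all sizes are in sectors):

```python
# Sketch of the cylinder-boundary rules for expanding and shrinking
# DOS segments. All sizes are in sectors; function names are hypothetical.

def max_expand(following_freespace, cylinder_size):
    """Largest amount a segment can grow into the freespace segment that
    immediately follows it, in whole-cylinder increments."""
    return (following_freespace // cylinder_size) * cylinder_size

def max_shrink(segment_size, cylinder_size):
    """Largest amount removable from the end of the segment, in whole
    cylinders, leaving at least one cylinder behind."""
    cylinders = segment_size // cylinder_size
    return max(cylinders - 1, 0) * cylinder_size

# A trailing freespace segment smaller than one cylinder allows no expansion:
print(max_expand(10000, 16065))   # 0
print(max_expand(40000, 16065))   # 32130 (two whole cylinders)
```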
The Multi-Disk (MD) driver in the Linux kernel and the MD plug-in in EVMS provide a software implementation of RAID (Redundant Array of Inexpensive Disks). The basic idea of software RAID is to combine multiple hard disks into an array of disks in order to improve capacity, performance, and reliability. The RAID standard defines a wide variety of methods for combining disks into a RAID array. In Linux, MD implements a subset of the full RAID standard, including RAID-0, RAID-1, RAID-4, and RAID-5. MD also supports two additional combinations, Linear-RAID and Multipath. In addition to this appendix, more information about RAID and the Linux MD driver can be found in the Software RAID HOWTO at www.tldp.org/HOWTO/Software-RAID-HOWTO.html. All RAID levels are used to combine multiple devices into a single MD array. The MD plug-in is a region-manager, so EVMS refers to MD arrays as "regions." MD can create these regions using disks, segments, or other regions. This means that it's possible to create RAID regions using other RAID regions, and thus combine multiple RAID levels within a single volume stack. The following subsections describe the characteristics of each Linux RAID level. Within EVMS, these levels can be thought of as sub-modules of the MD plug-in. Linear-RAID regions combine objects by appending them to each other. Writing (or reading) linearly to the MD region starts by writing to the first child object. When that object is full, writes continue on the second child object, and so on until the final child object is full. Child objects of a Linear-RAID region do not have to be the same size. Advantage:
Disadvantages:
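The append behavior described above can be modeled with a short sketch (names are hypothetical; the real mapping is done by the MD kernel driver):

```python
# Sketch of Linear-RAID address mapping: child objects are appended,
# so a region offset falls in the first child whose cumulative size
# exceeds it. Children need not be the same size.

def linear_map(offset, child_sizes):
    """Map a region offset to (child_index, offset_within_child)."""
    for i, size in enumerate(child_sizes):
        if offset < size:
            return i, offset
        offset -= size
    raise ValueError("offset beyond end of region")

children = [100, 50, 200]         # three children of different sizes
print(linear_map(30, children))   # (0, 30) -> first child
print(linear_map(120, children))  # (1, 20) -> second child
```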
RAID-0 is usually referred to as "striping." This means that data in a RAID-0 region is evenly distributed and interleaved on all the child objects. For example, when writing 16 KB of data to a RAID-0 region with three child objects and a chunk-size of 4 KB, the data would be written as follows:
Advantages:
Disadvantage:
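The striping described above (the 16 KB example with three child objects and a 4 KB chunk size) can be reproduced with a short sketch (illustrative only, not the MD driver's code):

```python
# Sketch of RAID-0 striping: chunk i of the region lands on child
# i % N, in stripe i // N. The values below mirror the 16 KB example:
# three children, 4 KB chunk size.

CHUNK_KB = 4

def stripe_layout(total_kb, n_children, chunk_kb=CHUNK_KB):
    """Return, per child object, the list of region chunk numbers it holds."""
    layout = [[] for _ in range(n_children)]
    for chunk in range(total_kb // chunk_kb):
        layout[chunk % n_children].append(chunk)
    return layout

print(stripe_layout(16, 3))
# Chunks 0-3 are interleaved round-robin across the three children.
```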
RAID-1 is usually referred to as "mirroring." Each child object in a RAID-1 region contains an identical copy of the data in the region. A write to a RAID-1 region results in that data being written simultaneously to all child objects. A read from a RAID-1 region can result in reading the data from any one of the child objects. Child objects of a RAID-1 region do not have to be the same size, but the size of the region will be equal to the size of the smallest child object. Advantages:
Disadvantages:
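The mirroring semantics above can be modeled in a short sketch (a hypothetical class for illustration only): every child holds a full copy, the region is as large as its smallest child, and a read can be satisfied by any child.

```python
# Sketch of RAID-1 semantics: writes hit every child object, reads can
# come from any one of them, and the region size equals the smallest child.

class Mirror:
    def __init__(self, child_sizes):
        self.size = min(child_sizes)              # region size = smallest child
        self.children = [dict() for _ in child_sizes]

    def write(self, block, data):
        for child in self.children:               # write goes to all children
            child[block] = data

    def read(self, block, child_index=0):
        return self.children[child_index][block]  # any child will do

m = Mirror([100, 120, 90])
print(m.size)                     # 90
m.write(7, b"payload")
print(m.read(7, child_index=2))   # b'payload'
```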
RAID-4/5 is often referred to as "striping with parity." Like RAID-0, the data in a RAID-4/5 region is striped, or interleaved, across all the child objects. However, in RAID-4/5, parity information is also calculated and recorded for each stripe of data in order to provide redundancy in case one of the objects is lost. In the event of a disk crash, the data from that disk can be recovered based on the data on the remaining disks and the parity information. In RAID-4 regions, a single child object is used to store the parity information for each data stripe. However, this can cause an I/O bottleneck on this one object, because the parity information must be updated for each I/O-write to the region. In RAID-5 regions, the parity is spread evenly across all the child objects in the region, thus eliminating the parity bottleneck in RAID-4. RAID-5 provides four different algorithms for how the parity is distributed. In fact, RAID-4 is often thought of as a special case of RAID-5 with a parity algorithm that simply uses one object instead of all objects. This is the viewpoint that Linux and EVMS use. Therefore, the RAID-4/5 level is often just referred to as RAID-5, with RAID-4 simply being one of the five available parity algorithms. Advantages and disadvantages
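The redundancy described above rests on XOR parity: the parity chunk is the XOR of the data chunks in its stripe, so any single lost chunk can be rebuilt by XORing the survivors. A minimal sketch (illustrative, not the kernel's algorithm):

```python
# Sketch of striping-with-parity recovery: parity = XOR of the data
# chunks in a stripe, so any one lost chunk is the XOR of the rest.
from functools import reduce

def xor_chunks(chunks):
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*chunks))

stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # data on three children
parity = xor_chunks(stripe)                        # stored on a fourth object

# Simulate losing the second child and rebuilding it from the survivors:
survivors = [stripe[0], stripe[2], parity]
rebuilt = xor_chunks(survivors)
print(rebuilt == stripe[1])  # True
```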
A multipath region consists of one or more objects, just like the other RAID levels. However, in multipath, the child objects actually represent multiple physical paths to the same physical disk. Such setups are often found on systems with fiber-attached storage devices or SANs. Multipath is not actually part of the RAID standard, but was added to the Linux MD driver because it provides a convenient place to create "virtual" devices that consist of multiple underlying devices. The previous RAID levels can all be created using a wide variety of storage devices, including generic, locally attached disks (for example, IDE and SCSI). However, Multipath can only be used if the hardware actually contains multiple physical paths to the storage device, and such hardware is usually available on high-end systems with fiber- or network-attached storage. Therefore, if you don't know whether you should be using the Multipath module, chances are you don't need to use it. Like RAID-1 and RAID-4/5, Multipath provides redundancy against hardware failures. However, unlike these other RAID levels, Multipath protects against failures in the paths to the device, and not failures in the device itself. If one of the paths is lost (for example, a network adapter breaks or a fiber-optic cable is removed), I/O will be redirected to the remaining paths. Like RAID-0 and RAID-4/5, Multipath can provide I/O performance improvements by load balancing I/O requests across the various paths. The procedure for creating a new MD region is very similar for all the different RAID levels. When using the EVMS GUI or Ncurses, first choose the Actions → Create Region menu item. A list of region-managers will open, and each RAID level will appear as a separate plug-in in this list. Select the plug-in representing the desired RAID level. The next panel will list the objects available for creating a new RAID region. Select the desired objects to build the new region. 
If the selected RAID level does not support any additional options, then there are no more steps, and the region will be created. If the selected RAID level has extra creation options, the next panel will list those options. After selecting the options, the region will be created. When using the CLI, use the following command to create a new region:
For <plugin>, the available plug-in names are "MDLinearRegMgr," "MDRaid0RegMgr," "MDRaid1RegMgr," "MDRaid5RegMgr," and "MD Multipath." The available options are listed in the following sections. If no options are available or desired, simply leave the space blank between the curly braces. The Linear-RAID and Multipath levels provide no extra options for creation. The remaining RAID levels provide the options listed below. RAID-0 has the following option:
RAID-1 has the following option:
RAID-4/5 have the following options:
An active object in a RAID region is one that is actively used by the region and contains data or parity information. When creating a new RAID region, all the objects selected from the main available-objects panel will be active objects. Linear-RAID and RAID-0 regions only have active objects, and if any of those active objects fail, the region is unavailable. On the other hand, the redundant RAID levels (1 and 4/5) can have spare objects in addition to their active objects. A spare is an object that is assigned to the region, but does not contain any live data or parity. Its primary purpose is to act as a "hot standby" in case one of the active objects fails. In the event of a failure of one of the child objects, the MD kernel driver removes the failed object from the region. Because these RAID levels provide redundancy (either in the form of mirrored data or parity information), the whole region can continue providing normal access to the data. However, because one of the active objects is missing, the region is now "degraded." If a region becomes degraded and a spare object has been assigned to that region, the kernel driver will automatically activate that spare object. This means the spare object is turned into an active object. However, this newly active object does not have any data or parity information, so the kernel driver must "sync" the data to this object. For RAID-1, this means copying all the data from one of the current active objects to this new active object. For RAID-4/5, this means using the data and parity information from the current active objects to fill in the missing data and parity on the new active object. While the sync process is taking place, the region remains in the degraded state. Only when the sync is complete does the region return to the full "clean" state. You can follow the progress of the sync process by examining the /proc/mdstat file. 
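As a sketch of what monitoring the sync looks like, the following pulls the percentage out of an mdstat-style status line (the sample text below is illustrative; consult your kernel's actual /proc/mdstat output for the exact format):

```python
# Sketch: extract sync progress from a /proc/mdstat-style status line.
# The SAMPLE text is an illustrative approximation of the kernel's format.
import re

SAMPLE = (
    "md0 : active raid1 sdb1[1] sda1[0]\n"
    "      8388544 blocks [2/2] [UU]\n"
    "      [===>.........]  resync = 25.0% (2097152/8388544) "
    "finish=3.1min speed=33792K/sec\n"
)

def sync_progress(mdstat_text):
    """Return the resync/recovery percentage, or None if no sync is running."""
    m = re.search(r"(resync|recovery)\s*=\s*([\d.]+)%", mdstat_text)
    return float(m.group(2)) if m else None

print(sync_progress(SAMPLE))  # 25.0
```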
You can also control the speed of the sync process using the files /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max. To speed up the process, echo a larger number into the speed_limit_min file. As discussed above, a spare object can be assigned to a RAID-1 or RAID-4/5 region when the region is created. In addition, a spare object can also be added to an already existing RAID region. The effect of this operation is the same as if the object were assigned when the region was created. If the RAID region is clean and operating normally, the kernel driver will add the new object as a regular spare, and it will act as a hot-standby for future failures. If the RAID region is currently degraded, the kernel driver will immediately activate the new spare object and begin syncing the data and parity information. For both RAID-1 and RAID-4/5 regions, use the "addspare" plug-in function to add a new spare object to the region. The only argument is the name of the desired object, and only one spare object can be added at a time. For RAID-1 regions, the new spare object must be at least as big as the region, and for RAID-4/5 regions, the new spare object must be at least as big as the smallest active object. Spare objects can be added while the RAID region is active and in use. If a RAID-1 or RAID-4/5 region is clean and operating normally, and that region has a spare object, the spare object can be removed from the region if you need to use that object for another purpose. For both RAID-1 and RAID-4/5 regions, use the "remspare" plug-in function to remove a spare object from the region. The only argument is the name of the desired object, and only one spare object can be removed at a time. After the spare is removed, that object will show up in the Available-Objects list in the EVMS user interfaces. Spare objects can be removed while the RAID region is active and in use. In RAID-1 regions, every active object has a full copy of the data for the region. 
This means it is easy to simply add a new active object, sync the data to this new object, and thus increase the "width" of the mirror. For instance, if you have a 2-way RAID-1 region, you can add a new active object, making it a 3-way mirror and increasing the amount of redundancy offered by the region. This process of adding a new active object can be done in one of two ways. First, the "addactive" plug-in function adds any available object in EVMS to the region as a new active object. The new object must be at least as big as the size of the RAID-1 region. Second, if the RAID-1 region has a spare object, that object can be converted to an active member of the region using the "activatespare" plug-in function. As discussed in the previous section, if one of the active objects in a RAID-1 or RAID-4/5 region has a problem, that object will be kicked out and the region will become degraded. A problem can occur with active objects in a variety of ways. For instance, a disk can crash, a disk can be pulled out of the system, a drive cable can be removed, or one or more I/Os can cause errors. Any of these will result in the object being kicked out and the RAID region becoming degraded. If a disk has completely stopped working or has been removed from the machine, EVMS obviously will no longer recognize that disk, and it will not show up as part of the RAID region when running the EVMS user interfaces. However, if the disk is still available in the machine, EVMS will likely be able to recognize that the disk is assigned to the RAID region, but has been removed from any active service by the kernel. This type of disk is referred to as a faulty object. Faulty objects are no longer usable by the RAID region, and should be removed. You can remove faulty objects with the "remfaulty" plug-in function for both RAID-1 and RAID-4/5. This operation is very similar to removing spare objects. 
After the object is removed, it will appear in the Available-Objects list in the EVMS user interfaces. Faulty objects can be removed while the RAID region is active and in use. Sometimes a disk can have a temporary problem that causes the disk to be marked faulty and the RAID region to become degraded. For instance, a drive cable can come loose, causing the MD kernel driver to think the disk has disappeared. However, if the cable is plugged back in, the disk should be available for normal use. Nevertheless, the MD kernel driver and the EVMS MD plug-in will continue to indicate that the disk is a faulty object because the disk might have missed some writes to the RAID region and would therefore be out of sync with the rest of the disks in the region. In order to correct this situation, the faulty object should be removed from the RAID region (as discussed in the previous section). The object will then show up as an Available-Object. Next, that object should be added back to the RAID region as a spare (as discussed in Section B.3.1, "Adding spare objects"). When the changes are saved, the MD kernel driver will activate the spare and sync the data and parity. When the sync is complete, the RAID region will be operating in its original, normal configuration. This procedure can be accomplished while the RAID region is active and in use. EVMS provides the ability to manually mark a child of a RAID-1 or RAID-4/5 region as faulty. This has the same effect as if the object had some problem or caused I/O errors. The object will be kicked out from active service in the region, and will then show up as a faulty object in EVMS. It can then be removed from the region as discussed in the previous sections. There are a variety of reasons why you might want to manually mark an object faulty. One example would be to test failure scenarios to learn how Linux and EVMS deal with hardware failures. 
Another example would be that you want to replace one of the current active objects with a different object. To do this, you would add the new object as a spare, then mark the current object faulty (causing the new object to be activated and the data to be resynced), and finally remove the faulty object. EVMS allows you to mark an object faulty in a RAID-1 region if there is more than one active object in the region. EVMS allows you to mark an object faulty in a RAID-4/5 region if the region has a spare object. Use the "markfaulty" plug-in function for both RAID-1 and RAID-4/5. This command can be used while the RAID region is active and in use. RAID regions can be resized in order to expand or shrink the available data space in the region. Each RAID level has different characteristics, and thus each RAID level has different requirements for when and how they can expand or shrink. See Chapter 16. "Expanding and shrinking volumes" for general information about resizing EVMS volumes and objects. A Linear-RAID region can be expanded in two ways. First, if the last child object in the Linear-RAID region is expandable, then that object can be expanded, and the RAID region can expand into that new space. Second, one or more new objects can be added to the end of the region. Likewise, a Linear-RAID region can be shrunk in two ways. If the last child object in the region is shrinkable, then that object can be shrunk, and the RAID region will shrink by the same amount. Also, one or more objects can be removed from the end of the RAID region (but the first object in the region cannot be removed). Linear-RAID regions can be resized while they are active and in use. You can expand a RAID-0 region by adding one new object to the region. You can shrink a RAID-0 region by removing up to N-1 of the current child objects in a region with N objects. 
Because RAID-0 regions stripe across the child objects, when a RAID-0 region is resized, the data must be "re-striped" to account for the new number of objects. This means the MD plug-in will move each chunk of data from its location in the current region to the appropriate location in the expanded region. Be forewarned, the re-striping process can take a long time. At this time, there is no mechanism for speeding up or slowing down the re-striping process. The EVMS GUI and text-mode user interface will indicate the progress of the re-striping. Please do not attempt to interrupt the re-striping before it is complete, because the data in the RAID-0 region will likely become corrupted. RAID-0 regions must be deactivated before they are resized in order to prevent data corruption while the data is being re-striped. IMPORTANT: Please have a suitable backup available before attempting a RAID-0 resize. If the re-striping process is interrupted before it completes (for example, the EVMS process gets killed, the machine crashes, or a child object in the RAID region starts returning I/O errors), then the state of that region cannot be ensured in all situations. EVMS will attempt to recover following a problem during a RAID-0 resize. The MD plug-in does keep track of the progress of the resize in the MD metadata. Each time a data chunk is moved, the MD metadata is updated to reflect which chunk is currently being processed. If EVMS or the machine crashes during a resize, the next time you run EVMS the MD plug-in will try to restore the state of that region based on the latest metadata information. If an expand was taking place, the region will be "rolled back" to its state before the expand. If a shrink was taking place, the shrink will continue from the point it stopped. However, this recovery is not always enough to ensure that the entire volume stack is in the correct state. 
If the RAID-0 region is made directly into a volume, then it will likely be restored to the correct state. On the other hand, if the RAID region is a consumed-object in an LVM container, or a child-object of another RAID region, then the metadata for those plug-ins might not always be in the correct state and might be at the wrong location on the RAID region. Thus, the containers, objects, and volumes built on top of the RAID-0 region might not reflect the correct size and might not even be discovered. A RAID-1 region can be resized if all of the child objects can be simultaneously resized by the same amount. RAID-1 regions cannot be resized by adding additional objects. This type of operation is referred to as "adding active objects," and is discussed in Section B.3.3, "Adding active objects to RAID-1". RAID-1 regions must be deactivated before they are resized. Resizing a RAID-4/5 region follows the same rules and restrictions for resizing a RAID-0 region. Expand a RAID-4/5 region by adding one new object to the region. Shrink a RAID-4/5 region by removing up to N-1 of the current child objects in a region with N objects. See Section B.5.2, "RAID-0" for information about how to perform this function. Like RAID-0, RAID-4/5 regions must be deactivated before they are resized. The MD plug-in allows the child objects of a RAID region to be replaced with other available objects. This is accomplished using the general EVMS replace function. Please see Chapter 22. "Replacing objects" for more detailed information about how to perform this function. For all RAID levels, the replacement object must be at least as big as the child object being replaced. If the replacement object is bigger than the child object being replaced, the extra space on the replacement object will be unused. In order to perform a replace operation, any volumes that comprise the RAID region must be unmounted. This capability is most useful for Linear-RAID and RAID-0 regions. 
It is also allowed with RAID-1 and RAID-4/5, but those two RAID levels offer the ability to mark objects faulty, which accomplishes the same end result. Because that process can be done while the region is in use, it is generally preferable to object-replace, which must be done with the region deactivated. The LVM plug-in combines storage objects into groups called containers. From these containers, new storage objects can be created, with a variety of mappings to the consumed objects. Containers allow the storage capacity of several objects to be combined, allow additional storage to be added in the future, and allow for easy resizing of the produced objects. The Linux LVM plug-in is compatible with volumes and volume groups from the original Linux LVM tools from Sistina Software. The original LVM is based on the concept of volume groups. A volume group (VG) is a grouping of physical volumes (PVs), which are usually disks or disk partitions. The volume group is not directly usable as storage space; instead, it represents a pool of available storage. You create logical volumes (LVs) to use this storage. The storage space of the LV can map to one or more of the group's PVs. The Linux LVM concepts are represented by similar concepts in the EVMS LVM plug-in. A volume group is called a container, and the logical volumes that are produced are called regions. The physical volumes can be disks, segments, or other regions. Just as in the original LVM, regions can map to the consumed objects in a variety of ways. Containers are created with an initial set of objects. In the LVM plug-in, the objects can be disks, segments, or regions. LVM has two options for creating containers. The value of these options cannot be changed after the container has been created. The options are:
You can add objects to existing LVM containers in order to increase the pool of storage that is available for creating regions. A single container can consume up to 256 objects. Because the name and PE size of a container are set when the container is created, no options are available when you add new objects to a container. Each object must be large enough to hold five physical extents. If an object is not large enough to satisfy this requirement, the LVM plug-in will not allow the object to be added to the container. You can remove a consumed object from its container as long as no regions are mapped to that object. The LVM plug-in does not allow objects that are in use to be removed from their container. If an object must be removed, you can delete or shrink regions, or move extents, in order to free the object from use. No options are available for removing objects from LVM containers. In addition to adding new objects to an LVM container, you can also expand the space in a container by expanding one of the existing consumed objects (PVs). For example, if a PV is a disk-segment with freespace immediately following it on the disk, you can expand that segment, which will increase the amount of freespace in the container. Likewise, if a PV is a RAID-0 or RAID-5 region, you can expand that region by adding additional objects, which in turn increases the freespace in the container. When using the GUI or text-mode UIs, PV-expand is performed by expanding the container. If any of the existing PVs are expandable, they will appear in the expand-points list. Choose the PV to expand, and then the options for expanding that object. After the PV has expanded, the container's freespace will reflect the additional space available on that PV. When using the CLI, PV-expand is performed by expanding the appropriate object directly. The CLI and the EVMS engine will route the necessary commands so the container is expanded at the same time. 
The options for expanding a PV are dependent on the plug-in that owns that PV object. Please see the appropriate plug-in's appendix for more details on options for that object. In addition to removing existing objects from an LVM container, you can also reduce the size of a container by shrinking one of the existing consumed objects (PVs). This is only allowed if the consumed object has physical extents (PEs) at the end of the object that are not allocated to any LVM regions. In this case, LVM will allow the object to shrink by the number of unused PEs at the end of that object. For example, if a PV is a disk-segment, you can shrink that segment, which will decrease the amount of freespace in the container. Likewise, if a PV is a RAID-0 or RAID-5 region, you can shrink that region by removing one of the objects, which in turn decreases the freespace in the container. When using the GUI or text-mode UIs, PV-shrink is performed by shrinking the container. If any of the existing PVs are shrinkable, they will appear in the shrink-points list. Choose the PV to shrink, and then the options for shrinking that object. After the PV has shrunk, the container's freespace will reflect the reduced space available on that PV. When using the CLI, PV-shrink is performed by shrinking the appropriate object directly. The CLI and the EVMS engine will route the necessary commands so the container is shrunk at the same time. The options for shrinking a PV are dependent on the plug-in that owns that PV object. Please see the appropriate plug-in's appendix for more details on options for that object. You can delete a container as long as the container does not have any produced regions. The LVM plug-in does not allow containers to be deleted if they have any regions. No options are available for deleting LVM containers. You can rename an existing LVM container. 
When renaming an LVM container, all of the regions produced from that container will automatically have their names changed as well, because the region names include the container name. In the EVMS GUI and text-mode UIs, this is done using the modify properties command, which is available through the "Actions" menu or the context-sensitive pop-up menus. In the EVMS CLI, this is done using the set command. See Section C.3.6, "Renaming LVM regions" for more information about the effects of renaming the regions. You create LVM regions from the freespace in LVM containers. If there is at least one extent of freespace in the container, you can create a new region. The following options are available for creating LVM regions:
You can expand an existing LVM region if there are unused extents in the container. If a region is striped, you can expand it only by using free space on the objects it is striped across. If a region was created with the contiguous option, you can only expand it if there is physically contiguous space following the currently allocated space. The following options are available for expanding LVM regions:
You can shrink an existing LVM region by removing extents from the end of the region. Regions must have at least one extent, so regions cannot be shrunk to zero. The following options are available when shrinking LVM regions. Because regions are always shrunk by removing space from the end of the region, a list of objects cannot be specified in this command.
You can delete an existing LVM region as long as it is not currently a compatibility volume, an EVMS volume, or consumed by another EVMS plug-in. No options are available for deleting LVM regions. The LVM plug-in lets you change the logical-to-physical mapping for an LVM region and move the necessary data in the process. This capability is most useful if a PV needs to be removed from a container. There are currently two LVM plug-in functions for moving regions: move_pv and move_extent. When a PV needs to be removed from a container, all PEs on that PV that are allocated to regions must be moved to other PVs. The move_pv command lets you move PEs to other PVs. move_pv is targeted at the LVM container and the desired PV is used as the selected object. The following options are available:
In addition to moving all the extents from one PV, the LVM plug-in provides the ability to move single extents. This allows a fine-grain tuning of the allocation of extents. This command is targeted at the region owning the extent to move. There are three required options for the move_extent command:
To determine the source LE and target PE, it is often helpful to view the extended information about the region and container in question. The following command-line queries can be used to gather this information. To view the map of LEs in the region, enter this command:
To view the list of PVs in the container, enter this command:
To view the current PE map for the desired target PV, enter this command:
# is the number of the target PV in the container. This information is also easily obtainable in the GUI and Text-Mode UIs by using the "Display Details" item in the context-sensitive pop-up menus for the desired region and container.

You can rename an existing LVM region. In the EVMS GUI and text-mode UIs, this is done using the modify properties command, which is available through the "Actions" menu or the context-sensitive pop-up menus. In the EVMS CLI, this is done using the set command. If the renamed LVM region has a compatibility volume on it, then the name of that compatibility volume will also change. In order for this to work correctly, that volume must be unmounted before the name is changed. Also, be sure to update your /etc/fstab file if the volume is listed, or the volume won't be mounted properly the next time the system boots. If the renamed LVM region has an EVMS volume or another storage object built on it, then the region's name change will be transparent to the upper layers. In this case, the rename can be done while the volume is mounted.

The LVM2 plug-in provides compatibility with the new volume format introduced by the LVM2 tools from Red Hat (previously Sistina). This plug-in is very similar in functionality to the LVM plug-in. The primary difference is the new, improved metadata format. LVM2 is still based on the concept of volume groups (VGs), which are constructed from physical volumes (PVs) and produce logical volumes (LVs). Just like the LVM plug-in, the LVM2 plug-in represents volume groups as EVMS containers and represents logical volumes as EVMS regions.

LVM2 containers combine storage objects (disks, segments, or other regions) to create a pool of freespace. Regions are then created from this freespace, with a variety of mappings to the consumed objects. Containers are created with an initial set of objects. These objects can be disks, segments, or regions. There are two options available when creating an LVM2 container:
You can add objects to existing LVM2 containers in order to increase the pool of storage that is available for creating regions. Because the name and extent-size are set when the container is created, no options are available when you add new objects to a container. Each object must be large enough to hold at least one physical extent. If an object is not large enough to satisfy this requirement, the LVM2 plug-in will not allow the object to be added to the container.

You can remove a consumed object from its container as long as no regions are mapped to that object. The LVM2 plug-in does not allow objects that are in use to be removed from their container. If an object must be removed, you can delete or shrink regions, or move extents, in order to free the object from use. No options are available for removing objects from LVM2 containers.

In addition to adding new objects to an LVM2 container, you can also expand the space in a container by expanding one of the existing consumed objects (PVs). For example, if a PV is a disk-segment with freespace immediately following it on the disk, you can expand that segment, which will increase the amount of freespace in the container. Likewise, if a PV is a RAID-0 or RAID-5 region, you can expand that region by adding additional objects, which in turn increases the freespace in the container.

When using the GUI or text-mode UIs, PV-expand is performed by expanding the container. If any of the existing PVs are expandable, they will appear in the expand-points list. Choose the PV to expand, and then the options for expanding that object. After the PV has expanded, the container's freespace will reflect the additional space available on that PV. When using the CLI, PV-expand is performed by expanding the appropriate object directly. The CLI and the EVMS engine will route the necessary commands so the container is expanded at the same time. The options for expanding a PV are dependent on the plug-in that owns that PV object.
Please see the appropriate plug-in's appendix for more details on options for that object.

In addition to removing existing objects from an LVM2 container, you can also reduce the size of a container by shrinking one of the existing consumed objects (PVs). This is only allowed if the consumed object has physical extents (PEs) at the end of the object that are not allocated to any LVM2 regions. In this case, LVM2 will allow the object to shrink by the number of unused PEs at the end of that object. For example, if a PV is a disk-segment, you can shrink that segment, which will decrease the amount of freespace in the container. Likewise, if a PV is a RAID-0 or RAID-5 region, you can shrink that region by removing one of the objects, which in turn decreases the freespace in the container.

When using the GUI or text-mode UIs, PV-shrink is performed by shrinking the container. If any of the existing PVs are shrinkable, they will appear in the shrink-points list. Choose the PV to shrink, and then the options for shrinking that object. After the PV has shrunk, the container's freespace will reflect the reduced space available on that PV. When using the CLI, PV-shrink is performed by shrinking the appropriate object directly. The CLI and the EVMS engine will route the necessary commands so the container is shrunk at the same time. The options for shrinking a PV are dependent on the plug-in that owns that PV object. Please see the appropriate plug-in's appendix for more details on options for that object.

You can delete a container as long as the container does not have any produced regions. The LVM2 plug-in does not allow containers to be deleted if they have any regions. No options are available for deleting LVM2 containers.

You can rename an existing LVM2 container. When renaming an LVM2 container, all of the regions produced from that container will automatically have their names changed as well, because the region names include the container name.
In the EVMS GUI and text-mode UIs, this is done using the modify properties command, which is available through the "Actions" menu or the context-sensitive pop-up menus. In the EVMS CLI, this is done using the set command. See Section D.2.5, "Renaming LVM2 regions" for more information about the effects of renaming the regions.

You create LVM2 regions from the freespace in LVM2 containers. If there is at least one extent of freespace in the container, you can create a new region. The following options are available for creating LVM2 regions:
You can expand an existing LVM2 region if there are any unused extents in the container. The following options are available for expanding LVM2 regions.
You can shrink an existing LVM2 region by removing extents from the end of the region. Regions must have at least one extent, so regions cannot be shrunk to zero. The following options are available when shrinking LVM2 regions. Because regions are always shrunk by removing space from the end of the region, a list of objects cannot be specified in this command.
You can delete an existing LVM2 region as long as it is not currently a compatibility volume, an EVMS volume, or consumed by another EVMS plug-in. No options are available for deleting LVM2 regions.

You can rename an existing LVM2 region. In the EVMS GUI and text-mode UIs, this is done using the modify properties command, which is available through the "Actions" menu or the context-sensitive pop-up menus. In the EVMS CLI, this is done using the set command. If the renamed LVM2 region has a compatibility volume on it, then the name of that compatibility volume will also change. In order for this to work correctly, that volume must be unmounted before the name is changed. Also, be sure to update your /etc/fstab file if the volume is listed, or the volume won't be mounted properly the next time the system boots. If the renamed LVM2 region has an EVMS volume or another storage object built on it, then the region's name change will be transparent to the upper layers. In this case, the rename can be done while the volume is mounted.

You can move all or parts of an LVM2 region around within the container so that it is physically located on a different area of a PV or on completely different PVs. Before moving an LVM2 region, you should first understand a bit about how the storage space of the region maps to the storage space of the container's PVs. By way of comparison, in the LVM1 plug-in each region is made of a list of logical extents, and each logical extent maps to exactly one physical extent on one of the PVs. In LVM2, however, the mapping is not quite so fine-grained. Each LVM2 region is made of one or more mappings, and each mapping can consist of multiple contiguous physical extents on one PV. You can use the display details command in the GUI or Text-Mode UI, or the query:ei,<region_name>,Mappings command in the CLI, to view details about an LVM2 region.
This information includes a list of the mappings that define that region, information about each mapping (linear, striped, chunk-size), and how the mappings are laid out on the PVs.

The LVM1 plug-in allows you to move each logical extent individually. See Section C.3.5, "Moving LVM regions". The LVM2 plug-in, however, allows you to move each logical mapping as a whole. The benefit of this is that it takes much less effort to move large portions of an LVM2 region, because an average region has few logical mappings, whereas in LVM1, each logical extent is treated independently, and there are often very many extents. The downside is that in order to move a mapping, there must be an equivalent amount of consecutive, unused space on one PV in the container.

To move an LVM2 region's mapping, use the "move_mapping" plug-in task, which is described in Chapter 20, "Plug-in operations tasks". This task has three or four options, depending on whether the region is linear or striped.
You can move multiple mappings in the same EVMS session. When you save changes, the data will be copied from the current location to the new location. When the copy is complete, the metadata will be updated to reflect the new location. This copying can be done while the volumes that are built on this region are mounted and in use. However, the EVMS session must be left open while the move is in progress. If the EVMS session closes or crashes (or if the machine crashes), the move that was in progress will effectively be cancelled. The next time you start EVMS, the region will have reverted to the state it was in before that move began. The data on that region should not be affected.

As stated in the previous section, the one downside to the move-mapping plug-in task is that a mapping must be moved as a whole. If there isn't enough contiguous freespace somewhere in the container to hold the mapping, it won't be possible to move that mapping. To fix this problem, a plug-in task, described in Chapter 20, "Plug-in operations tasks", is available that allows you to split one mapping into two mappings. The underlying location of the region does not change. This allows you to break a mapping into smaller pieces so it can be moved to smaller freespace areas in the container. The plug-in task is called "split_mapping," and has the following options:
In addition to splitting a mapping, there is a plug-in task to merge all mappings that are consecutive on the PVs. This task is called "merge_mapping" and has no options. The task merges all mappings in the region that can be merged.

The Cluster Segment Manager (CSM) is the EVMS plug-in that identifies and manages cluster storage. The CSM protects disk storage objects by writing metadata at the start and end of the disk, which prevents other plug-ins from attempting to use the disk. Other plug-ins can look at the disk, but they cannot see their own metadata signatures and cannot consume the disk. The protection that CSM provides allows the CSM to discover cluster storage and present it in an appropriate fashion to the system. All cluster storage disk objects must be placed in containers that have the following attributes:
The CSM plug-in reads metadata and constructs containers that consume the disk object. Each disk provides a usable area, mapped as an EVMS data segment, but only if the disk is accessible to the node viewing the storage. The CSM plug-in performs these operations:
Assigning a segment manager to a disk means that you want the plug-in to manage partitions on the disk. In order to do this, the plug-in needs to create and maintain appropriate metadata. The CSM creates the following three segments on the disk:
The CSM collects the information it needs to perform the assign operation with the following options:
Note that you would typically assign the CSM to a disk when you want to add a disk to an existing CSM container. If you are creating a new container, you have a choice of using either Actions->Create->Container or Actions->Add->Segment Manager. If the container doesn't exist, it will be created for the disk. If the container already exists, the disk will be added to it.

Unassigning a CSM plug-in results in the CSM removing its metadata from the specified disk storage object. The result is that the disk has no segments mapped and appears as a raw disk object. The disk is removed from the container that consumed it, and the data segment is removed as well.

An existing CSM container cannot be deleted if it is producing any data segments, because other EVMS plug-ins might be building higher-level objects on the CSM objects. To delete a CSM container, first remove disk objects from the container. When the last disk is removed, the container is also removed.

The JFS FSIM lets EVMS users create and manage JFS file systems from within the EVMS interfaces. In order to use the JFS FSIM, version 1.0.9 or later of the JFS utilities must be installed on your system. The latest version of JFS can be found at http://oss.software.ibm.com/jfs/. JFS file systems can be created with mkfs on any EVMS or compatibility volume (at least 16 MB in size) that does not already have a file system. The following options are available for creating JFS file systems:
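Whatever options the FSIM presents, the file system it creates is the same one produced by the standalone jfsutils program; the following is a hedged sketch of a direct invocation, in which the volume name /dev/evms/jfsvol is a hypothetical example:

```shell
# Create a JFS file system non-interactively; -q skips the
# confirmation prompt. The device name is a hypothetical example.
mkfs.jfs -q /dev/evms/jfsvol
```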
The following options are available for checking JFS file systems with fsck:
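Outside the EVMS interfaces, the same check can be run directly with the jfsutils check program; a hedged sketch (the volume name is hypothetical, and the volume must be unmounted):

```shell
# -n opens the unmounted volume read-only and reports problems
# without repairing; -f forces a full check and repair.
fsck.jfs -n /dev/evms/jfsvol
fsck.jfs -f /dev/evms/jfsvol
```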
A JFS file system can be removed from its volume if the file system is unmounted. This operation involves erasing the superblock from the volume so the file system will not be recognized in the future. There are no options available for removing file systems.

A JFS file system is automatically expanded when its volume is expanded. However, JFS only allows the volume to be expanded if it is mounted, because JFS performs all of its expansions online. In addition, JFS only allows expansions if version 1.0.21 or later of the JFS utilities is installed.

The XFS FSIM lets EVMS users create and manage XFS file systems from within the EVMS interfaces. In order to use the XFS FSIM, version 2.0.0 or later of the XFS utilities must be installed on your system. The latest version of XFS can be found at http://oss.sgi.com/projects/xfs/. XFS file systems can be created with mkfs on any EVMS or compatibility volume that does not already have a file system. The following options are available for creating XFS files systems:
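For comparison, the standalone XFS utility performs the same creation step; a hedged sketch in which the label and device name are hypothetical examples:

```shell
# -L sets a volume label (XFS labels are at most 12 characters);
# -b size= selects the file system block size in bytes.
mkfs.xfs -L data01 -b size=4096 /dev/evms/xfsvol
```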
The following options are available for checking XFS file systems with fsck:
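Note that the standalone XFS tools check a file system with xfs_repair rather than a traditional fsck; a hedged sketch (the device name is hypothetical, and the volume must be unmounted):

```shell
# -n inspects the volume and reports problems without modifying
# anything; running without -n performs the repairs.
xfs_repair -n /dev/evms/xfsvol
xfs_repair /dev/evms/xfsvol
```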
An XFS file system can be removed from its volume if the file system is unmounted. This operation involves erasing the superblock from the volume so the file system will not be recognized in the future. There are no options available for removing file systems.

An XFS file system is automatically expanded when its volume is expanded. However, XFS only allows the volume to be expanded if it is mounted, because XFS performs all of its expansions online.

The ReiserFS FSIM lets EVMS users create and manage ReiserFS file systems from within the EVMS interfaces. In order to use the ReiserFS FSIM, version 3.x.0 or later of the ReiserFS utilities must be installed on your system. In order to get full functionality from the ReiserFS FSIM, use version 3.x.1b or later. The latest version of ReiserFS can be found at http://www.namesys.com/. ReiserFS file systems can be created with mkfs on any EVMS or compatibility volume that does not already have a file system. The following option is available for creating ReiserFS file systems:
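Outside EVMS, the equivalent creation step uses the mkreiserfs utility from the ReiserFS tools; a hedged sketch (the device name is a hypothetical example):

```shell
# mkreiserfs asks for confirmation before writing to the device.
mkreiserfs /dev/evms/rfsvol
```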
The following option is available for checking ReiserFS file systems with fsck:
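The corresponding standalone check utility is reiserfsck; a hedged sketch (the device name is hypothetical, and the volume must be unmounted):

```shell
# --check (the default mode) inspects the file system read-only;
# --fix-fixable repairs problems that do not require a tree rebuild.
reiserfsck --check /dev/evms/rfsvol
reiserfsck --fix-fixable /dev/evms/rfsvol
```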
A ReiserFS file system can be removed from its volume if the file system is unmounted. This operation involves erasing the superblock from the volume so the file system will not be recognized in the future. There are no options available for removing file systems.

A ReiserFS file system is automatically expanded when its volume is expanded. ReiserFS file systems can be expanded whether the volume is mounted or unmounted.

The Ext-2/3 FSIM lets EVMS users create and manage Ext2 and Ext3 file systems from within the EVMS interfaces. In order to use the Ext-2/3 FSIM, the e2fsprogs package must be installed on your system. The e2fsprogs package can be found at http://e2fsprogs.sourceforge.net/. Ext-2/3 file systems can be created with mkfs on any EVMS or compatibility volume that does not already have a file system. The following options are available for creating Ext-2/3 file systems:
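For reference, the e2fsprogs creation utility can also be run directly; a hedged sketch in which the label and device names are hypothetical examples:

```shell
# Plain mke2fs creates an Ext2 file system; adding -j creates the
# journal that makes it Ext3. -L sets the volume label.
mke2fs /dev/evms/ext2vol
mke2fs -j -L data01 /dev/evms/ext3vol
```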
The following options are available for checking Ext-2/3 file systems with fsck:
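The standalone check program in e2fsprogs is e2fsck; a hedged sketch (the device name is hypothetical, and the volume should be unmounted):

```shell
# -f forces a check even when the file system is marked clean;
# -p repairs automatically without prompting ("preen" mode).
e2fsck -f /dev/evms/ext3vol
e2fsck -p /dev/evms/ext3vol
```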
An Ext-2/3 file system can be removed from its volume if the file system is unmounted. This operation involves erasing the superblock from the volume so the file system will not be recognized in the future. There are no options available for removing file systems.

The OpenGFS FSIM lets EVMS users create and manage OpenGFS file systems from within the EVMS interfaces. In order to use the OpenGFS FSIM, the OpenGFS utilities must be installed on your system. Go to http://sourceforge.net/projects/opengfs for the OpenGFS project. OpenGFS file systems can be created with mkfs on any EVMS or compatibility volume that does not already have a file system and that is produced from a shared cluster container. The following options are available for creating OpenGFS file systems:
The OpenGFS FSIM only takes care of file system operations. It does not take care of OpenGFS cluster and node configuration. Before the volumes can be mounted, you must configure the cluster and node separately after you have made the file system and saved the changes.

The OpenGFS utility for checking the file system has no additional options.

An OpenGFS file system can be removed from its volume if the file system is unmounted. This operation involves erasing the superblock from the volume, erasing the log headers for the journal volumes, and erasing the control block on the cluster configuration volume associated with the file system volume so that the file system will not be recognized in the future. There are no options available for removing file systems.

The NTFS FSIM lets EVMS users create and manage Windows® NT® file systems from within the EVMS interfaces. NTFS file systems can be created with mkfs on any EVMS or compatibility volume that is at least 1 MB in size and that does not already have a file system. The following options are available for creating NTFS file systems:
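Outside EVMS, an NTFS file system can be created with the mkntfs utility from the ntfsprogs package; a hedged sketch in which the label and device name are hypothetical examples:

```shell
# -f performs a quick format (skips zeroing and the bad-block scan);
# -L sets the volume label.
mkntfs -f -L winvol /dev/evms/ntfsvol
```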
The NTFS FSIM can run the ntfsfix utility on an NTFS file system. ntfsfix fixes NTFS partitions altered in any manner with the Linux NTFS driver. ntfsfix is not a Linux version of chkdsk. ntfsfix only tries to leave the NTFS partition in a not-so-inconsistent state after the NTFS driver has written to it. Running ntfsfix after mounting an NTFS volume read-write is recommended for reducing the chance of severe data loss when Windows NT or Windows 2000 tries to remount the affected volume. In order to use ntfsfix, you must unmount the NTFS volume. After running ntfsfix, you can safely reboot into Windows NT or Windows 2000. Please note that ntfsfix is not an fsck-like tool. ntfsfix is not guaranteed to fix all the alterations provoked by the NTFS driver. The following option is available for running ntfsfix on an NTFS file system:
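Run directly, ntfsfix takes the unmounted volume as its argument; a hedged sketch (the device name is a hypothetical example):

```shell
# The NTFS volume must be unmounted before ntfsfix is run.
umount /dev/evms/ntfsvol
ntfsfix /dev/evms/ntfsvol
```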
The NTFS FSIM can run the ntfsclone utility to copy an NTFS file system from one volume to another. ntfsclone is faster than dd because it only copies the files and the file system data instead of the entire contents of the volume. The following options are available for running ntfsclone on an NTFS file system:
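Run directly, ntfsclone copies either device-to-device or to an image file; a hedged sketch in which all device and file names are hypothetical examples:

```shell
# Device-to-device clone; --overwrite is required because the
# target device already exists.
ntfsclone --overwrite /dev/evms/ntfscopy /dev/evms/ntfsvol
# Save a space-efficient image of the file system to a file.
ntfsclone --save-image --output backup.img /dev/evms/ntfsvol
```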
An NTFS file system can be removed from its volume if the file system is unmounted. This operation involves erasing the superblock from the volume so the file system will not be recognized in the future. There are no options available for removing file systems.