StorNext 3.1.4 File System Tuning Guide
6-01376-12, Ver. A, Rel. 3.1.4, September 2009
Quantum Corporation provides this publication “as is” without warranty of any kind, either express or implied,
including but not limited to the implied warranties of merchantability or fitness for a particular purpose. Quantum
Corporation may revise this publication from time to time without notice.
COPYRIGHT STATEMENT
US Patent No: 5,990,810 applies. Other Patents pending in the US and/or other countries.
StorNext is either a trademark or registered trademark of Quantum Corporation in the US and/or other countries.
Your right to copy this manual is limited by copyright law. Making copies or adaptations without prior written
authorization of Quantum Corporation is prohibited by law and constitutes a punishable violation of the law.
TRADEMARK STATEMENT
Quantum, DLT, DLTtape, the Quantum logo, and the DLTtape logo are all registered trademarks of Quantum
Corporation.
RAID Cache Configuration

The single most important RAID tuning component is the cache
configuration. This is particularly true for small I/O operations.
Contemporary RAID systems such as the EMC CX series and the various
Engenio systems provide excellent small I/O performance with properly
tuned caching. So, for the best general purpose performance
characteristics, it is crucial to utilize the RAID system caching as fully as
possible.
For example, write-back caching is absolutely essential for metadata
stripe groups to achieve high metadata operations throughput.
However, there are a few drawbacks to consider as well. For example,
read-ahead caching improves sequential read performance but might
reduce random performance. Write-back caching is critical for small write
performance but may limit peak large I/O throughput.
Metadata operations involve a very high rate of small writes to the metadata disk,
so disk latency is the critical performance factor. Write-back caching can
be an effective approach to minimizing I/O latency and optimizing
metadata operations throughput. This is easily observed in the hourly
File System Manager (FSM) statistics reports in the cvlog file. For
example, here is a message line from the cvlog file:
PIO HiPriWr SUMMARY SnmsMetaDisk0 sysavg/350 sysmin/333 sysmax/367
This statistics message reports average, minimum, and maximum write
latency (in microseconds) for the reporting period. If the observed
average latency exceeds 500 microseconds, peak metadata operation
throughput will be degraded. For example, create operations may be
around 2000 per second when metadata disk latency is below 500
microseconds. However, if metadata disk latency is around 5
milliseconds, create operations per second may be degraded to 200 or
worse.
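To check these values on a running system, one approach (a sketch, assuming the default log location shown later in this guide and a file system named snfs1, both of which may differ on your system) is to search the cvlog file for the summary lines. For example:
grep "PIO HiPriWr SUMMARY" /usr/cvfs/data/snfs1/log/cvlog | tail -5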
Another typical write caching approach is “write-through.” This
approach involves synchronous writes to the physical disk before
returning a successful reply for the I/O operation. The write-through
approach exhibits much worse latency than write-back caching; therefore,
small I/O performance (such as metadata operations) is severely
impacted. It is important to determine which write caching approach is
employed, because the performance observed will differ greatly for small
write I/O operations.
In some cases, large write I/O operations can also benefit from caching.
However, some SNFS customers observe maximum large I/O
throughput by disabling caching. While this may be beneficial for special
large I/O scenarios, it severely degrades small I/O performance;
therefore, it is suboptimal for general-purpose file system performance.
RAID Read-Ahead Caching

RAID read-ahead caching is a very effective way to improve sequential
read performance for both small (buffered) and large (DMA) I/O
operations. When this setting is utilized, the RAID controller pre-fetches
disk blocks for sequential read operations. Therefore, subsequent
application read operations benefit from cache speed throughput, which
is faster than the physical disk throughput.
This is particularly important for concurrent file streams and mixed I/O
streams, because read-ahead significantly reduces disk head movement
that otherwise severely impacts performance.
RAID Level, Segment Size, and Stripe Size

Configuration settings such as RAID level, segment size, and stripe size
are very important and cannot be changed after the RAID is put into
production, so it is critical to determine appropriate settings during
initial configuration.
The best RAID level to use for high I/O throughput is usually RAID5.
The stripe size is determined by the product of the number of disks in the
RAID group and the segment size. For example, a 4+1 RAID5 group with
64K segment size results in a 256K stripe size. The stripe size is a very
critical factor for write performance because I/Os smaller than the stripe
size may incur a read/modify/write penalty. It is best to configure
RAID5 settings with no more than 512K stripe size to avoid the read/
modify/write penalty. The read/modify/write penalty is most
noticeable in the absence of “write-back” caching being performed by the
RAID controller.
The RAID stripe size configuration should typically match the SNFS
StripeBreadth configuration setting when multiple LUNs are utilized in a
stripe group. However, in some cases it might be optimal to configure the
SNFS StripeBreadth as a multiple of the RAID stripe size, such as when
the RAID stripe size is small but the user's I/O sizes are very large.
However, this will be suboptimal for small I/O performance, so may not
be suitable for general purpose usage.
RAID1 mirroring is the best RAID level for metadata and journal storage
because it is most optimal for very small I/O sizes. Quantum
recommends using fibre channel or SAS disks (as opposed to SATA) for
metadata and journal due to the higher IOPS performance and reliability.
It is also very important to allocate entire physical disks for the Metadata
and Journal LUNs in order to avoid bandwidth contention with other I/
O traffic. Metadata and Journal storage requires very high IOPS rates
(low latency) for optimal performance, so contention can severely impact
IOPS (and latency) and thus overall performance. If Journal I/O exceeds
1ms average latency, you will observe significant performance
degradation.
It can be useful to use a tool such as lmdd to help determine the storage
system performance characteristics and choose optimal settings. For
example, varying the stripe size and running lmdd with a range of I/O
sizes might be useful to determine an optimal stripe size multiple to
configure the SNFS StripeBreadth.
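As a sketch of such a test, assuming lmdd from the lmbench suite is installed and /stornext/snfs1 is a scratch mount point (both assumptions, not requirements of SNFS), write a large file while varying the block size and compare the reported throughput. For example:
lmdd if=internal of=/stornext/snfs1/lmdd.tst bs=4m count=1024 fsync=1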
Some storage vendors now provide RAID6 capability for improved
reliability over RAID5. This may be particularly valuable for SATA disks
where bit error rates can lead to disk problems. However, RAID6
typically incurs a performance penalty compared to RAID5, particularly
for writes. Check with your storage vendor for RAID5 versus RAID6
recommendations.
It is always valuable to understand the file size mix of the target dataset
as well as the application I/O characteristics. This includes the number of
concurrent streams, proportion of read versus write streams, I/O size,
sequential versus random, Network File System (NFS) or Common
Internet File System (CIFS) access, and so on.
For example, if the dataset is dominated by small or large files, various
settings can be optimized for the target size range.
Similarly, it might be beneficial to optimize for particular application I/O
characteristics. For example, to optimize for sequential 1MB I/O size it
would be beneficial to configure a stripe group with four 4+1 RAID5
LUNs with 256K stripe size.
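As a rough sketch of that example (the stripe group name and disk names below are hypothetical, following the configuration examples later in this guide), four LUNs with a StripeBreadth matching the 256K RAID stripe size let each 1MB application I/O span the full stripe group:
Example:
[StripeGroup Sequential1M]
Status UP
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDiskA 0
Node CvfsDiskB 1
Node CvfsDiskC 2
Node CvfsDiskD 3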
However, optimizing for random I/O performance can incur a
performance trade-off with sequential I/O.
Furthermore, NFS and CIFS access have special requirements to consider
as described in the Direct Memory Access (DMA) I/O Transfer section.
Direct Memory Access (DMA) I/O Transfer

To achieve the highest possible large sequential I/O transfer throughput,
SNFS provides DMA-based I/O. To utilize DMA I/O, the application
must issue its reads and writes of sufficient size and alignment. This is
called well-formed I/O. See the mount command settings
auto_dma_read_length and auto_dma_write_length, described in the
Mount Command Options on page 19.
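For reference, a hedged sketch of a Linux client mount that sets these thresholds explicitly follows; the 1MB values shown are illustrative only, not recommendations, and Mount Command Options on page 19 describes the actual defaults:
mount -t cvfs -o auto_dma_read_length=1048576,auto_dma_write_length=1048576 snfs1 /stornext/snfs1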
Buffer Cache

Reads and writes that are not well-formed utilize the SNFS buffer cache.
This also includes NFS or CIFS-based traffic because the NFS and CIFS
daemons defeat well-formed I/Os issued by the application.
There are several configuration parameters that affect buffer cache
performance. The most critical is the RAID cache configuration because
buffered I/O is usually smaller than the RAID stripe size, and therefore
incurs a read/modify/write penalty. It might also be possible to match
the RAID stripe size to the buffer cache I/O size. However, it is typically
most important to optimize the RAID cache configuration settings
described earlier in this document.
It is usually best to configure the RAID stripe size no greater than 256K
for optimal small file buffer cache performance.
For more buffer cache configuration settings, see Mount Command
Options on page 19.
NFS / CIFS

It is best to isolate NFS and/or CIFS traffic off of the metadata network to
eliminate contention that will impact performance. For optimal
performance it is necessary to use 1000BaseT instead of 100BaseT. On
NFS clients, use the vers=3, rsize=262144 and wsize=262144 mount
options, and use TCP mounts instead of UDP. When possible, it is also
best to utilize TCP Offload capabilities as well as jumbo frames.
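A minimal sketch of such an NFS client mount, assuming a Linux client and a server named nfsserver exporting /stornext/snfs1 (the host name and paths are hypothetical):
mount -t nfs -o vers=3,tcp,rsize=262144,wsize=262144 nfsserver:/stornext/snfs1 /mnt/snfs1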
It is best practice to have clients directly attached to the same network
switch as the NFS or CIFS server. Any routing required for NFS or CIFS
traffic incurs additional latency that impacts performance.
It is critical to make sure the speed/duplex settings are correct, because
this severely impacts performance. Most of the time auto-detect is the
correct setting. Some managed switches allow setting speed/duplex (for
example, 1000Mb/full), which disables auto-detect and requires the host to
be set exactly the same. However, if the settings do not match between
switch and host, it severely impacts performance. For example, if the
switch is set to auto-detect but the host is set to 1000Mb/full, you will
observe a high error rate along with extremely poor performance. On
Linux, the ethtool tool can be very useful to investigate and adjust speed/
duplex settings.
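For example, assuming an interface named eth0, you can inspect the negotiated settings and, if the switch port is hard-set, force the host to match (forcing 1000Mb/full disables auto-negotiation, so both ends must agree):
ethtool eth0
ethtool -s eth0 speed 1000 duplex full autoneg off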
Metadata Network

It can be useful to use a tool like netperf to help verify the Metadata
Network performance characteristics. For example, if netperf -t TCP_RR
reports less than 15,000 transactions per second capacity, a performance
penalty may be incurred. You can also use the netstat tool to identify tcp
retransmissions impacting performance. The cvadmin “latency-test” tool
is also useful for measuring network latency.
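A sketch of these checks, assuming netperf/netserver are installed and 10.0.0.45 is the MDC's metadata network address (an illustrative address only):
netperf -H 10.0.0.45 -t TCP_RR
netstat -s | grep -i retrans
cvadmin -F snfs1 -e 'latency-test'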
Note the following configuration requirements for the metadata network:
• In cases where gigabit networking hardware is used and maximum
StorNext performance is required, a separate, dedicated switched
Ethernet LAN is recommended for the StorNext metadata network. If
maximum StorNext performance is not required, shared gigabit
networking is acceptable.
• A separate, dedicated switched Ethernet LAN is mandatory for the
metadata network if 100 Mbit/s or slower networking hardware is
used.
• StorNext does not support file system metadata on the same network
as iSCSI, NFS, CIFS, or VLAN data when 100 Mbit/s or slower
networking hardware is used.
Metadata Controller System

The CPU power and memory capacity of the MDC System are important
performance factors, as well as the number of file systems hosted per
system. In order to ensure fast response time it is necessary to use
dedicated systems, limit the number of file systems hosted per system
(maximum 8), and have an adequate CPU and memory.
Some metadata operations such as file creation can be CPU intensive, and
benefit from increased CPU power. The MDC platform is important in
these scenarios because lower clock-speed CPUs such as Sparc degrade
performance.
Other operations can benefit greatly from increased memory, such as
directory traversal. SNFS provides three config file settings that can be
used to realize performance gains from increased memory:
BufferCacheSize, InodeCacheSize, and ThreadPoolSize.
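For example, a hedged starting point in the FSM configuration file might look like the following; the values are illustrative only and should be sized to the MDC's memory and the clients' working sets as described in the per-setting sections below:
BufferCacheSize 64M
InodeCacheSize 32K
ThreadPoolSize 32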
FSM Configuration File Settings

The following FSM configuration file settings are explained in greater
detail in the cvfs_config man page. For a sample FSM configuration file,
see Sample FSM Configuration File on page 28.
The examples in the following sections are excerpted from the sample
configuration file in Sample FSM Configuration File on page 28.
Stripe Groups
Splitting apart data, metadata, and journal into separate stripe groups is
usually the most important performance tactic. The create, remove, and
allocate (e.g., write) operations are very sensitive to I/O latency of the
journal stripe group. Configuring a separate stripe group for journal
greatly benefits the speed of these operations because disk seek latency is
minimized. However, if create, remove, and allocate performance is not
critical, it is okay to share a stripe group for both metadata and journal,
but be sure to set the exclusive property on the stripe group so it doesn't
get allocated for data as well. It is recommended that you assign only a
single LUN for each journal or metadata stripe group. Multiple metadata
stripe groups can be utilized to increase metadata I/O throughput
through concurrency. RAID1 mirroring is optimal for metadata and
journal storage. Utilizing the write-back caching feature of the RAID
system (as described previously) is critical to optimizing performance of
the journal and metadata stripe groups.
Example:
[StripeGroup RegularFiles]
Status UP
Exclusive No ##Non-Exclusive stripeGroup for all Files##
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDisk6 0
Node CvfsDisk7 1
[StripeGroup MetaFiles]
Status UP
MetaData Yes
Journal No
Exclusive Yes
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDisk0 0
[StripeGroup JournFiles]
Status UP
Journal Yes
MetaData No
Exclusive Yes
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDisk1 0
Affinities
Affinities are another stripe group feature that can be very beneficial.
Affinities can direct file allocation to appropriate stripe groups according
to performance requirements. For example, stripe groups can be set up
with unique hardware characteristics such as fast disk versus slow disk,
or wide stripe versus narrow stripe. Affinities can then be employed to
steer files to the appropriate stripe group.
For optimal performance, files that are accessed using large DMA-based
I/O could be steered to wide-stripe stripe groups. Less performance-
critical files could be steered to slow disk stripe groups. Small files could
be steered to narrow-stripe stripe groups.
Example:
[StripeGroup AudioFiles]
Status UP
Exclusive Yes ##These two lines set Exclusive stripeGroup ##
Affinity AudFiles ##for Audio Files Only##
Read Enabled
Write Enabled
StripeBreadth 1M
MultiPathMethod Rotate
Node CvfsDisk4 0
Node CvfsDisk5 1
StripeBreadth
This setting must match the RAID stripe size or be a multiple of the RAID
stripe size. Matching the RAID stripe size is usually the most optimal
setting. However, depending on the RAID performance characteristics
and application I/O size, it might be beneficial to use a multiple of the
RAID stripe size. For example, if the RAID stripe size is 256K, the stripe
group contains 4 LUNs, and the application to be optimized uses DMA I/
O with 8MB block size, a StripeBreadth setting of 2MB might be optimal.
In this example the 8MB application I/O is issued as 4 concurrent 2MB I/
Os to the RAID. This concurrency can provide up to a 4X performance
increase. This typically requires some experimentation to determine the
RAID characteristics. The lmdd utility can be very helpful. Note that this
setting is not adjustable after initial file system creation.
The optimal range for the StripeBreadth setting is 128K to multiple
megabytes, but this varies widely. This setting cannot be changed after the
file system is put into production, so it is important to choose the setting
carefully during initial configuration.
Example:
[StripeGroup VideoFiles]
Status UP
Exclusive Yes ##These Two lines set Exclusive stripeGroup##
Affinity VidFiles ##for Video Files Only##
Read Enabled
Write Enabled
StripeBreadth 4M
MultiPathMethod Rotate
Node CvfsDisk2 0
Node CvfsDisk3 1
BufferCacheSize
InodeCacheSize
This setting consumes about 800-1000 bytes of memory times the number
specified. Increasing this value can reduce latency of any metadata
operation by performing a hot cache access to inode information instead
of an I/O to get inode info from disk, about 100 to 1000 times faster. It is
especially important to increase this setting if metadata I/O latency is
high (for example, more than 2ms average latency). You should try to
size this according to the total number of working-set files across all clients.
Optimal settings for InodeCacheSize range from 16K to 128K.
Example: InodeCacheSize 16K # 1000-1200 bytes each, default 8K
ThreadPoolSize
ForcestripeAlignment
This setting should always be set to Yes. This is critical if the largest
StripeBreadth defined is greater than 1MB. Note that this setting is not
adjustable after initial file system creation.
Example: ForcestripeAlignment Yes
FsBlockSize
The FsBlockSize (FSB), metadata disk size, and JournalSize settings all
work together. For example, the FsBlockSize must be set correctly in
order for the metadata sizing to be correct. JournalSize is also dependent
on the FsBlockSize.
For FsBlockSize the optimal settings for both performance and space
utilization are in the range of 16K to 64K. Settings greater than 64K are not
recommended because performance will be adversely impacted due to
inefficient metadata I/O operations. Values less than 16K are not
recommended in most scenarios because startup and failover time may
be adversely impacted. Setting FsBlockSize to higher values is important
for multiterabyte file systems for optimal startup and failover time.
Note: This is particularly true for slow CPU clock speed metadata
servers such as Sparc. However, values greater than 16K can
severely consume metadata space in cases where the file-to-
directory ratio is low (e.g. less than 100 to 1).
For metadata disk size, you must have a minimum of 25 GB, with more
space allocated depending on the number of files per directory and the
size of your file system.
The following table shows suggested FsBlockSize (FSB) settings and
metadata disk space based on the average number of files per directory
and file system size. The amount of disk space listed for metadata is in
addition to the 25 GB minimum amount. Use this table to determine the
setting for your configuration.
Average No. of Files Per Directory | File System Size: Less Than 10TB | File System Size: 10TB or Larger
JournalSize
The optimal settings for JournalSize are in the range between 16M and
64M, depending on the FsBlockSize. Avoid values greater than 64M due
to potentially severe impacts on startup and failover times. Values at the
higher end of the 16M-64M range may improve performance of metadata
operations in some cases, although at the cost of slower startup and
failover time.
The following table shows recommended settings. Choose the setting that
corresponds to your configuration.
FsBlockSize JournalSize
16KB 16MB
64KB 64MB
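For example, following the first row of the table, the corresponding global settings in the FSM configuration file would be:
FsBlockSize 16K
JournalSize 16M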
SNFS Tools

The snfsdefrag tool is very useful to identify and correct file extent
fragmentation. Reducing extent fragmentation can be very beneficial for
performance. You can use this utility to determine whether files are
fragmented, and if so, fix them. If your files are prone to fragmentation
you should also use the FSM config file tuning options to minimize
fragmentation. These global configuration settings are InodeExpandMin,
InodeExpandInc, and InodeExpandMax. (For more information, see the
cvfs_config man page.) The snfsdefrag man page explains the command
options in greater detail.
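As a sketch (the paths shown are hypothetical), you can first report a file's extents and then defragment a directory tree recursively:
snfsdefrag -e /stornext/snfs1/capture/clip.mov
snfsdefrag -r /stornext/snfs1/capture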
FSM hourly statistics reporting is another very useful tool. This can show
you the mix of metadata operations being invoked by client processes, as
well as latency information for metadata operations and metadata and
journal I/O. This information is easily accessed in the cvlog log files. All
of the latency oriented stats are reported in microsecond units.
It is also possible to trigger an instant FSM statistics report by setting the
Once Only debug flag using cvadmin. For example:
cvadmin -F snfs1 -e 'debug 0x01000000' ; tail -100 /usr/cvfs/data/snfs1/log/cvlog
SNFS External API

The SNFS External API might be useful in some scenarios because it
offers programmatic use of special SNFS performance capabilities such as
affinities, preallocation, and quality of service. For more information, see
the Quality of Service chapter of the StorNext API Guide.
Hardware Configuration

SNFS Distributed LAN can easily fill several Gigabit Ethernets with data,
so take special care when selecting and configuring the switches used to
interconnect SNFS Distributed LAN clients and servers. Ensure that your
network switches have enough internal bandwidth to handle all of the
Distributed LAN client and server traffic.
It can be useful to use a tool like netperf to help verify the performance
characteristics of each Distributed LAN network. (When using netperf
on a system with multiple NICs, take care to specify the right IP
addresses in order to ensure the network being tested is the one you will
be running Distributed LAN over.) For example, if netperf -t TCP_RR
reports less than 15,000 transactions per second capacity, a performance
penalty might be incurred. Multiple copies of netperf can also be run in
parallel to determine the performance characteristics of multiple NICs.
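For example, to exercise one specific NIC pair on the 10.0.0.x subnetwork (the addresses are illustrative), bind both the local and remote netperf endpoints explicitly:
netperf -t TCP_RR -H 10.0.0.45 -L 10.0.0.43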
Network Configuration and Topology

For maximum throughput, SNFS Distributed LAN can utilize multiple
NICs on both clients and servers. In order to take advantage of this
feature, each of the NICs on a given host must be on a different IP
subnetwork. (This is a requirement of TCP/IP routing, not of SNFS -
TCP/IP can't utilize multiple NICs on the same subnetwork.) An
example of this is shown in the following illustration.
[Illustration: Distributed LAN network topology with two subnetworks, Switch A (10.0.0.x) and Switch B (192.168.9.x). Distributed LAN Clients C1 (for example 10.0.0.43-45 and 192.168.9.43-45) are attached to both subnetworks; Distributed LAN Clients C2 (for example 10.0.0.55-57) are attached to the 10.0.0.x subnetwork only.]
In the diagram there are two subnetworks: the blue subnetwork (10.0.0.x)
and the red subnetwork (192.168.9.x). Servers such as S1 are connected to
both the blue and red subnetworks, and can each provide up to
2 GByte/s of throughput to clients. (The three servers shown would thus
provide an aggregate of 6 GByte/s.)
Clients such as C1 are also connected to both the blue and red
subnetworks, and can each get up to 2 GByte/s of throughput. Clients
such as C2 are connected only to the blue subnetwork, and thus get a
maximum of 1 GByte/s of throughput. SNFS automatically load-
balances among NICs and servers to maximize throughput for all clients.
Note: The diagram shows separate physical switches used for the
two subnetworks. They can, in fact, be the same switch,
provided it has sufficient internal bandwidth to handle the
aggregate traffic.
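On Linux hosts, a quick way to confirm that each NIC really sits on a different IP subnetwork (a sketch; interface names and addressing will vary) is to list the addresses and routes and verify that the subnets differ:
ip -4 addr show
ip -4 route show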
Performance

DLC outperforms NFS and CIFS for single-stream I/O and provides
higher aggregate bandwidth. For inferior NFS client implementations,
the difference can be more than a factor of two. DLC also makes
extremely efficient use of multiple NICs (even for single streams),
whereas legacy NAS protocols allow only a single NIC to be used. In
addition, DLC clients communicate directly with StorNext metadata
controllers instead of going through an intermediate server, thereby
lowering IOP latency.
Fault Tolerance

DLC handles faults transparently, where possible. If an I/O is in progress
and a NIC fails, the I/O is retried on another NIC (if one is available). If a
Distributed LAN Server fails while an I/O is in flight, the I/O is retried
on another server (if one is running). When faults occur, applications
performing I/O will experience a delay but not an error, and no
administrative intervention is required to continue operation. These fault
tolerance features are automatic and require no configuration.
Load Balancing

DLC automatically makes use of all available Distributed LAN Servers in
an active/active fashion, and evenly spreads I/O across them. If a server
goes down or one is added, the load balancing system automatically
adjusts to support the new configuration.
Client Scalability

As the following table shows, DLC supports a significantly larger number
of clients than legacy NAS protocols:
Robustness and Stability

The code path for DLC is simpler, involves fewer file system stacks, and
is not integrated with kernel components that constantly change with
every operating system release (for example, the Linux NFS code).
Therefore, DLC provides increased stability that is comparable to the
StorNext SAN Client.
Consistent Security Model

DLC clients have the same security model as StorNext SAN clients. When
CIFS and NFS are used, some security models aren’t supported. (For
example, Windows ACLs are not accessible when running UNIX Samba
servers.)
The more cvnodes that there are encached on the client, the fewer trips
the client has to make over the wire to contact the FSM.
Each cvnode is approximately 1462 bytes in size and is allocated from the
non-paged pool. The cvnode cache is periodically purged so that unused
entries are freed. The decision to purge the cache is made based on the
Low, High, and Max water mark values. The 'Low' default is 1024, the
'High' default is 3072, and the 'Max' default is 4096.
These values should be adjusted so that the cache does not bloat and
consume more memory than it should. These values are highly
dependent on the customer's workload and access patterns. For example, a
High water mark of 512 will cause the cvnode cache to be purged when
more than 512 entries are present. The cache will be purged until the Low
water mark is reached, for example 128. The Max water mark is for
situations where memory is very tight. The normal purge algorithm
takes access time into account when determining a candidate to evict
from the cache; in tight memory situations (when there are more than
'Max' entries in the cache), these constraints are relaxed so that memory
can be released. A value of 1024 in a tight memory situation should work.
Sample FSM Configuration File

#
# Globals Defaulted
# *************************************************************************
[Disk CvfsDisk0]
Status UP
Type MetaDrive
[Disk CvfsDisk1]
Status UP
Type JournalDrive
[Disk CvfsDisk2]
Status UP
Type VideoDrive
[Disk CvfsDisk3]
Status UP
Type VideoDrive
[Disk CvfsDisk4]
Status UP
Type VideoDrive
[Disk CvfsDisk5]
Status UP
Type VideoDrive
[Disk CvfsDisk6]
Status UP
Type VideoDrive
[Disk CvfsDisk7]
Status UP
Type VideoDrive
[Disk CvfsDisk8]
Status UP
Type VideoDrive
[Disk CvfsDisk9]
Status UP
Type VideoDrive
[Disk CvfsDisk10]
Status UP
Type AudioDrive
[Disk CvfsDisk11]
Status UP
Type AudioDrive
[Disk CvfsDisk12]
Status UP
Type AudioDrive
[Disk CvfsDisk13]
Status UP
Type AudioDrive
[Disk CvfsDisk14]
Status UP
Type DataDrive
[Disk CvfsDisk15]
Status UP
Type DataDrive
[Disk CvfsDisk16]
Status UP
Type DataDrive
[Disk CvfsDisk17]
Status UP
Type DataDrive
# *************************************************************************
# A stripe section for defining stripe groups.
# *************************************************************************
[StripeGroup MetaFiles]
Status UP
MetaData Yes
Journal No
Exclusive Yes
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDisk0 0
[StripeGroup JournFiles]
Status UP
Journal Yes
MetaData No
Exclusive Yes
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDisk1 0
[StripeGroup VideoFiles]
Status UP
Exclusive Yes ##Exclusive StripeGroup for Video Files Only##
Affinity VidFiles
Read Enabled
Write Enabled
StripeBreadth 4M
MultiPathMethod Rotate
Node CvfsDisk2 0
Node CvfsDisk3 1
Node CvfsDisk4 2
Node CvfsDisk5 3
Node CvfsDisk6 4
Node CvfsDisk7 5
Node CvfsDisk8 6
Node CvfsDisk9 7
[StripeGroup AudioFiles]
Status UP
Exclusive Yes ##Exclusive StripeGroup for Audio Files Only##
Affinity AudFiles
Read Enabled
Write Enabled
StripeBreadth 1M
MultiPathMethod Rotate
Node CvfsDisk10 0
Node CvfsDisk11 1
Node CvfsDisk12 2
Node CvfsDisk13 3
[StripeGroup RegularFiles]
Status UP
Exclusive No ##Non-Exclusive StripeGroup for all Files##
Read Enabled
Write Enabled
StripeBreadth 256K
MultiPathMethod Rotate
Node CvfsDisk14 0
Node CvfsDisk15 1
Node CvfsDisk16 2
Node CvfsDisk17 3