EMC VNX Series Storage Systems



Updated 2013-10

A link to Storage Review's EMC Next Generation VNX Series Released … by Brian Beeler. While the preliminary slide decks mention VNX2, the second generation VNX is just VNX; there is no 2 at the end.

The table below gives the second-generation VNX specifications (per SP except as noted). The VNX5200 will apparently come out next year?

                    VNX5200    VNX5400    VNX5600    VNX5800    VNX7600    VNX8000
Max FE ports        12?        16?        20         20         20         40
Max UltraFlex I/O   2          4          5          5          5          11
Embedded I/O        2 SAS?     2 SAS      2 SAS      2 SAS      2 SAS      none
Max SAS             ?          2          6          6          6          16
Max FAST Cache      600GB      1TB        2TB        3TB        4.2TB      4.2TB
Max drives          125        250        500        750        1000       1500*
Xeon E5             1.2GHz 4c  1.8GHz 4c  2.4GHz 4c  2.0GHz 6c  2.2GHz 8c  2×2.7GHz 8c
Memory              16GB       16GB       24GB       32GB       64GB       128GB
Cores               4          4          4          6          8          16

 

Below is the back-end of the VNX5400 DPE. The DPE has 2 SPs. Each SP has 5 slots? On the top are the power supply, battery backup unit (BBU) and a 2-port SAS module. On the bottom, the first module is for management.

[Image: VNX5400 DPE back-end]

Close-up

[Image: VNX5400 DPE back-end, close-up]

Below is the EMC VNX8000 SPE back-end.

[Image: VNX8000 SPE back-end]

I/O module options for the VNX are: quad-port 8Gb/s FC, quad-port 1Gb/s Ethernet, dual-port 10GbE. The VNX 5600 and up can also support a quad-port 6Gb/s SAS module.

 

Updated 2013-02

While going through the Flash Memory Summit 2012 slide decks, I came across the session Flash Implications in Enterprise Storage Designs by Denis Vilfort of EMC, which provided information on the performance of the CLARiiON, VNX, a VNX2 and a future VNX.

A common problem with SAN vendors is that it is almost impossible to find meaningful performance information on their storage systems. The typical practice is to cite meaningless numbers such as IOPS to cache or the combined IO bandwidth of the FC ports, conveying the impression of massive IO bandwidth while actually guaranteeing nothing.

VNX (Original)

The original VNX was introduced in early 2011? The use of the new Intel Xeon 5600 (Westmere-EP) processors was progressive. The decision to employ only a single socket was not.

[Image: EMC VNX]

Basic IO functionality does not require huge CPU resources, but the second socket would double memory bandwidth, which is necessary for driving IO. Data read from storage must first be written to memory before being sent to the host. The second processor would also better support a second IOH. Finally, the additional CPU resources would support the value-added features that SAN vendors so desperately try to promote.
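To put rough numbers on the memory-bandwidth argument, here is a back-of-envelope sketch. The per-channel and sustainable-fraction figures are my assumptions (roughly DDR3-1333 class memory), not EMC specifications; the point is only that every byte of IO crosses memory twice, once inbound and once outbound.

```python
# Back-of-envelope: memory bandwidth consumed by streaming IO through an SP.
# Assumed figures (not EMC specs): DDR3-1333 class memory at ~10.6 GB/s peak
# per channel, with perhaps 60% of peak sustainable for streaming traffic.

PER_CHANNEL_GBPS = 10.6      # assumed DDR3-1333 peak per channel
SUSTAINABLE_FRACTION = 0.6   # assumed fraction usable for streaming IO

def usable_io_bandwidth(channels):
    """IO bandwidth an SP could stream, given that each byte is written to
    memory from the back-end and then read back out to the front-end
    (two memory crossings per byte of IO)."""
    mem_bw = channels * PER_CHANNEL_GBPS * SUSTAINABLE_FRACTION
    return mem_bw / 2

for sockets, channels in [(1, 3), (2, 6)]:
    print(f"{sockets} socket(s), {channels} memory channels: "
          f"~{usable_io_bandwidth(channels):.1f} GB/s of IO")
```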

EMC did provide the table below on their VNX mid-range systems in the document “VNX: Storage Technology High Bandwidth Application” (h8929) showing the maximum number of front-end FC and back-end SAS channels along with the IO bandwidths for several categories. This is actually unusual for a SAN storage vendor, so good for EMC. Unfortunately, there is no detailed explanation of the IO patterns for each category.

[Image: VNX front-end/back-end channel and bandwidth table from h8929]

Now obviously the maximum IO bandwidth can be reached in the maximum configuration, that is with all IO channels and all drive bays populated. There is also no question that maximum IO bandwidth requires all back-end IO ports populated and a sufficient number of front-end ports populated. (The VNX systems may support more front-end ports than necessary for configuration flexibility?)

However, it should not be necessary to employ the full set of hard disks to reach maximum IO bandwidth, because SAN systems are designed for capacity and IOPS. There are Microsoft Fast Track Data Warehouse version 3.0 and 4.0 documents for the EMC VNX 5300 and 5500 systems. Unfortunately Microsoft has backed away from the bare table scan test of disk rate in favor of a composite metric, but the documents do seem to indicate that 30-50MB/s per disk is possible on the VNX.

What is needed is a document specifying the configuration strategy for high bandwidth specific to SQL Server. This includes the number and type of front-end ports, the number of back-end SAS buses, the number of disk array enclosures (DAE) on each SAS bus, the number of disks in each RAID group and other details for each significant VNX model. It is also necessary to configure the SQL Server database file layout to match the storage system structure, but that should be our responsibility as DBAs.

It is of interest to note that the VNX FTDW reference architectures do not employ FAST Cache (flash caching) or (auto) tiered storage. Both of these are an outright waste of money on DW systems and actually impede performance. It can make good sense to employ a mix of 10K/15K HDDs and SSDs in the DW storage system, but we should use the SQL Server storage engine features (filegroups and partitioning) to place data accordingly.

A properly configured OLTP system should also employ separate HDD and SSD volumes, again using filegroups and partitioning to place data correctly. The reason is that the database engine itself is a giant data cache, with perhaps as much as 1000GB of memory. What do we really expect to be in the 16-48GB SAN cache that is not already in the 1TB database buffer cache? The IO seen from the database server is likely to be very misleading in terms of which data is important and whether it should be on SSD or HDD.

CLARiiON, VNX, VNX2, VNX Future Performance

Below are performance characteristics of the EMC mid-range systems: CLARiiON, VNX, VNX2 and VNX Future. Given how rarely such numbers are published, I found the following diagrams highly interesting and noteworthy. Here, the CLARiiON bandwidth is cited as 3GB/s and the current VNX as 12GB/s (versus 10GB/s in the table above).

[Image: CLARiiON / VNX / VNX2 / VNX Future performance slide]

I am puzzled that the VNX is only rated at 200K IOPS. That would correspond to 200 IOPS per disk and 1000 15K HDDs at low queue depth. I would expect there to be some capability to support short-stroke and high-queue depth to achieve greater than 200 IOPS per 15K disk.

The CLARiiON CX4-960 supported 960 HDDs, yet the IOPS cited corresponds to the queue depth 1 performance of 200 IOPS x 200 HDD = 40K. Was there some internal issue in the CLARiiON? I do recall a CX3-40 generating 30K IOPS over 180 x 15K HDDs.

A modern SAS controller can support 80K IOPS, so the VNX 7500 with 8 back-end SAS buses should handle more than 200K IOPS (HDD or SSD), perhaps as high as 640K? So is there some limitation in the VNX storage processor (SP), perhaps the inter-SP communication? Or a limitation of the write cache, which requires a write to memory in both SPs?
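The IOPS accounting behind those numbers, as a quick sketch (my arithmetic, using the per-disk and per-controller assumptions stated above):

```python
# IOPS accounting behind the VNX 7500 figures discussed above.
# Assumptions: ~200 IOPS per 15K HDD at low queue depth and ~80K IOPS per
# modern SAS controller, with 8 back-end SAS buses and up to 1000 drives.

IOPS_PER_15K_HDD = 200
IOPS_PER_SAS_CONTROLLER = 80_000
SAS_BUSES = 8
MAX_DRIVES = 1000

drive_limited = IOPS_PER_15K_HDD * MAX_DRIVES               # 200,000
controller_limited = IOPS_PER_SAS_CONTROLLER * SAS_BUSES    # 640,000

print(f"drive-limited IOPS (queue depth 1): {drive_limited:,}")
print(f"controller-limited IOPS:            {controller_limited:,}")
```

The cited 200K rating matches the drive-limited number at queue depth 1, well below what the back-end SAS controllers alone should allow.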

VNX2?

Below (I suppose) is the architecture of the new VNX2. (Perhaps VNX2 will come out in May at EMC World?) In addition to transitioning from the Intel Xeon 5600 (Westmere) to the E5-2600 series (Sandy Bridge EP), the diagram indicates that the new VNX2 will be dual-processor (socket), whereas the entire original VNX line was single socket. Considering that the 5500 and up are not entry systems, the single-socket design was disappointing.

[Image: VNX2 architecture slide]

VNX2 provides a 5X increase in IOPS, to 1M, and a 2.3X increase in IO bandwidth, to 28GB/s. LSI mentions a FastPath option that dramatically increases the IOPS capability of their RAID controllers from 80K to 140-150K IOPS. My understanding is that this is done by completely disabling the cache on the RAID controller. The resources needed to implement caching for a large array of HDDs can actually impede IOPS performance, and caching is even more of a drag on an array of SSDs.

The bandwidth objective is also interesting. The 12GB/s IO bandwidth of the original VNX would require 15-16 FC ports at 8Gbps (700-800MBps per port) on the front-end. The VNX 7500 has a maximum of 32 FC ports, implying 8 quad-port FC HBAs, 4 per SP.
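The port-count arithmetic can be sketched as follows; the 700-800MB/s practical throughput per 8Gbps FC port is the working assumption stated above, not a vendor figure.

```python
# Aggregate front-end bandwidth for a given number of 8Gbps FC ports,
# assuming 700-800 MB/s of practical throughput per port.

def aggregate_gbps(ports, per_port_mbps):
    return ports * per_port_mbps / 1000.0

for ports in (16, 32, 40):
    low, high = aggregate_gbps(ports, 700), aggregate_gbps(ports, 800)
    print(f"{ports:2d} ports: {low:.1f}-{high:.1f} GB/s")

# 16 ports roughly matches the original VNX 12GB/s figure, 32 is the
# VNX 7500 maximum front-end, and 40 is what the 28GB/s target would need.
```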

The 8 back-end SAS buses imply 4 dual-port SAS HBAs per SP? as each SAS bus requires 1 SAS port to each SP? This implies 8 HBAs per SP? The Intel Xeon 5600 processor connects over QPI to a 5520 IOH with 36 PCI-E gen 2 lanes, supporting 4 x8 and 1 x4 slots, plus a x4 Gen1 link for other functions.

In addition, a link is needed for inter-SP communication. If one x8 PCI-E gen2 slot is used for this, then write bandwidth would be limited to 3.2GB/s (per SP?). A single socket should only be able to drive 1 IOH even though it is possible to connect 2. Perhaps the VNX 7500 is dual-socket?
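A hedged sketch of the PCI-E budget per SP under those assumptions (single Westmere socket, one 5520 IOH, and roughly 400MB/s realizable per gen2 lane, which is where the 3.2GB/s-per-x8 figure comes from):

```python
# PCI-E gen2 budget for a single-IOH (Intel 5520) storage processor.
# Assumption (from the 3.2GB/s-per-x8 figure above): ~400 MB/s realizable
# per gen2 lane.

MBPS_PER_GEN2_LANE = 400
IOH_GEN2_LANES = 36      # Intel 5520 IOH
X8_SLOT_LANES = 8

ioh_ceiling_gbps = IOH_GEN2_LANES * MBPS_PER_GEN2_LANE / 1000   # ~14.4 GB/s
inter_sp_gbps = X8_SLOT_LANES * MBPS_PER_GEN2_LANE / 1000       # ~3.2 GB/s

print(f"one 5520 IOH, all lanes:   ~{ioh_ceiling_gbps:.1f} GB/s aggregate")
print(f"one x8 gen2 inter-SP link: ~{inter_sp_gbps:.1f} GB/s (caps mirrored writes)")
```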

An increase to 28GB/s could require 40 8Gbps FC ports (if 700MB/s is the practical limit of 1 port). A 2-socket Xeon E5-2600 should be able to handle this easily, with 4 memory channels and 5 x8 PCI-E gen3 slots per socket.

VNX Future?

The future VNX is cited as 5M IOPS and 112GB/s. I assume this might involve the new NVM Express driver architecture supporting distributed queues and high parallelism. Perhaps the reason both VNX2 and VNX Future are described is that the basic platform is ready, but not all the components needed to support the full bandwidth are?

[Image: VNX Future slide]

The 5M IOPS should be no problem with an array of SSDs, and the new NVM Express architecture of course. But the 112GB/s bandwidth is curious. The number of FC ports required, even at a future 16Gbit/s, is too large to be practical. When expensive storage systems are finally able to do serious IO bandwidth, it will also be time to ditch FC and FCoE. Perhaps the VNX Future will support InfiniBand? The purpose of having extreme IO bandwidth capability is to be able to deliver all of it to the database server on demand. If not, then the database server should have its own storage system.

The bandwidth is also too high for even a dual-socket E5-2600. Each Xeon E5-2600 has 40 PCI-E gen3 lanes, enough for 5 x8 slots. The nominal bandwidth per PCIe gen3 lane is 1GB/s, but the realizable bandwidth might be only 800MB/s per lane, or 6.4GB/s per x8 slot. A two-socket system could in theory drive 64GB/s (2 × 40 lanes × 0.8GB/s). The storage system is comprised of 2 SPs, each SP being a 2-socket E5-2600 system.

To support 112GB/s across the array, each SP must be able to simultaneously move 56GB/s on the storage side and 56GB/s on the host-side ports, for a total of 112GB/s of traffic per SP. In addition, suppose the 112GB/s bandwidth is for reads and that the write bandwidth is 56GB/s. Then it is also necessary to support 56GB/s over the inter-SP link to guarantee write-cache coherency (unless it has been decided that write caching flash on the SP is stupid).
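Putting the requirement against the capability, under the per-lane assumptions above:

```python
# VNX Future: PCI-E traffic required per SP for 112 GB/s versus what a
# 2-socket Xeon E5-2600 could provide over PCI-E gen3.
# Assumptions (from above): 40 gen3 lanes per socket, ~0.8 GB/s realizable per lane.

ARRAY_READ_GBPS = 112
SOCKETS_PER_SP = 2
LANES_PER_SOCKET = 40
GBPS_PER_GEN3_LANE = 0.8

# Each SP carries half the array bandwidth, and each byte appears on both
# the back-end (storage) and front-end (host) ports of that SP.
per_sp_storage = ARRAY_READ_GBPS / 2            # 56 GB/s in from disks/SSDs
per_sp_host = ARRAY_READ_GBPS / 2               # 56 GB/s out to hosts
per_sp_traffic = per_sp_storage + per_sp_host   # 112 GB/s of PCI-E traffic

pcie_capability = SOCKETS_PER_SP * LANES_PER_SOCKET * GBPS_PER_GEN3_LANE  # 64 GB/s

print(f"required PCI-E traffic per SP:     {per_sp_traffic:.0f} GB/s")
print(f"2-socket E5-2600 PCI-E capability: ~{pcie_capability:.0f} GB/s")
# Any mirrored write cache adds inter-SP traffic on top of this.
```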

Is it possible the VNX Future has more than 2 SPs? Perhaps each SP is a 2-socket E5-4600 system, with the 2 SPs linked via QPI? Basically this would be a 4-socket system, but running as 2 separate nodes, each node having its own OS image. Or perhaps it is simply a 4-socket system? Later this year, Intel should be releasing Ivy Bridge-EX, which might have more bandwidth? Personally I am inclined to prefer a multi-SP system over a 4-socket SP.

Never mind, I think Haswell-EP will have 64 PCIe gen4 lanes at 16GT/s. That is 2GB/s per lane raw, and 1.6GB/s per lane net, 12.8GB/s per x8 slot and 100GB/s per socket. I still think it would be a good trick if one SP could communicate with the other over QPI, instead of PCIe. Write caching SSD at the SP level is probably stupid if the flash controller is already doing this? Perhaps the SP memory should be used for SSD metadata? In any case, there should be coordination between what each component does.
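The arithmetic behind that guess (64 lanes at 16GT/s is my speculation, not a published spec):

```python
# Arithmetic behind the Haswell-EP speculation above. The 64 lanes at
# 16 GT/s are a guess, not a published spec; ~2 GB/s raw and ~1.6 GB/s
# net per lane follow from that guess.

LANES_PER_SOCKET = 64
NET_GBPS_PER_LANE = 1.6

per_x8_slot = 8 * NET_GBPS_PER_LANE                 # 12.8 GB/s
per_socket = LANES_PER_SOCKET * NET_GBPS_PER_LANE   # ~102 GB/s

print(f"per x8 slot: {per_x8_slot:.1f} GB/s")
print(f"per socket:  {per_socket:.0f} GB/s")
```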

Summary

It is good to know that EMC is finally getting serious about IO bandwidth. I was of the opinion that the reason Oracle got into the storage business was that they were tired of hearing complaints from customers resulting from bad IO performance on the multi-million dollar SAN.

My concern is that SAN vendor field engineers have been so thoroughly indoctrinated in the SaaS concept that only capacity matters, while having zero knowledge of bandwidth, that they will not be able to properly implement the IO bandwidth capability of the existing VNX, not to mention the even higher bandwidth of the VNX2 and VNX Future.

unsorted misc

[Images: EMC VNX, unsorted]

EMC VNX Early 2011?

VNX came out in early 2011 or late 2010? All VNX models use the Xeon 5600 processors, ranging from 2.13 to 2.8GHz, and four to six cores (actually from 1.6GHz and 2 cores?). The 5100, 5300 and 5500 are comprised of two Disk-processor enclosures (DPE) that house both the storage processors and the first tray of disks. The 5700 and 7500 models are comprised of Storage-processor enclosures (SPE) that house only the storage processors. Two DPE or SPE comprise an Array.

                          VNX 5100      VNX 5300      VNX 5500      VNX 5700      VNX 7500
Max Drives                75            125           250           500           1000
Enclosure                 3U Disk + SP  3U Disk + SP  3U Disk + SP  2U SP         2U SP
DAE Options               25 x 2.5″-2U  25 x 2.5″-2U  25 x 2.5″-2U  25 x 2.5″-2U  25 x 2.5″-2U
                          15 x 3.5″-3U  15 x 3.5″-3U  15 x 3.5″-3U  15 x 3.5″-3U  15 x 3.5″-3U
                                                                    60 x 3.5″-4U  60 x 3.5″-4U
Memory per Array          8GB           16GB          24GB          36GB          48GB
Max UltraFlex IO Modules
  per Array               0             4             4             10            10
Embedded IO Ports
  per Array               8 FC & 4 SAS  8 FC & 4 SAS  8 FC & 4 SAS  none          none
Max FC Ports per Array    8             16            16            24            32
SAS Buses (to DAEs)       2             2             2 or 6        4             4 or 8
Frequency                 1.6GHz        1.6GHz        2.13GHz       2.4GHz        2.8GHz
Cores                     2             4             4             4             6
Mem/DPE                   4GB           8GB           12GB          18GB          24GB

The VNX front-end is FC with options for iSCSI. The back-end is all SAS (in the previous generation, this was FC). The 5100-5500 models have 4 FC ports and 1 SAS bus (2 ports) embedded per DPE, for 8 FC ports and 2 SAS buses per array. The 5100 does not have IO expansion capability. The 5300 and 5500 have 2 IO expansion slots per DPE, for 4 total in the array. The 5300 only allows front-end IO expansion, while the 5500 allows expansion on both front-end and back-end IO.
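A small sketch of how the per-array counts fall out of that description. The helper and the example module mixes are illustrative only, following the per-DPE counting used above, not an EMC configuration rule.

```python
# Per-array port accounting for the 5100-5500 class, as described above:
# each DPE (two per array, in the counting used here) has 4 embedded FC
# ports and 1 embedded SAS bus (2 ports); expansion comes from UltraFlex
# modules (quad-port FC, or a module providing 2 SAS buses).

def array_ports(quad_fc_modules_per_dpe=0, sas_modules_per_dpe=0):
    dpes = 2
    fc_ports = dpes * (4 + 4 * quad_fc_modules_per_dpe)
    sas_buses = dpes * (1 + 2 * sas_modules_per_dpe)
    return fc_ports, sas_buses

print(array_ports())                            # base 5100-style: (8 FC, 2 buses)
print(array_ports(quad_fc_modules_per_dpe=1))   # e.g. 5300 front-end expansion: (16, 2)
print(array_ports(quad_fc_modules_per_dpe=1,
                  sas_modules_per_dpe=1))       # e.g. 5500 with both: (16, 6)
```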

The 5300 and higher VNX models can also act as file servers with X-Blades in the IO expansion slots. This capability is not relevant to high-performance database IO and is not considered here.

Disk-array enclosure (DAE) options are now 25 x 2.5″ in 2U, 15 x 3.5″ in 3U, or 60 x 3.5″ in 4U. In the 60-disk DAE, the hard disks are mounted with the long dimension vertical; the 1″ disk thickness is aligned along the width of the DAE for 12 across, and the enclosure is 5 rows deep.

An UltraFlex IO Module attaches to a PCI-E slot? (x8?). Module options are quad-port FC (up to 8Gbps), dual-port 10GbE, quad-port 1GbE, or 2 SAS buses (4 ports). So a module could be 1 SAS bus (2 ports), 4 FC ports, etc.?

Each SAS port is 4 lanes at 6Gb/s, and 2 ports make 1 “bus” with redundant paths.
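The usable bandwidth of such a port, assuming the standard 8b/10b line coding of 6Gb/s SAS:

```python
# Bandwidth of one back-end SAS port: 4 lanes at 6 Gb/s with 8b/10b line
# coding, so ~600 MB/s usable per lane. The two ports that form a "bus"
# provide redundant paths rather than simply doubling bandwidth.

LANES_PER_PORT = 4
GBITS_PER_LANE = 6
ENCODING_EFFICIENCY = 8 / 10    # 8b/10b

usable_mbps_per_lane = GBITS_PER_LANE * 1000 / 8 * ENCODING_EFFICIENCY   # 600
usable_mbps_per_port = LANES_PER_PORT * usable_mbps_per_lane             # 2400

print(f"per lane:    {usable_mbps_per_lane:.0f} MB/s")
print(f"per x4 port: {usable_mbps_per_port:,.0f} MB/s")
```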

The first 4 physical disks reserve 192GB per disk.

There is an EMC document, h8929-vnx-high-bandwidth-apps-ep, with useful information; more later. The diagram below is from that EMC VNX document. The VNX storage engine core is a Westmere processor (1 or 2?) with 3 memory channels and IO adapters on PCI-E gen 2. The back-end is SAS, and the front-end to the host can be FC, FCoE or iSCSI.

Below is the VNX system architecture from an EMC slide deck titled “Introducing VNX Series”. Note the EMC copyright, so I suppose I should get permission to use it? In block mode, only the VNX SP is required. In file mode, up to four X-Blades can be configured?

Below from: Introducing VNX Series, Customer Technical Update

[Image: VNX system architecture, from “Introducing VNX Series, Customer Technical Update”]

Below from h8929

[Image: diagram from h8929]

As far as I can tell, the VNX models have a single Xeon 5600 processor (socket). While it may not take much CPU-compute capability to support a SAN storage system, there is a significant difference in IO capability with 2 processor sockets populated (6 memory channels instead of 3), noting that IO must be written to memory from the inbound side, then read from memory to the outbound side.

                          VNX 5100   VNX 5300   VNX 5500   VNX 5500*  VNX 5700   VNX 7500
Backend SAS Buses         2          2          2          6*         4          4 or 8
Max Frontend FC Ports     8          16         16         16         24         32
DSS Bandwidth (MB/s)      2300       3600       4200       4200       6400       7300
DW Bandwidth (MB/s)       2000       3200       4200       6400       6400       10000
Backup Bandwidth (MB/s),
  cache bypass mode       700        900        1700       1900       3300       7500
Rich Media BW (MB/s)      3000       4100       5700       5700       6200       9400

Note: * VNX 5500 Hi-BW option consumes all the flex-IO modules, and bandwidth is limited by FC front-end?

[Image: EMC VNX]

The SAS IO expansion module adds 4 SAS ports for a total of 6 ports, since 2 are integrated. However, only a total of 4 ports (2 buses) are used per DPE.

Full Data Warehouse bandwidth requires at least 130 x 15K SAS drives.

h8177

h8177 says:
VNX5100 can support 1,500MB/s per DPE or 3,000MB/s for the complete 5100 unit.
VNX5300 can support 3,700MB/s for the complete unit.
VNX5500 can support 4,200MB/s for the complete unit on integrated back-end SAS ports, and 6,000MB/s with additional back-end SAS ports.

FLARE 31.5 talks about the VNX5500 supporting additional front-end and back-end ports in the 2 expansion slots. To achieve the full 6,000MB/s, the integrated 8Gbps FC ports must be used in combination with additional front-end ports in one of the expansion slots.

h8297

The Cisco/EMC version of the SQL Server Fast Track Data Warehouse employs one VNX 5300 with 75 disks, and two 5300s with a total of 150 disks, each VNX with 4 x 10Gb FCoE connections. The throughput is 1985MB/s for the single 5300 and 3419MB/s for two. This is well below the list bandwidth of 3GB/s+ for the VNX5300, which may require 130 disks?

Of the 75 disks on each 5300, only 60 are allocated to data. So the 1985MB/s bandwidth corresponds to 33MB/s per disk, well below the Microsoft FTDW reference of 100MB/s per disk. The most modern 15K 2.5in hard drives are rated for 202MB/s on the outer tracks (Seagate Savvio 15K.3). The Seagate Savvio 10K.5 is rated for 168MB/s on the outer tracks. With consideration for the fact that perfect sequential placement is difficult to achieve, the Microsoft FTDW target of 100MB/s per disk or even a lower target of 50MB/s per disk is reasonable, but 33MB/s per disk is rather low.
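The per-disk arithmetic, for reference (using the 60 data disks per 5300 cited above):

```python
# Per-disk throughput behind the FTDW numbers quoted above.

def per_disk_mbps(total_mbps, data_disks):
    return total_mbps / data_disks

print(f"one VNX5300:  {per_disk_mbps(1985, 60):.0f} MB/s per data disk")
print(f"two VNX5300s: {per_disk_mbps(3419, 120):.0f} MB/s per data disk")
# Compare with the Microsoft FTDW reference of 100 MB/s per disk and the
# ~200 MB/s outer-track rating of a current 15K 2.5-inch drive.
```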

[Image: EMC VNX]

Data warehouse IO is 512KB random read.
DSS is 64KB sequential read.
Backup is 512KB sequential write.
Rich media is 256KB sequential read.
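For a sense of scale, the IOPS needed to sustain a given bandwidth at each of those block sizes (simple arithmetic; the 10,000MB/s target is just the VNX 7500 DW figure from the table above):

```python
# IOPS needed to sustain a given bandwidth for each IO profile listed above.

PROFILES_KB = {
    "Data warehouse (512KB random read)":  512,
    "DSS (64KB sequential read)":           64,
    "Backup (512KB sequential write)":     512,
    "Rich media (256KB sequential read)":  256,
}

def iops_for_bandwidth(mbps, block_kb):
    return mbps * 1024 / block_kb

target_mbps = 10_000   # e.g. the VNX 7500 DW bandwidth figure in the table above
for name, kb in PROFILES_KB.items():
    print(f"{name}: {iops_for_bandwidth(target_mbps, kb):,.0f} IOPS "
          f"for {target_mbps:,} MB/s")
```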

EMC documents referenced:
h8177 (h8177-deploying-vnx-data-warehouse-wp) - Deploying EMC VNX Unified Storage Systems for Data Warehouse Applications (Dec 2011)
h8217 (h8217-introduction-vnx-wp) - Introduction to the EMC VNX Series, A Detailed Review (Sep 2011)
h8046 (h8046-clariion-celerra-unified-fast-cache-wp)
h8297 - Cisco Reference Configurations for Microsoft SQL Server 2008 R2 Fast Track Data Warehouse 3.0 with EMC VNX5300 Series Storage Systems
h8929 - VNX: Storage Technology High Bandwidth Applications

