What is D2D storage?


The term disk-to-disk (D2D) refers to a type of backup data storage in which data is typically copied from one hard disk to another hard disk in a separate storage system. In a D2D setup, the disk the data is copied from is called the primary disk, and the disk the data is copied to is called the secondary disk, or backup disk.

Compared with disk-to-tape (D2T) systems, backup files in D2D systems are directly accessible without an intermediary and behave just like ordinary system drives. Tapes have to be searched sequentially, which slows down data recovery.

 

Backup technology is changing, and it is changing fast. Not so long ago, backing up meant copying your primary data from hard disk to tape – initially to the spools of half-inch tape beloved of film directors, and more recently to various types of tape cassettes and cartridges.

Now though, more and more organizations are using hard disks for their backups as well as their primary data, a process that has become known as disk-to-disk backup, or D2D for short.

There are a whole host of reasons for this shift. In particular, the cost of hard disks has fallen dramatically while their capacity has soared, and disk arrays have much better read/write performance than tape drives – this is particularly valuable if an application must be paused or taken offline to be backed up.

In addition, tape is quite simply a pain to work with, especially if a cartridge must be retrieved, loaded, and scanned in its entirety, just to recover one file. Tapes can be lost or stolen, too. While we put up with all that in the past because we had to, that is no longer the case.

Sure, a tape cartridge on a shelf – albeit in a climate-controlled storeroom – is still the cheapest and least energy-consuming way to store data for the long term, but that is increasingly the role of an archive not a backup. So while tape is unlikely to vanish altogether, its role in backup is declining fast.

Disk can be incorporated into the backup process in many ways, from virtual tape libraries (VTLs) through snapshots to continuous data protection (CDP), and each method may suit some applications or user requirements better than others. An organization may even use more than one D2D backup scheme in parallel, in order to address different recovery needs.

Disk is also used in many other forms of data protection, such as data replication and mirroring, although it is important to understand that these are not backups. They protect against hardware failure or disasters, but they cannot protect against data loss or corruption as they offer no rollback capability.

When it comes to restoring data, disk’s big advantage over tape is that it is random-access rather than sequential access. That means that if you only need one file or a few files back, it will be faster and easier to find and recover from disk.

What backup and recovery methods you use will depend on two factors – the recovery point objective (RPO), i.e. how much data the organization can afford to lose or re-create, and the recovery time objective (RTO), which is how long you have to recover the data before its absence causes business continuity problems.

For instance, if the RPO is 24 hours, daily backups to tape could be acceptable, and any data created or changed since the failure must be manually recovered. An RTO of 24 hours similarly means the organization can manage without the system for a day.

If the RPO and RTO were seconds rather than hours, the backup technology would not only have to track data changes as they happened, but it would also need to restore data almost immediately. Only disk-based continuous data protection (CDP) schemes could do that.
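As a rough illustration of how these two objectives constrain the choice of technology, the hypothetical Python check below compares a proposed backup interval and an estimated restore time against an RPO and an RTO; the function and the figures are invented for the example.

```python
from datetime import timedelta

def meets_objectives(backup_interval: timedelta,
                     estimated_restore_time: timedelta,
                     rpo: timedelta,
                     rto: timedelta) -> bool:
    """Worst-case data loss equals the gap between backups, so the interval
    must not exceed the RPO; the estimated restore time must not exceed the RTO."""
    return backup_interval <= rpo and estimated_restore_time <= rto

# Daily tape backups against 24-hour objectives: acceptable.
print(meets_objectives(timedelta(hours=24), timedelta(hours=8),
                       rpo=timedelta(hours=24), rto=timedelta(hours=24)))

# The same schedule against objectives measured in seconds: only something
# like disk-based CDP could satisfy this.
print(meets_objectives(timedelta(hours=24), timedelta(hours=8),
                       rpo=timedelta(seconds=30), rto=timedelta(seconds=30)))
```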

Ways to use disk

Most current disk-based backup technologies fall into one of four basic groups, and can be implemented either as an appliance, or as software which writes to a dedicated partition on a NAS system or other storage array:

* Virtual tape library (VTL): One of the first backup applications for disk was to emulate a tape drive. This technique has been used in mainframe tape libraries for many years, with the emulated tape acting as a kind of cache – the backup application writes a tape volume to disk, and this is then copied or cloned to real tape in the background.

Using a VTL means there is no need to change your software or processes – they just run a lot faster. However, it is still largely oriented towards system recovery, and the restore options are pretty much the same as from real tape. Generally, the virtual tapes can still be cloned to real tapes in the background for longer-term storage; this process is known as D2D2T, or disk-to-disk-to-tape.

Simpler VTLs take a portion of the file space, create files sequentially and treat it as tape, so your save-set is the same as real tape. That can waste space though, as it allocates the full tape capacity on disk even if the tape volume is not full.

More advanced VTLs get around this problem by layering on storage virtualization technologies. In particular this means thin provisioning, which allocates a logical volume of the desired capacity but does not physically write to disk unless there is actual data to write, and it has the ability to take capacity from anywhere, e.g. from a Storage Area Network, from local disk, and even from Network Attached Storage.
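A minimal sketch of the thin-provisioning idea, independent of any particular VTL product: the volume advertises its full logical size, but physical blocks are only consumed when data is actually written. All names are illustrative.

```python
class ThinVolume:
    """Toy thin-provisioned volume: logical capacity is promised up front,
    physical blocks are allocated only on first write."""

    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.blocks = {}  # block number -> data, allocated lazily

    def write(self, block_no: int, data: bytes) -> None:
        if not 0 <= block_no < self.logical_blocks:
            raise IndexError("write outside the advertised capacity")
        self.blocks[block_no] = data  # physical space is consumed only here

    def physical_blocks_used(self) -> int:
        return len(self.blocks)

vol = ThinVolume(logical_blocks=1_000_000)   # looks like a full tape volume
vol.write(0, b"backup header")
print(vol.physical_blocks_used())            # 1, not 1,000,000
```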

* Disk-to-disk (D2D): Typically this involves backing up to a dedicated disk-based appliance or a low-cost SATA array, but this time the disk is acting as disk, not as tape. Most backup applications now support this. It makes access to individual files easier, although system backups may be slower than streaming to a VTL.

An advantage of not emulating tape is that you are no longer bound by its limitations. D2D systems work as random-access storage, not sequential, which allows the device to send and receive multiple concurrent streams, for example, or to recover individual files without having to scan the entire backup volume.

D2D can also be as simple as using a removable disk cartridge instead of tape. The advantage here is backup and recovery speed, while the disk cartridge can be stored or moved offsite just as a tape cartridge would be.

* Snapshot: This takes a point-in-time copy of your data at scheduled intervals, and is pretty much instant. However, unless it is differential (which is analogous to an incremental backup) or includes some form of compression, data reduction or de-duplication technology, each snapshot will require the same amount of disk storage as the original.

Differential snapshot technologies are good for roll-backs and file recovery, but may be dependent on the original copy, so are less useful for disaster recovery.
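To make that dependence on the original copy concrete, here is a toy copy-on-write snapshot in Python (all names are illustrative): the snapshot stores only the pre-change versions of blocks overwritten after it is taken, so reading it back still relies on the live volume for unchanged blocks.

```python
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block number -> data
        self.snapshots = []

    def snapshot(self):
        snap = {}                    # old data is saved here on first overwrite
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        # Copy-on-write: preserve the pre-change data in every open snapshot.
        for snap in self.snapshots:
            snap.setdefault(block_no, self.blocks.get(block_no))
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        # A block is served from the snapshot only if it changed since then;
        # otherwise the snapshot still depends on the live volume.
        return snap[block_no] if block_no in snap else self.blocks.get(block_no)

vol = Volume({0: b"A", 1: b"B"})
snap = vol.snapshot()
vol.write(1, b"B2")
print(vol.read_snapshot(snap, 1))   # b"B"  -- the point-in-time version
print(vol.read_snapshot(snap, 0))   # b"A"  -- still read from the live volume
```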

Many NAS (network attached storage) vendors offer tools which can snapshot data from a NAS server or application server on one site to a NAS server at a recovery location.

However, in recent years snapshot technology has become less dependent on the hardware – it used to be mainly an internal function of a disk array or NAS server, but more and more software now offers snapshot capabilities.

* Continuous data protection (CDP): Sometimes called real-time data protection, this captures and replicates file-level changes as they happen, allowing you to wind the clock back on a file or system to almost any previous point in time.

The changes are stored at byte or block level with metadata that notes which blocks changed and when, so there is often no need to reconstruct the file for recovery – the CDP system simply gives you back the version that existed at your chosen time. Any changes made since then will need to be recovered some other way, for example via journaling within the application.
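A hypothetical sketch of the journaling idea behind CDP follows: every block write is recorded with its timestamp, and any past version can be read back by consulting the journal. Real CDP products are far more sophisticated; the class and method names here are invented for illustration.

```python
from collections import defaultdict

class CdpJournal:
    """Toy continuous-data-protection journal: every block write is kept
    with its timestamp, so any past version can be recovered."""

    def __init__(self):
        self.history = defaultdict(list)   # block number -> [(ts, data), ...]

    def record_write(self, ts, block_no, data):
        self.history[block_no].append((ts, data))

    def block_at(self, block_no, ts):
        """Contents of the block as of time ts (None if not yet written)."""
        latest = None
        for when, data in self.history[block_no]:
            if when <= ts:
                latest = data
        return latest

journal = CdpJournal()
journal.record_write(10.0, 7, b"v1")
journal.record_write(20.0, 7, b"v2")
print(journal.block_at(7, 15.0))   # b"v1" -- wind the clock back
print(journal.block_at(7, 25.0))   # b"v2"
```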

CDP is only viable on disk, not tape, because it relies on having random access to its stored data. Depending on how the CDP process functions, one potential drawback is that the more granular you make your CDP system, the more it impacts performance of the system and application. So technologies that do not rely solely on snapshot technology offer an advantage.

In addition, it can be necessary to roll forward or backward to find the version you want. One option here is to use CDP to track and store changes at very granular level, then convert the backed-up data to point-in-time snapshots for easier recovery.

Beyond data protection, a well designed CDP solution can bring other advantages, such as a lower impact on the application and server. It also moves less data over the network than file-based protection schemes, as it sends only the changed bytes.

Coherency and recovery

In order to be useful, a backup has to be coherent – a copy of something that is in the middle of being updated cannot reliably be restored. With traditional backup methods, applications would be taken offline for backup, usually overnight, but newer backup methods such as snapshots and CDP are designed to work at any time.

Snapshots provide a relatively coarse temporal granularity, so are more likely to produce a complete and coherent backup. However, they will miss any updates made since the last snapshot. The fine-grained approach of CDP is less likely to lose data, but it may be harder to bring the system back to a coherent state.

How you achieve a coherent backup will depend on the application or data. For instance, with unstructured file systems you need to find a known-good file version – typically the last closed or saved version. For files that can stay open a long time, you need to initiate a file system flush and create a pointer to that in the metadata.

To recover data, you would then find the right point in the CDP backup, wait for the data to copy back to the application server and then reactivate the application. However, that means that the more data you have, and the slower your network is, the longer recovery will take.

Fortunately, technologies are emerging to speed up this process. These provide the application with an outline of the restored data that is enough to let it start up, even though all the data has not yet truly been restored; a software agent running alongside the application then watches for data requests and reprioritises the restoration process accordingly – in effect it streams the data back as it is called for.

Schemes such as this can have applications up and running in less than 10 minutes, as the quickly recovered shell-file is just a few megabytes. Of course it does still take time to fully restore the application, but it does allow users to start using it again immediately.
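A highly simplified sketch of that streaming idea (all names invented): the restore works through blocks in a background order, but any block the application actually asks for is pulled forward and restored immediately.

```python
from collections import deque

class StreamingRestore:
    """Background restore that reprioritises blocks the application requests,
    so work can resume before the full restore finishes."""

    def __init__(self, backup: dict):
        self.backup = backup
        self.restored = {}
        self.pending = deque(sorted(backup))   # background restore order

    def read(self, block_no):
        # Application read: restore this block immediately if needed.
        if block_no not in self.restored and block_no in self.backup:
            self.restored[block_no] = self.backup[block_no]
            try:
                self.pending.remove(block_no)
            except ValueError:
                pass
        return self.restored.get(block_no)

    def restore_next(self):
        # One step of the background restore stream.
        while self.pending:
            block_no = self.pending.popleft()
            if block_no not in self.restored:
                self.restored[block_no] = self.backup[block_no]
                return block_no
        return None

r = StreamingRestore({0: b"boot", 1: b"db page", 2: b"log"})
print(r.read(2))          # the application needs block 2 first -- served at once
print(r.restore_next())   # background restore continues with block 0
```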

One other issue that may affect the choice of snapshots or CDP is the level of interdependency within the application and its files. If there is too much interdependency, it will be more difficult to find a consistent recovery point. A potential solution is to choose software that is application-aware and can apply granular recovery intelligently, because it knows the dependencies involved.

Power and efficiency issues

One thing that must be said in tape’s favour is that its power consumption for offline data storage is very low – potentially as low as the cost of the air-conditioning for the shelf space to keep the cartridges on. Removable disk cartridges can match that of course, but only for traditional backup processes with their attendant delays.

To use newer backup processes such as snapshots and CDP requires the disk storage to be online. D2D hardware developers have therefore come up with schemes such as MAID (massive array of idle disks), which reduces power consumption by putting hard disks into a low-power state when they are not being accessed.

MAID-type systems from the likes of Copan, Hitachi Data Systems and Nexsan, and related technologies such as Adaptec’s IPM (intelligent power management) RAID controllers, therefore allow banks of disk drives to operate in different power states at varying times.

For instance, they can automate drives to go into standby mode or even spin down completely during idle periods. If a drive is accessed while powered down, the controller will spin it back up; alternatively the administrator can define peak IT activity periods when drives will never be spun down. The controller also monitors drives that have been powered down for a while, to make sure they still work OK.

Conversely, when drives do need to be accessed these storage arrays implement staggered spin-up techniques. This is to avoid overloading an array’s power supply by trying to power up all its drives at the same time.
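The sketch below illustrates the kind of idle-timer policy such controllers apply. The timeout, the peak-hours rule and the class names are invented for the example and do not describe any vendor's implementation.

```python
import time

class DrivePowerManager:
    """Toy MAID-style policy: spin a drive down after an idle timeout,
    never during defined peak hours, and spin it back up on access."""

    def __init__(self, idle_timeout_s=600, peak_hours=range(8, 18)):
        self.idle_timeout_s = idle_timeout_s
        self.peak_hours = peak_hours
        self.last_access = time.time()
        self.spinning = True

    def on_access(self):
        if not self.spinning:
            self.spin_up()              # real arrays also stagger spin-up
        self.last_access = time.time()

    def tick(self, now=None, hour=None):
        """Called periodically by the controller."""
        now = now if now is not None else time.time()
        hour = hour if hour is not None else time.localtime(now).tm_hour
        idle = now - self.last_access
        if self.spinning and idle > self.idle_timeout_s and hour not in self.peak_hours:
            self.spin_down()

    def spin_up(self):
        self.spinning = True

    def spin_down(self):
        self.spinning = False

mgr = DrivePowerManager()
mgr.tick(now=time.time() + 3600, hour=2)   # long idle, off-peak -> spun down
print(mgr.spinning)                        # False
mgr.on_access()                            # I/O arrives -> spin back up
print(mgr.spinning)                        # True
```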

It is claimed that these power management techniques can be configured to reduce a drive’s power consumption by up to 70 percent, without sacrificing performance. Higher reductions are possible, but may come at the cost of added latency and/or lower throughput.

Deduplication

There is more to using disks for backup than merely speed. A big advantage of disk over tape is that disk storage is random-access, whereas tape can only be read sequentially. That makes it feasible to reprocess the data on disk once it has been backed up, and as well as snapshots and CDP, that has enabled another key innovation in backup: deduplication.

This is a compression or data reduction technique which takes a whole data set or stream, looks for repeated elements, and then stores or sends only the unique data. Obviously, some data sets contain more duplication than others – for example, virtual servers created from templates will be almost identical. It is not unusual for users to report compression ratios of 10:1 or more, while figures of 50:1 have been reported in some cases.
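To show where such ratios come from, here is a minimal fixed-size-block deduplication sketch in Python. Real products typically use variable-size chunking, stronger indexing and compression on top, so treat the number it prints purely as an illustration.

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, store each unique block once,
    and return (store, recipe, ratio)."""
    store = {}      # block hash -> block data (unique blocks only)
    recipe = []     # ordered list of hashes needed to rebuild the stream
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    stored = sum(len(b) for b in store.values())
    ratio = len(data) / stored if stored else 0.0
    return store, recipe, ratio

# Two "virtual machine images" cloned from the same template deduplicate well.
image = b"base os block" * 10_000
stream = image + image
_, _, ratio = dedupe(stream)
print(f"deduplication ratio: {ratio:.1f}:1")
```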

In the past, de-duplication has typically been built into storage systems or hardware appliances, and has therefore been hardware-dependent. That is changing now though, with the emergence of backup software that includes deduplication features and is hardware-independent.

The technology is also being used for backups between data centres, or between branch offices and headquarters, as it reduces the amount of data that must be sent over a WAN connection.

D2D in branch offices and remote offices

There are many challenges involved in backing up branch offices and remote offices. Who changes the tapes and takes them off-site, for instance? Plus, local data volumes are growing and more sites now run applications locally, not just file-and-print, so what do you do when the backup window becomes too small?

One possibility is to back up or replicate to headquarters, preferably using CDP or de-duplication technology to reduce the load on the WAN by sending only the changed data blocks. The drawback with anything online or consolidated is how long it takes to restore a failed system, however. Even if you have the skills on hand and a fast connection, it can take an enormous amount of time to restore just a few hundred gigabytes of data.

D2D is the obvious next step – it can be installed as a VTL, so it functions the same way as tape but faster, but it also gives you a local copy of your files for recovery purposes. That local copy will probably answer 90 to 95 percent of recovery needs.

Add asynchronous replication to headquarters, and you can store one generation of backups locally with more consolidated at the data centre. Layer de-duplication on top, and there is less data to backup from the branch office and therefore less bandwidth consumed.

Consolidating backups at the data centre can bring other benefits too, in particular it enables information to be searched and archived more readily. It also takes the backup load off the branch offices as their backups are simply for staging and fast local recovery, so they no longer need to be retained.

Should the entire branch or remote office be lost, there are techniques to speed up the process of restoring a whole server or storage system. An example is the use of external USB hard drives, sent by courier and used to ‘seed’ the recovered system.

Even faster though are data-streaming technologies. This virtualizes the recovery process, presenting the application with an image of its data and streaming the underlying data back as it is called for.



Hitachi hybrid storage

 

HYBRID STORAGE SYSTEMS


Hitachi hybrid storage systems

What does turning data into valuable organizational information mean?

To speed up business activities and reduce the risk associated with system downtime, hybrid storage systems and all-flash enterprise arrays are the best option. To achieve a very low TCO for file infrastructure, high-end enterprise NAS storage is recommended.

Data in your data centres and cloud infrastructures is stored, retrieved and shared completely securely. This is handled by advanced object storage, which turns raw data into valuable, usable information; in other words, making information more available and easier to manage is critical in day-to-day operation. As you know, Hitachi is one of the main leaders in storage virtualization technologies, and its information systems offer platforms that use data abstraction, a capability that helps make information available to a much wider range of business environments.

Hitachi offers its flash storage, NAS systems and content platforms together, so that customers can unify their entire information infrastructure.

Decide based on your organization's critical applications!

To choose the right storage system, first list the requirements of your information infrastructure, then select a technology or storage family that matches the needs of your business environment, and finally purchase the storage system after consulting technical and engineering teams.

Hitachi storage systems are usually divided into the following families:

Enterprise Unified Storage Systems

High-Performance NAS Systems

Object Storage



Definition of Logical Unit Number (LUN)


 

 

A logical unit number (LUN) is a unique identifier used to designate an individual or collection of physical or virtual storage devices that execute input/output (I/O) commands with a host computer, as defined by the Small Computer System Interface (SCSI) standard.

SCSI is a widely implemented I/O interconnect that commonly facilitates data exchange between servers and storage devices through transport protocols such as Internet SCSI (iSCSI) and Fibre Channel (FC). A SCSI initiator in the host originates the I/O command sequence that is transmitted to a SCSI target endpoint or recipient storage device. A logical unit is an entity within the SCSI target that responds to the SCSI I/O command.

How LUNs work

LUN setup varies by system. A logical unit number is assigned when a host scans a SCSI device and discovers a logical unit. The LUN identifies the specific logical unit to the SCSI initiator when combined with information such as the target port identifier. Although the term LUN is only the identifying number of the logical unit, the industry commonly uses LUN as shorthand to refer to the logical unit itself.

The logical unit may be a part of a storage drive, an entire storage drive, or all or parts of several storage drives such as hard disks, solid-state drives or tapes, in one or more storage systems. A LUN can reference an entire RAID set, a single drive or partition, or multiple storage drives or partitions. In any case, the logical unit is treated as if it is a single device and is identified by the logical unit number. The capacity limit of a LUN varies by system.

A LUN is central to the management of a block storage array in a storage-area network (SAN). Using a LUN can simplify the management of storage resources because access and control privileges can be assigned through the logical identifiers.

LUN zoning and masking

SANs control host access to LUNs to enforce data security and data integrity. LUN masking and switch-based zoning manage the SAN resources accessible to the attached hosts.

LUN zoning provides isolated paths for I/O to flow through a FC SAN fabric between end ports to ensure deterministic behavior. A host is restricted to the zone to which it is assigned. LUN zoning is generally set up at the switch layer. It can help to improve security and eliminate hot spots in the network.

LUN masking restricts host access to designated SCSI targets and their LUNs. LUN masking is typically done at the storage controller, but it can also be enforced at the host bus adapter (HBA) or switch layer. With LUN masking, several hosts and many zones can use the same port on a storage device, but they can see only the specific SCSI targets and LUNs they have been assigned.
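As a toy model of that masking table (the WWNs and LUN numbers below are invented for the example), the sketch keeps a per-initiator view on the controller side and answers a host's scan with only the LUNs assigned to it.

```python
class StorageTarget:
    """Toy LUN-masking table kept on the storage controller."""

    def __init__(self):
        self.masking = {}   # initiator WWN -> set of LUNs it may see

    def assign(self, initiator_wwn: str, lun: int) -> None:
        self.masking.setdefault(initiator_wwn, set()).add(lun)

    def visible_luns(self, initiator_wwn: str):
        # A host scanning the target discovers only its own LUNs.
        return sorted(self.masking.get(initiator_wwn, set()))

target = StorageTarget()
target.assign("10:00:00:00:c9:aa:bb:01", 0)   # database server
target.assign("10:00:00:00:c9:aa:bb:01", 1)
target.assign("10:00:00:00:c9:aa:bb:02", 2)   # backup server

print(target.visible_luns("10:00:00:00:c9:aa:bb:01"))  # [0, 1]
print(target.visible_luns("10:00:00:00:c9:aa:bb:02"))  # [2]
```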

LUNS and virtualization

A LUN constitutes a form of virtualization in the sense that it abstracts the hardware devices behind it with a standard SCSI method of identification and communication. The storage object represented by the LUN can be provisioned, compressed and/or deduplicated as long as the representation to the host does not change. A LUN can be migrated within and between storage devices, as well as copied, replicated, snapshotted and tiered.

A virtual LUN can be created to map to multiple physical LUNs or a virtualized capacity created in excess of the actual physical space available. Virtual LUNs created in excess of the available physical capacity help to optimize storage use, because the physical storage is not allocated until the data is written. Such a virtual LUN is sometimes referred to as a thin LUN.

A virtual LUN can be set up at the server operating system (OS), hypervisor or storage controller. Because the virtual machine (VM) does not see the physical LUN on the storage system, there is no need for LUN zoning.

Software applications can present LUNs to VMs running on guest OSes. Proprietary technology such as VMware’s Virtual Volumes can provide the virtualization layer and the storage devices to support them with fine-grain control of storage resources and services.

Types of LUNs

The underlying storage structure and logical unit type may play a role in performance and reliability. Examples include:

  • Mirrored LUN: Fault-tolerant LUN with identical copies on two physical drives for data redundancy.
  • Concatenated LUN: Consolidates several LUNs into a single logical unit or volume.
  • Striped LUN: Writes data across multiple physical drives, potentially enhancing performance by distributing I/O requests across the drives.
  • Striped LUN with parity: Spreads data and parity information across three or more physical drives. If a physical drive fails, the data can be reconstructed from the data and parity information on the remaining drives, as illustrated in the sketch below. The parity calculation may have an impact on write performance.
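The sketch below is a worked illustration of the parity idea in the last item, using a simple XOR over equal-sized blocks; it is not a description of any particular RAID implementation.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Stripe of three data blocks plus one parity block.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# The drive holding d1 fails: rebuild it from the survivors and the parity.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)   # True
```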


QNAP TVS-1271U-RP NAS storage

The QNAP TVS-1271U-RP is a 4th generation NAS storage solution designed for data backup, file synchronization, and remote access. Ideal for SMB use cases, it also features cross-platform file sharing, a wide range of backup solutions, iSCSI and virtualization scenarios, as well as all kinds of practical business functions. The QNAP also includes abundant multimedia applications and a wide selection of different component options, all backed by impressive hardware specifications. The TVS-1271U-RP is also a highly scalable solution as it can support up to 1,120TB of raw capacity using multiple QNAP RAID expansion enclosures.

The TVS-1271U-RP is compatible with SATA 6Gbps drives, with QNAP quoting their NAS to deliver over 3,300MB/s in throughput and 162,000 IOPS. QNAP has also powered the TVS-1271U-RP with an Intel Haswell processor, including Pentium, Core i3, Core i5 and Core i7 options, which gives users the flexibility to build their NAS based on individual needs. QNAP indicates that this will help to improve the efficiency of CPU-consuming tasks while serving more simultaneous tasks at once. To help boost IOPS performance, the TVS-1271U-RP supports two on-board internal cache ports, which can be equipped with optional mSATA flash modules. In addition, QNAP's internal cache port design does not use the space of a hard drive tray, which further increases the storage capacity of the TVS-1271U-RP.

Like all QNAP NAS solutions, TVS-1271U-RP is managed by the QTS intuitive user interface. Leveraging the latest version 4.2, this intelligent desktop allows for easy navigation and a ton of new features and enhancements. Users can also create desktop shortcuts or group shortcuts, monitor important system information in real-time, and open multiple application windows to run multiple tasks concurrently.

QNAP TVS-1271U-RP Specifications

  • Form Factor: 2U, Rackmount
  • Flash Memory: 512MB DOM
  • Internal Cache Port: Two mSATA ports on board for read caching
  • Hard Drive: 12 x 3.5-inch SATA 6Gb/s, SATA 3Gb/s hard drive or 2.5-inch SATA, SSD hard drive
  • Hard Disk Tray: 12 x hot-swappable and lockable tray
  • LAN Port: 4 x Gigabit RJ-45 Ethernet port
  • (Expandable up to 8 x 1 Gb LAN or 4 x 10 Gb + 4 x 1 Gb LAN by installing optional dual-port 10 Gb and 1 Gb network card)
  • LED Indicators: Status, 10 GbE, LAN, storage expansion port status
  • USB/eSATA:
    • 4x USB 3.0 port (rear)
    • 4x USB 2.0 port (rear)
  • Support: USB printer, pen drive, USB hub, and USB UPS etc.
  • HDMI: 1
  • Buttons: Power button and reset button
  • Alarm Buzzer: System warning
  • Dimensions:
    • 89(H) x 482(W) x 534(D) mm
    • 3.5(H) x 18.98(W) x 21.02(D) inch
  • Weight:
    • 16.14 kg/ 35.58 lb (Net)
    • 18.98 kg/ 41.84 lb (Gross)
  • Sound Level (dB):
    • Sound pressure (LpAm) (by stander positions): 45.0 dB
    • (with 12 x HITACHI HUS724020ALA640 hard drive installed)
  • Power Consumption (W)
    • HDD Standby:
      • TVS-1271U-RP-PT-4G: 88.88
      • TVS-1271U-RP-i3-8G: 87.89
      • TVS-1271U-RP-i5-16G: 88.91
      • TVS-1271U-RP-i7-32G: 89.82
    • In Operation:
      • TVS-1271U-RP-PT-4G: 173.38
      • TVS-1271U-RP-i3-8G: 176.27
      • TVS-1271U-RP-i5-16G: 174.64
      • TVS-1271U-RP-i7-32G: 176.42
      • (with 12 x WD WD20EFRX hard drive installed)
  • Temperature: 0~40˚C
  • Relative Humidity: 5~95% non-condensing, wet bulb: 27˚C.
  • Power Supply
    • Input: 100-240V~, 50-60Hz, 7A-3.5A
    • Output: 500W
  • PCIe Slot: 2 (1* PCIe Gen3 x8, 1* PCIe Gen3 x4)
  • Fan: 3 x 7 cm smart cooling fan

Design and Build

Like all QNAP NAS solutions, the TVS-1271U-RP has a fairly basic design with its all-metal chassis. There’s not too much to say about the front panel, as the vast majority of space is taken up by the 12 drive bays (12 x 3.5-inch SATA 6Gb/s, SATA 3Gb/s hard drives or 2.5-inch SATA SSDs). To the far right are the power button and the Status, 10 GbE, LAN, and storage expansion port status indicators.

Turning it around to the back panel shows a host of connection functionality and other features. Front and center are the eight USB ports, four of which are 2.0 while the remaining four are 3.0. Just to the left is a tiny password and network settings reset button, while above are the four Gigabit LAN ports. An HDMI port is located near this group, with two redundant power supplies on the far right.

In addition, two expansion slots are visible (when not occupied with a 10G expansion card), allowing the unit to expand up to 1,120TB in raw capacity using a total of 140 hard drives across 8 expansion units. This is ideal for growing businesses and those that leverage a ton of data every day, such as video surveillance, data archiving, TV broadcast storage, and other large-data applications.

The TVS-1271U-RP supports the ANSI/EIA-RS-310-D rack mounting standards.

Testing Background and Comparables

We publish an inventory of our lab environment, an overview of the lab’s networking capabilities, and other details about our testing protocols so that administrators and those responsible for equipment acquisition can fairly gauge the conditions under which we have achieved the published results. None of our reviews are paid for or overseen by the manufacturer of equipment we are testing.

We tested the QNAP TVS-1271U-RP with the following drives in iSCSI block-level and CIFS file-level tests:

Application Performance Analysis

Our first benchmark of the QNAP TVS-1271U-RP is our Microsoft SQL Server OLTP Benchmark that simulates application workloads similar to those the QNAP TVS-1271U-RP and its comparables are designed to serve. For our application testing we are only looking at the Toshiba HK3R2 SSDs.

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments. Our SQL Server protocol uses a 685GB (3,000 scale) SQL Server database and measures the transactional performance and latency under a load of 15,000 virtual users.

Looking at the TPS performance for each VM, all were configured identically and performed well, with little disparity between them. The average overall performance was found to be 2,894 TPS. The difference between the top performer, VM2 at 2,912.6 TPS, and the lowest performer, VM4 at 2,876.8 TPS, was 35.8 TPS.

When looking at average latency for the same test, results were mirrored; however, there was a bit more disparity between the configurations. The average was 441.0ms. The top performer, VM2 with a latency of 409.0ms, was only 62ms lower than the highest-latency VM, VM4, at 471.0ms.

Our next set of benchmarks is the Sysbench test, which measures average TPS (Transactions Per Second), average latency, as well as average 99th percentile latency at a peak load of 32 threads.

In the average transactions per second benchmark, the TVS-1271U-RP gave us an aggregate performance of 2,047 TPS.

In average latency, we measured 64ms across all 4 VMs, with the spread being from 55ms at the lowest to 77ms at the highest, a difference of 22ms.

In terms of our worst-case MySQL latency scenario (99th percentile latency), the QNAP measured 327ms averaged across all four VMs.

Enterprise Synthetic Workload Analysis

Our Enterprise Synthetic Workload Analysis includes four profiles based on real-world tasks. These profiles have been developed to make it easier to compare to our past benchmarks as well as widely-published values such as max 4k read and write speed and 8k 70/30, which is commonly used for enterprise systems. An illustrative approximation of these profiles as fio parameters follows the list below.

  • 4k
    • 100% Read or 100% Write
    • 100% 4k
  • 8K (Sequential)
    • 100% Read or 100% Write
  • 8k 70/30
    • 70% Read, 30% Write
    • 100% 8k
  • 128k (Sequential)
    • 100% Read or 100% Write
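For readers who want to approximate these profiles themselves, the Python snippet below builds fio command lines with roughly matching parameters. It is only a sketch: /dev/sdX is a placeholder device, the runtime and the fixed 16-thread/16-queue settings are assumptions, and this is not the test harness actually used for the published results.

```python
# Illustrative parameter sets approximating the four profiles above; each maps
# directly onto standard fio options (bs, rw, rwmixread, iodepth, numjobs).
PROFILES = {
    "4k-random":       dict(bs="4k",   rw=("randread", "randwrite")),
    "8k-sequential":   dict(bs="8k",   rw=("read", "write")),
    "8k-70-30":        dict(bs="8k",   rw=("randrw",), rwmixread=70),
    "128k-sequential": dict(bs="128k", rw=("read", "write")),
}

def fio_cmd(name, bs, rw, rwmixread=None, iodepth=16, numjobs=16):
    """Build an fio command line for one workload (device path is a placeholder)."""
    cmd = ["fio", f"--name={name}", "--filename=/dev/sdX", "--direct=1",
           "--ioengine=libaio", f"--bs={bs}", f"--rw={rw}",
           f"--iodepth={iodepth}", f"--numjobs={numjobs}",
           "--runtime=300", "--time_based", "--group_reporting"]
    if rwmixread is not None:
        cmd.append(f"--rwmixread={rwmixread}")
    return " ".join(cmd)

for name, spec in PROFILES.items():
    for direction in spec["rw"]:
        print(fio_cmd(f"{name}-{direction}", spec["bs"], direction,
                      spec.get("rwmixread")))
```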

In the first of our enterprise workloads, we measured a long sample of random 4k performance with 100% write and 100% read activity to attain results for this benchmark. In this scenario, the QNAP populated with SSDs recorded 135,464 IOPS read and 70,246 IOPS write when configured in iSCSI, while CIFS connectivity saw just 22,588 IOPS read and 61,433 IOPS write. For comparison, the HDD configuration posted 10,524 IOPS read and 4,788 IOPS write when configured in iSCSI.

As expected, the average latency benchmark results were much closer in performance. Here, the QNAP populated with SSDs posted an impressive 1.89ms read and 3.64ms write (iSCSI), whereas the HDD configuration posted 24.31ms read and 53.49ms write (iSCSI). As shown in our chart below, there was a large read latency spike when the HDDs were configured in CIFS (398.46ms).

Looking at results of the max latency benchmark, the QNAP NAS populated with SSDs showed the best read performance (71.8ms/CIFS); however, it showed a huge spike in writes with 6,673.4ms. The best configuration for maximum latency in writes was the QNAP populated with HDDs using CIFS.

The SSD configurations in both iSCSI and CIFS showed good consistency, with 1.66ms read and 5.15ms write in iSCSI and 9.22ms read and 22.49ms write in CIFS. As far as the HDD configurations of the QNAP NAS go, our iSCSI block-level protocol showed the best standard deviation read latency while CIFS showed the best writes.

Our next benchmark measures 100% 8K sequential throughput with a 16T/16Q load in 100% read and 100% write operations. Here, the performance of all drives improved substantially, with the QNAP NAS configured with HDDs via CIFS showing the top read activity by a noticeable margin (158,454 IOPS). The QNAP NAS configured in iSCSI posted the top write performance (135,251 IOPS).

Compared to the fixed 16 thread, 16 queue max workload we performed in the 100% 4k write test, our mixed workload profiles scale the performance across a wide range of thread/queue combinations. In these tests, we span workload intensity from 2 threads and 2 queue up to 16 threads and 16 queue. The SSDs configured in iSCSI via the QNAP NAS posted the highest results; however, the iSCSI protocol had the least consistent results (though it reached the 80K IOPS mark). The QNAP HDD configurations had substantially lower IOPS, particularly in our CIFS file-level test.

Results were mirrored when looking at average latency, though the QNAP configured with SSDs had much more stable results in iSCSI. The HDDs configured in iSCSI also outperformed CIFS for this benchmark, which had significant latency spikes.

Performance was much more erratic overall across all of the QNAP NAS configurations tested in our max latency benchmark, though iSCSI and CIFS were more in line with each other than in previous benchmarks. Overall, the QNAP NAS configured with SSDs in iSCSI showed the best max latency results.

The results of the standard deviation benchmark were virtually identical to the results of the average latency benchmark, with iSCSI outperforming CIFS in both HDD and SSD configurations.

Our last test in the Enterprise Synthetic Workload benchmarks looks at 128k large block sequential performance, which shows the highest sequential transfer speed of the QNAP drive configurations. Looking at the 128k performance of 100% write and 100% read activity, all drives and configurations posted similar write numbers, all of which approached the 2,200,000KB/s mark in writes. The QNAP NAS populated with SSDs and configured in iSCSI had the best read and write performance with 2,304,102KB/s and 2,285,056KB/s, respectively.

Conclusion

One of the biggest assets of the QNAP TVS-1271U-RP is its ability to scale as businesses grow, combined with ease of use. With a massive storage pool of up to 1,120TB in raw capacity via multiple QNAP RAID expansion enclosures, the TVS-1271U-RP is able to satisfy almost any need. On the hardware side, the TVS-1271U-RP is able to leverage SATA 6Gbps drives, which provides a good combination of deployment options with affordable storage media. QNAP ships the TVS-1271U-RP with an Intel Haswell processor, which is available in Pentium, Core i3, Core i5 and Core i7 options. Coupling all of this with the integrated QTS management interface allows the TVS-1271U-RP NAS to boast a mountain of flexibility, making it ideal for a wide variety of applications.

We tested the NAS with both HDDs and SSDs. We saw upwards of 135,464 IOPS read and 70,246 IOPS write in throughput, as well as average latencies as low as 1.89ms read and 3.64ms write (both leveraging an iSCSI configuration using HK3R2 SSDs). In our 8K sequential benchmark we saw throughput of 158,454 IOPS read with the Seagate NAS HDD (CIFS), while the HK3R2 SSD gave us a write performance of 135,251 IOPS (iSCSI). Our HDD 128k large block sequential performance boasted impressive speeds, all of which surpassed 2.2GB/s in writes, although SSD performance didn’t come out as well. The QNAP populated with SSDs offers strong mixed workload performance, but its CIFS read measurements dropped far below its HDD counterpart, measuring 230MB/s.

Pros

  • Good scalability
  • SATA support offers lower cost drives
  • Easy to configure and deploy

Cons

  • Application performance wasn’t as strong as synthetic workloads
  • Poor SSD iSCSI write performance in synthetic benchmarks

Bottom Line

The QNAP TVS-1271U-RP is a flexible SMB NAS that can cost-effectively scale as business needs grow.



HP 3PAR StoreServ 7000 Storage


Have budget constraints forced you to settle for midrange storage with compromised performance and scalability? Is managing your storage taking up more and more time while delivering diminishing returns? The gold standard platform for Tier-1 storage has extended its offerings—delivering effortless, efficient, bulletproof, and future-proof storage for the entire midrange. HP 3PAR StoreServ 7000 Storage delivers the same industry-leading architecture chosen by 3 out of 4 of the world’s largest managed service providers (MSPs) with pricing that makes compromises a thing of the past.

Store all your data on a single system that supports advanced features including storage federation and automated tiering that enable you to start small and grow without disruption. Make storage effortless with models that give you a range of options that support true convergence of block and file protocols, all-flash array performance, and the use of solid-state drives (SSDs) tiered with spinning media.

What’s new

  • HP 3PAR File Persona Software Suite delivers a tightly integrated, converged solution for provisioning block storage volumes as well as file shares from a single capacity store
  • HP 3PAR Thin Deduplication with patented Express Indexing increases capacity efficiency, protects flash performance, and extends flash life span to make your SSD tier more cost-efficient
  • HP 3PAR StoreServ Management Console provides a modern look and consistent feel for all HP 3PAR StoreServ arrays, making management effortless
  • HP 3PAR Online Import now supports an effortless migration path to 3PAR StoreServ Storage from EMC Symmetrix VMAX as well as existing support for EMC CLARiiON CX4, EMC VNX, and HP EVA
  • Support for VMware Virtual Volumes (VVOLs) enables granular VM-level storage control, disaster recovery, and quality of service in VMware environments with VMware vSphere 6
  • New 600 GB 15k FIPS encrypted HDD

Features

Grow with Freedom in any Direction

One architecture and a single stack from midrange to high end.

Meet converged block, file, and object access.

Ease data migration with storage federation software.

Affordable entry pricing and non-disruptive scalability to four nodes.

Get Tier-1 Features, “Six-nines” Availability, and Data Mobility at a Midrange Price

Automated DR configuration protects your data with one step.

Mixed workload support increases consolidation opportunities.

Persistent technologies deliver high availability and Tier-1 resiliency.

Industry-leading thin technologies cut capacity requirements by up to 75 percent.

Automated sub-volume tiering optimizes service levels and reduces costs.

Simple to Install, Own, and Upgrade

Software suites simplify purchasing and lower costs.

HP 3PAR SmartStart and HP 3PAR Rapid Provisioning get you up and running in minutes.

Reconfigurable in just seconds without disruption.

Shares a single, simple management console with all HP 3PAR StoreServ Storage.

Manage Block, File, and Object access from a single interface for maximum agility.


HP 3PAR StoreServ 10000 Storage


The HPE 3PAR 10000 series is another SAN storage product line that represents the platinum standard for enterprise Tier 1 storage. It is aimed at enterprise workloads, hybrid private cloud and ITaaS use cases, and provides high-speed 16Gb/s Fibre Channel ports, offering between 48 and 96 of them depending on the model. It is very well suited to virtualization, which makes it an attractive and reliable option, and the technologies built into it can cut SAN storage operating costs by as much as 50 percent.
Its maximum raw capacity ranges from 1600TB to 3200TB, depending on the chosen model.
The HPE 3PAR 10000 series comes in two models, the 10400 and the 10800, which are compared in more detail in the tables below:

[HPE 3PAR StoreServ 10000 Storage specification tables]

 

NOTE: Support for SAS drives on HPE 3PAR StoreServ 10000 Storage is currently available with HPE 3PAR OS version 3.1.2 and later versions.
NOTE: Native FCoE support is available only for limited host configurations. Please check with your regional manager for more details
NOTE: A dedicated CNA is required for FCoE host connectivity
NOTE: The 1600TB and 3200 TB maximum raw capacity limits for the HPE 3PAR StoreServ 10400 and 10800 Storage respectively are applicable only to systems running HPE 3PAR OS version 3.1.3 or later
NOTE: Specifications are subject to change without notice
1- Each port is full bandwidth 8 Gbit/s or 16 Gbit/s Fibre Channel capable as applicable
2- Recommended minimum is 32 drives which results in a 9.6 TB minimum raw capacity.
3- Maximum raw capacity currently supported with any and all drive types
4- For storage capacity, 1 GiB = 2^30 bytes and 1 TiB = 1,024 GiB
5- RAID MP is HPE 3PAR Fast RAID 6 Technology
6- SSDs are Solid State Drives
7- SAS drives are Serial Attached SCSI drives
8- NL drives are Nearline (7.2k) disks
9- Recommended minimum is 4 drive chassis per pair of controller nodes
10- Each port is full bandwidth 10 Gbit/s iSCSI capable
11- Each port is full bandwidth 10 Gbit/s FCoE capable
12- Applies to the array storage assigned to 3PAR StoreServ File Controller for file services
13- For details, please refer to the 3PAR StoreServ File Controller v2 section in this document
14- Two built-in 1-GbE RCIP ports per node pair; maximum of 8 usable; RCFC works out of the FC Host ports

 

 

 



NetApp storage training course

By watching this training course you will learn the fundamentals of storage technologies, and you will also master the use of NetApp's remarkable and highly useful storage hardware and software.
This course will prepare you for real-world projects as well as for sitting and passing the NCSA (NS0-145) exam.
The course is aimed at technicians, network administrators, and network engineers.

Original title: NetApp Certified Storage Associate (NCSA) NS0-145

This video training series is produced by CBT Nuggets and is delivered on one DVD with a running time of 7 hours and 5 minutes.
Below are some of the topics covered in this NetApp Certified Storage training course:

Introduction to this NCSA training course
An introduction to data storage concepts
Fundamentals of DAS, NAS and SAN, and a comparison of the three
Introduction to new storage technologies such as cloud and virtualization
Server virtualization training
Training on iSCSI, SAS, FC, file and block storage, HA and HPC
Cloud fundamentals and using the cloud for data storage
Addressing your security and privacy concerns about using the cloud
Introduction to the very important area of flash storage
Overview of the components and performance of flash storage systems
A review of SSD technologies
Training on flash endurance
Introduction to the six main NetApp product lines covering the storage industry
Using the available graphical interfaces to manage NetApp products
Working with the NetApp operating system and with clustered Data ONTAP
Introduction to the alternative operating systems, with an overview of ONTAP 7-Mode and its architecture
Introduction to NetApp hardware fundamentals and components
Working with the NetApp FAS2500 product series
Working with NetApp Fabric-Attached Storage
Working with other NetApp products
Connecting to System Manager and using it to manage various systems and services
Working with the command-line interface
Working with aggregates
Increasing performance with Flash Pools without increasing costs
Introduction to NetApp volume creation and management technologies, especially FlexVol
Using qtrees to back up content and set access permissions for other users
Methods for calculating usable hard disk space
Using the Snapshot copy feature
Comprehensive, practical network management training (DNS, VLAN)
Using LUN technology to present storage to users as local disk drives
Using SnapDrive to connect to and manage LUNs
Creating NAS resources for UNIX and Windows clients using NFS exports and CIFS shares
Using BranchCache to connect branch offices
Setting limits (quotas) on the amount of data users can store
Using role-based access control (RBAC)
Maintaining and repairing NetApp storage systems
Monitoring and managing storage capacity in NetApp
Clustered Data ONTAP concepts
Working with the Clustered Data ONTAP interface
Using volumes and namespaces in a Clustered ONTAP environment
Introduction to the remarkable clustered file access capability
Load balancing in a clustered environment
Introduction to NetApp's SAN-specific features
FC connectivity
More about the NCSA certification
The instructor's tips for passing the NCSA exam
Topics that will help you pass the exam




Installing and setting up HP BladeSystem

In this article we look at a server system that is already quite common in most sizeable environments. We will not go into whether it is better, more convenient or cheaper for one environment or another; instead we will look at an HP blade environment, an HP BladeSystem (no particular model), and walk through all the settings that can be made from its OA, the HP Onboard Administrator. The OA is the management console for the whole chassis, and from it we can manage and monitor every component of the system at any time.

By way of introduction, a blade system is essentially an enclosure into which, depending on the model, we can fit 8 or 16 blades. Everything about it is redundant and compact, since 16 blades take up far less space than 16 rack-format servers. The chassis has as many power supplies as it needs, feeding the enclosure as a whole rather than each server individually, plus fans to cool the environment. Fan speed can be adjusted automatically based on the temperature sensors, and depending on the environment we build on the blades (a VMware cluster, for example), unneeded power supplies can be kept off until they are required, all of which saves a great deal of money on electricity, air conditioning and physical rack space. Each blade (server) sits in a bay and runs its own operating system, completely independent of the other blades (or not, if we cluster them). There is a small display on the front to check the status of the whole chassis and set some basic parameters. At the back are the switches: these servers avoid external cabling, because the Ethernet and fibre connections are internal, and all the switches are duplicated to protect against failures and provide high redundancy. There are also one or two chassis management modules, which is where we administer the system from, via the so-called HP Onboard Administrator. Every HP BladeSystem is different, as are systems from other manufacturers (an IBM BladeCenter, for example), but they all follow the same philosophy and are configured in almost the same way.

The first step is to physically install the chassis, and then we can start configuring it; for how to mount the rails, anyone who does not already know can read the official documentation, since in this document we focus on configuration. After connecting the HP Onboard Administrator to a switch, or directly to our computer, we can connect to its default IP address with the default username and password (admin / password). It goes without saying that the first things to change are the IP address we want to use and the admin password. Another way to make these basic changes is from the Insight Display, the small screen on the front panel of the chassis.

In “Rack overview” we get a brief summary of our chassis and the items installed in it, with a front view and a rear view. It shows the name of our enclosure, its serial number and its part number.

In “Enclosure information” we see the condition of the components, and whether there is any warning or everything is OK.

In “Enclosure information”, on the “Information” tab, we can change the name of our chassis / BladeSystem / enclosure, or whatever name it has in the data centre. Besides showing the serial number, it also indicates the state of the chassis connections: the UID identification LED, the enclosure link downlink port used to connect this chassis to the rest of the data centre (or to the BladeSystem below it), and the enclosure uplink port used to link to the chassis above, or to connect a computer if necessary.

In “Enclosure information”, on the “Virtual Buttons” tab, we can turn the UID light on or off, to show another administrator which unit they have to work on.

In “Enclosure information” > “Enclosure Settings” there is a summary of the BladeSystem devices, showing whether any of them needs to be connected or enabled, and the firmware of each one. We should always make sure we are on the latest firmware and that all common elements run the same firmware version!

In “Enclosure information” > “Enclosure Settings” > “AlertMail” we can, as the name suggests, enable email alerts from our chassis.

In “Enclosure information” > “Enclosure Settings” > “Device Power Sequence”, on the “Device Bays” tab, we can enable powering on the blades in the chassis in a set order of priority.

In “Enclosure information” > “Enclosure Settings” > “Device Power Sequence”, on the “Interconnect Bays” tab, we can enable powering on the interconnect bays (the switches) in a set order of priority.

In “Enclosure information” > “Enclosure Settings” > “Date and Time” we configure the chassis time, either manually or against an NTP time server.

In “Enclosure information” > “Enclosure Settings” > “Enclosure TCP/IP Settings” is where we configure the name, IP address, netmask, gateway and DNS servers of the chassis, that is, of the Onboard Administrator.

In “Enclosure information” > “Enclosure Settings” > “Network Access”, the “Protocols” tab lists the connection protocols we can enable for accessing the chassis: web access over HTTP or HTTPS, secure shell with SSH, Telnet, and XML reply.

In “Enclosure information” > “Enclosure Settings” > “Network Access”, on the “Trusted Hosts” tab, if we enable it, access to the enclosure is allowed only from the listed IP addresses rather than from the entire network.

In “Enclosure information” > “Enclosure Settings” > “Network Access”, on the “Anonymous Data” tab, if we enable it the chassis displays some information before we log in, which may or may not be information we want to give away.

In “Enclosure information” > “Enclosure Settings” > “Link Loss Failover”, if we have two Onboard Administrators and want management to fail over to the other OA when the primary one loses its connection, we enable this and set how many seconds the primary OA may go without a network connection before failing over (provided, of course, that the secondary OA still has network connectivity!).

In “Enclosure information” > “Enclosure Settings” > “SNMP Settings” we configure monitoring, in case we have a monitoring system on our network to handle alerts and notifications, Nagios for example.

In “Enclosure information” > “Enclosure Settings” > “Enclosure Bay IP Addresses”, on the “Device Bays” tab, we configure the IP addresses of the blades; not of their operating systems, but of their iLOs, so that we can later connect to each bay.

In “Enclosure information” > “Enclosure Settings” > “Enclosure Bay IP Addresses”, on the “Interconnect Bays” tab, we configure the IP addresses of the rear chassis modules: the fibre switches, the Ethernet switches, and so on.

In “Enclosure information” > “Enclosure Settings” > “Configuration Scripts” we can import configuration scripts to automate chassis configuration and do it faster, importing them from a file or from a URL.

In “Enclosure information” > “Enclosure Settings” > “Reset Factory Defaults” we can reset the chassis to its factory default settings.

In “Enclosure information” > “Enclosure Settings” > “Device Summary” we find one of the most useful screens for documenting a blade environment: a summary of all the components in our chassis (the blades, switches, power supplies, coolers / fans, the blades’ mezzanine cards, and the chassis itself), with the description, serial number, part number, manufacturer, model, spare part number, firmware, hardware version and so on.

In “Enclosure information” > “Enclosure Settings” > “DVD Drive” we can connect the chassis CD / DVD drive to a specific blade, in case we need to present a CD / DVD to a particular blade. Honestly, this is best done from the iLO connection, but the option is here too.

In “Enclosure information” > “Active Onboard Administrator”, on the “Status and Information” tab, we can see the state of the chassis, its temperatures and other details about the enclosure.

In “Enclosure information” > “Active Onboard Administrator”, on the “Virtual Buttons” tab, there are two buttons: one to completely reset the chassis, which hopefully we will never need, since there is rarely a reason to reboot the whole chassis, and one to turn on the UID identification LED.

In “Enclosure information” > “Active Onboard Administrator” > “TCP/IP Settings” we find purely informational data: the Onboard Administrator’s network name, IP address and other network details.

In “Enclosure information” > “Active Onboard Administrator” > “Certificate administration”, the “Information” tab shows details of the certificate used by the SSL web server.

In “Enclosure information” > “Active Onboard Administrator” > “Certificate administration”, the “Certificate Request” tab is used to generate our own certificate, either self-signed or as a CSR that we can hand to a certificate authority so it generates a ‘proper’ one for us. The resulting certificate is then uploaded on the “Certificate Upload” tab.

In “Enclosure information” > “Active Onboard Administrator” > “Firmware Update” we can update the firmware of our chassis (or downgrade it if we need to), uploading it from a file on our computer, from a URL, or directly from a USB flash drive connected to the chassis.

In “Enclosure information” > “Active Onboard Administrator” > “System Log”, the “System Log” tab holds a log of everything that happens in our chassis.

In “Enclosure information” > “Active Onboard Administrator” > “System Log”, the “Log Options” tab lets us redirect the log to a syslog server, such as Kiwi Syslog.

In “Enclosure information” > “Device Bays” we have all our blades, with their status, whether the UID is on or not, the bay number, the power state, the iLO IP address, and whether the DVD drive is connected to them or not.

In “Enclosure information” > “Device Bays” > BLADE, the “Status” tab has the information about the blade in question; if there were a warning it would tell us what the problem is, for example a high temperature. Note that when we select a hardware element, the drawing on the right highlights the device we are looking at, which helps when there are many elements, particularly with the switches.

In “Enclosure information” > “Device Bays” > BLADE, the “Information” tab shows details of the blade in question; all of it is quite interesting and worth noting down, such as the MAC addresses or WWPNs.

In “Enclosure information” > “Device Bays” > BLADE, the “Virtual Devices” tab gives us the different power button options for the blade, as well as the option to turn on its UID.

In “Enclosure information” > “Device Bays” > BLADE, the “Boot Options” tab lets us select the boot order of the blade, or a one-time device for the next boot.

In “Enclosure information” > “Device Bays” > BLADE, the “IML Log” tab, the Integrated Management Log, holds the blade’s log: everything that happens to it is recorded here.

In “Enclosure information” > “Device Bays” > BLADE > “iLO”, on the “Processor Information” tab, we can perform remote management of the server, which is ideal for controlling the machine remotely. We click “Integrated Remote Console” to take control of the server and manage its devices, or “Remote Console” if we have the Java JRE.

A new window opens with control of the remote server, from which we can do everything we need: mount remote drives, connect a CD / DVD to the remote machine, restart it, shut it down, and so on.

In “Enclosure information” > “Device Bays” > BLADE > “iLO”, the “Event Log” tab holds the filtered iLO records for the server.

In “Enclosure information” > “Device Bays” > BLADE > “Port Mapping”, the “Graphical View” tab shows the internal connections of the blade. This screen is used to understand the internal wiring between the blade and the switches in our chassis. We can see the blade’s adapters (embedded/integrated and mezzanine cards), each with its internal ports; these are usually additional network cards or fibre HBAs with one or more ports. For each port on each adapter, we can see which Ethernet or fibre switch port it connects to.

In “Enclosure information” > “Device Bays” > BLADE > “Port Mapping”, the “Table View” tab has the same information as the previous tab, just in a different view.

In “Enclosure information” > “Interconnect Bays” shows the chassis switches, back, in my case, two swtiches Ethernet and two fiber, shows the state of them, and if you have the power UID, switch type and model, su IP address management.

In “Enclosure information” > “Interconnect Bays”tab “Status” can view the status and diagnostic switch in question, if you have any electrical issues alert, Temperature…

In “Enclosure information” > “Interconnect Bays” > Bay ethernet > tab “Information” see it, information about this device, certain to score in the correct documentation or to arrange a future of trouble,

In “Enclosure information” > “Interconnect Bays” > Bahia ethernet , flange “Virtual Buttons” same as above, is a virtual button to turn off / reboot the device or to turn on the UID if necessary,

In “Enclosure information” > “Interconnect Bays” > Bahia ethernet > “Port Mapping” we can see the blades we have connected to this ethernet switch indicating which mezzanine what blade is what switch port, MAC also shows the device in question, and if a switch would display the WWNN fiber.

In “Enclosure information” > “Interconnect Bays” > Bahia ethernet > “Management Console”, We initiate the switch management console, and would switch level configuration, to set it up the zoninga that interest, through “HP Virtual Connect Manager”,

In “Enclosure information” > “Interconnect Bays” > BAY fiber > tab “Status”, can view the status and diagnostic fiber switch in question, if you have any electrical issues alert, Temperature…

In “Enclosure information” > “Interconnect Bays” > BAY fiber > tab “Information” see it, information about this device, certain to score in the correct documentation or to arrange a future of trouble,

In “Enclosure information” > “Interconnect Bays” > BAY fiber, flange “Virtual Buttons” is the virtual button to turn off / reboot the device or to turn on the UID if necessary,

In “Enclosure information” > “Interconnect Bays” > BAY fiber > “Port Mapping” we can see the blades we have connected to this ethernet switch indicating which mezzanine what blade is what switch port, also shows the connected HBA WWNN, perfect to have everything well documented and annotated for configuration issues paths.

In “Enclosure information” > “Interconnect Bays” > Bahia ethernet > “Management Console”, We initiate the switch management console fiber, and would switch level configuration, to set it up the zoninga that interest, we have a tutorial here on how to configure a switch fiber http://www.bujarra.com/?p=2752,

In “Enclosure information” > “Power and Thermal” we see the electrical status of the chassis and its temperature; in case of any fault it is marked with an error.

In “Enclosure information” > “Power and Thermal” > “Power Management” we can manage the power redundancy configuration of the chassis, for example enabling “Dynamic Power” to save costs by keeping power supplies that are not needed in standby until they are required. This whole savings topic may sound like a tall tale, as may running virtualization on this kind of chassis, but the saving is real: just take a calculator, multiply what your servers draw by the price of a kilowatt hour (kWh), and you may be amazed by the cost over one year; to all of that you must add the cost of air conditioning, of course… In short, there is nothing better than a blade virtualization environment.
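
As a rough, purely illustrative calculation (the wattage and tariff are assumptions, not measurements): a single rack server drawing about 400 W around the clock consumes 0.4 kW × 8,760 h ≈ 3,504 kWh per year; at, say, 0.15 per kWh that is roughly 525 per server per year, before adding the air-conditioning cost. Multiply that by a few dozen physical servers and the savings from consolidating onto blades and letting Dynamic Power park unused supplies add up very quickly.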

In “Enclosure information” > “Power and Thermal” > “Enclosure Power Allocation” shows us the power required by the items currently in the chassis (our blades, switches, modules…) and what our total capacity would be. One should always keep in mind whether we would still be covered if a power supply failed!

In “Enclosure information” > “Power and Thermal” > “Power Meter”, the “Graphical View” tab shows a graph of our chassis consumption.

In “Enclosure information” > “Power and Thermal” > “Power Meter”, the “Table View” tab shows the consumption records of our chassis.

In “Enclosure information” > “Power and Thermal” > “Power Subsystem” shows the status of all our power supplies, the power mode, and whether we have redundancy.

In “Enclosure information” > “Power and Thermal” > “Power Subsystem” > “Power Supply X” shows specific information about the power supply in question, such as its capacity / consumption, serial number, part number, spare part number…

In “Enclosure information” > “Power and Thermal” > “Thermal Subsystem”, the “Fan Summary” tab shows a generic view of the fans or coolers in our chassis and their usage.

In “Enclosure information” > “Power and Thermal” > “Thermal Subsystem”, the “Fan Zones” tab shows the ventilation zones of our chassis and whether or not they are covered by fans to cool that area. The normal thing is to place the fans behind the blades, because it does not make much sense to place them on the other side; in any case, sometimes it is not us who assemble the chassis but an HP engineer, so not all the decisions are ours 😉

In “Enclosure information” > “Power and Thermal” > “Thermal Subsystem” > “Fan X” shows the individual status of each fan or cooler, and its usage.

In “Enclosure information” > “Users / Authentication” > “Local Users” we have local user management on the chassis, to manage access to the blade environment.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “Directory Settings” we configure access through LDAP directory users instead of local users.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “Directory Groups” we manage groups of users from LDAP.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “SSH Administration” we manage the keys for SSH.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “HP SIM Integration” serves to integrate the Onboard Administrator with HP Systems Insight Manager, in order to pass credentials.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “Signed in Users” shows the users currently logged in to the blade chassis, plus past logins.

In “Enclosure information” > “Insight Display” shows the small screen on the front of the chassis, a small display that lets us perform certain basic configurations with a couple of buttons and view the chassis status at a glance.

Well, the interesting thing is that whatever we configure here should be well documented, both for ourselves and to hand over to the customer, so that whoever comes after us knows what is installed and how: logical drawings like this one of the connections, and documentation of the IPs, MACs, WWNNs, WWPNs, cabling… With that, this blade structure becomes much more comfortable to manage and maintain. A bargain!!!



Deploying VMware View on NetApp Storage

Deploying VMware View on NetApp Storage

استوریج-NetApp-Storage-1

Contents

  • Introduction to VMware View on NetApp storage
  • Overall goals
  • Deployment scenario
  • Deployment environment
  • Required software
  • Network setup and configuration
  • Network setup on Cisco NEXUS series switches
  • Storage VLAN for NFS
  • VMware View network
  • NetApp storage controller setup for VMware vSphere
  • NetApp controller 2,000-seat physical configuration
  • Network setup of the NetApp storage controller
  • Configuring the NFS trunk
  • NetApp storage controller disk configuration
  • Overview of the logical storage configuration
  • NetApp storage SSH configuration
  • FlexScale configuration for the Performance Acceleration Module (PAM)
  • Virtual machine datastore aggregate configuration
  • Modifying the aggregate Snapshot reserve for the VMware View production aggregate
  • NetApp storage setup using RCU 3.0
  • Creating a volume to host the template virtual machine
  • Configuring Snapshot copies and optimal performance
  • Additional setup and configuration for storage controller A
  • Additional setup and configuration for storage controller B
  • Creating the volumes for hosting linked clones and CIFS user data
  • Disabling the default Snapshot schedule and setting the snap reserve to zero
  • Configuring optimal performance for VMDKs on NFS
  • Installing VMware vSphere on the hosts
  • Physical server configuration
  • Required licenses
  • Installing vSphere
  • Installing and setting up VMware vCenter Server
  • Configuring the service console for redundancy
  • Configuring the VMware kernel NFS port
  • Configuring vMotion
  • VMware vSphere host network configuration
  • Adding the template virtual machine datastore to the vSphere hosts
  • Adding the View swap datastore to the vSphere hosts
  • Configuring the location of the virtual swapfile datastore
  • Configuring the ESX environment with the VSC
  • Setting up VMware View Manager 4.0 and VMware View Composer
  • Setting up and configuring the Windows XP image
  • Creating a virtual machine in VMware vSphere
  • Formatting the virtual machine with the desired partition offsets
  • Downloading and preparing the LSI 53C1030 driver
  • Windows XP preinstallation checklist
  • Installing and configuring Windows XP
  • Rapid deployment of Windows XP virtual machines in a VMware View environment using RCU
  • Deploying linked clones
  • Entitling users and groups to the desktop pools
  • Installing FlexShare (optional)
  • Testing the VMware View and NetApp storage workflow
  • Scaling the storage to 10,000 seats
  • References

 

 

Overall Goals

The goal of this article is a step-by-step deployment of VMware View on NetApp FAS2040, FAS3100, and FAS6000 series storage systems, configured as HA clusters, in a network built on Cisco NEXUS 5000 and NEXUS 7000 switches. The article covers the overall details of deploying a Windows XP virtual desktop infrastructure.

The storage in this design will support up to 100,000 seats. Business and plant owners can use this article to gain a comprehensive view of modern desktop technologies.

For more information, or for technical consulting with the company's specialists, click here.

Free technical consulting

This article describes a mixed deployment scenario in which different users work with different desktops, and in which capabilities such as storage efficiency, performance, and data protection, as well as operational simplicity, are required. The table below shows the environment of a hypothetical customer with a mix of users. The specific requirements of the different user types are easily met by delivering different desktop types with VMware View Manager, which is made possible by two technologies: the NetApp Rapid Cloning Utility and VMware linked clone technology.

 

Table 1) RCU and linked clones deployment mix

Virtual Machine Deployment Method Number of Virtual Machines
Virtual machines deployed using RCU 3.0 1,000
Virtual machines deployed using linked clones 1,000

 

Table 2) Details of the virtual machines deployed using VMware linked clones

Virtual Machine Deployment Method Number of Virtual Machines
Linked clone virtual machines in persistent access mode 500
Linked clone virtual machines in nonpersistent access mode 500
Total virtual machines deployed using linked clones 1,000

 

This scenario focuses on achieving storage efficiency at multiple layers, as well as performance acceleration, for any provisioning scenario in different environments.

The table below shows the environment of a hypothetical customer with different user types that have different requirements in terms of the amount of data they use, the amount of data hosted for them, and their virtual desktops.

The table also points out the differences between provisioning through NetApp RCU 3.0 and through VMware linked clones.

Table 3) VMware View deployment scenario types

User Profile | User Requirements | Number of Virtual Machines | VMware View Manager Desktop Delivery Model | Access Mode | Deployment Solution
Finance – Marketing – Consulting | Flexible, personalized desktops built from a mix of corporate and personal applications; able to download various applications, use many applications preinstalled on the system, and keep files on the system other than patches, the operating system, and user data. | 500 | Manual desktop pool | Persistent | NetApp RCU 3.0
Developers | Support a mix of standard office software and specialized enterprise development tools, with the ability to add new software and apps; the system can also keep files other than patches, the operating system, and user data. | 500 | Manual desktop pool | Nonpersistent | NetApp RCU 3.0
Helpdesk staff and call center agents | These users work with only one specific application and do not need the ability to make changes. They have a personalized desktop, do not need to keep other data on the system, and their data is protected elsewhere. | 500 | Automated desktop pool | Persistent | VMware linked clones

 

 

User Profile | User Requirements | Number of Virtual Machines | VMware View Manager Desktop Delivery Model | Access Mode | Deployment Solution
Training department and students | Temporary desktops for training periods, requiring a completely clean, fresh desktop each time; no customization or personalization of the desktops, the operating system, or user data is needed. | 500 | Automated desktop pool | Nonpersistent | VMware linked clones

 

Deployment Scenario

In this scenario we show the deployment of a 2,000-client environment on a clustered NetApp FAS storage system using the NFS protocol. 1,000 clients are provisioned using NetApp technologies and 1,000 using VMware linked clones, and both VDI provisioning models, persistent and nonpersistent, are fully highlighted. This configuration can be deployed on NetApp FAS2040, FAS3100, and FAS6000 storage systems, as well as the NetApp V-Series; here we use a FAS3160A. At the end of this article we will provide a table with all of the storage requirements.

This scenario uses a NetApp FAS3160 HA pair as its core storage. The design assumes a mixed 50% read/write workload, uses at least 20% of the processor resources on each controller, and estimates that each virtual machine has 2GB of storage space and uses 8 IOPS. With this estimate, a set of about 7,000 clients can run on a single NetApp FAS3160 system.

Because users differ, this 7,000-user figure is used only as a reference point in this article.

Deployment Environment

Keep in mind that the required licenses for the NetApp controllers, the VMware products, and the Windows XP instances must be purchased in order to enable the capabilities described here.

A Virtual Port Channel (vPC) license must also be obtained for the Cisco Nexus 5000 and 7000 switches. Finally, note that the Cisco UCS system must be licensed as well. Trial licenses can be used for small proof-of-concept cases.

 

Required Software

NetApp System Manager 1.01

NetApp Rapid Cloning Utilities (RCU) 3.0

VMware vSphere™ (ESX 4.0 and vCenter™ Server 4.0)

VMware View Manager and Composer 4.0

NetApp Virtual Storage Console (VSC) 1.0

 

Network Setup and Configuration

Given the capabilities described for this scenario, we use Cisco Nexus 5020 and Nexus 7000 switches. Because organizations' network environments are complex and varied, we cannot give a single standard recipe for building the whole network; for more information on network configuration options, refer to TR-3749: NetApp and VMware vSphere Storage Best Practices.

 

Below is a list of the topics that are covered in depth in the networking section of TR-3749:

Traditional Ethernet switch designs

Highly available storage design with traditional Ethernet switches

vSphere networking with multiple virtual machine kernel ports

vSphere with multiple virtual machine kernel ports, traditional Ethernet, and NetApp networking with single-mode VIFs

vSphere with multiple virtual machine kernel ports, traditional Ethernet, and NetApp networking with multilevel VIFs

Cross-stack EtherChannel switch designs

Highly available IP storage design with Ethernet switches that support cross-stack EtherChannel

vSphere networking and cross-stack EtherChannel

vSphere and NetApp with cross-stack EtherChannel

Datastore configuration with cross-stack EtherChannel

 

Detailed below are the steps used to create the network layout for the NetApp storage controllers and for each vSphere host in the environment.

 

2.1            NETWORK SETUP OF CISCO NEXUS NETWORK SERIES

For the purposes of this deployment guide, a network design with two Cisco Nexus 7000 switches and two Cisco Nexus 5020 switches was used. All of Cisco's best practices were followed in the setup of the Nexus environment. For more information on configuring a Cisco Nexus environment, visit http://www.cisco.com.

 

The goal in using a Cisco Nexus environment for networking is to integrate its capabilities to logically separate public IP traffic from storage IP traffic. In doing this, the chance of issues developing from changes made to a portion of the network is mitigated.

 

Since the Cisco Nexus 5020 switches used in this configuration support vPCs and Nexus 7000 switches are configured with a VDC specifically for storage traffic, logical separation of the storage network from the rest of the network is achieved while providing a high level of redundancy, fault tolerance, and security. The vPC provides multipathing, which allows you to create redundancy by enabling multiple parallel paths between nodes and load balancing traffic where alternative paths exist.

 

Alternatively, two Nexus 5020s can be used instead of the two Nexus 7000s. In that configuration, vPCs can still be configured for network segmentation using VLANs. This reduces the network cost significantly, but it does not allow for VDC-based network segmentation.

Details in the diagrams below are for a pure 10GbE environment. On the Nexus network, perform the following configurations:

Set up a peer keepalive link as a management interface between the two Nexus 7000 switches.

On the default VDC on the Nexus 7000 switches, be sure to enable a management VLAN for the service console, a public VLAN for the virtual machine network, and a private, nonroutable VLAN for VMotion™.

In order to isolate and secure the NFS traffic, create a separate VDC on the Nexus 7000 switches for NFS traffic. Assign ports to this VDC and configure these ports for a private, nonroutable VLAN.*

Create virtual port channels between the Nexus 5020 switches for the public VLAN, service console VLAN, NFS VLAN, and the VMotion VLAN.

*Note: This is an optional configuration. If you do not use this configuration or have this option available, create an additional private, nonroutable VLAN.
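
As a minimal, illustrative NX-OS sketch of the vPC and VLAN part of the steps above (the VLAN ID 350 is taken from the examples later in this guide; the domain ID, interface names, and keepalive addresses are assumptions, not a tested configuration):

! enable the required features
feature lacp
feature vpc
! private, nonroutable VLAN for NFS storage traffic
vlan 350
  name NFS-STORAGE
! vPC domain with a peer keepalive link between the two switches
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
! port channel carrying the storage VLAN, acting as the vPC peer link
interface port-channel 10
  switchport mode trunk
  switchport trunk allowed vlan 350
  vpc peer-link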

 

2.2            STORAGE VLAN FOR NFS

If you are using VDCs on the Nexus 7000s, be sure to configure a nonroutable VLAN on a separate VDC for the NFS storage traffic to pass to and from the NetApp storage controllers to the vSphere hosts. With this setup the NFS traffic is kept completely contained, and security is more tightly controlled.

Also, it is extremely important to have at least two physical Ethernet switches for proper network redundancy in your VMware View environment. Carefully plan the network layout for your environment, including detailed visual diagrams detailing the connections for each port.

 

2.3               VMWARE VIEW NETWORK

When creating a VMware View environment that contains several hundred or several thousand virtual machines, be sure to create a large enough DHCP scope to cover the number of IP addresses that will be needed by the clients. This step should be planned well before implementation.
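
As a quick, illustrative sizing check for the scope: 2,000 desktops plus a few dozen infrastructure addresses need just over the 2,046 usable hosts of a /21, so a /20 network (4,094 usable addresses) on the desktop VLAN leaves comfortable headroom for growth and DHCP lease churn; a /24 (254 hosts) is clearly far too small.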

NetApp Storage: Overall Configuration

 

 

Figure 1) NetApp storage controller VIF configuration for 10GbE.

 

3           NETAPP STORAGE CONTROLLER SETUP FOR VMWARE VSPHERE

Perform all of the steps listed below on both controllers of the NetApp system. Failure to do so could result in inconsistencies and performance problems within the environment.

 

3.1            NETAPP CONTROLLER 2,000-SEAT PHYSICAL CONFIGURATION

Table 4) NetApp solution configuration.

NetApp System Components Number and/or Type Slot on Each NetApp Controller Part Installed In
Disk shelves required 2 (totaling 48 FC SAS disks; 1 shelf per controller) N/A
Size and speed of hard disk in shelves 450GB @ 15K RPM* N/A
Disk shelf type DS4243 N/A
Dual-port 10GB Ethernet NIC 4 (2 per controller) 2 and 3
Quad-port Fibre Channel card 4/2/1 2 (one per controller) 4
Performance Acceleration Module (PAM) 2 (one per controller) varies
NFS licenses 2 (one per controller) N/A
FlexClone® licenses 2 (one per controller) N/A
FlexShare® licenses (optional) 2 (one per controller) N/A

 

*If the deployment will not have a CIFS component, 300GB SAS drives can be substituted.

 

 

For the purposes of this configuration, the basis for the design architecture is eight IOPS per virtual machine. This number might vary per environment and for different user types. For further details on sizing best practices, refer to NetApp TR-3705.
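
Worked out with the numbers used in this document (eight IOPS and 2GB per virtual machine, 1,000 virtual machines per controller), the targets are roughly 1,000 × 8 = 8,000 IOPS and 1,000 × 2GB = 2TB of logical capacity per controller, or 16,000 IOPS and 4TB for the full 2,000-seat configuration, before deduplication, thin provisioning, and PAM caching are taken into account.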

 

3.2            NETWORK SETUP OF NETAPP STORAGE CONTROLLER

In order to achieve optimal performance, maximize the number of Ethernet links for both controllers in the NetApp cluster. Below are the guidelines for setting up the network for both storage controllers.

Table 5) Network setup of NetApp controller.

Step Action
1 Connect to the NetApp storage controllers using System Manager.
2 Please use the diagrams above as a reference on how to configure the cabling for the FAS storage controller. For 10GbE connections, please ensure that one interface from each of the two dual-port NICs goes to a separate Cisco Nexus 5020 switch: in total, two connections should go to Cisco Nexus 5020 A and two should go to Cisco Nexus 5020 B. Please use this setup on both FAS storage controllers in the cluster.

 

 

Step Action
3 The ports that these interfaces are connected to on the switches must meet the following criteria:
a. They must be on the nonroutable VLAN created for NFS network traffic.
b. They must be configured into a trunk, either manually as a multimode VIF or dynamically as an LACP VIF.
c. If LACP is used, then the VIF type must be set to static LACP instead of multimode on the NetApp storage controller.
Note: For the purposes of this document we use the 192.168.0.0/24 network as the private subnet for NFS and the 192.168.1.0/24 network as the private subnet for VMotion.
a. The NetApp storage controller IP address range is from 192.168.0.2 through 192.168.0.10.
b. The vSphere NFS VMware kernel IP address range is 192.168.0.11 through 192.168.0.254.
c. The VMware VMotion-enabled VMware kernel IP address range is 192.168.1.11 through 192.168.1.254.

 

 

3.3               CONFIGURE NFS TRUNK

Table 6) Configure the NFS trunk on the NetApp storage controller.

Step Action
1 Connect to the NetApp storage controllers using System Manager.             Figure 2) System Manager trunk configuration.

 

 

Step Action
2 Select Next at the first Create VIF Wizard screen. Figure 3) System Manager Create VIF Configuration Wizard.
3 At the next screen, name the VIF, select the four 10GbE NICs, choose the LACP option, and select Next. Figure 4) System Manager VIF parameters.

 

 

Step Action
4 At the next screen, select IP based as the load balancing type and select Next. Figure 5) System Manager load balancing type.
5 At the VIF Interface Parameters screen, enter the IP address and the subnet mask and select Next. Figure 6) System Manager VIF interface parameters.

 

 

Step Action
6 At the final screen, select “Finish” to build the VIF. Figure 7) System Manager Create VIF Wizard completion.
7 Once this is done, verify that the VIF is enabled. The VIF created should appear as an entry similar to the one below. Figure 8) System Manager VIF created.

 

Note: Repeat these steps for the two remaining ports. Be sure that one NIC is on switch A and the other is on switch B. These ports will be used for CIFS and management traffic and should be set up using VLAN tagging.
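
For reference, the same LACP trunk can also be created from the storage controller console. This is a minimal sketch, assuming the four 10GbE ports are named e2a, e2b, e3a, and e3b and the VIF is called nfs_vif (names are illustrative; the commands would also need to be added to /etc/rc to persist across reboots):

vif create lacp nfs_vif -b ip e2a e2b e3a e3b
ifconfig nfs_vif 192.168.0.2 netmask 255.255.255.0 up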

 

3.4            OVERVIEW OF THE NETAPP STORAGE CONTROLLER DISK CONFIGURATION

The figure below shows the disk layout for both of the NetApp storage controllers. To meet the performance and capacity needs of this configuration, each controller has one aggregate (aggr0, used for the root volume and for hosting production virtual machines) with the required number of spindles and enough spare disks that can easily be added to the aggregate later to deal with unknowns.

 

 

Figure 9) NetApp storage controller disk configuration.

 

 

 

3.5            OVERVIEW OF THE LOGICAL STORAGE CONFIGURATION

The figure below shows the logical storage layout for the 2,000-seat configuration:

Controller A hosts 1,000 virtual machines created using NetApp RCU 3.0 and is part of a manual desktop pool, with 500 in persistent access mode and 500 in nonpersistent access mode.

Controller B hosts 1,000 virtual machines created using VMware linked clones and is part of an automated desktop pool, with 500 in persistent access mode and 500 in nonpersistent access mode.

The virtual machine swap file (vswap) datastore on storage controller A hosts the virtual machine swap file for all 2,000 virtual machines. The assumption is that the backup of the OS disk is not in the scope of the project for phase 1 of the deployment but might be in phase 2.

Controller B hosts the CIFS share for storing the user data for all 1,000 NetApp RCU 3.0–created virtual machines and also the 500 virtual machines created using VMware linked clones, in nonpersistent access mode. For the 500 virtual machines created using linked clones in persistent access mode, the user data will be hosted on a second datastore.

 

 

Figure 10) NetApp storage controller logical storage configuration.

 

 

FAS Controller A (1,000 NetApp RCU Persistent Desktops)

 

Table 7) NetApp FAS controller A configuration.

VDI Infrastructure Component Number
Total volumes on FAS controller A 8 (including root volume)
FlexClone gold volume 1
FlexClone volumes 4
Volume for virtual machine swap file (vswap) datastore 1
Volume to host template virtual machine (to be used as the source for creating all the NetApp RCU 3.0–based virtual machines) 1

 

FAS Controller B (1,000 Nonpersistent VMware Linked Clones)

Table 8) NetApp FAS controller B configuration.

VDI Infrastructure Component Number

 

 

Total volumes on FAS controller B 9 (including root volume)
FlexClone gold volume 1
FlexClone volumes 2
Volume for hosting linked clone parent virtual machine 1
Volume for hosting OS disk for linked clone virtual machines in persistent access mode 1
Volume for hosting user data disk for linked clone virtual machines in persistent access mode 1
Volume for hosting OS disk for linked clone virtual machines in nonpersistent access mode 1
Volume for hosting CIFS user data 1

 

 

3.6            CONFIGURE NETAPP STORAGE CONTROLLERS’ SSH CONFIGURATION

For both storage controllers, perform the following steps:

Table 9) Configuring SSH.

Step Action
1 Connect to the NetApp storage controller's console (via either SSH, telnet, or a serial console connection).
2 Execute the following commands and follow the setup script:
secureadmin setup ssh
options ssh.enable on
options ssh2.enable on

 

3.7            CONFIGURE FLEXSCALE FOR PERFORMANCE ACCELERATION MODULE (PAM)

The Performance Acceleration Module is an intelligent read cache that reduces storage latency and increases I/O throughput by optimizing performance of random read intensive workloads. As a result, disk performance is increased and the amount of storage needed is decreased.

For both storage controllers, perform the following steps:

Table 10) FlexScale configuration.

Step Action
1 Connect to the NetApp storage controller's console (via either SSH, telnet, or a serial console connection).
2 To enable and configure FlexScale™, run the following commands:
options flexscale.enable on
options flexscale.normal_data_blocks on
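
Once FlexScale is enabled, a quick way to see how much read traffic the PAM is absorbing is the standard statistics preset below (a hedged example; the available counters and output columns vary by Data ONTAP release):

stats show -p flexscale-access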

 

3.8          CONFIGURE VIRTUAL MACHINE DATASTORE AGGREGATE

For both storage controllers, perform the following steps:

Table 11) Creating the VMware aggregate.

Step Action
1 Open NetApp System Manager and click Aggregates.

 

 

Figure 11) System Manager Aggregate Wizard.
2 Right-click aggr0 and then click Edit. Figure 12) System Manager Aggregate—Edit.
3 Select 16 disks from the Disk details screen and move them from Available spare disks to Disks in aggregate. Select Next. Figure 13) System Manager Aggregate—disk details.
4 Select OK. The disks will then be added to aggr0. This process can take some time, so be patient here.

 

 

 

 

3.9            MODIFY THE AGGREGATE SNAPSHOT RESERVE FOR THE VMWARE VIEW_PRODUCTION AGGREGATE

For both storage controllers, perform the following steps:

Table 12) Modify aggregate Snapshot reserve.

Step Action
1 Connect to the controller's console, using either SSH, telnet, or a serial console.
2 Set the aggregate Snapshot™ schedule: snap sched -A <aggregate-name> 0 0 0
3 Set the aggregate Snapshot reserve: snap reserve -A <aggregate-name> 0
4 To delete existing Snapshot copies, type snap list -A <aggregate-name> to list them, and then type: snap delete -A <aggregate-name> <snap-name>
5 To log out of the NetApp console, type CTRL+D.
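
Filled in with concrete names purely as an example (here the aggregate is assumed to be aggr0 and the Snapshot copy nightly.0; substitute the names reported by snap list on your system), the session looks roughly like this:

snap sched -A aggr0 0 0 0
snap reserve -A aggr0 0
snap list -A aggr0
snap delete -A aggr0 nightly.0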

 

4           NETAPP STORAGE SETUP USING RCU 3.0

Some of the steps below can be performed using either RCU 3.0 from inside the vCenter Server or System Manager on controller A of the NetApp FAS system; perform them carefully and completely, as mistakes could result in inconsistencies and performance problems in your environment. Note that creation of the gold datastore on controller B is not required, because RCU 3.0 uses the template virtual machine in the template datastore as the basis to create the gold datastore on controller B as well.

 

4.1          CREATE A VOLUME TO HOST THE TEMPLATE VIRTUAL MACHINE

Table 13) Create the virtual machine template volume.

Step Action
1 To provision datastores across multiple ESX hosts in a datacenter, right-click on a datacenter in vCenter, select NetApp, and then select Provision datastores. Figure 14) RCU 3.0 datastore provisioning.
2 At the next screen, select the storage controller you would like to deploy the datastore to. Figure 15) RCU 3.0 datastore provisioning—storage controller selection.
3 Complete the Wizard using the following:

 

Make the size of the volume 50GB.
Name the volume rcu_gold.
Place the rcu_gold volume on the View_Production aggregate.
Enable thin provisioning.
Enable auto-grow:
o Enter a Grow increment of 5.
o Enter a Maximum datastore size of 1200.
Select Next when all information is entered. Figure 16) RCU 3.0 datastore provisioning—datastore configuration.
4 At the following screen, verify that all information is correct and select Apply. Figure 17) RCU 3.0 datastore provisioning—completion.

 

4.2            CONFIGURE SNAPSHOT COPIES AND OPTIMAL PERFORMANCE

Perform this step for the volume hosting the template virtual machine.

Table 14) Configure Snapshot autodelete for volumes.

Step Action
1 Log into System Manager. Figure 18) System Manager Snapshot copies and performance.
To configure Snapshot copies, highlight the rcu_gold volume, click on Snapshot, and then select Configure. Figure 19) System Manager Snapshot copies and performance—configure Snapshot copies.
2 Set the Snapshot reserve percentage to 0 and uncheck the “Enable scheduled snapshots” option. Select Apply and then OK to return to the System Manager main screen. Figure 20) System Manager Snapshot copies and performance—configure Snapshot copies continued.
3 To set optimal performance, highlight the rcu_gold directory, right-click on the directory, and select Edit from the drop-down list.

 

 

Figure 21) System Manager Snapshot copies and performance—configure performance.
4 Click on the Auto Size tab and ensure that both the “Allow volume to grow automatically” and “Delete snapshots automatically” boxes are checked. Then click Apply. Figure 22) System Manager Snapshot copies and performance—configure auto grow.
5 Select the Advanced tab. Ensure that the “No access time updates” option is checked. Also ensure that the “No automatic Snapshot copy” box is checked. Once this is complete, click Apply and then OK to return to the main System Manager screen.

 

 

 

4.3            STORAGE CONTROLLER “A” ADDITIONAL SETUP AND CONFIGURATION

 

CREATE THE VOLUME TO HOST VIRTUAL MACHINE SWAP FILES

Table 15) Create the view_swap volume.

Step Action
1 In vCenter, right-click on a vSphere host, select NetApp, and then select Provision datastores.
2 At the next screen, select the storage controller you would like to deploy the datastore to.
3 Complete the Wizard using the following:
Make the size of the volume 1100GB.
Name the volume view_swap.
Place the view_swap volume on the View_Production aggregate.
Enable thin provisioning.
Enable auto-grow:
o Enter a Grow increment of 5.
o Enter a Maximum datastore size of 1200.
Select Next when all information is entered.
4 At the following screen, verify that all information is correct and select Apply.
5 For a visual reference for the directions above, please refer to table xxx.

 

CONFIGURE THE VOLUME

Table 16) NFS volume configurations.

Step Action
1 Log into System Manager.

 

 

2 To configure Snapshot copies, highlight the view_swap volume, click on Snapshot, and then select Configure.
3 Set the Snapshot reserve percentage to 0 and uncheck the “Enable scheduled snapshots” option. Select Apply and then OK to return to the System Manager main screen.
4 To set optimal performance, highlight the view_swap directory, right-click on the directory, and select Edit from the drop-down list.
5 Click on the Auto Size tab and ensure that both the “Allow volume to grow automatically” and “Delete snapshots automatically” boxes are checked. Then click Apply.
6 Select the Advanced tab. Ensure that the “No access time updates” option is checked. Also ensure that the “No automatic Snapshot copy” box is checked. Once this is complete, click Apply and then OK to return to the main System Manager screen.
7 For a visual reference for the directions above, please refer to table 14.

 

5           STORAGE CONTROLLER “B” SETUP AND CONFIGURATION

5.1            CREATE THE VOLUMES FOR HOSTING LINKED CLONES AND CIFS USER DATA

CREATE VOLUME TO HOST OS DATA DISKS IN PERSISTENT ACCESS MODE

Table 17) Create the view_lcp volume.

Step Action
1 Open NetApp System Manager.
2 Select Volumes and then click on Create. Figure 24) System Manager—volume select.
3 On the Details tab enter the following:
Make the size of the volume 1300GB.
Name the volume view_lcp.
Select Storage type as NAS.
Place the view_lcp volume on the View_Production aggregate.
Set the Total volume size to 1300.
Set the Snapshot reserve to 0.
Figure 25) System Manager—volume details configuration.
4 Click on the Space Settings tab. Ensure Deduplication is set to Enable and that the Guarantee is set to None. Once this is done, click on Create. The main System Manager screen will appear.

 

 

Figure 26) System Manager—volume space settings configuration.
5 Highlight the newly created volume, right-click on it, and select Edit from the drop-down list. Figure 27) System Manager—volume deduplication configuration start.
6 Click on the Deduplication tab and set the deduplication schedule according to your business needs.

 

 

Figure 28) System Manager—volume deduplication configuration.
7 Click on the Auto Size tab and ensure that both the Volume autogrow and Snapshot autodelete boxes are checked. Figure 29) System Manager—volume autosize configuration.
8 Click on the Advanced tab and ensure that No access time updates and No automatic Snapshot copy are selected.

 

 

Figure 30) System Manager—volume advanced configuration.
9 Click on Apply, then click OK to be returned to the System Manager home screen.
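
For those who prefer the console, a rough 7-Mode equivalent of the volume settings above is sketched below (the exact options that System Manager applies may differ slightly; the aggregate and volume names are the ones used in this section):

vol create view_lcp -s none View_Production 1300g
sis on /vol/view_lcp
snap reserve view_lcp 0
vol autosize view_lcp on
snap autodelete view_lcp on
vol options view_lcp no_atime_update on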

 

CREATE VOLUME TO HOST USER DATA DISKS IN PERSISTENT ACCESS MODE

Table 18) Create the linked clones volume for host user data.

Step Action
1 Open NetApp System Manager.
2 The volume should be created using the following information. Complete the Wizard using the following:
Name the volume view_lcp_userdata.
Select Storage type as NAS.
Place the view_lcp_userdata volume on the View_Production aggregate.
Set the Total volume size to 250.
Set the Snapshot reserve to 0.
3 Please set the deduplication, autosize, and advanced settings as detailed in the steps above.

 

CREATE VOLUME TO HOST OS DATA DISKS IN NONPERSISTENT ACCESS MODE

Table 19) Create the linked clones host OS data disk volume.

Step Action
1 Open NetApp System Manager.
2 Complete the Wizard using the following:
Make the size of the volume 700GB.
Name the volume view_lcnp.
Select Storage type as NAS.
Place the view_lcnp volume on the View_Production aggregate.
Set the Total volume size to 700GB.
Set the Snapshot reserve to 0.

 

 

3 Please set the deduplication, autosize, and advanced settings as detailed in the steps above.

 

CREATE THE VOLUME TO HOST CIFS USER DATA

This volume will be used for hosting CIFS user data for virtual machines provisioned using NetApp RCU and linked clones in nonpersistent access mode.

Table 20) Create the CIFS volume to host user data.

Step Action
1 In System Manager, select Volumes.
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard using the following:
Name the volume view_cifs.
Select Storage type as NAS.
Place the view_cifs volume on the View_Production aggregate.
Set the Total volume size to 1750.
Set the Snapshot reserve to 20%.
5 Please set the deduplication, autosize, and advanced settings as detailed in the steps above.

 

5.2            DISABLE THE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO

For all the volumes configured above on controller B that contain VMs (and NOT for the CIFS volume), do the following:

Table 21) Disable default Snapshot schedule and set snap reserve to zero.

Step Action
1 Log into the NetApp console. Figure 31) System Manager—volume deduplication configuration.

 

 

Step Action
2 Set the volume Snapshot schedule for the volumes created above by doing the following: ensure that the Snapshot reserve for each volume is set to 0, and uncheck Enable scheduled snapshots. Figure 32) System Manager—configure volume Snapshot copies for the view_lcp volume.
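
The same two settings can also be applied per volume from the console; a minimal sketch for one of the volumes above (repeat for each VM volume on controller B, but not for view_cifs):

snap sched view_lcp 0 0 0
snap reserve view_lcp 0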

 

5.3            CONFIGURE OPTIMAL PERFORMANCE FOR VMDKS ON NFS

For all the volumes with NFS exports configured above for controller B, do the following:

Table 22) Set optimal performance for VMDKs on NFS.

Step Action
1 Log in to the NetApp console.
2 From the storage appliance console, run options nfs.tcp.recvwindowsize 64240.

 

 

 

6           VMWARE VSPHERE HOST SETUP

 

6.1            PHYSICAL SERVER CONFIGURATION

Below are the server specifications that were used for this configuration. You might have different servers with different configurations.

Table 23) vSphere host configuration.

Server Component Number or Type
VMware vSphere host 16
Memory per vSphere host 96GB

 

 

Server Component Number or Type
CPUs per vSphere host 2 Intel® Nehalem quad-core CPUs
Network interface cards (NICs) per vSphere host 2

 

6.2            LICENSES NEEDED

Table 24) vSphere licenses needed per 2,000-seat installation.

VMware View Infrastructure Component Number
vSphere Server licenses (1 license needed per 2 CPUs) 32
VMware vCenter Server Licenses 1
VMware View Enterprise Licenses 1,000
VMware View Premier Licenses 1,000
Windows XP licenses 2,000

 

6.3            INSTALL VSPHERE

For information on the installation and configuration of vSphere, refer to the ESX and vCenter Server Installation Guide published by VMware.

Below are guidelines used for this environment when deploying the VMware View infrastructure.

Table 25) VMware View infrastructure components.

VMware View Infrastructure Component Number
Virtual machine per vSphere server 125
Virtual machine per CPU core 15.625
Memory per Windows XP VMware View desktop 512MB
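
These densities follow directly from the hardware above: 2,000 desktops across 16 hosts is 2,000 / 16 = 125 virtual machines per host, and with two quad-core CPUs (8 cores) per host that is 125 / 8 = 15.625 virtual machines per core. At 512MB each, 125 desktops nominally claim about 62.5GB of the 96GB per host, leaving room for the hypervisor and virtualization overhead, with transparent page sharing typically reducing the real footprint further.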

 

6.4            INSTALL VMWARE VCENTER SERVER

For information on the installation and configuration of VMware vCenter Server refer to the ESX and vCenter Server Installation Guide published by VMware.

To obtain licenses for VMware, contact your VMware sales representative.

 

6.5            CONFIGURE SERVICE CONSOLE FOR REDUNDANCY

Table 26) Configure service console for redundancy.

Step Action
1 Make sure that the primary Service Console vSwitch has two NICs assigned to it.Note: The network ports that the NICs use must exist on the administrative VLAN and be on separate switches to provide network redundancy.
2 Open VMware vCenter.
3 Select a vSphere host.
4 In the right pane, select the Configuration tab.  

 

 

Step Action
Figure 33) VMware configuration.
5 In the Hardware box under the Configuration tab, select Networking. Figure 34) VMware networking.
6 In the Networking section, click the Properties section of vSwitch1. Figure 35) VMware networking properties.
7 In the Properties section, click the Network Adapters tab. Figure 36) VMware vSwitch configuration.

 

 

Step Action
8 Click Add at the bottom (pictured above) and select the vmnic that will act as the secondary NIC for the service console. Figure 37) Adding second vmnic to the vSwitch.
9 Click Next (pictured above). At the following screen, verify the settings and click Next, then click Finish, and finally click Close.

 

 

Step Action
Figure 38) Adding second vmnic to the vSwitch confirmation. 

 

 

Figure 39) Adding second vmnic to the vSwitch finish.

 

 

Step Action
Figure 40) Adding second vmnic to the vSwitch close.

 

6.6            CONFIGURE VMWARE KERNEL NFS PORT

Table 27) Configure VMware kernel NFS port.

Step Action
1 For each vSphere host, create a separate NFS VMkernel network in the existing virtual switch. The VMkernel port will be set up on the private, nonroutable NFS VLAN created in previous steps. This VLAN can be created either on the separate VDC on the Nexus 7000 or on a private, nonroutable VLAN using a vPC on the Nexus 5020 network. For this example, VLAN 350 is used. Note: Currently, VDC is not supported on Cisco Nexus 5000 switches.
2 Use the following assignments for your NFS storage traffic VMware kernel IP addresses. Note: For the storage network the private subnet of 192.168.0.xxx is being used.
vSphere Host 1: 192.168.0.11
vSphere Host 2: 192.168.0.12
vSphere Host 3: 192.168.0.13
vSphere Host 4: 192.168.0.14
vSphere Host 5: 192.168.0.15
vSphere Host 6: 192.168.0.16
vSphere Host 7: 192.168.0.17
vSphere Host 8: 192.168.0.18
vSphere Host 9: 192.168.0.19
vSphere Host 10: 192.168.0.20
vSphere Host 11: 192.168.0.21
vSphere Host 12: 192.168.0.22
vSphere Host 13: 192.168.0.23
vSphere Host 14: 192.168.0.24
vSphere Host 15: 192.168.0.25
vSphere Host 16: 192.168.0.26

 

 

4. For the vSwitch for the NFS VMware kernel, set the load balancing policy to “Route based on IP hash.” Figure 41) vSphere host NFS load balancing configuration.
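
On classic ESX 4.0 the same VMkernel port can also be created from the service console. This is a minimal sketch for host 1, assuming the NFS port group lives on vSwitch1 and uses VLAN 350 as in the example above (the port group and vSwitch names are illustrative):

esxcfg-vswitch -A NFS vSwitch1                            # add the NFS port group to vSwitch1
esxcfg-vswitch -v 350 -p NFS vSwitch1                     # tag the port group with VLAN 350
esxcfg-vmknic -a -i 192.168.0.11 -n 255.255.255.0 NFS     # create the VMkernel interface on it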

 

6.7            CONFIGURE VMOTION

Table 28) Configure VMotion.

Step Action
1 For each vSphere host, create a separate VMotion VMkernel network in the existing virtual switch. The VMkernel port will be set up on the private, nonroutable VMotion VLAN created in previous steps. For this example, VLAN 350 is used.
2 Use the following assignments for your VMotion VMware kernel IP addresses. Note: For the VMotion network the private subnet of 192.168.1.xxx is being used.
vSphere Host 1: 192.168.1.11
vSphere Host 2: 192.168.1.12
vSphere Host 3: 192.168.1.13
vSphere Host 4: 192.168.1.14
vSphere Host 5: 192.168.1.15
vSphere Host 6: 192.168.1.16
vSphere Host 7: 192.168.1.17
vSphere Host 8: 192.168.1.18
vSphere Host 9: 192.168.1.19
vSphere Host 10: 192.168.1.20
vSphere Host 11: 192.168.1.21
vSphere Host 12: 192.168.1.22
vSphere Host 13: 192.168.1.23
vSphere Host 14: 192.168.1.24
vSphere Host 15: 192.168.1.25
vSphere Host 16: 192.168.1.26

 

6.8            VMWARE VSPHERE HOST NETWORK CONFIGURATION

Depicted below is the way a fully configured network environment will look once all the networking steps above have been completed.

Figure 42) VMware vSphere host configuration example.

 

 

6.9            ADD TEMPLATE VIRTUAL MACHINE DATASTORE TO VSPHERE HOST

Table 29) Add template virtual machine datastore to vSphere hosts.

Step Action
1 Open VMware vCenter.
2 Select a vSphere host.
3 In the right pane, select the Configuration tab. Figure 43) VMware configuration.
4 In the Hardware box, select the Storage link.

 

 

Figure 44) VMware virtual machine swap location.
5 In the upper-right corner, click Add Storage to open the Add Storage Wizard. Figure 45) VMware Add Storage.
6 Select the Network File System radio button and click Next. Figure 46) VMware Add Storage Wizard.
7 Enter a name for the storage controller, export, and datastore (view_rcu_template), and then click Next.

 

 

Figure 47) VMware Add Storage Wizard NFS configuration.
8 Click Finish. Figure 48) VMware Add Storage Wizard finish.

 

6.10         ADD VIEW_SWAP DATASTORE TO VSPHERE HOST

Table 30) Add view_swap datastore to vSphere hosts.

Step Action
1 Open vCenter.
2 Select a VMware vSphere host.
3 In the right pane, select the Configuration tab.
4 In the Hardware box, select the Storage link.
5 In the upper-right corner, click Add Storage to open the Add Storage Wizard.
6 Select the Network File System radio button and click Next.
7 Enter a name for the storage controller, export, and datastore (view_swap), then click Next.
8 Click Finish.
9 Repeat this procedure for all the vSphere hosts.

 

 

6.11         CONFIGURE LOCATION OF VIRTUAL SWAPFILE DATASTORE

Table 31) Configure location of datastore virtual swap file.

Step Action
1 Open VMware vCenter.
2 Select a vSphere host.
3 In the right pane, select the Configuration tab. Figure 49) VMware configuration.
4 In the Software box, select Virtual Machine Swapfile Location.  Figure 50) VMware virtual machine swap location.
5 In the right pane, select Edit.
6 The virtual machine Swapfile Location Wizard will open.

 

 

7 Click view_swap datastore and select OK.
8 Repeat steps 2 through 7 for each vSphere host in the vSphere cluster.

 

 

7           CONFIGURING THE ESX ENVIRONMENT WITH THE VSC

 

Step Action
1 Open VMware vCenter.
2 Click on the NetApp tab found in VMware vCenter. Figure 51) NetApp tab.
3 The Virtual Storage Console (VSC) should now be visible. A screen similar to the image below should be visible. Figure 52) VSC configuration.
4 Set the Recommended Values by right-clicking on the ESX host and selecting “Set Recommended Values.” Figure 53) VSC configuration—set recommended values.
5 The NetApp Recommended Settings screen should be visible. Leave the defaults checked and select OK. This will begin making the necessary changes to the ESX host.

 

 

Figure 54) VSC configuration—NetApp recommended settings.
6 Once the settings have been changed, the main VSC screen will be visible once again. The status will change to “Pending Reboot.” Figure 55) VSC configuration—recommended values set.
7 Please reboot the ESX host to finish the configuration changes.

 

 

8           SET UP VMWARE VIEW MANAGER 4.0 AND VMWARE VIEW COMPOSER

VMware View Manager is a key component of VMware View and is an enterprise-class desktop management solution that streamlines the management, provisioning, and deployment of virtual desktops. This product provides security and configuration for the VMware View environment and allows an administrator to determine exactly which virtual machines a user may access.

View Composer is a component of the VMware View solution and uses VMware linked clone technology to rapidly create desktop images that share virtual disks with a master image to conserve disk space and streamline management.

For setup and configuration details for the different components of VMware View Manager and View Composer, refer to the VMware View Manager Administration Guide.

 

9           SET UP AND CONFIGURE WINDOWS XP GOLD IMAGE

 

9.1            CREATE A VIRTUAL MACHINE IN VMWARE VSPHERE

For the purposes of this portion of the document, follow whatever guidelines you have for both virtual machine size and RAM for your Windows XP virtual machine. For the purposes of this implementation we use 512MB RAM (VMware guidelines for RAM are between 256MB for low end and 512MB for high end). Follow the Guest Operating System Installation Guide by VMware, starting on page 145. Be sure to name this Windows XP virtual machine windows_xp_gold.

 

9.2            FORMAT THE VIRTUAL MACHINE WITH THE CORRECT STARTING PARTITION OFFSETS

 

To set up the starting offset using the fdisk command found in vSphere, follow the steps detailed below:

 

Table 32) Format a virtual machine with the correct starting offsets.

Step Action
1 Log in to the vSphere Service Console.
2 Change to the virtual machine directory and view its contents by typing the following commands (shown below):
cd /vmfs/volumes/vdi_gold/windows_xp_gold
ls -l
Figure 56) Using FDisk for setting offset—navigate to .vmdk directory.
3 Get the number of cylinders from the vdisk descriptor by typing the following command (this number will be different depending on several factors involved with the creation of your .vmdk file): cat windows_xp_gold.vmdk Figure 57) Using FDisk for setting offset—find cylinders of the vDisk.
4 Run fdisk on the windows_xp_gold-flat.vmdk file by typing the following command:fdisk ./windows_xp_gold-flat.vmdk

 

 

Figure 58) Using FDisk for setting offset—starting FDisk.
5 Set the number of cylinders.
6 Type in x and then press Enter.
7 Enter c and press Enter.
8 Type in the number of cylinders that you found from doing step 3. Figure 59) Using FDisk for setting offset—set the number of cylinders.
9 Type p at the Expert command screen to look at the partition table (which should be blank). Figure 60) Using FDisk for setting offset—set view partition information.
10 Return to regular (nonextended) command mode by typing r at the prompt. Figure 61) Using FDisk for setting offset—set cylinder information.

 

 

11 Create a new partition by typing n and then p when you are asked which type of partition.
12 Enter 1 for the partition number, enter 1 for the first cylinder, and press Enter for the last cylinder question to make it use the default value.
13 Go into extended mode to set the starting offset by typing x.
14 Set the starting offset by typing b and pressing Enter, selecting 1 for the partition and pressing Enter, and entering 64 and pressing Enter.
15 Check the partition table by typing p. Figure 62) Using FDisk for setting offset—view partition table to verify changes.
16 Type r to return to the regular menu.
17 To set the system type to HPFS/NTFS, type t.
18 For the Hex code, type 7. Figure 63) Using FDisk for setting offset—set system type and hex code.
19 Save and write the partition by typing w. Ignore the warning, which is normal. Figure 64) Using FDisk for setting offset—save and write the partition.
20 Start the virtual machine and run the Windows setup. Make sure to press Esc to bring up the boot menu and select “CD-ROM Drive” to boot from CD.

 

 

Figure 65) Using FDisk for setting offset—VMware boot screen. If you miss the boot menu, the VM may appear to hang with a black screen with only a blinking cursor. Press Ctrl-Alt-Insert to reboot the VM and try again to catch the boot menu by pressing Esc. If you have trouble catching the boot process above, you can insert a boot delay in the VM settings: in the VI Client, right-click the VM, then Edit Settings > Options > Advanced / Boot Options. Figure 66) Using FDisk for setting offset—advanced boot options. Note that the boot delay is in milliseconds. You should return the boot delay to 0 after the VM boots normally from its virtual disk.
21 When the installation gets to the partition screen, install on the existing partition. DO NOT DESTROY or RECREATE! C: should already be highlighted. Press Enter at this stage.
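
Once Windows is installed, the alignment can be checked from inside the guest: a starting offset of 64 sectors corresponds to 64 × 512 = 32,768 bytes, so the standard Windows XP command below should report a StartingOffset of 32768 for the system partition (shown here only as a verification hint):

wmic partition get Name, StartingOffset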

 

9.3            DOWNLOAD AND PREPARE THE LSI 53C1030 DRIVER

Table 33) Download and prepare LSI 53C1030 driver.

Step Action
1 Download the LSI 53C1030 driver from http://www.rtfm-ed.co.uk/downloads/lsilogic.zip.
2 Using MagicISO or another third-party solution, create a .flp image containing LSI logic drivers. An alternative third-party solution is Virtual Floppy Drive 2.1.
3 Using VMware vCenter 4.0, upload the file to the desired datastore by performing the following steps:
a. At the Summary screen for a vSphere host, double-click the datastore icon to go into the Datastore Browser screen.

 

 

9.4            WINDOWS XP PREINSTALLATION CHECKLIST

Table 34) Windows XP preinstallation checklist.

Step Action
1 Be sure to have a Windows XP CD or ISO image that is accessible from the virtual machine.
2 Using the Virtual Infrastructure Client (VI Client), connect to VMware vCenter.
3 Locate the virtual machine that was initially created and verify the following by right-clicking the virtual machine and selecting Edit Settings:
a. A floppy drive is present.
b. The floppy drive is configured to connect at power on.
c. The device type is set to use a floppy image and is pointing to the LSI driver image.
d. A CD/DVD drive is present and configured to connect at power on.
e. A CD/DVD device type is configured to point at the Windows XP CD or ISO image.

Figure 70) Verify virtual machine settings for virtual floppy drive.

Figure 71) Verify virtual machine settings for virtual floppy drive.

 

9.5            INSTALL AND CONFIGURE WINDOWS XP

 

INSTALL WINDOWS XP

Table 35) Install Windows XP.

Step Action
1 Using the virtual infrastructure client, connect to VMware vCenter Server.
2 Right-click the virtual machine and select Open Console. This will allow you to send input and view the boot process.
3 Power on the virtual machine created earlier by clicking the green arrow icon at the top of the console screen (shown below). Figure 72) Power on button.
4 As the Windows setup process begins, press F6 when prompted to add an additional SCSI driver. Specify the LSI logic driver on the floppy image (.flp) at this stage.
5 Perform the installation of Windows XP as normal, selecting any specifics for your environment that need to be configured.
6 Because this is a template, keep the installation as generic as possible.
7 Enter a name for the storage appliance, export, and datastore (view_rcu_template), then click Next.
8 Click Finish.

 

 

CONFIGURE WINDOWS XP*

Table 36) Configure Windows XP.

Step Action
1 Install and configure the VMware tools.
2 If not applied to the installation CD, install the most recent service pack and the most recent Microsoft® updates.
3 Install the connection broker agent.
4 Set the Windows screen saver to blank.
5 Configure the default color setting for RDP by making the following change in the registry: under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp, change the color depth to 4.
6 Disable unused hardware.
7 Turn off theme enhancements.
8 Adjust the system for best performance by going to My Computer>Properties>Advanced Tab>Performance Section>Settings.
9 Set the blank screen saver to password protect on resume.
10 Enable hardware acceleration by going to Start>Control Panel>Display>Settings Tab>Advanced Button>Troubleshooting Tab.
11 Delete any hidden Windows update uninstalls.
12 Disable indexing services by going to Start>Control Panel>Add Remove Windows Components>Indexing Service.

 

 

Note: Indexing improves searches by cataloging files. For users who search a lot, indexing might be beneficial and should not be disabled.
13 Disable indexing of the C: drive by opening My Computer, right-clicking C:, and selecting Properties. Uncheck the options shown below: Figure 73) Uncheck to disable Indexing Service on C: drive.
14 Remove system restore points: Start>Control Panel>System>System Restore
15 Disable any unwanted services.
16 Run disk cleanup: My Computer>C: properties
17 Run disk defrag: My Computer>C: properties>Tools

*From Warren Ponder, Windows XP Deployment Guide (Palo Alto, CA: VMware, Inc., 2008), pp. 3–4.

 

 

DISABLING NTFS LAST ACCESS

Table 37) Disabling NTFS last access.

Step Action
1 Log in to the gold virtual machine.
2 Open a CMD window by going to start > run, enter cmd, and press Enter.

 

 

3 At the command line enter the following:
fsutil behavior set disablelastaccess 1

 

 

 

CHANGE DISK TIMEOUT VALUE

Table 38) Change disk timeout values.

Step Action
1 Log in to the gold VM.
2 Open a regedit by going to start > run, enter regedit, and press Enter.
3 Find the TimeOutValue by following the path [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk].
4 Change the key “TimeOutValue”=dword:00000190.
5 Reboot the virtual machine now or at the end of the installation of applications and general system settings.
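
The same change can be scripted instead of edited by hand; a minimal one-liner for a command prompt on the gold VM (0x190 hex is 400 decimal, i.e. a 400-second disk timeout):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 400 /f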

 

INSTALL APPLICATIONS

Install all the necessary infrastructure and business applications in the gold VM. A few examples include VMware View Agent (if planning to use VMware View Manager) to allow specific users or groups RDP access to the virtual machines, MS Office, antivirus scanning agent, Adobe Reader, and so on.

 

INSTALL VMWARE VIEW AGENT

Install VMware View Agent (if planning to use VMware View Manager) to allow specific users or groups RDP access to the virtual desktops.

 

POWER OFF VM AND CONVERT TO TEMPLATE

After performing all the template customizations and software installations, power off the virtual machine; it must be powered off before it can be converted to a template and deployed. Then convert the VM to a template. This reduces the risk of accidentally powering on the VM.

 

10       RAPID DEPLOYMENT OF WINDOWS XP VIRTUAL MACHINES IN A VMWARE VIEW ENVIRONMENT USING RCU 3.0

For detailed installation and configuration instructions for RCU 3.0, see the Rapid Cloning Utility 3.0 Installation and Administration Guide. NetApp highly recommends using RCU 3.0 because further steps in this guide use it to create and configure datastores and to deploy virtual machines from vCenter.

 

CREATE CUSTOMIZATION SPECIFICATION

Create a customization specification for use with deployment of the VMs. The customization specification creates the information necessary for sysprep to successfully customize a guest OS from the VMware vCenter Server. It includes information on hostname, network configuration, license information, domain membership, and other information necessary to customize a guest OS. This procedure can be found in the vSphere Basic System Administration Guide on page 180. This customization specification can be used by RCU to personalize each VM. In addition to creating the customization specification, sysprep will need to be downloaded and installed. Procedures to do this can be found in the vSphere Basic System Administration Guide on page 325.

 

DEPLOY SPACE-EFFICIENT CLONES USING RCU 3.0

Using the template virtual machine as the source virtual machine, create the virtual machines using RCU 3.0 in four datastores (250 virtual machines per datastore) on storage controller A in vSphere Cluster A with eight vSphere hosts. These virtual machines will be imported into VMware View Manager as part of a manual desktop pool, in persistent access mode.

RCU will perform the following steps:

  1. Create the clones with file FlexClone.
  2. Clone the datastores with volume FlexClone.
  3. Mount the NFS datastores to the vSphere hosts.
  4. Create the virtual machines from the cloned files.
  5. Customize the virtual machines using the customization specification.
  6. Power on the virtual machines.
  7. Import the virtual machines into VMware View Manager.

Table 39) Deploy space-efficient clones using RCU 3.0.

Step Action
1 Log in to the VMware vCenter Server using the vCenter Client.
2 Once the storage controllers have been added, click the Inventory button to get back to the servers and VMs. Right-click the VM to be cloned and select "Create NetApp Rapid Clones." Figure 74) RCU—Create rapid clones.
3 Choose the storage controller from the drop-down list and click Next. If the VMware VI Client is not running, select Advanced Options and enter the password for the vCenter Server. Figure 75) RCU—Select storage controller.
4 Select the data center, cluster, or server to provision the VMs to, select "Specify the virtual machine folder for the new clones" if necessary, and click Next. Figure 76) RCU—Select data center, cluster, or server.
5 Select the disk format to apply to the virtual machine clones and click Next. Figure 77) RCU—Select disk format.
6 Enter the number of clones, the clone name, the starting clone number, and the clone number increment. If guest customization is required, select the checkbox and choose the customization specification to apply after the VMs are provisioned. Choose whether the virtual machines should be powered on after the clones are created. If using VMware View, select "Import into connection broker" and choose "VMware View." Select "Create new datastores" if required, then click Next. Figure 78) RCU—Specify details of the virtual machine clones.
7 If no datastores are present, select Create NFS or VMFS datastore(s). Figure 79) RCU—Create and configure datastores.
8 Select the number of datastores to create. Provide the root of the datastore name, the size of the datastore in GB, and the aggregate to use for the virtual machines, and check the thin provisioning box if needed. For NFS-based datastores, an option to auto-grow the datastore appears; you can set the grow increment size and the maximum size and choose whether to provide specific datastore names. Then click Next. Figure 80) RCU—Create and configure datastores continued.
9 After datastore creation, RCU displays the datastore that was created. If necessary, you can create additional datastores at this time; then click Next. Figure 81) RCU—Create and configure datastores complete.
10 Select the datastore and click Next. Figure 82) RCU—Select the datastore.
11 If you selected "Import into connection broker," the wizard asks for the View Server hostname, the domain name of the View server, the username, and the password. You can then choose to create either an individual or a manual desktop pool and provide a new or existing pool name. For manual pools, the administrator can create either a persistent or a nonpersistent pool. After this is complete, click Next. Figure 83) RCU—Specify the details of the connection broker import.
12 Review the configuration and, if it is correct, click Apply. The provisioning process now begins. You can follow its progress in the Tasks window of the vCenter Client as well as on the NetApp storage controller console. Figure 84) RCU—Apply configuration.
13 After the virtual machines have been created, review the View Manager configuration and entitle users by logging in to the VMware View Administrator 4 interface. Figure 85) RCU—Entitle users in VMware View.
14 Select the pool to be entitled (in this case, the manual nonpersistent pool Helpdesk) and click Entitlements. Figure 86) RCU—Select the pool to be entitled in VMware View.
15 On the Entitlements screen, click Add. Figure 87) RCU—Open the entitlement screen in VMware View.
16 Select Users or Groups, enter a name or description to narrow the search, and click Find. Click the user(s) or group(s) to be entitled, then click OK. Figure 88) RCU—Select users and groups in VMware View.
17 Verify that the users and groups to be added are correct and click OK. Figure 89) RCU—Verify users and groups to be added in VMware View.
18 Verify that the pool is now Entitled and Enabled. Figure 90) RCU—Verify entitlement of pools in VMware View.
19 Adjust the pool settings by selecting the pool, clicking Edit, and clicking Next until you reach the Desktop/Pool Settings page. After adjusting the pool settings, click Finish. Note: The settings in this example are for demonstration purposes only; your settings may differ. Consult the View Administration Guide for more information. Figure 91) RCU—Adjust pool settings in VMware View.
20 Test the connection by logging in to a desktop using the View Client. Figure 92) RCU—Test the connection in VMware View.

 

 

 

RESIZE THE FLEXCLONE VOLUMES TO THE ESTIMATED SIZE

Using RCU's datastore resizing feature, resize the four FlexClone volumes created on storage controller A to 525GB, allowing for future growth based on the assumptions about new writes.
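If you prefer to resize the volumes from the storage controller console rather than through the RCU datastore resizing wizard, the standard Data ONTAP vol size command produces the same result. This is a minimal sketch; the prompt and the volume names are placeholders, so substitute the actual names of the four FlexClone volumes on storage controller A. Because these are NFS datastores, the new capacity should be reflected in vCenter after a storage refresh.

controllerA> vol size <flexclone volume 1> 525g
controllerA> vol size <flexclone volume 2> 525g
controllerA> vol size <flexclone volume 3> 525g
controllerA> vol size <flexclone volume 4> 525g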

 

Note: The architecture proposed in this deployment guide balances the 2,000 virtual machines across 2 vSphere clusters with 8 vSphere hosts per cluster (16 vSphere hosts in total), because VMware does not support more than eight vSphere hosts per cluster when using VMware View Composer/linked clones. For further details, refer to the View Composer Design Guide.

 

11       DEPLOY LINKED CLONES

This sample deployment has 1,000 virtual machines that are part of 2 automated desktop pools created using linked clones.

Pool 1: 500 virtual machines provisioned in persistent access mode, with OS data disks and user data disks hosted on the separate datastores created earlier.

Pool 2: 500 virtual machines provisioned in nonpersistent access mode, with one datastore (created earlier) hosting the OS data disks.

For provisioning the linked clone–based desktop pools and the associated virtual machines, refer to the procedure in the VMware View Manager Administration Guide.

 

12       ENTITLE USERS/GROUPS TO DESKTOP POOLS

The next step is to entitle users/groups to the various desktop pools created in VMware View Manager. Follow the instructions in the VMware View Manager Administration Guide. Finally, install the VMware View Client on every end-user access device (PCs, thin clients, and so on).

 

13       SET UP FLEXSHARE (OPTIONAL)

FlexShare is a Data ONTAP® software feature that provides workload prioritization for a storage system. It prioritizes processing resources for key services when the system is under heavy load. FlexShare does not provide guarantees on the availability of resources or on how long particular operations will take to complete. FlexShare provides a priority mechanism to give preferential treatment to higher-priority tasks.

FlexShare provides storage systems with the following key features:

  - Relative priority of different volumes
  - Per-volume user versus system priority
  - Per-volume cache policies

These features allow storage administrators to tune how the system prioritizes resources if the system becomes overloaded.

Because the configuration presented in this design guide uses a high-water mark of 80% CPU utilization for each storage controller in a cluster, it may be necessary to ensure that critical VMs remain responsive in the event of a failover. NetApp recommends setting priorities for volumes that contain especially critical VMs, where downtime caused by a storage controller failure could cause issues. By performing the following optional steps, critical VMs will not be affected by the performance degradation that could result from a storage controller takeover after a failure.

Table 40) Enable priority settings.

Step Action
1 To enable priority settings, log in to the storage controller console.
2 Enter the following command:
   3160-2> priority on
   Wed Feb 3 11:16:32 EST [wafl.priority.enable:info]: Priority scheduling is being enabled
   Priority scheduler starting.
3 To set the volume priority, enter the following command:
   3160-2*> priority set volume <volume name> level=High system=High cache=keep
4 To verify that the proper settings have been applied to the volume, enter the following command:
   3160-2*> priority show volume -v veabugold
   Volume: veabugold
   Enabled: on
   Level: High
   System: High
   Cache: keep
   User read limit: n/a
   Sys read limit: n/a
   NVLOG limit: n/a
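If the priorities need to be revisited later, the same priority command family can be used to inspect or back out the settings. This is a minimal sketch with a placeholder volume name; the text in parentheses describes each command and is not part of what you type. Check that your Data ONTAP release supports these subcommands before relying on them.

3160-2*> priority show volume <volume name>    (display the current FlexShare settings for the volume)
3160-2*> priority delete volume <volume name>  (remove the priority policy from the volume)
3160-2*> priority off                          (disable FlexShare entirely)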

 

14       TESTING AND VALIDATION OF THE VMWARE VIEW AND NETAPP STORAGE ENVIRONMENT

Below is a checklist designed to determine whether your environment is set up correctly. Run these tests as appropriate for your environment and document the results.

 

Table 41) Testing and validation steps.

Item Item Description
1 Test Ethernet connectivity for the VMware vSphere servers and the NetApp storage controllers. If using NIC teams or VIFs, pull network cables or take interfaces down and verify network functionality (see the connectivity sketch after this table).
2 If running in a cluster, test SAN multipathing by performing a cable pull or by disabling a switch port (if applicable).
3 Verify that datastores are seen as cluster-wide resources by creating a custom map of the hosts and datastores and verifying connectivity.
4 Test vCenter functionality for appropriate access control, authentication, and VI clients.
5 Perform NetApp cluster failover testing for NAS and verify that datastores remain connected.
6 Test performance and IOPS to determine whether the environment is behaving as expected.
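For item 1, a quick way to confirm that each vSphere host can still reach the NetApp controllers over the storage network after a cable pull is to ping the NFS and management interfaces from the host. This is a minimal sketch run from the ESX service console; the addresses are placeholders for your environment, and vmkping is used because it tests the VMkernel (storage) interface rather than the service console network.

vmkping <IP address of the NetApp NFS interface>
ping <IP address of the NetApp management interface>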

 

15       100,000-SEAT STORAGE SCALE-OUT IN 10,000-SEAT INCREMENTS

Below is a table detailing a FAS3160 HA pair storage scale-out from 10,000 to 100,000 seats, based on the base deployment scenario detailed above in section 1.2. Because configurations differ in each environment, the numbers will vary between implementations; the table represents this specific configuration and should be used only as a reference.

Table 42) Incremental scale-out to 100,000 seats.

# of Seats   # HA Pairs   # Disk Shelves   # Servers   # Nexus 5020s*
10,000 2 8 80 4
20,000 3 15 160 6
30,000 5 22 240 7
40,000 6 30 320 9
50,000 8 37 400 11
60,000 10 45 480 12
70,000 11 53 560 14
80,000 13 60 640 18
90,000 15 68 720 19
100,000 16 76 800 21

*This configuration assumes that two 6-port expansion modules have been added to each Nexus 5020.

 

16       REFERENCES

TR-3705: NetApp and VMware View Best Practices

TR-3749: NetApp and VMware vSphere Storage Best Practices

TR-3505: NetApp Deduplication for FAS Deployment and Implementation Guide

TR-3747: NetApp Best Practices for File System Alignment in Virtual Environments

ESX and vCenter Server Installation Guide

ESX Configuration Guide

vSphere Basic System Administration Guide

Guest Operating System Installation Guide

Getting Started with VMware View

VMware Infrastructure Documentation

Windows XP Deployment Guide

VMware View Manager Administration Guide

VMware View Reference Architecture Planning Guide

Cisco Nexus 7000 Series NX-OS Interfaces Configuration Guide, Release 4.1

Cisco Nexus 5000 Series Switch CLI Software Configuration Guide

 

 

17       ACKNOWLEDGEMENTS

The following people contributed to the creation and design of this guide:

Vaughn Stewart, Technical Marketing Engineer, NetApp

Larry Touchette, Technical Marketing Engineer, NetApp

Eric Forgette, Software Engineer, NetApp

George Costea, Software Engineer, NetApp

Peter Learmonth, Reference Architect, NetApp

David Klem, Reference Architect, NetApp

Wen Yu, Sr. Technical Alliance Manager, VMware

Fred Schimscheimer, Sr. Technical Marketing Manager, VMware

Ravi Venkat, Technical Marketing Engineer, Cisco

 

 

18       FEEDBACK

Send an e-mail to xdl-vgibutmevmtr@netapp.com with questions or comments concerning this document.

 


19       VERSION HISTORY

Table 43) Version history.

Version Date Document Version History
Version 1.0 May 2009 Original document
Version 2.0 February 2010 Updates to network configuration. RCU 3.0 and System Manager added.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.


