HPE StoreOnce 5100 48TB Storage


Data growth in an organization leads to longer backup windows, additional problems at the service-delivery level, and wasted hardware and administrative resources. The StoreOnce appliance offers a professional backup architecture for enterprise environments, one that has moved far beyond the old linear backup designs. The system can reduce backup volumes by up to 95 percent and is available in models ranging from small deployments to enterprise scale.
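To put the reduction figure in perspective (an illustrative calculation, not a vendor benchmark): a 95 percent reduction corresponds to roughly a 20:1 deduplication ratio, so a 100 TB backup set would occupy about 5 TB on disk.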

Adequate speed for backup and restore in production environments means that backup jobs must not disrupt the organization's overall service delivery, and restore operations must not be so slow that they jeopardize SLA commitments.

The new HPE StoreOnce appliances provide deduplication, and you can apply deduplication across the entire backup application environment.

The system offers the highest degree of compatibility and integration with the well-known backup applications, and the appliance can integrate fully with your SAN, virtual, and Ethernet networks, reducing network cost and complexity.

System capabilities

1- Modular design for use in enterprise environments

The modular design lets the user decide at any moment based on current needs and makes future upgrades very easy.

2- Design driven by customer needs, from ordinary customers to large data centers

3- StoreOnce Enterprise Manager monitoring

4- Ability to store huge volumes of data, up to 1720 TB on the StoreOnce 6500

5- Suitable for typical midsize environments with the 3520, 3540, 4900, and 5100 models

For small environments and remote offices, the StoreOnce VSA is the best choice; it is available with 4, 10, and 50 TB licenses. For entry-level use, the StoreOnce 3100 can be used, with 8 TB of total capacity and 5.5 TB usable.

The StoreOnce appliance can perform deduplication with a range of physical and virtual devices. In addition, it can perform deduplication together with the backup server and with StoreServ arrays.

HPE StoreOnce is a comprehensive solution for moving data between enterprise environments, and its virtual backup appliances are very cost-effective for small environments.

Backups to the appliance are performed by software such as HP Data Protector, Symantec NetBackup, and Backup Exec, and through integrations such as OST, Veeam, and BridgeHead.

The appliance also supports critical applications such as Oracle RMAN, SAP HANA, and MS SQL without any additional components.

The StoreOnce 6500 delivers backup speeds of up to 139 TB per hour.

The StoreOnce 4900 delivers backup speeds of up to 22 TB per hour.

The StoreOnce 5100 delivers backup speeds of up to 26 TB per hour.

The StoreOnce 3540 delivers backup speeds of up to 13 TB per hour.

The StoreOnce 3100 delivers backup speeds of up to 6.4 TB per hour.

A system configured with StoreOnce VSA delivers backup speeds of up to 6 TB per hour.

StoreOnce appliances offer restore speeds of up to approximately 75 TB per hour.

All backup and disaster recovery operations are performed automatically and efficiently through a single console, StoreOnce Catalyst, and when the link is Fibre Channel the appliance uses its FC fabric.

One of the key capabilities of the appliance is one-to-many disaster recovery, which can replicate data from one site to several other sites simultaneously.

All reports and system status information are available to the user in the StoreOnce Enterprise Manager software.

 

Technical specifications

Capacity
  • 48TB raw
  • 36TB usable
  • Expandable up to 288 TB
Drive type
  • 12 x LFF SAS drives
Host interfaces
  • GbE
  • Fibre Channel
  • 1Gb Ethernet
Transfer rate
  • 26.7 TB/hr
  • Maximum using StoreOnce Catalyst
Virtual appliance
  • No
Deduplication
  • HP StoreOnce deduplication
Number of VTLs and NAS targets
  • 32
Maximum capacity when used as a tape library
  • 32768
Maximum number of connected sources
  • 32
Tape drive emulation
  • HPE LTO-2/LTO-3/LTO-4/LTO-5/LTO-6 Ultrium Tape Drives in MSL2024 Tape Library, MSL4048 Tape Library, HPE D2D generic library with HPE D2D generic tape library
Backup protocols
  • StoreOnce Catalyst, NAS (CIFS/NFS), and iSCSI/FC VTL
Storage expansion options
  • StoreOnce 5100 48TB Capacity Upgrade kit
RAID support
  • RAID 6
Replication support
Form factor
  • 2U to 12U

What's in the box
  • StoreOnce 5100 48 TB System, (12) 4TB disks, (2) Ethernet cables (Cat 5e) 3m, (2) Power cable (IEC 320 C13 Connector for Rack PDU), (1) Rack rail kit, Installation poster.

 

The V-Center technical and engineering group supplies the full range of HP equipment and services to its customers.

Drawing on experienced specialists, the group is ready to provide technical consulting for equipment purchases.

Phone: 88884268

 

 

 

 



Data storage methods

Today, one of the most important factors in the design of enterprise networks is the data storage method. Growing data volumes, data retrieval, and the security of stored data are among the biggest challenges. This article reviews three different technologies.

Note: this article covers basic, introductory concepts at a theoretical level.

DAS – Direct-Attached Storage


DAS refers to a setup in which storage is attached directly to a server or workstation. By that definition, an ordinary hard disk connected to a system through one of the common interfaces (SATA, IDE, SCSI, or SAS) is a form of DAS. In common usage, however, DAS refers to a set of internal or external hard disks attached to a single server. The defining characteristic of DAS is that the storage is available to only one system. DAS is a cost-effective solution for servers that need fast access to data but store only a small amount of it; DHCP, DNS, WINS, and domain controllers are typical examples. With DAS, data access is block-based, meaning data is transferred in unformatted blocks, in contrast to file-based access. The term DAS is often used to distinguish non-networked storage from the networked storage technologies NAS and SAN. A DAS unit can usually be connected to only one server, although with some external controllers and some external DAS enclosures two or more servers can be attached. The interface between the server and the DAS is usually an HBA or a SCSI controller. In addition to the protocols listed above, DAS can also connect to a server over Fibre Channel (FC). Fault tolerance can be improved with common techniques such as redundant cooling and RAID.

An HBA (host bus adapter) is a piece of hardware that connects a computer to a network or to storage devices. Although the term is used mostly for eSATA, SCSI, and FC devices, adapters that provide this connectivity over IDE, Ethernet, FireWire, or USB can also be called HBAs. The image below shows a SCSI HBA that connects through an ISA interface.

800px-Controller_SCSI

 

The drawback of using DAS in every storage scenario is the creation of "islands of information." As noted, the storage is usually accessible to only one server, so management, rising maintenance costs, and efficient use of storage space become serious challenges. Another issue is that because DAS is attached directly to one server, if that server goes down the data on the DAS is unavailable until the server is brought back up or the DAS is moved to another server, which is why Failover Clustering is used for fault tolerance. On the other hand, the initial cost of DAS is much lower than the other methods, which is why most organizations adopt it for data storage. Keep in mind that each method has advantages and disadvantages, the right design varies by scenario, and the drawbacks listed for DAS do not mean it is ineffective. To manage DAS in Windows, the Disk Management console is used, along with Diskpart.exe as a command-line tool.
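As a minimal illustration of the command-line side of DAS management mentioned above, the following Diskpart session brings a newly attached local disk into use; the disk number and drive letter are placeholders chosen for this example, so adjust them to your own system.

diskpart
rem list the disks attached to this server and select the new one
list disk
select disk 1
rem create a primary partition, format it, and assign a drive letter
create partition primary
format fs=ntfs quick
assign letter=E
exit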

NAS – Network Attached Storage


A NAS device is essentially a server running a special operating system dedicated to file services. The best-known NAS operating system is FreeNAS, which is based on FreeBSD and is open source. It offers web-based management and needs less than 64 MB of space; to get familiar with it, you can use the VMware disk image provided for it. Similar operating systems such as NASLite and Nexenta are also available with different feature sets. NAS was first introduced by Novell using the NCP protocol in 1983. In 1984, Sun introduced it in the UNIX world with the NFS protocol. Microsoft and 3Com then developed the LAN Manager software and extended the protocol further. The 3Server appliance with the 3+Share software was the first purpose-built server for this role, combining dedicated hardware, software, and multiple hard disks. Inspired by Novell's file server, IBM and Sun built their own dedicated servers. Auspex was one of the first NAS vendors; in 1990 a group of its specialists left the company and built a system that served both CIFS and NFS simultaneously. That effort was effectively the beginning of purpose-built NAS servers.

The main appeal of NAS is easy deployment and the ability to hold a significant amount of data that is accessible over the LAN. In practice, unlike a local bus, access over the LAN is slower and is file-based. NAS can be a good storage choice for file servers and web servers, and in small environments it can work very well as a backup solution. NAS servers usually have no connectors for input devices such as a keyboard or output devices such as a monitor, and they are managed with tools designed for that purpose; the management tools vary by vendor but are mostly web-based. The screenshot below is from FreeNAS. Because NAS servers usually cannot be upgraded, overload can cause problems that force a rethink of the storage design. NAS is the best option for holding large media files in small environments. A proper NAS device should offer redundant power supplies, redundant data access paths, and redundant controllers, and should support RAID and clustering.

freenas

The best-known protocols used with NAS are:

CIFS (Common Internet File System) and SMB (Server Message Block): an application-layer protocol used to access shared files, printers, and serial ports. It was originally designed and implemented by IBM and later extended by Microsoft, which added further capabilities.

NFS (Network File System): designed and implemented by Sun Microsystems in 1984. Today NFS is available in most operating systems.

Other protocols such as AFP (Apple Filing Protocol), FTP, rsync, and others are also used.

A filer is a type of NAS that acts purely as a file server. With a filer there is no need to tie up expensive network servers with the simple job of serving files. Internally, NAS servers usually use SCSI.

300g_front

NAS is a network-centric device and is generally used to consolidate users' data storage on the LAN. It is a convenient storage solution that gives users fast, direct access to the file system and removes the delays users often experience when accessing files on general-purpose servers. While providing the necessary security, NAS delivers all file and storage services over standard network protocols: TCP/IP for data transport, Ethernet and Gigabit Ethernet for media access, and CIFS, HTTP, and NFS for remote file access. In addition, NAS can serve UNIX and Windows users simultaneously and share data between different architectures. From the network users' point of view, NAS is a device that provides file access without disturbance or disruption. Using Gigabit Ethernet it achieves high throughput and low latency, serving thousands of users through a single interface. Many NAS systems have multiple interfaces and can connect to several networks at the same time.

SAN – Storage Area Network


A SAN is a very high-performance network dedicated to moving data between servers and the storage subsystem. From the server operating system's point of view, the storage appears local. The key difference from DAS is that with DAS the capacity belongs to a single server, whereas with clustering and a SAN you can both use the available capacity as efficiently as possible and keep fault tolerance at an acceptable level. Although DAS used to offer higher transfer speeds, speed is no longer the deciding factor today. A SAN is more complex to deploy and its initial cost is much higher than the other technologies. SAN is the best choice when a huge volume of data must be managed and access speed matters. It is a good option for backup servers, and for database, streaming media, and mail servers in large organizations it is the only effective solution. Since around 2000 the complexity and cost of SANs have come down, which has led smaller companies to adopt them as well. A SAN uses dedicated equipment, known as the SAN fabric, to connect storage and servers. The capacity in a SAN is divided into virtual partitions called LUNs (Logical Unit Numbers), which are presented to servers as local partitions, and each operating system puts its own file system on its LUNs. For several servers to access the data stored on a SAN, a SAN file system (clustered file system) is required, that is, a file system that can be mounted by several servers at the same time. A good example is Cluster Shared Volumes (CSV), which is part of Failover Clustering in Windows Server 2008 R2 and is used with Hyper-V. Operating systems provide their own tools for managing a SAN; for example, Windows Server 2008 includes Storage Manager for SANs (SMfs), which can be added through Add Features and is used to create and assign LUNs. Windows Server 2008 also includes several other tools.

FC SANs

Fibre Channel provides high-performance, block-based transport for the storage infrastructure. Deploying FC is expensive and complex. The components of an FC network include switches, server HBAs, and cabling, all of which are purpose-built and produced by a limited number of vendors. FC remains as attractive as ever, and another advantage is the very long distances it supports.

FCoE

Fibre Channel over Ethernet encapsulates FC blocks for transport over Ethernet. Using 10Gb Ethernet, it lets you extend the reach of an existing FC infrastructure while keeping it in place. FCoE is not routable at the IP layer and has certain limitations.

iSCSI SANs

Internet SCSI (iSCSI) is a standard for carrying SCSI block traffic over Ethernet using TCP/IP. Servers connect to the storage using software called an iSCSI initiator. iSCSI is generally cheaper and simpler to deploy than an FC SAN. On the other hand, companies that operate across a wide geographic area in a distributed fashion may end up with islands of FC SANs, since FC is limited to about 10 km (technologies exist today to go beyond 10 km, but they are rarely economically justified), whereas iSCSI can connect sites across a MAN or WAN. Unlike FC, Microsoft's implementation of iSCSI can use CHAP and IPsec. The biggest drawback of iSCSI is that good performance requires 10Gb Ethernet switches and cabling, which are expensive; in addition, iSCSI throughput is lower than FC. As the technology matures and prices fall, iSCSI can be expected to find a stronger position.
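As a minimal sketch of how a host attaches to iSCSI storage with a software initiator, the built-in Windows iscsicli tool can register a target portal and log in to a target; the portal address and target IQN below are placeholders, not values taken from this article.

rem register the array's iSCSI portal, list the discovered targets, then log in
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1986-03.com.hp:storage.example-target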

 

Summary

For small networks, DAS is undoubtedly the best choice as the simplest and cheapest method. In small environments the maintenance cost of DAS may also be lower than NAS, and because of its high deployment cost and the usual lack of need, SAN is not used in small networks except in special cases. In larger, mid-sized networks a combination of NAS and DAS can be a good choice. As the required capacity grows, however, the cost of SAN per gigabyte decreases, and considering the value of the data and the cost of maintenance, SAN becomes the appropriate solution.

2983498217_31220ef649_o

When evaluating deployment cost, efficient use of the available space must also be considered. Although this is often overlooked in small environments, the chart below, which shows practical utilization rates, leads to a revised version of the second chart, in which the deployment and maintenance cost of SAN compared to DAS drops significantly.

2983498429_e0ed69a270_o

2983498613_e40ed9e438_o



EMC Storage Sales


The V-Center technical and engineering group announces its readiness to supply all types of EMC storage systems and equipment.

For pricing, you can send your purchase list to Sales@vcenter.ir.

Phone: 88884268



Review of the HP DL380 G9 Server

 

Contact us to purchase or get pricing for any HP server.

Phone: 88884268


HP ProLiant DL380 Gen9 Server


Where is your server's bottleneck: storage capacity, processing resources, or expandability?
The HP ProLiant DL380 G9, the world's best-selling server, gets better generation after generation. It offers the highest performance and expandability available among HP's two-socket 2U servers. Its reliability and high availability make it possible to run network services at the highest level, and it is one of the best choices for today's data centers.
The DL380 G9 is designed to reduce network cost and complexity, and with Intel E5-2600 v3 processors it delivers up to 70 percent higher performance.
Using the new DDR4 SmartMemory technology, it supports up to 1.5 TB of RAM, with memory performance up 14 percent over the previous generation.
For connectivity to internal storage, the link speed has reached 12Gb/s SAS.
The DL380 G9 supports up to 40Gb networking and a range of professional graphics cards.
As you know, HP servers are used in organizations with automated services, where all Update, Deploy, Monitor, and Maintain operations are carried out with complete ease.

Key changes

  • HP NVMe PCIe SSDs with support for up to 1.6 TB
  • New Intel and NVIDIA GPUs
  • Professional management of HP G9 rack-mount servers with HP OneView software, which simplifies management and reduces complexity

HP OneView 2.0 software

Features

The highly flexible design of this server will cover all of your future needs.

The HP ProLiant DL380 G9 chassis has been redesigned and supports 8 to 24 SFF drives or 4 to 12 LFF drives.

 

The redesigned HP Flexible Smart Array and HP Smart SAS HBA Controllers give you the flexibility to choose the optimal 12 Gb/s controller most suited to your environment, in conjunction with the embedded SATA HP Dynamic Smart Array B140i Controller for boot, data, and media needs.

A choice of embedded 4x1GbE, HP FlexibleLOM, or PCIe standup 1GbE to 40GbE adapters provides flexibility of networking bandwidth and fabric so you can adapt and grow to changing business needs.

World-class Performance and Industry-leading Energy Efficiency

The HP ProLiant DL380 Gen9 Server supports industry standard Intel® Xeon® E5-2600 v3 processors with up to 18 cores, 12G SAS and 1.5 TB of HP DDR4 Smart Memory.
High efficiency redundant HP Flexible Slot Power Supplies provide up to 96% efficiency (Titanium), HP Flexible Slot Battery Backup module and support for the HP Power Discovery Services offering.
ENERGY STAR® qualified server configurations illustrate a continued commitment to helping customers conserve energy and save money.
Improved ambient temperature standards with HP Extended Ambient Operating Support (ASHRAE A3 and A4) and optional performance heatsinks help to reduce cooling costs.
Enhanced performance with active and passive, double-wide GPU support for workload acceleration.

Agile Infrastructure Management for Accelerating IT Service Delivery

With HP ProLiant DL380 Gen9 Server, HP OneView provides infrastructure management for automation simplicity across servers, storage and networking.
Online personalized dashboard for converged infrastructure health monitoring and support management with HP Insight Online.
Configure in Unified Extensible Firmware Interface (UEFI) boot mode, provision local and remote with Intelligent Provisioning and Scripting Toolkits.
Embedded management to deploy, monitor and support your server remotely, out of band with HP iLO.
Optimize firmware and driver updates and reduce downtime with HP Smart Update, consisting of HP SUM (Smart Update Manager) and SPP (Service Pack for ProLiant).

Industry Leading Serviceability

The HP ProLiant DL380 Gen9 Server comes with a complete set of HP Technology Services, delivering confidence, reducing risk and helping customers realize agility and stability. HP provides consulting advice to transform and modernize your infrastructure, services to deploy, migrate, and support your new ProLiant servers, and education to help you succeed quickly.

System features

Processor family
Intel® Xeon® E5-2600 v3 product family
Number of processors
  • 1 or 2
Processor core available
18 or 16 or 14 or 12 or 10 or 8 or 6 or 4
Form factor (fully configured)
2U
Power supply type
(2) Flex Slot
Expansion slots
  • (6) Maximum – For detailed descriptions, refer to the QuickSpecs

Memory

Memory, maximum
  • 1.5TB
Memory slots
  • 24 DIMM slots
Memory type
  • DDR4 SmartMemory

Storage

Drive description
  • ((4) or (12)) LFF SAS/SATA/SSD
  • ((8), (10), (16), (18) or (24)) SFF SAS/SATA/SSD
  • (2) SFF Rear drive optional or
  • (3) LFF Rear drive optional
  • NVMe support via Express Bay will limit max drive capacity

Controller Cards

Network controller
  • 1Gb 331i Ethernet Adapter 4 Ports per controller and/or
  • Optional FlexibleLOM
  • Depending on model
Storage controller
  • (1) Dynamic Smart Array B140i and/or
  • (1) Smart Array P440
  • (1) Smart Array P840
  • Depending on model

Server management

Infrastructure management
iLO Management (standard), Intelligent Provisioning (standard), iLO Advanced (optional), HP Insight Control (optional)

What’s included

Warranty
  • 3/3/3 – Server Warranty includes three years of parts, three years of labor, three years of onsite support coverage. Additional information regarding worldwide limited warranty and technical support is available at: http://h18004.www1.hp.com/products/servers/platforms/warranty/index.html Additional HP support and service coverage for your product can be purchased locally. For information on availability of service upgrades and the cost for these service upgrades, refer to the HP website at http://www.hp.com/support

HP MSA 1040/2040 Storage

The HP MSA 1040 and 2040 are the new generation of HP's small-to-midsize storage arrays, to which tiering has been added.

To deploy and manage these arrays, the administrator needs adequate knowledge in the following areas:

– Sufficient networking knowledge

– Storage array configuration

– SAN administration

– A good understanding of DAS and NAS

– Full familiarity with SAN communication protocols

 

Related e-books

• HP MSA System Racking Instructions
• HP MSA 1040 Installation Guide
• HP MSA 1040 System Cable Configuration Guide
• HP MSA 1040 User Guide
• HP MSA 1040 SMU Reference Guide
• HP MSA 1040 CLI Reference Guide
• HP MSA 2040 Installation Guide
• HP MSA 2040 System Cable Configuration Guide
• HP MSA 2040 User Guide
• HP MSA 2040 SMU Reference Guide
• HP MSA 2040 CLI Reference Guide

If needed, you can refer to the following addresses.

HP MSA 1040   hp.com/go/msa1040
HP MSA 2040   hp.com/go/msa2040

Introduction

The HP MSA 1040 is a good fit for smaller environments that need 8Gb Fibre Channel, 6/12Gb SAS, 1GbE, or 10GbE connectivity.

The MSA 1040 is the fourth generation of this storage architecture; it ships with a new processor and has two host ports per controller and 4 GB of cache per controller.

Its key features are listed below:

• New controller with a new architecture and processor
• 4 GB of cache per controller
• 6 and 12 Gb/s SAS connectivity
• Capacity expansion through SAS cabling
• Two host ports per controller
• 4 and 8 Gb/s Fibre Channel connectivity
• 1 and 10 Gigabit Ethernet iSCSI connectivity
• Expansion with up to 4 additional enclosures
• Support for up to 99 SFF disks
• Thin Provisioning support (requires a license)
• New web-based management interface
• Sub-LUN Tiering support (requires a license)
• Wide Striping support (requires a license), which lets you assign a large number of disks to a single volume to increase performance

The HP MSA 2040 is a very high-performance array with 8 to 16 Gb/s Fibre Channel for data transfer, 6 and 12 Gb/s SAS connectivity, 1 and 10 Gigabit Ethernet iSCSI, and four host ports per controller. The MSA 2040 is a good fit for customers looking for high performance at a low price, and given its capabilities it is also a prime candidate for consolidated and virtualization solutions.

Its key features are listed below:

• New controller with a new architecture and processor
• 4 GB of cache per controller
• SSD support
• 4 host ports per controller
• 4, 8, and 16 Gb/s Fibre Channel connectivity
• 6 and 12 Gb/s SAS connectivity
• 1 and 10 Gigabit Ethernet iSCSI connectivity
• Simultaneous FC and iSCSI support on a single controller
• Expansion with up to 8 additional enclosures
• Support for up to 199 SFF disks
• Full Drive Encryption (FDE) support using SED drives
• Thin Provisioning support
• Sub-LUN Tiering support
• Read Cache support
• Performance Tier support with an additional license
• New web-based management interface
• Wide Striping support (requires a license), which lets you assign a large number of disks to a single volume to increase performance, for example 16 disks for one volume
• GL200 firmware
• SSD and SED drives are supported only on this model (not on the MSA 1040)

Thanks to its SSD support, the HP MSA 2040 delivers high performance. With the Performance Tier feature and SSDs it provides the highest possible performance in shared and virtualized environments.

The HP MSA 1040/2040 arrays ship with a 64-snapshot and Volume Copy license to strengthen data protection. A 512-snapshot license is available as an optional purchase. The array can replicate with systems such as the P2000 G3, MSA 1040, and MSA 2040 over the FC and iSCSI protocols; this replication is performed using the array's Remote Snap feature.

Terminology

Virtual Disk (Vdisk): the term Vdisk has been replaced by Disk Group. The term Vdisk is used for linear storage and in version 2 of the SMU, while Disk Group is used for virtual storage and in version 3 of the SMU; Vdisk and Disk Group are essentially the same concept.

Vdisks support additional RAID types, such as NRAID, RAID 0, and RAID 3, which can be configured only from the CLI, as well as RAID 50, which can be configured from both the CLI and the SMU.

Linear Storage: Linear Storage is the traditional storage that has been used for the four MSA generations. With Linear Storage, the user specifies which drives make up a RAID Group and all storage is fully allocated.
Virtual Storage: Virtual Storage is an extension of Linear Storage. Data is virtualized not only across a single disk group, as in the linear implementation, but also across multiple disk groups with different performance capabilities and use cases.
Disk Group: A Disk Group is a collection of disks in a given redundancy mode (RAID 1, 5, 6, or 10 for Virtual Disk Groups and NRAID and RAID 0, 1, 3, 5, 6, 10, or 50 for Linear Disk Groups). A Disk Group is equivalent to a Vdisk in Linear Storage and utilizes the same proven fault tolerant technology used by Linear Storage. Disk Group RAID level and size can be created based on performance and/or capacity requirements. With GL200 or newer firmware multiple Virtual Disk Groups can be allocated into a Storage Pool for use with the Virtual Storage features; while Linear Disk Groups are also in Storage Pools, there is a one-to-one correlation between Linear Disk Groups and their associated Storage Pools.
Storage Pools: The GL200 firmware or newer introduces Storage Pools which are comprised of one or more Virtual Disk Groups or one Linear Disk Group. For Virtual Storage, LUNs are no longer restricted to a single disk group as with Linear Storage. A volume’s data on a given LUN can now span all disk drives in a pool. When capacity is added to a system, users will benefit from the performance of all spindles in that pool.
When leveraging Storage Pools, the MSA 1040/2040 supports large, flexible volumes with sizes up to 128TB and facilitates seamless capacity expansion. As volumes are expanded data automatically reflows to balance capacity utilization on all drives.
LUN (Logical Unit Number): The MSA 1040/2040 arrays support 512 volumes and up to 512 snapshots in a system. All of these volumes can be mapped to LUNs. Maximum LUN sizes are up to 128TB, and LUN sizes depend on the storage architecture: Linear vs. Virtualized. Thin Provisioning allows the user to create the LUNs independent of the physical storage.
Thin Provisioning: Thin Provisioning allows storage allocation of physical storage resources only when they are consumed by an application. Thin Provisioning also allows over-provisioning of physical storage pool resources allowing ease of growth for volumes without predicting storage capacity upfront.
Thick Provisioning: All storage is fully allocated with Thick Provisioning. Linear Storage always uses Thick Provisioning.
Tiers: Disk tiers are comprised of aggregating 1 or more Disk Groups of similar physical disks. The MSA 2040 supports 3 distinct tiers:
1. A Performance tier with SSDs
2. A Standard SAS tier with Enterprise SAS HDDs
3. An Archive tier utilizing Midline SAS HDDs
Prior to GL200 firmware, the MSA 2040 operated through manual tiering, where LUN level tiers are manually created and managed by using dedicated Vdisks and volumes. LUN level tiering requires careful planning such that applications requiring the highest performance be placed on Vdisks utilizing high performance SSDs. Applications with lower performance requirements can be placed on Vdisks comprised of Enterprise SAS or Midline SAS HDDs. Beginning with GL200 or newer firmware, the MSA 2040 now supports Sub-LUN Tiering and automated data movement between tiers.

The MSA 2040 automated tiering engine moves data between available tiers based on the access characteristics of that data. Frequently accessed data contained in “pages” will migrate to the highest available tier delivering maximum I/O’s to the application. Similarly, “cold” or infrequently accessed data is moved to lower performance tiers. Data is migrated between tiers automatically such that I/O’s are optimized in real-time.
The Archive and Standard Tiers are provided at no charge on the MSA 2040 platform beginning with GL200 or newer firmware. The Performance Tier utilizing a fault tolerant SSD Disk Group is a paid feature that requires a license. Without the Performance Tier license installed, SSDs can still be used as Read Cache with the Sub-LUN Tiering feature. Sub-LUN Tiering from SAS MDL (Archive Tier) to Enterprise SAS (Standard Tier) drives is provided at no charge.
Note
The MSA 1040 only supports the Standard and Archive Tiers, and requires a license to enable Sub-LUN Tiering and other Virtual Storage features such as Thin Provisioning.
Read Cache: Read Cache is an extension of the controller cache. Read Cache allows a lower cost way to get performance improvements from SSD drives.
Sub-LUN Tiering: Sub-LUN Tiering is a technology that allows for the automatic movement of data between storage tiers based on access trends. In the MSA 1040/2040, Sub-LUN Tiering places data in a LUN that is accessed frequently in higher performing media while data that is infrequently accessed is placed in slower media.
Page: An individual block of data residing on a physical disk. For Virtual Storage, the page size is 4 MB.
General best practices
Use version 3 of the Storage Management Utility
With the release of the GL200 firmware, there is an updated version of the Storage Management Utility (SMU). This new Web Graphical User Interface (GUI) allows the user to use the new features of the GL200 firmware. This is version 3 of the SMU (V3).
SMU V3 is the recommended Web GUI. SMU V3 can be accessed by adding “/v3” to the IP address of the MSA array: https://<MSA array IP>/v3
The recommended Web GUI is SMU V2 if you are using the replication features of the MSA 1040/2040. SMU V2 can be accessed by adding “/v2” to the IP address of the MSA array: https://<MSA array IP>/v2
Become familiar with the array by reading the manuals
The first recommended best practice is to read the corresponding guides for either the HP MSA 1040 or HP MSA 2040. These documents include the User Guide, the Storage Management Utility (SMU) Reference Guide, or the Command Line Interface (CLI) Reference Guide. The appropriate guide will depend on the interface that you will use to configure the storage array. Always operate the array in accordance with the user manual. In particular, never exceed the environmental operation requirements.
Other HP MSA 1040 and HP MSA 2040 materials of importance to review are:
• The HP MSA Remote Snap Technical white paper located at: h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-0977ENW.pdf


The recommended practice is to use initiator nicknaming as outlined in figure 1, host aggregation of initiators, and the grouping of hosts using the V3 SMU.
Disk Group initialization for Linear Storage
During the creation of a Disk Group for Linear Storage, the user has the option to create a Disk Group in online mode (default) or offline mode. If the “online initialization” option is enabled, you can use the Disk Group while it is initializing. Online initialization takes more time because parity initialization is used during the process to initialize the Disk Group. Online initialization is supported for all HP MSA 1040/2040 RAID levels except for RAID 0 and NRAID. Online initialization does not impact fault tolerance.
If the “online initialization” option is unchecked, which equates to “offline initialization,” you must wait for initialization to complete before using the Disk Group for Linear Storage, but the initialization takes less time to complete.
Figure 2. Choosing online or offline initialization


Best practice for monitoring array health
Setting up the array to send notifications is important for troubleshooting and log retention.
Configure email and SNMP notifications
The Storage Management Utility (SMU) version 3 is the recommended method for setting up email and SNMP notifications. Setting up these services is easily accomplished by using a Web browser; to connect, type in the IP address of the management port of the HP MSA 1040/2040.
Email notifications can be sent to as many as three different email addresses. In addition to the normal email notification, enabling managed logs with the “Include logs as an email attachment” option is recommended. When the “Include logs as an email attachment” feature is enabled, the system automatically attaches the system log files to the managed logs email notifications sent. The managed logs email notification is sent to an email address which will retain the logs for future diagnostic investigation.
The MSA 1040/2040 storage system has a limited amount of space to retain logs. When this log space is exhausted, the oldest entries in the log are overwritten. For most systems this space is adequate to allow for diagnosing issues seen on the system. The managed logs feature notifies the administrator that the logs are nearing a full state and that older information will soon start to get overwritten. The administrator can then choose to manually save off the logs. If “Include logs as an email attachment” is also checked, the segment of logs which is nearing a full state will be attached to the email notification. Managed logs attachments can be multiple MB in size.
Enabling the managed logs feature allows log files to be transferred from the storage system to a log-collection system to avoid losing diagnostic data. The “Include logs as an email attachment” option is disabled by default.
HP recommends enabling SNMP traps. Version 1 SNMP traps can be sent to up to three host trap addresses (i.e., HP SIM Server or other SNMP server). To send version 3 SNMP traps, create a SNMPv3 user with the Trap Target account type. Use SNMPv3 traps rather than SNMPv1 traps for greater security. SNMP traps can be useful in troubleshooting issues with the MSA 1040/2040 array.
To configure email and version 1 SNMP settings in the SMU, click Home -> Action -> Set Up Notifications.
Enter the correct information for email, SNMP, and Managed Logs. See figure 4.
Figure 3. Setting Up Management services


Figure 4. SNMP, Email, and Managed Logs Notification Settings


To configure SNMPv3 users and trap targets, click Home | Action | Manage Users. See figure 5.
Figure 5. Manage Users


Enter the correct information for SNMPv3 trap targets. See figure 6.
Figure 6. User Management


Setting the notification level for email and SNMP
Setting the notification level to Warning, Error, or Critical on the email and SNMP configurations will ensure that events of that level or above are sent to the destinations (i.e., SNMP server, SMTP server) set for that notification. HP recommends setting the notification level to Warning.
HP MSA 1040/2040 notification levels:
• Warning will send notifications for all Warning, Error, or Critical events.
• Error will only send Error and Critical events.
• Critical will only send Critical events.
Sign up for proactive notifications for the HP MSA 1040/2040 array
Sign up for proactive notifications to receive MSA product advisories. Applying the suggested resolutions can enhance the availability of the product.
Sign up for the notifications at: hp.com/go/myadvisory
Best practices for provisioning storage on the HP MSA 1040/2040
The release of the GL200 firmware for the MSA 1040/2040 introduces virtual storage features such as Thin Provisioning and Sub-LUN Tiering. The section below will assist in the best methods for optimizing these features for the MSA 1040/2040.
Thin Provisioning
Thin Provisioning is a storage allocation scheme that automatically allocates storage as your applications need it.
Thin provisioning dramatically increases storage utilization by removing the equation between allocated and purchased capacity. Traditionally, application administrators purchased storage based on the capacity required at the moment and for future growth. This resulted in over-purchasing capacity and unused space.
With Thin Provisioning, applications can be provided with all of the capacity to which they are expected to grow but can begin operating on a smaller amount of physical storage. As the applications fill their storage, new storage can be purchased as needed and added to the array’s storage pools. This results in a more efficient utilization of storage and a reduction in power and cooling requirements.
Thin provisioning is enabled by default for virtual storage. The overcommit setting only applies to virtual storage and simply lets the user oversubscribe the physical storage (i.e., provision volumes in excess of physical capacity). If a user disables overcommit, they can only provision virtual volumes up to the available physical capacity. Snapshots are allowed for virtual volumes only with overcommit enabled. The overcommit setting is not applicable on traditional linear storage.
Overcommit is performed on a per pool basis and using the “Change Pool Settings” option. To change the Pool Settings to overcommit disabled:
1. Open V3 of the SMU and select “Pools”
2. Click “Change Pool Settings”
3. Uncheck the “Enable overcommitment of pool?” by clicking the box.

 

Figure 7. Changing Pool Settings


Figure 8. Disabling the overcommit of the pool


Thresholds and Notifications
If you use Thin Provisioning, monitor space consumption and set notification thresholds appropriately for the rate of storage consumption. The thresholds and notifications below can help determine when more storage needs to be added.
Users with a manage role can view and change settings that affect the thresholds and corresponding notifications for each storage pool.
• Low Threshold—When this percentage of pool capacity has been used, Informational event 462 is generated to notify the administrator. This value must be less than the Mid Threshold value. The default is 25%.
• Mid Threshold—When this percentage of pool capacity has been used, Warning event 462 is generated to notify the administrator to add capacity to the pool. This value must be between the Low Threshold and High Threshold values. The default is 50%. If the over-commitment setting is enabled, the event has Informational severity; if the over-commitment setting is disabled, the event has Warning severity.
• High Threshold—When this percentage of pool capacity has been used, Warning event 462 is generated to alert the administrator that it is critical to add capacity to the pool. This value is automatically calculated based on the available capacity of the pool minus reserved space. This value cannot be changed by the user.

 

T10 Unmap for Thin Reclaim
Unmap is the ability to reclaim thinly provisioned storage after the storage is no longer needed.
There are procedures to reclaim unmap space when using Thin Provisioning and ESX.
The user should run the unmap command with ESX 5.0 Update 1 or higher to avoid performance issues.
In ESX 5.0, unmap is automatically executed when deleting or moving a Virtual Machine.
In ESX 5.0 Update 1 and greater, the unmap command was decoupled from auto reclaim; therefore, use the VMware® vSphere CLI command to run unmap command.
See VMware documentation for further details on the unmap command and reclaiming space.
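As a hedged example of the reclaim commands referred to above (the datastore name is a placeholder, and the exact syntax should be confirmed against the VMware documentation for your ESXi release):

# ESXi 5.0 U1 / 5.1: run from within the datastore to reclaim a percentage of free space
cd /vmfs/volumes/Datastore1
vmkfstools -y 60

# ESXi 5.5 and later: issue UNMAP through esxcli
esxcli storage vmfs unmap --volume-label=Datastore1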
Pool Balancing
Creating and balancing storage pools properly can help with performance of the MSA array. HP recommends keeping pools balanced from a capacity utilization and performance perspective. Pool balancing will leverage both controllers and balance the workload across the two pools.
Assuming symmetrical composition of storage pools, create and provision storage volumes by the workload that will be used. For example, an archive volume would be best placed in a pool with the most available Archive Tier space. For a high performance volume, create the Disk Group on the pool that is getting the least amount of I/O on the Standard and Performance Tiers.
Determining the pool space can easily be viewed in V3 of the SMU. Simply navigate to “Pools” and click the name of the pool.


Viewing the performance of the pools or Virtual Disk Groups can also assist in determining where to place the Archive Tier space.
From V3 of the SMU, navigate to “Performance” then click “Virtual Pools” from the “Show:” drop-down box. Next, click the pool and for real time data, click “Show Data”. For Historical Data, click the “Historical Data” box and “Set time range”.


Tiering
A Tier is defined by the disk type in the Virtual Disk Groups.
• Performance Tier contains SSDs
• Standard Tier contains 10K RPM/15K RPM Enterprise SAS drives
• Archive Tier contains MDL SAS 7.2K RPM drives
Disk Group Considerations
With the GL200 firmware on the MSA, allocated pages are evenly distributed between disk groups in a tier; therefore, create all disk groups in a tier with the same RAID type and number of drives to ensure uniform performance in the tier.
Consider an example where the first Disk Group in the Standard Tier consists of five 15K Enterprise SAS drives in a RAID 5 configuration. To ensure consistent performance in the tier, any additional disk groups for the Standard Tier should also be a RAID 5 configuration. Adding a new disk group configured with four 10K Enterprise SAS drives in a RAID 6 configuration will produce inconsistent performance within the tier due to the different characteristics of the disk groups.

For optimal write performance, parity based disk groups (RAID 5 and RAID 6) should be created with “The Power of 2” method. This method means that the number of data (non-parity) drives contained in a disk group should be a power of 2. See the chart below.
RAID Type    Total Drives per Disk Group    Data Drives    Parity Drives
RAID 5       3                              2              1
RAID 5       5                              4              1
RAID 5       9                              8              1
RAID 6       4                              2              2
RAID 6       6                              4              2
RAID 6       10                             8              2
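To apply the table: a nine-drive RAID 5 disk group has eight data drives plus one parity drive, and a ten-drive RAID 6 group has eight data drives plus two parity drives; in both cases the data-drive count (8) is a power of 2, so both follow the recommendation.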
Due to the limitation of Disk Groups in a pool, which is 16, RAID type should be considered when creating new Disk Groups. For example, instead of creating multiple RAID 1 Disk Groups, consider using a larger RAID 10 Disk Group.
Drive Type and Capacity Considerations when using Tiering
All hard disk drives in a tier should be the same type. For example, do not mix 10K RPM and 15K RPM drives in the same Standard Tier.
If you have a Performance Tier on the MSA 2040, consider sizing the Performance Tier to be 5%–10% the capacity of the Standard Tier.
Disk Group RAID Type Considerations
RAID 6 is recommended when using large capacity Midline (MDL) SAS drives in the Archive Tier. The added redundancy of RAID 6 will protect against data loss in the event of a second disk failure with large MDL SAS drives.
RAID 5 is commonly used for the Standard Tier where the disks are smaller and faster resulting in shorter rebuild times. RAID 5 is used in workloads that typically are both random and sequential in nature.
See the Best practices for SSDs section for RAID types used in the Performance Tier and Read Cache.
Global Spares with Tiers
Using Global spares is recommended for all tiers based on spinning media. When using these global spares, make sure to use the same drive types as the Disk Group. The drive size must be equal or larger than the smallest drive in the tier.
Expanding Virtual Volumes
There might come a time when the Virtual Disk Group in a pool will start to fill up. To easily add more space, the MSA implements Wide Striping to increase the size of the virtual volumes. The recommended method to increase the volume size is to add a new Virtual Disk Group with the same amount of drives and RAID type as the existing Virtual Disk Group has.
For example, a Virtual Disk Group in pool A is filling up. This Disk Group is a five 300GB drive, 15K RPM, RAID 5 Disk Group. The recommended procedure would be to create a new Virtual Disk Group on pool A that also has five, 300GB 15K disk drives in a RAID 5 configuration.

Best practices when choosing drives for HP MSA 1040/2040 storage
The characteristics of applications and workloads are important when selecting drive types for the HP MSA 1040/2040 array.
Drive types
The HP MSA 1040 array supports SAS Enterprise drives and SAS Midline (MDL) drives. The HP MSA 2040 array supports SSDs, SAS Enterprise drives, SAS Midline (MDL) drives, and Self-Encrypting Drives (SED). See the Full Disk Encryption section below for more information on SED drives. The HP MSA 1040/2040 array does not support Serial ATA (SATA) drives. Choosing the correct drive type is important; drive types should be selected based on the workload and performance requirements of the volumes that will be serviced by the storage system. For sequential workloads, SAS Enterprise drives or SAS MDL drives provide a good price-for-performance tradeoff over SSDs. If more capacity is needed in your sequential environment, SAS MDL drives are recommended. SAS Enterprise drives offer higher performance than SAS MDL and should also be considered for random workloads when performance is a premium. For high performance random workloads, SSDs would be appropriate when using the MSA 2040 array.
SAS MDL drives are not recommended for constant high workload applications. SAS MDL drives are intended for archival purposes.
Best practices to improve availability
There are many methods to improve availability when using the HP MSA 1040/2040 array. High availability is always advisable to protect your assets in the event of a device failure. Outlined below are some options that will help you in the event of a failure.
Volume mapping
Using volume mapping correctly can provide high availability from the hosts to the array. For high availability during a controller failover, a volume must be mapped to at least one port accessible by the host on both controllers. Mapping a volume to ports on both controllers ensures that at least one of the paths is available in the event of a controller failover, thus providing a preferred/optimal path to the volume.
In the event of a controller failover, the surviving controller will report that it is now the preferred path for all Disk Groups. When the failed controller is back online, the Disk Groups and preferred paths switch back to the original owning controller.
Best practice is to map volumes to two ports on each controller to take advantage of load balancing and redundancy to each controller.
Mapping a port will make a mapping to each controller; thus, mapping port 1 will map host ports A1 and B1. Mapping to port 2 will map host ports A2 and B2.
With this in mind, make sure that physical connections are set up correctly on the MSA, so that a server has a connection to both controllers on the same port number. For example, on a direct attach MSA 2040 SAS with multiple servers, make sure that ports A1 and B1 are connected to server A, ports A2 and B2 are connected to server B, and so on.

 

Figure 9. Direct Attach Cabling


It is not recommended to enable more than 8 paths to a single host, i.e., 2 HBA ports on a physical server connected to 2 ports on the A controller and 2 ports on the B controller. Enabling more paths from a host to a volume puts additional stress on the operating system’s multipath software which can lead to delayed path recovery in very large configurations.
Note
Volumes should not be mapped to multiple servers at the same time unless the operating systems on the servers are cluster aware. However, since a server may contain multiple unique initiators, mapping a volume to multiple unique initiators (that are contained in the same server) is supported and recommended. Recommended practice is to put multiple initiators for the same host into a host and map the host to the LUNs, rather than individual maps to initiators.
Redundant paths
To increase the availability of the array to the hosts, multiple, redundant paths should be used along with multipath software. Redundant paths can also help in increasing performance from the array to the hosts (discussed later in this paper). Redundant paths can be accomplished in multiple ways. In the case of a SAN attach configuration, best practice would be to have multiple, redundant switches (SANs) with the hosts having at least one connection into each switch (SAN), and the array having one or more connections from each controller into each switch. In the case of a direct attach configuration, best practice is to have at least two connections to the array for each server. In the case of a direct attach configuration with dual controllers, best practice would be to have at least one connection to each controller.
Multipath software
To fully utilize redundant paths, multipath software should be installed on the hosts. Multipath software allows the host operating system to use all available paths to volumes presented to the host; redundant paths allow hosts to survive SAN component failures. Multipath software can increase performance from the hosts to the array. Table 1 lists supported multipath software by operating systems.
Note
More paths are not always better. Enabling more than 8 paths to a single volume is not recommended

 

Table 1. Multipath and operating systems
Operating system      Multipath name                    Vendor ID    Product ID
Windows® 2008/2012    Microsoft® multipath I/O (MPIO)   HP           MSA 2040 SAN, MSA 2040 SAS, MSA 1040 SAN, MSA 1040 SAS
Linux®                Device mapper/multipath           HP           MSA 2040 SAN, MSA 2040 SAS, MSA 1040 SAN, MSA 1040 SAS
VMware                Native multipath (NMP)            HP           MSA 2040 SAN, MSA 2040 SAS, MSA 1040 SAN, MSA 1040 SAS
Installing MPIO on Windows Server® 2008 R2/2012
Microsoft has deprecated servermanagercmd for Windows Server 2008 R2 so you will use the ocsetup command instead.
1. Open a command prompt window and run the following command:

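The screenshot that originally showed the command is not reproduced here. Based on the notes below and HP's usual procedure for these arrays, the sequence is assumed to look like the following; substitute the product ID for your model from table 1.

rem install the MPIO feature without an immediate restart
ocsetup MultipathIo /norestart
rem claim the MSA LUNs for MPIO (note the six spaces between HP and MSA)
mpclaim -n -i -d "HP      MSA 2040 SAN"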

Note
There are 6 spaces between HP and MSA in the mpclaim command.
The mpclaim -n option suppresses the immediate reboot; a reboot is still required before MPIO is operational.
The MPIO software is installed. When running the mpclaim command, type in the correct product ID for your MSA product. See table 1 above.
2. If you plan on using MPIO with a large number of LUNs, configure your Windows Server Registry to use a larger PDORemovePeriod setting.
–If you are using a Fibre Channel connection to a Windows server running MPIO, use a value of 90 seconds.
–If you are using an iSCSI connection to a Windows server running MPIO, use a value of 300 seconds.
See “Long Failover Times When Using MPIO with Large Numbers of LUNs” below for details.
Once the MPIO DSM is installed, no further configuration is required; however, after initial installation, you should use Windows Server Device Manager to ensure that the MPIO DSM has installed correctly as described in “Managing MPIO LUNs” below.

 

Long Failover Times When Using MPIO with Large Numbers of LUNs
Microsoft Windows servers running MPIO use a default Windows Registry PDORemovePeriod setting of 20 seconds. When MPIO is used with a large number of LUNs, this setting can be too brief, causing long failover times that can adversely affect applications.
The Microsoft Technical Bulletin Configuring MPIO Timers, describes the PDORemovePeriod setting:
“This setting controls the amount of time (in seconds) that the multipath pseudo-LUN will continue to remain in system memory, even after losing all paths to the device. When this timer value is exceeded, pending I/O operations will be failed, and the failure is exposed to the application rather than attempting to continue to recover active paths. This timer is specified in seconds. The default is 20 seconds. The max allowed is MAXULONG.”
Workaround: If you are using MPIO with a large number of LUNs, edit your registry settings so that HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\mpio\Parameters\PDORemovePeriod is set to a higher value.
• If you are using a Fibre Channel connection to a Windows server running MPIO, use a value of 90 seconds.
• If you are using an iSCSI connection to a Windows server running MPIO, use a value of 300 seconds.
For more information, refer to Configuring MPIO Timers at: technet.microsoft.com/en-us/library/ee619749%28WS.10%29.aspx
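One way to apply this registry change from an elevated command prompt, shown here with the 90-second Fibre Channel value (use 300 for iSCSI), is:

reg add HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters /v PDORemovePeriod /t REG_DWORD /d 90 /f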
Managing MPIO LUNs
The Windows Server Device Manager enables you to display or change devices, paths, and load balance policies, and enables you to diagnose and troubleshoot the DSM. After initial installation of the MPIO DSM, use Device Manager to verify that it has installed correctly.
If the MPIO DSM was installed correctly, each MSA 1040/2040 storage volume visible to the host will be listed as a multi-path disk drive as shown in the following example.

(Screenshot: each MSA volume listed as a multi-path disk drive in Device Manager)

 

To verify that there are multiple, redundant paths to a volume, right-click the Multi-Path Disk Device and select Properties.

(Screenshot: Multi-Path Disk Device Properties dialog)

Click the MPIO tab to view the MPIO property sheet, which enables you to view or change the load balance policy and view the number of paths and their status.


The Details tab shows additional parameters.


Dual power supplies
The HP MSA 1040/2040 chassis and supported expansion enclosures ship with dual power supplies. At a minimum, connect both power supplies in all enclosures. For the highest level of availability, connect the power supplies to separate power sources.
Dual controllers
The HP MSA 2040 can be purchased as a single or dual controller system; the HP MSA 1040 is sold only as a dual controller system. Utilizing a dual controller system is best practice for increased reliability for two reasons. First, dual controller systems will allow hosts to access volumes during a controller failure or during firmware upgrades (given correct volume mapping discussed above). Second, if the expansion enclosures are cabled correctly, a dual controller system can withstand an expansion IO Module (IOM) failure, and in certain situations a total expansion enclosure failure.
Reverse cabling of expansion enclosures
The HP MSA 1040/2040 firmware supports both fault tolerant (reverse cabling) and straight-through SAS cabling of expansion enclosures. Fault tolerant cabling allows any expansion enclosure to fail or be removed without losing access to other expansion enclosures in the chain. For the highest level of fault tolerance, use fault tolerant (reverse) cabling when connecting expansion enclosures.

Figure 10. Reverse cabling example using the HP MSA 1040 system


See the MSA Cable Configuration Guide for more details on cabling the HP MSA 1040/2040.
The HP MSA 1040/2040 Cable Configuration Guides can be found on the MSA support pages.
For MSA 1040: hp.com/support/msa1040
For MSA 2040: hp.com/support/msa2040
Create Disk Groups across expansion enclosures
HP recommendation is to stripe Disk Groups across shelf enclosures to enable data integrity in the event of an enclosure failure. A Disk Group created with RAID 1, 10, 3, 5, 50, or 6 can sustain one or more expansion enclosure failures without loss of data depending on RAID type. Disk Group configuration should take into account MSA drive sparing methods such as dedicated, global, and dynamic sparing.
Drive sparing
Drive sparing, sometimes referred to as hot spares, is recommended to help protect data in the event of a disk failure in a fault tolerant Disk Group (RAID 1, 3, 5, 6, 10, or 50) configuration. In the event of a disk failure, the array automatically attempts to reconstruct the data from the failed drive to a compatible spare. A compatible spare is defined as a drive that has sufficient capacity to replace the failed disk and is the same media type (i.e., SAS SSD, Enterprise SAS, Midline SAS, or SED drives). The HP MSA 2040 supports dedicated, global, and dynamic sparing. The HP MSA 1040/2040 will reconstruct a critical or degraded Disk Group.
Important
An offline or quarantined Disk Group is not protected by sparing.
Supported spare types:
• Dedicated spare—reserved for use by a specific Disk Group to replace a failed disk. This method is the most secure way to provide spares for Disk Groups. The array supports up to 4 dedicated spares per Disk Group. Dedicated spares are only applicable to Linear Storage.
• Global spare—reserved for use by any fault-tolerant Disk Group to replace a failed disk. The array supports up to 16 global spares per system. At least one Disk Group must exist before you can add a global spare. Global Spares are applicable to both Virtual and Linear Storage.
• Dynamic spare—all available drives are available for sparing. If the MSA has available drives and a Disk Group becomes degraded any available drive can be used for Disk Group reconstruction. Dynamic spares are only applicable to Linear Storage.

Sparing process
When a disk fails in a redundant Disk Group, the system first looks for a dedicated spare for the Disk Group. If a dedicated spare is not available or the disk is incompatible, the system looks for any compatible global spare. If the system does not find a compatible global spare and the dynamic spares option is enabled, the system uses any available compatible disk for the spare. If no compatible disk is available, reconstruction cannot start.
During reconstruction of data, the affected Disk Group will be in either a degraded or critical status until the parity or mirror data is completely written to the spare, at which time the Disk Group returns to fault tolerant status. For RAID 50 Disk Groups, if more than one sub-Disk Group becomes critical, reconstruction and use of spares occurs in the order sub-Disk Groups are numbered. In the case of dedicated spares and global spares, after the failed drive is replaced, the replacement drive will need to be added back as a dedicated or global spare.
Best practice for sparing is to configure at least one spare for every fault tolerant Disk Group in the system.
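Spares can also be assigned from the MSA CLI. The syntax below is a sketch from memory of the linear-mode (P2000-style) CLI, with disk 1.11 and Disk Group VD01 as made-up names; verify the exact form in the CLI Reference Guide for your firmware release.

To add a global spare:
add spares 1.11

To add a dedicated spare for linear Disk Group VD01:
add spares vdisk VD01 1.11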
Drive replacement
In the event of a drive failure, replace the failed drive with a compatible drive as soon as possible. As noted above, if dedicated or global sparing is in use, mark the new drive as a spare (either dedicated or global), so it can be used in the future for any other drive failures.
Working with Failed Drives and Global Spares
When a failed drive rebuilds to a spare, the spare drive now becomes the new drive in the Disk Group. At this point, the original drive slot position that failed is no longer part of the Disk Group. The original drive should be replaced with a new drive.
In order to get the original drive slot position to become part of the Disk Group again, do the following:
1. Replace the failed drive with a new drive.
2. When the new drive is online and marked as “Available”, configure the drive as a global spare drive.
3. Fail the drive in the original global spare location by removing it from the enclosure. The RAID engine will rebuild to the new global spare which will then become an active drive in the RAID set again.
4. Replace the drive you manually removed from the enclosure.
5. If the drive is marked as “Leftover”, clear the disk metadata.
6. Re-configure the drive as the new global spare.
Virtual Storage only uses Global sparing. Warning alerts are sent out when the last Global spare in a system is used.
Implement Remote Snap replication with Linear Storage
The HP MSA 1040/2040 storage system Remote Snap feature is a form of asynchronous replication that replicates block-level data from a volume on a local system to a volume on the same system or on a second independent system. The second system may be at the same location as the first, or it may be located at a remote site.
Best practice is to implement Remote Snap replication for disaster recovery.
Note
Remote Snap requires a purchasable license in order to implement.

To obtain a Remote Snap license, go to: h18004.www1.hp.com/products/storage/software/p2000rs/index.html
See the HP MSA Remote Snap Technical white paper: h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-0977ENW.pdf

Use VMware Site Recovery Manager with Remote Snap replication
VMware vCenter Site Recovery Manager (SRM) is an extension to VMware vCenter that delivers a business-continuity and disaster-recovery solution to help you plan, test, and execute the recovery of vCenter virtual machines. SRM can discover and manage replicated datastores, and automate migration of inventory from one vCenter to another. Site Recovery Manager integrates with the underlying replication product through a storage replication adapter (SRA).
SRM is currently supported on the MSA 1040/2040 in linear mode only.
For best practices with SRM and MSA Remote Snap replication, see the “Integrate VMware vCenter SRM with HP MSA Storage” technical white paper: h20195.www2.hp.com/V2/GetPDF.aspx/4AA4-3128ENW.pdf
Note
This paper was written for the HP MSA P2000, but is also applicable for the MSA 1040/2040 FC and iSCSI models.
Best practices to enhance performance
This section outlines configuration options for enhancing performance for your array.
Cache settings
One method to tune the storage system is by choosing the correct cache settings for your volumes. Controller cache options can be set for individual volumes to improve a volume’s I/O performance.
Caution
Only disable write-back caching if you fully understand how the host operating system, application, and adapter move data. If used incorrectly, you might hinder system performance.
Using write-back or write-through caching
By default, volume write-back cache is enabled. Because controller cache is backed by super-capacitor technology, if the system loses power, data is not lost. For most applications, write-back caching enabled is the best practice. With the transportable cache feature, write-back caching can be used in either a single or dual controller system. See the MSA 1040/2040 User Guide for more information on the transportable cache feature.
You can change a volume’s write-back cache setting. Write-back is a cache-writing strategy in which the controller receives the data to be written to disks, stores it in the memory buffer, and immediately sends the host operating system a signal that the write operation is complete, without waiting until the data is actually written to the disk. Write-back cache mirrors all of the data from one controller module cache to the other unless cache optimization is set to no-mirror. Write-back cache improves the performance of write operations and the throughput of the controller. This is especially true in the case of random I/O, where write-back caching allows the array to coalesce the I/O to the Disk Groups.
When write-back cache is disabled, write-through becomes the cache-writing strategy. Using write-through cache, the controller writes the data to the disks before signaling the host operating system that the process is complete. Write-through cache has lower write operation and throughput performance than write-back, but all data is written to non-volatile storage before confirmation to the host. However, write-through cache does not mirror the write data to the other controller cache because the data is written to the disk before posting command completion and cache mirroring is not required. You can set conditions that cause the controller to change from write-back caching to write-through caching. Please refer to the HP MSA 1040/2040 User Guide for ways to set the auto write through conditions correctly. In most situations, the default settings are acceptable.
In both caching strategies, active-active failover of the controllers is enabled.

Optimizing read-ahead caching
You can optimize a volume for sequential reads or streaming data by changing its read-ahead cache settings. Read ahead is triggered by sequential accesses to consecutive LBA ranges. Read ahead can be forward (that is, increasing LBAs) or reverse (that is, decreasing LBAs). Increasing the read-ahead cache size can greatly improve performance for multiple sequential read streams. However, increasing read-ahead size will likely decrease random read performance.
• Adaptive—this option works well for most applications: it enables adaptive read-ahead, which allows the controller to dynamically calculate the optimum read-ahead size for the current workload. This is the default.
• Stripe—this option sets the read-ahead size to one stripe. The controllers treat non-RAID and RAID 1 Disk Groups internally as if they have a stripe size of 512 KB, even though they are not striped.
• Specific size options—these options let you select an amount of data for all accesses.
• Disabled—this option turns off read-ahead cache. This is useful if the host is triggering read ahead for what are random accesses. This can happen if the host breaks up the random I/O into two smaller reads, triggering read ahead.
Caution
Only change read-ahead cache settings if you fully understand how the host operating system, application, and adapter move data so that you can adjust the settings accordingly.
Optimizing cache modes
You can also change the optimization mode for each volume.
• Standard—this mode works well for typical applications where accesses are a combination of sequential and random; this method is the default. For example, use this mode for transaction-based and database update applications that write small files in random order.
• No-mirror—in this mode each controller stops mirroring its cache metadata to the partner controller. This improves write I/O response time but at the risk of losing data during a failover. Unified LUN presentation (ULP) behavior is not affected, with the exception that during failover any write data in cache will be lost. In most conditions No-mirror is not recommended, and should only be used after careful consideration.
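The write-policy, read-ahead and optimization settings described above can also be changed per volume from the CLI. The command below is a sketch based on the MSA/P2000 set cache-parameters command, with Vol0001 as a placeholder volume name; the parameter names should be checked against the CLI Reference Guide for your firmware release.

set cache-parameters write-policy write-back optimization standard read-ahead-size adaptive Vol0001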
Parameter settings for performance optimization
You can configure your storage system to optimize performance for your specific application by setting the parameters as shown in table 2. This section provides a basic starting point for fine-tuning your system, which should be done during performance baseline modeling.

Table 2. Optimizing performance for your application
Application | RAID level | Read-ahead cache size | Cache write optimization
Default | 5 or 6 | Adaptive | Standard
High-Performance Computing (HPC) | 5 or 6 | Adaptive | Standard
Mail spooling | 1 | Adaptive | Standard
NFS_Mirror | 1 | Adaptive | Standard
Oracle_DSS | 5 or 6 | Adaptive | Standard
Oracle_OLTP | 5 or 6 | Adaptive | Standard
Oracle_OLTP_HA | 10 | Adaptive | Standard
Random 1 | 1 | Stripe | Standard
Random 5 | 5 or 6 | Stripe | Standard
Sequential | 5 or 6 | Adaptive | Standard
Sybase_DSS | 5 or 6 | Adaptive | Standard
Sybase_OLTP | 5 or 6 | Adaptive | Standard
Sybase_OLTP_HA | 10 | Adaptive | Standard
Video streaming | 1 or 5 or 6 | Adaptive | Standard
Exchange database | 5 for data; 10 for logs | Adaptive | Standard
SAP® | 10 | Adaptive | Standard
SQL | 5 for data; 10 for logs | Adaptive | Standard
Other methods to enhance array performance
There are other methods to enhance performance of the HP MSA 1040/2040. In addition to the cache settings, the performance of the HP MSA 1040/2040 array can be maximized by using the following techniques.
Place higher performance SSD and SAS drives in the array enclosure
The HP MSA 1040/2040 controller is designed to have a single SAS link per drive in the array enclosure and only four SAS links to expansion enclosures. Placing higher performance drives (i.e., SSD for HP MSA 2040 only and Enterprise SAS drives for both the HP MSA 1040 and HP MSA 2040) in the storage enclosure allows the controller to utilize the performance of those drives more effectively than if they were placed in expansion enclosures. This process will help generate better overall performance.

Fastest throughput optimization
The following guidelines list the general best practices to follow when configuring your storage system for fastest throughput:
• Host ports should be configured to match the highest speed your infrastructure supports.
• Disk Groups should be balanced between the two controllers.
• Disk drives should be balanced between the two controllers.
• Cache settings should be set to match table 2 (“Optimizing performance for your application”) for the application.
• In order to get the maximum sequential performance from a Disk Group, you should only create one volume per Disk Group. Otherwise you will introduce randomness into the workload when multiple volumes on the Disk Group are being exercised concurrently.
• Distribute the load across as many drives as possible.
• Distribute the load across multiple array controller host ports.
Creating Disk Groups
When creating Disk Groups, best practice is to add them evenly across both controllers when using linear storage or across both pools when using virtual storage. With at least one Disk Group assigned to each controller, both controllers are active. This active-active controller configuration allows maximum use of a dual-controller configuration’s resources.
Choosing the appropriate RAID levels
Choosing the correct RAID level when creating Disk Groups can be important for performance. However, there are some trade-offs with cost when using the higher fault tolerant RAID levels.
See table 3 below for the strengths and weaknesses of the supported HP MSA 1040/2040 RAID types.
Table 3. HP MSA 1040/2040 RAID levels
NRAID (minimum disks: 1; allowable disks: 1)
Description: Non-RAID, non-striped mapping to a single disk
Strengths: Ability to use a single disk to store additional data
Weaknesses: Not protected, lower performance (not striped)

RAID 0 (minimum disks: 2; allowable disks: 16)
Description: Data striping without redundancy
Strengths: Highest performance
Weaknesses: No data protection: if one disk fails all data is lost

RAID 1 (minimum disks: 2; allowable disks: 2)
Description: Disk mirroring
Strengths: Very high performance and data protection; minimal penalty on write performance; protects against single disk failure
Weaknesses: High redundancy cost overhead: because all data is duplicated, twice the storage capacity is required

RAID 3 (minimum disks: 3; allowable disks: 16)
Description: Block-level data striping with dedicated parity disk
Strengths: Excellent performance for large, sequential data requests (fast read); protects against single disk failure
Weaknesses: Not well-suited for transaction-oriented network applications; write performance is lower on short writes (less than 1 stripe)

RAID 5 (minimum disks: 3; allowable disks: 16)
Description: Block-level data striping with distributed parity
Strengths: Best cost/performance for transaction-oriented networks; very high performance and data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests; protects against single disk failure
Weaknesses: Write performance is slower than RAID 0 or RAID 1

RAID 6 (minimum disks: 4; allowable disks: 16)
Description: Block-level data striping with double distributed parity
Strengths: Best suited for large sequential workloads; non-sequential read and sequential read/write performance is comparable to RAID 5; protects against dual disk failure
Weaknesses: Higher redundancy cost than RAID 5 because the parity overhead is twice that of RAID 5; not well-suited for transaction-oriented network applications; non-sequential write performance is slower than RAID 5

RAID 10 (1+0) (minimum disks: 4; allowable disks: 16)
Description: Stripes data across multiple RAID 1 sub-Disk Groups
Strengths: Highest performance and data protection (protects against multiple disk failures)
Weaknesses: High redundancy cost overhead: because all data is duplicated, twice the storage capacity is required; requires minimum of four disks

RAID 50 (5+0) (minimum disks: 6; allowable disks: 32)
Description: Stripes data across multiple RAID 5 sub-Disk Groups
Strengths: Better random read and write performance and data protection than RAID 5; supports more disks than RAID 5; protects against multiple disk failures
Weaknesses: Lower storage capacity than RAID 5
Note
RAID types NRAID, RAID 0, and RAID 3 can only be created using the Command Line Interface (CLI) and are not available in the SMU. When using Virtual Storage, only fault tolerant RAID types can be used in the Performance, Standard, and Archive tiers. NRAID and RAID 0 are used with Read Cache, as the data in the Read Cache SSDs is duplicated on either the Standard or Archive tier.
Volume mapping
For increased performance, access the volumes from the ports on the controller that owns the Disk Group, which would be the preferred path. Accessing the volume on the non-preferred path results in a slight performance degradation.
Optimum performance with MPIO can be achieved with volumes mapped to multiple paths on both controllers. When the appropriate MPIO drivers are installed on the host, only the preferred (optimized) paths will be used. The non-optimized paths will be reserved for failover.
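On a Windows host, the path and policy information can also be checked from the command line with mpclaim, which is a quick way to confirm that each MSA volume is seen over multiple paths with the expected load-balance policy (the disk number 0 in the second command is an example and will differ per host):

mpclaim -s -d
mpclaim -s -d 0

The first form lists all MPIO disks with their load-balance policy; the second shows the individual path states for MPIO disk 0.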
Best practices for SSDs
SSDs are supported in the MSA 2040 system only. The performance capabilities of SSDs make them a great alternative to traditional spinning hard disk drives (HDD) for highly random workloads. SSDs cost more in terms of dollars per GB than spinning hard drives; however, SSDs cost much less in terms of dollars per IOP. Keep this in mind when choosing the number of SSDs per MSA 2040 array.
Use SSDs for randomly accessed data
The use of SSDs can greatly enhance the performance of the array. Since there are no moving parts in the drives, data that is random in nature can be accessed much faster.

Data such as database indexes and TempDB files would best be placed on a volume made from an SSD based Disk Group since this type of data is accessed randomly.
Another good example of a workload that would benefit from the use of SSDs is desktop virtualization, for example, virtual desktop infrastructure (VDI) where boot storms require high performance with low latency.
SSD and performance
There are some performance characteristics which can be met with linear scaling of SSDs. There are also bandwidth limits in the MSA 2040 controllers, and there is a point where these two curves intersect. At the intersecting point, additional SSDs will not increase performance. See figure 11 below.
The MSA 2040 reaches this bandwidth at a low number of SSDs. For the best performance using SSDs on the MSA 2040, use a minimum of 4 SSDs with 1 mirrored pair of drives (RAID 1) per controller. RAID 5 and RAID 6 are also good choices for SSDs, but require more drives using the best practice of having one Disk Group owned by each controller. This would require 6 SSDs for RAID 5 and 8 SSDs for RAID 6. All SSD volumes should be contained in fault tolerant Disk Groups for data integrity.
Base the number of SSDs to use on the amount of space that is needed for your highly random, high performance data set. For example, if the amount of data that is needed to reside in the SSD volumes exceeds a RAID 1 configuration, use a RAID 5 configuration.
Figure 11. SSD performance potential vs. MSA 2040 controller limit


Note
There is no limit to the number of SSDs that can be used in the MSA 2040 array system.
SSD Read Cache
SSD Read Cache is a feature that extends the MSA 2040 controller cache.
Read cache is most effective for workloads that are high in random reads. The user should size the read cache capacity based on the size of the hot data being randomly read. A maximum of 2 SSD drives per pool can be added for read cache.
HP recommends beginning with 1 SSD assigned per storage pool for read cache. Monitor the performance of the read cache and add more SSDs as needed.
Note
You can have SSDs in a fault tolerant Disk Group as a Performance Tier or in a non-fault tolerant (up to 2 disks) Disk Group as Read Cache, but a single pool cannot have both a Performance Tier and a Read Cache. For example, pool A can have a Performance Tier and pool B can have a Read Cache.
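For reference, Read Cache is added per pool. In virtual-storage firmware the CLI command is roughly along the lines of the sketch below, where disk 1.23 and pool A are placeholders; confirm the exact syntax in the CLI Reference Guide for your firmware release.

add disk-group type read-cache disks 1.23 pool a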

 

SSD wear gauge
SSDs have a limited number of times they can be written and erased due to the memory cells on the drives. The SSDs in the HP MSA 2040 come with a wear gauge as well as appropriate events that are generated to help detect the failure. Once the wear gauge reaches 0%, the integrity of the data is not guaranteed.
Best practice is to replace the SSD when the events and gauge indicate <5% life remaining to prevent data integrity issues.
Full Disk Encryption
Full Disk Encryption (FDE) is a data security feature used to protect data on disks that are removed from a storage array. The FDE feature uses special Self-Encrypting Drives (SED) to secure user data. FDE functionality is only available on the MSA 2040.
The SED is a drive with a circuit built into the drive’s controller chipset which encrypts/decrypts all data to and from the media automatically. The encryption is part of a hash code which is stored internally on the drive’s physical medium. In the event of a failure of the drive or the theft of a drive, a proper key sequence needs to be entered to gain access to the data stored within the drive.
Full Disk Encryption on the MSA 2040
The MSA 2040 storage system uses a passphrase to generate a lock key to enable securing the entire storage system. All drives in a Full Disk Encryption (FDE) secured system are required to be SED (FDE Capable). By default, a system and SED drive are not secured and all data on the disk may be read/written by any controller. The encryption on the SED drive conforms to FIPS 140-2.
To secure an MSA 2040, you must set a passphrase to generate a lock key and then FDE secure the system. Simply setting the passphrase does not secure the system. After an MSA 2040 system has been secured, all subsequently installed disks will automatically be secured using the system lock key. Non-FDE capable drives will be unusable in a secured MSA 2040 system.
Note
The system passphrase should be saved in a secure location. Loss of the passphrase could result in loss of all data on the MSA 2040 Storage System.
All MSA 2040 storage systems will generate the same lock key with the same passphrase. It is recommended that you use a different passphrase on each FDE secured system. If you are moving the entire storage system, it is recommended to clear the FDE keys prior to system shutdown. This will lock all data on the disks in case of loss during shipment. Only clear the keys after a backup is available and the passphrase is known. Once the system is in the new location, enter the passphrase and the SED drives will be unlocked with all data available.
SED drives which fail in an FDE secured system can be removed and replaced. Data on the drive is encrypted and cannot be read without the correct passphrase.

 

Best practices for Disk Group expansion
Storage needs change constantly, and storage space can be exhausted quickly. The HP MSA 1040/2040 gives you the option to grow the size of a LUN to keep up with your dynamic storage needs.
A Disk Group expansion allows you to grow the size of a Disk Group in order to expand an existing volume or create volumes from the newly available space on the Disk Group. Depending on several factors, Disk Group expansion can take a significant amount of time to complete. For faster alternatives, see the “Disk Group expansion recommendations” section.
Note
Disk Group Expansion is not supported with Virtual Storage. If you have Virtual Storage and are running out of storage space, the procedure to get more storage space would be to add another Disk Group to a pool.
The factors that should be considered with respect to Disk Group expansion include but are not limited to:
• Physical disk size
• Number of disks to expand (1–4)
• I/O activity during Disk Group expansion
Note
Disk Group Expansion is only available when using Linear Storage.
During Disk Group expansion, other disk utilities are disabled. These utilities include Disk Group Scrub and Rebuild.
Disk Group expansion capability for supported RAID levels
The chart below gives information on the expansion capability for the HP MSA 2040 supported RAID levels.
Expansion capability for each RAID level
RAID level | Expansion capability | Maximum disks
NRAID | Cannot expand | 1
0, 3, 5, 6 | Can add 1–4 disks at a time | 16
1 | Cannot expand | 2
10 | Can add 2 or 4 disks at a time | 16
50 | Can expand the Disk Group one RAID 5 sub-Disk Group at a time; the added RAID 5 sub-Disk Group must contain the same number of disks as each original sub-Disk Group | 32
Important
If during the process of a Disk Group expansion one of the disk members of the Disk Group fails, the reconstruction of the Disk Group will not commence until the expansion is complete. During this time, data is at risk with the Disk Group in a DEGRADED or CRITICAL state.
If an expanding Disk Group becomes DEGRADED (e.g., RAID 6 with a single drive failure) the storage administrator should determine the level of risk of continuing to allow the expansion to complete versus the time required to backup, re-create the Disk Group (see “Disk Group expansion recommendations”) and restore the data to the volumes on the Disk Group.
If an expanding Disk Group becomes CRITICAL (e.g., RAID 5 with a single drive failure) the storage administrator should immediately employ a backup and recovery process. Continuing to allow the expansion places data at risk of another drive failure and total loss of all data on the Disk Group.
Disk Group expansion can be very time consuming. There is no way to reliably determine when the expansion will be complete and when other disk utilities will be available.

 

Follow the procedure below.
1. Backup the current data from the existing Disk Group.
2. Using the WBI or CLI, start the Disk Group expansion.
3. Monitor the Disk Group expansion percentage complete.
Note
Once a Disk Group expansion initiates it will continue until completion or until the Disk Group is deleted.
Disk Group expansion recommendations
Before expanding a Disk Group, review the information below to understand the best alternative method for allocating additional storage to hosts.
Allocate “quiet” period(s) to help optimize Disk Group expansion
Disk Group expansion can take a few hours with no data access for smaller capacity hard drives and may take several days to complete with larger capacity hard drives. Priority is given to host I/O or data access over the expansion process during normal array operation. While the system is responding to host I/O or data access requests, it may seem as if the expansion process has stopped. When expanding during “quiet” periods, expansion time is minimized and will allow quicker restoration of other disk utilities.
This method of expansion utilizes the expand capability of the system and requires manual intervention from the administrator. The procedure below outlines the steps to expand a Disk Group during a “quiet” period.
In this context, a “quiet” period indicates a length of time when there is no host I/O or data access to the system. Before starting the Disk Group expansion:
1. Stop I/O to existing volumes on the Disk Group that will be expanded.
2. Backup the current data from the existing volumes on the Disk Group.
3. Shutdown all hosts connected to the HP MSA 1040/2040 system.
4. Label and disconnect host side cables from the HP MSA 1040/2040 system.
Start and monitor Disk Group expansion:
1. Using the WBI or CLI, start the Disk Group expansion.
2. Monitor the Disk Group expansion percentage complete.
When expansion is complete or data access needs to be restored:
1. Re-connect host side cables to the HP MSA 1040/2040 system.
2. Restart hosts connected to the HP MSA 1040/2040 system.
If additional “quiet” periods are required to complete the Disk Group expansion:
1. Shutdown all hosts connected to the HP MSA 1040/2040 system.
2. Label and disconnect host side cables from the HP MSA 1040/2040 system.
3. Monitor the Disk Group expansion percentage complete.
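The "start the Disk Group expansion" step above can be issued from the CLI as well as the WBI. As a rough sketch for a linear Disk Group (vdisk) named VD01 being grown with disks 1.10 and 1.11 (placeholder names; confirm the syntax in the CLI Reference Guide for your firmware release):

expand vdisk disks 1.10-1.11 VD01
show vdisks

The show vdisks output can then be used to monitor how far the expansion has progressed.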

 

Re-create the Disk Group with additional capacity and restore data
This method is the easiest and fastest method for adding additional capacity to a Disk Group. The online Disk Group initialization allows a user to access the Disk Group almost immediately and will complete quicker than the expansion process on a Disk Group that is also servicing data requests. The procedure below outlines the steps for recreating a Disk Group with additional capacity and restoring data to that Disk Group.
Procedure:
1. Stop I/O to existing volumes on the Disk Group that will be expanded.
2. Backup the current data from the existing volumes on the Disk Group.
3. Delete the current Disk Group.
4. Using the WBI or CLI, create a new Disk Group with the available hard drives using online initialization.
5. Create new larger volumes as required.
6. Restore data to the new volumes.
Best practices for firmware updates
The sections below detail common firmware update best practices for the MSA 1040/2040.
General MSA 1040/2040 device firmware update best practices
• As with any other firmware upgrade, it is a recommended best practice to ensure that you have a full backup prior to the upgrade.
• Before upgrading the firmware, make sure that the storage system configuration is stable and is not being reconfigured or changed in any way. If any configuration changes are in progress, monitor them using the SMU or CLI and wait until they are completed before proceeding with the upgrade.
• Do not power cycle or restart devices during a firmware update. If the update is interrupted or there is a power failure, the module could become inoperative. Should this happen, contact HP customer support.
• After the device firmware update process is completed, confirm the new firmware version is displayed correctly via one of the MSA management interfaces—e.g., SMU or CLI.
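For the last point, the running versions can also be read from the CLI; as a sketch (command names from memory of the MSA CLI, so verify against the CLI Reference Guide):

show versions
show disks

show versions reports the controller firmware bundle versions, and show disks includes the firmware revision of each installed drive.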
MSA 1040/2040 array controller or I/O module firmware update best practices
• The array controller (or I/O module) firmware can be updated in an online mode only in redundant controller systems.
• When planning for a firmware upgrade, schedule an appropriate time to perform an online upgrade.
– For single controller systems, I/O must be halted.
– For dual controller systems, because the online firmware upgrade is performed while host I/Os are being processed, I/O load can impact the upgrade process. Select a period of low I/O activity to ensure the upgrade completes as quickly as possible and avoid disruptions to hosts and applications due to timeouts.
• When planning for a firmware upgrade, allow sufficient time for the update.
– In single-controller systems, it takes approximately 10 minutes for the firmware to load and for the automatic controller restart to complete.
– In dual-controller systems, the second controller usually takes an additional 20 minutes, but may take as long as one hour.
• When reverting to a previous version of the firmware, ensure that the management controller (MC) Ethernet connection of each storage controller is available and accessible before starting the downgrade.
– When using a Smart Component firmware package, the Smart Component process will automatically first disable partner firmware update (PFU) and then perform downgrade on each of the controllers separately (one after the other) through the Ethernet ports.
– When using a binary firmware package, first disable the PFU option and then downgrade the firmware on each of the controllers separately (one after the other).

 

MSA 1040/2040 disk drive firmware update best practices
• Disk drive firmware upgrade on the HP MSA 1040/2040 storage systems is an offline process. All host and array I/O must be stopped prior to the upgrade.
• If the drive is in a Disk Group, verify that it is not being initialized, expanded, reconstructed, verified, or scrubbed. If any of these tasks is in progress, wait for the task to complete or terminate it before performing the update. Also verify that background scrub is disabled so that it doesn't start. You can determine this using the SMU or CLI interfaces. If using a firmware smart component, it will fail and report an error if any of the above prerequisites are not met.
• Disk drives of the same model in the storage system must have the same firmware revision. If using a firmware smart component, the installer would ensure all the drives are updated.
Miscellaneous best practices
Boot from storage considerations
When booting from SAN, the best option is to create a linear Disk Group and allocate the entire Disk Group as a single LUN for the host boot device. This can improve performance for the boot device and avoid I/O latency in a highly loaded array. Booting from LUNs provisioned from pools where the volumes share all the same physical disks as the data volumes is also supported, but is not the best practice.
8Gb/16Gb switches and small form-factor pluggable transceivers
The HP MSA 2040 storage system uses specific small form-factor pluggable (SFP) transceivers that will not operate in the HP 8Gb and 16Gb switches. Likewise, the HP Fibre Channel switches use SFPs which will not operate in the HP MSA 2040.
The HP MSA 2040 controllers do not include SFPs. Qualified SFPs for the HP MSA 2040 are available for separate purchase in 4 packs. Both 8G and 16G SFPs are available to meet the customer need and budget constraints. All SFPs in an HP MSA 2040 should conform to the installation guidelines given in the product Quick Specs. SFP speeds and protocols can be mixed, but only in the specified configurations.
In the unlikely event of an HP MSA 2040 controller or SFP failure, a field replacement unit (FRU) is available. SFPs will need to be moved from the failed controller to the replacement controller.
Please see the HP Transceiver Replacement Instructions document for details found at hp.com/support/msa2040/manuals.
The MSA 1040 8Gb Dual Controller FC arrays include 8Gb FC SFPs in all ports. These are the same 8Gb FC SFPs available for the MSA 2040 and will only function in MSA arrays.
In the unlikely event of an HP MSA 1040 controller or SFP failure, a field replacement unit (FRU) is available. SFPs will need to be moved from the failed controller to the replacement controller.
MSA 1040/2040 iSCSI considerations
When using the MSA 2040 SAN controller in an iSCSI configuration or using the MSA 1040 1GbE or 10GbE iSCSI storage systems, it is a best practice to use at least three network ports per server: two for the storage (Private) LAN and one or more for the Public LAN(s). This ensures that the storage network is isolated from the other networks.
The Private LAN is the network that goes from the server to the MSA 1040 iSCSI or MSA 2040 SAN controller. This Private LAN is the storage network and the Public LAN is used for management of the MSA 1040/2040. The storage network should be isolated from the Public LAN to improve performance.

 

Figure 12. MSA 2040 SAN iSCSI Network

 

IP address scheme for the controller pair
The MSA 2040 SAN controller in iSCSI configurations or the MSA 1040 iSCSI should have ports on each controller in the same subnets to enable preferred path failover. The suggested means of doing this is to vertically combine ports into subnets. See examples below.
Example with a netmask of 255.255.255.0:
MSA 2040 SAN:
Controller A port 1: 10.10.10.100
Controller A port 2: 10.11.10.110
Controller A port 3: 10.10.10.120
Controller A port 4: 10.11.10.130
Controller B port 1: 10.10.10.140
Controller B port 2: 10.11.10.150
Controller B port 3: 10.10.10.160
Controller B port 4: 10.11.10.170
MSA 1040 iSCSI:
Controller A port 1: 10.10.10.100
Controller A port 2: 10.11.10.110
Controller B port 1: 10.10.10.120
Controller B port 2: 10.11.10.130
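As an illustration of how a Windows host attaches to these portals, the built-in iSCSI initiator PowerShell cmdlets can be used roughly as follows, using the example address 10.10.10.100 from the MSA 2040 list above (repeat for a portal in the second subnet so that both controllers and both subnets are covered):

New-IscsiTargetPortal -TargetPortalAddress 10.10.10.100
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true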
Jumbo frames
A normal Ethernet frame can contain 1500 bytes whereas a jumbo frame can contain a maximum of 9000 bytes for larger data transfers. The MSA reserves some of this frame size; the current maximum frame size is 1400 for a normal frame and 8900 for a jumbo frame. This frame maximum can change without notification. If you are using jumbo frames, make sure to enable jumbo frames on all network components in the data path.
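A quick way to confirm that jumbo frames work end to end is to send a non-fragmentable ping of roughly the maximum payload from the host to one of the iSCSI ports (the address is an example from the subnet scheme above, and the size may need to be reduced slightly to allow for IP/ICMP headers):

ping -f -l 8900 10.10.10.100

If the reply comes back without a "Packet needs to be fragmented" error, every NIC and switch port in the path is passing jumbo frames.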

Summary
HP MSA 1040/2040 administrators should determine the appropriate levels of fault tolerance and performance that best suits their needs. Understanding the workloads and environment for the MSA SAN is also important. Following the configuration options listed in this paper can help optimize the HP MSA 1040/2040 array accordingly.

 

 



Introduction to EMC VNX MirrorView

Introduction to EMC VNX MirrorView: the EMC VNX MirrorView software provides a block-level replication solution for the EMC VNX product family.

This software offers two modes of remote mirroring:

  • MirrorView/S – Synchronous
  • MirrorView/A – Asynchronous

MirrorView refers to a technology in which the active, in-use block data of the primary storage array is also stored on a storage array at a remote site. This remote site, also called the secondary or backup site, is currently one of the standard approaches to Disaster Recovery. MirrorView operates at the LUN level, meaning that during replication a LUN on the primary site is replicated to a LUN on the remote (Disaster Recovery) site.

 

MirrorView/S

MirrorView/S, also called synchronous replication, provides synchronous replication over short distances. Since everything in synchronous replication is performed in real time, the Recovery Point Objective (RPO) is zero.

The data flow in MirrorView/S is as follows:

1. The host connects to the primary VNX array and initiates a write.

2. The primary VNX replicates the data to the secondary VNX.

3. The secondary VNX acknowledges completion of the write to the primary VNX.

4. The primary VNX acknowledges completion of the write to the host.

It is very important to understand the data flow in MirrorView/S. For example, the Round Trip Time (RTT) between the two VNX arrays must not exceed 10 milliseconds. If the RTT is higher than this, the host response time will also increase, because it will take longer for the write request to be acknowledged.

 

MirrorView/A

MirrorView/A provides replication over long distances. It can be used to replicate between two VNX storage arrays where the RTT is much higher, but it must not exceed 200 milliseconds. In MirrorView Asynchronous mode, the software uses a timed update model that tracks changes on the primary site and applies them to the secondary site at the RPO interval chosen by the user.

With MirrorView/A replication, write completion is acknowledged to the host as soon as the primary array receives the data, so there is essentially no negative impact on the production environment, whereas with MirrorView/S every write must be acknowledged by both arrays.

The data flow in MirrorView/A is as follows:

1. The host connects to the primary VNX array and initiates a write.

2. The primary VNX receives the data and acknowledges the write to the host.

3. The primary VNX tracks the changes and replicates them to the secondary VNX according to the RPO specified by the user.

4. The secondary VNX receives the data and sends an acknowledgement to the primary VNX.

 

Important note: consistency groups are supported in both MirrorView/A and MirrorView/S. Consistency groups are used when data must be written in a defined order across several LUNs. For example, if your VMware environment has a datastore made up of 6 LUNs, those LUNs need to stay consistent with one another and be replicated together in order to work correctly.
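MirrorView can also be driven from Navisphere Secure CLI. The two commands below are a rough, from-memory sketch of creating a synchronous mirror of LUN 6 and adding its secondary image; the mirror name, LUN numbers and SP addresses are placeholders, and the exact switches should be confirmed in the Navisphere CLI reference:

naviseccli -h primary_SPA mirror -sync -create -name mirror_lun6 -lun 6
naviseccli -h primary_SPA mirror -sync -addimage -name mirror_lun6 -arrayhost secondary_SPA -lun 6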

 

 

 


Core EMC Navisphere commands in Naviseccli

Navisphere CLI is a command line interface tool for EMC storage system management.

You can use it for storage provisioning and to manage array configurations from any one of the managed storage systems on the LAN.

It can also be used to automate the management functions through shell scripts and batch files.

CLI commands for many functions are server based and are provided with the host agent.

The remaining CLI commands are web-based and are provided with the software that runs in storage system service processors (SPs).

Configuration and Management of storage-system using Navisphere CLI:

The following steps are involved in configuring and managing the storage system (CX series, AX series) using CLI:

  • Install the Navisphere CLI on the host that is connected to the storage. This host will be used to configure the storage system.
  • Configure the Service Processor (SP) agent on each SP in the storage system.
  • Configure the storage system with CLI
  • Configuring and managing remote mirrors (CLI is not preferred to manage mirrors)

The following are two types of Navisphere CLI:

  1. Classic CLI is the older version and does not support new features, but it will still get typical storage array jobs done.
  2. Secure CLI is the more secure and preferred interface. Secure CLI includes all the commands of Classic CLI with additional features. It also provides role-based authentication, audit trails of CLI events, and SSL-based data encryption.

Navisphere CLI is available for various OS including Windows, Solaris, Linux, AIX, HP-UX, etc.

Two EMC CLARiiON Navisphere CLI commands:

  1. naviseccli (Secure CLI) command sends storage-system management and configuration requests to a storage system over the LAN.
  2. navicli (Classic CLI) command sends storage-system management and configuration requests to an API (application programming interface) on a local or remote server.

In storage subsystem (CLARiiON, VNX, etc), it is very important to understand the following IDs:

  • LUN ID – The unique number assigned to a LUN when it is bound. When you bind a LUN, you can select the ID number. If you do not specify the LUN ID, the default LUN ID bound is 0, then 1, and so on.
  • Unique ID – It usually refers to the storage systems, SPs, HBAs and switch ports. It is the WWN (World Wide Name) or WWPN (World Wide Port Name).
  • Disk ID 000 (or 0_0_0) indicates the first bus or loop, first enclosure, and first disk, and disk ID 100 (1_0_0) indicates the second bus or loop, first enclosure, and first disk.
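A quick way to see these IDs on a live array is with the classic get commands, which Secure CLI also accepts; H1_SPA, disk 0_0_0 and LUN 6 are placeholders consistent with the examples that follow:

naviseccli -h H1_SPA getdisk 0_0_0
naviseccli -h H1_SPA getlun 6 -uid

getdisk reports the bus/enclosure/slot position and state of the disk, and getlun with the -uid switch shows the LUN's unique ID (WWN).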
  1. Create RAID Group

The below command shows how to create a RAID group 0 from disks 0 to 3 in the Disk Processor Enclosure(DPE).

naviseccli -h H1_SPA createrg 0 0_0_0 0_0_1 0_0_2 0_0_3

In this example, -h specifies the IP address or network name of the targeted SP on the desired storage system. The default, if you omit this switch, is localhost.

Since each SP has its own IP address, you must specify the IP address to each SP. Also a new RAID group has no RAID type (RAID 0, 1, 5) until it is bound. You can create more RAID groups 1, 2 and so on using the below commands:

naviseccli -h H1_SPA createrg 1 0_0_4 0_0_5 0_0_6

 

naviseccli -h H1_SPA createrg 2 0_0_7 0_0_8

This is similar to how you create a raid group from the navisphere GUI.

  2. Bind LUN on a RAID Group

In the previous example, we created a RAID group, but did not create a LUN with a specific size.

The following examples will show how to bind a LUN to a RAID group:

naviseccli -h H1_SPA bind r5 6 -rg 0 -sq gb -cap 50

In this example, we are binding a LUN with a LUN number/LUN ID 6 with a RAID type 5 to a RAID group 0 with a size of 50G. –sq indicates the size qualifier in mb or gb. You can also use the options to enable or disable rc=1 or 0(read cache), wc=1 or 0 (write cache).

  3. Create Storage Group

The next several examples will show how to create a storage group and connect a host to it.

First, create a storage group:

naviseccli -h H1_SPA storagegroup -create -gname SGroup_1

  4. Assign LUN to Storage Group

In the following example, hlu is the host LUN number. This is the number that the host will see from its end. Alu is the array LUN number, which the storage system will see from its end.

naviseccli -h H1_SPA storagegroup -addhlu -gname SGroup_1 -hlu 12 -alu 5

  5. Register the Host

Register the host as shown below by specifying the name of the host. In this example, the host server is elserver1.

naviseccli -h H1_SPA elserver1 register

  6. Connect Host to Storage Group

Finally, connect the host to the storage group by using the -connecthost option as shown below. You should also specify the storage group name appropriately.

naviseccli -h H1_SPA storagegroup -connecthost -host elserver1 -gname SGroup_1

  7. View Storage Group Details

Execute the following command to verify the details of an existing storage group.

naviseccli -h H1_SPA storagegroup -list -gname SGroup_1

Once you complete the above steps, your hosts should be able to see the newly provisioned storage.

  8. Expand RAID Group

To extend a RAID group with new set of disks, you can use the command as shown in the below example.

naviseccli -h H1_SPA chgrg 2 -expand 0_0_9  0_1_0 -lex yes -pri high

This extends the RAID group with the ID 2 with the new disks 0_0_9 & 0_1_0 with lun expansion set to yes and priority set to high.

  9. Destroy RAID Group

To remove or destroy a RAID group, use the below command.

naviseccli -h H1_SPA destroyrg 2 0_0_7 0_0_8 0_0_9 0_1_0 -rm yes -pri high

This is similar to how you destroy raid group from the navisphere GUI.

  10. Display RAID Group Status

To display the status RAID group with ID 2 use the below command.

naviseccli -h H1_SPA getrg 2 -lunlist

  11. Destroy Storage Group

To destroy a storage group called SGroup_1, you can use the command like below:

naviseccli -h H1_SPA storagegroup -destroy -gname SGroup_1

  12. Copy Data to Hotspare Disk

The naviseccli command initiates the copying of data from a failing disk to an existing hot spare while the original disk is still functioning.

Once the copy is made, the failing disk will be faulted and the hotspare will be activated. When the faulted disk is replaced, the replacement will be copied back from the hot spare.

naviseccli -h H1_SPA copytohotspare 0_0_5 -initiate

  13. LUN Migration

LUN migration is used to migrate the data from the source LUN to a destination LUN that has improved performance.

naviseccli migrate -start -source 6 -dest 7 -rate low

Number 6 and 7 in the above example are the LUN IDs.

To display the current migration sessions and its properties:

naviseccli migrate -list

  14. Create MetaLUN

MetaLUN is a type of LUN whose maximum capacity is the combined capacities of all LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN in to the larger capacity called a metaLUN. Similar to LUN, a metaLUN can belong to storage group and can be used for Snapview, MirrorView and SAN copy sessions.

You can expand a LUN or metaLUN in two ways — stripe expansion or concatenate expansion.

A stripe expansion takes the existing data on the LUN or metaLUN, and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding.

The stripe expansion may take a long time to complete. A concatenate expansion creates a new metaLUN component that includes the new LUNs and appends this component to the end of the existing LUN or metaLUN. There is no restriping of data between the original storage and the new LUNs. The concatenate operation completes immediately

To create or expand a existing metaLUN, use the below command.

naviseccli -h H1_SPA metalun -expand -base 5 -lun 2 -type c -name newMetaLUN -sq gb -cap 50

This creates a new meta LUN with the name “newMetaLUN” with the meta LUN ID 5 using the LUN ID 2 with a 50G concatenated expansion.

  15. View MetaLUN Details

To display the information about MetaLUNs, do the following:

naviseccli -h H1_SPA metalun -info

The following command will destroy a specific metaLUN. In this example, it will destroy metaLUN number 5.

naviseccli -h H1_SPA metalun -destroy -metalun 5


 



EMC VNX series storage arrays

EMC VNX


Updated 2013-10

A link to Storage Review EMC Next Generation VNX Series Released … by Brian Beeler. While the preliminary slide decks mention VNX2, the second generation VNX is just VNX. No 2 at the end.

The table below shows the second generation VNX specifications (per SP except as noted). The VNX5200 will come out next year?

VNX5200 VNX5400 VNX5600 VNX5800 VNX7600 VNX8000
Max FE ports 12? 16? 20 20 20 40
Max UltraFlex I/O 2 4 5 5 5 11
embedded I/O 2 SAS? 2 SAS 2 SAS 2 SAS 2 SAS none
Max SAS 2 6 6 6 16
Max FAST Cache 600GB 1TB 2TB 3TB 4.2TB 4.2TB
Max drive 125 250 500 750 1000 1500*
Xeon E5 1.2GHz 4c 1.8GHz 4c 2.4GHz 4c 2.0GHz 6c 2.2GHz 8c 2×2.7G 8c
Memory 16GB 16GB 24GB 32GB 64GB 128GB
Cores 4 4 4 6 8 16

 

Below is the back-end of the VNX5400 DPE. The DPE has 2 SPs. Each SP has 5 slots? On the top are the power supply, battery backup unit (BBU) and a SAS module with 2 ports. On the bottom, the first module is for management.


Below is the EMC VNX 8000 SPE back-end


I/O module options for the VNX are: quad-port 8Gb/s FC, quad-port 1Gb/s Ethernet, dual-port 10GbE. The VNX 5600 and up can also support a quad-port 6Gb/s SAS module.

 

Updated 2013-02

While going through the Flash Memory Summit 2012 slide decks, I came across the session Flash Implications in Enterprise Storage Designs by Denis Vilfort of EMC, which provided information on performance of the CLARiiON, VNX, a VNX2 and VNX Future.

A common problem with SAN vendors is that it is almost impossible to find meaningful performance information on their storage systems. The typical practice is to cite some meaningless numbers like IOPS to cache or the combined IO bandwidth of the FC ports, conveying the impression of massive IO bandwidth while actually guaranteeing nothing.

VNX (Original)

The original VNX was introduced in early 2011? The use of the new Intel Xeon 5600 (Westmere-EP) processors was progressive. The decision to employ only a single socket was not.


Basic IO functionality does not require huge CPU resources. But the second socket would double memory bandwidth, which is necessary for driving IO. Data read from storage must first be written to memory before being sent to the host? The second processor would also better support a second IOH. Finally, the additional CPU resources would support the value-add features that SAN vendors so desperately try to promote.

EMC did provide the table below on their VNX mid-range systems in the document “VNX: Storage Technology High Bandwidth Application” (h8929) showing the maximum number of front-end FC and back-end SAS channels along with the IO bandwidths for several categories. This is actually unusual for a SAN storage vendor, so good for EMC. Unfortunately, there is no detailed explanation of the IO patterns for each category.


Now obviously the maximum IO bandwidth can be reached in the maximum configuration, that is with all IO channels and all drive bays populated. There is also no question that maximum IO bandwidth requires all back-end IO ports populated and a sufficient number of front-end ports populated. (The VNX systems may support more front-end ports than necessary for configuration flexibility?)

However, it should not be necessary to employ the full set of hard disks to reach maximum IO bandwidth. This is because SAN systems are designed for capacity and IOPS. There are Microsoft Fast Track Data Warehouse version 3.0 and 4.0 documents for the EMC VNX 5300 or 5500 system. Unfortunately Microsoft has backed away from the bare table scan test of disk rate in favor of a composite metric. But it does seem to indicate that 30-50MB/s per disk is possible in the VNX.

What is needed is a document specifying the configuration strategy for high bandwidth specific to SQL Server. This includes the number and type of front-end ports, the number of back-end SAS buses, the number of disk array enclosures (DAE) on each SAS bus, the number of disks in each RAID group and other details for each significant VNX model. It is also necessary to configure the SQL Server database file layout to match the storage system structure, but that should be our responsibility as DBA.

It is of interest to note that the VNX FTDW reference architectures do not employ Fast Cache (flash caching) and (auto) tiered storage. Both of these are an outright waste of money on DW systems and actually impede performance. It can make good sense to employ a mix of 10K/15K HDD and SSD in the DW storage system, but we should use the SQL Server storage engine features (filegroups and partitioning) to place data accordingly.

A properly configured OLTP system should also employ separate HDD and SSD volumes, again using of filegroups and partitioning to place data correctly. The reason is that the database engine itself is a giant data cache, with perhaps as much as 1000GB of memory. What do we really expect to be in the 16-48GB SAN cache that is not in the 1TB database buffer cache? The IO from the database server is likely to be very misleading in terms of what data is important and whether it should be on SSD or HDD.

CLARiiON, VNX, VNX2, VNX Future Performance

Below are performance characteristics of EMC mid-range for CLARiiON, VNX, VNX2 and VNX Future. This is why I found the following diagrams highly interesting and noteworthy. Here, the CLARiiON bandwidth is cited as 3GB/s and the current VNX as 12GB/s (versus 10GB/s in the table above).


I am puzzled that the VNX is only rated at 200K IOPS. That would correspond to 200 IOPS per disk and 1000 15K HDDs at low queue depth. I would expect there to be some capability to support short-stroke and high-queue depth to achieve greater than 200 IOPS per 15K disk.

The CLARiiON CX4-960 supported 960 HDD, yet the IOPS cited corresponds to the queue depth 1 performance of 200 IOPS x 200 HDD = 40K. Was there some internal issue in the CLARiiON? I do recall a CX3-40 generating 30K IOPS over 180 x 15K HDD.

A modern SAS controller can support 80K IOPS, so the VNX 7500 with 8 back-end SAS buses should handle more than 200K IOPS (HDD or SSD), perhaps as high as 640K? So is there some limitation in the VNX storage processor (SP), perhaps the inter-SP communication? Or a limitation of the write cache, which requires writes to memory in both SPs?
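
A quick back-of-envelope sketch (Python, using only the figures discussed above) of where the IOPS ceiling might sit, comparing the disk-bound estimate at queue depth 1 with a hypothetical controller-bound estimate:

```python
# Rough IOPS ceiling estimates for a VNX 7500-class array, using the figures
# discussed above: 200 IOPS per 15K HDD at low queue depth, ~80K IOPS per
# modern SAS controller, 8 back-end SAS buses. Illustrative only.

IOPS_PER_15K_HDD  = 200      # random read, queue depth 1
MAX_HDD           = 1000     # VNX 7500 maximum drive count
IOPS_PER_SAS_CTRL = 80_000   # assumed one controller per back-end bus
SAS_BUSES         = 8        # VNX 7500 maximum back-end SAS buses

disk_bound_iops = IOPS_PER_15K_HDD * MAX_HDD      # 200,000
ctrl_bound_iops = IOPS_PER_SAS_CTRL * SAS_BUSES   # 640,000

print(f"disk-bound ceiling:       {disk_bound_iops:,} IOPS")
print(f"controller-bound ceiling: {ctrl_bound_iops:,} IOPS")
# The 200K rating matches the disk-bound figure at queue depth 1, well short
# of what the back-end SAS controllers could in principle deliver.
```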

VNX2?

Below (I suppose) is the architecture of the new VNX2. (Perhaps VNX2 will come out in May with EMC World?) In addition to transitioning from the Intel Xeon 5600 (Westmere) to the E5-2600 series (Sandy Bridge EP), the diagram indicates that the new VNX2 will be dual-processor (socket) instead of the single socket used across the entire original VNX line. Considering that the 5500 and up are not entry systems, the single-socket design was disappointing.

[Figure: EMC VNX]

VNX2 provides a 5X increase in IOPS, to 1M, and a 2.3X increase in IO bandwidth, to 28GB/s. LSI mentions a FastPath option that dramatically increases the IOPS capability of their RAID controllers from 80K to 140-150K IOPS; my understanding is that this is done by completely disabling the cache on the RAID controller. The resources needed to implement caching for a large array of HDDs can actually impede IOPS performance, and caching is even more of a handicap on an array of SSDs.

The bandwidth objective is also interesting. The 12GB/s IO bandwidth of the original VNX would require 15-16 FC ports at 8Gbps (700-800MBps per port) on the front-end. The VNX 7500 has a maximum of 32 FC ports, implying 8 quad-port FC HBAs, 4 per SP.

The 8 back-end SAS buses imply 4 dual-port SAS HBAs per SP, as each SAS bus requires 1 SAS port to each SP? Together with the 4 quad-port FC HBAs, this implies 8 HBAs per SP? The Intel Xeon 5600 processor connects over QPI to a 5520 IOH with 36 PCI-E gen 2 lanes, supporting 4 x8 and 1 x4 slots, plus a x4 Gen1 link for other functions.

In addition, a link is needed for inter-SP communication. If one x8 PCI-E gen2 slot is used for this, then write bandwidth would be limited to 3.2GB/s (per SP?). A single socket should only be able to drive 1 IOH even though it is possible to connect 2. Perhaps the VNX 7500 is dual-socket?

An increase to 28GB/s could require 40 FC ports at 8Gbps (if 700MB/s is the practical limit of one port). A 2-socket Xeon E5-2600 system should be able to handle this easily, with 4 memory channels and 5 x8 PCI-E gen3 slots per socket.
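
As a sanity check, a small Python sketch of the front-end FC port counts implied by a given bandwidth target, assuming 700-800MB/s of usable throughput per 8Gbps port (the per-port figure used above); fc_ports_needed is just an illustrative helper, not anything from the EMC documents:

```python
import math

def fc_ports_needed(target_gb_s, mb_s_per_port):
    """Number of FC ports required to carry target_gb_s at mb_s_per_port each."""
    return math.ceil(target_gb_s * 1000 / mb_s_per_port)

for target in (12, 28):                  # original VNX vs. VNX2 targets
    lo = fc_ports_needed(target, 800)    # optimistic per-port throughput
    hi = fc_ports_needed(target, 700)    # conservative per-port throughput
    print(f"{target} GB/s -> {lo} to {hi} x 8Gbps FC ports")
# 12 GB/s -> 15 to 18 ports (the 15-16 estimate above assumes ~750-800 MB/s)
# 28 GB/s -> 35 to 40 ports
```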

VNX Future?

The future VNX is cited as 5M IOPS and 112GB/s. I assume this might involve the new NVM Express driver architecture, which supports distributed queues and high parallelism. Perhaps the reason both VNX2 and VNX Future are described is that the basic platform is ready but not all the components needed to support the full bandwidth are?

[Figure: EMC VNX]

The 5M IOPS should be no problem with an array of SSDs, and the new NVM Express architecture of course. But the 112GB/s bandwidth is curious. The number of FC ports, even at a future 16Gbit/s, is too large to be practical. When expensive storage systems are finally able to do serious IO bandwidth, it will also be time to ditch FC and FCoE. Perhaps the VNX Future will support InfiniBand? The purpose of having extreme IO bandwidth capability is to be able to deliver all of it to the database server on demand. If not, then the database server should have its own storage system.

The bandwidth is also too high for even a dual-socket E5-2600. Each Xeon E5-2600 has 40 PCI-E gen3 lanes, enough for 5 x8 slots. The nominal bandwidth per PCIe gen3 lane is 1GB/s, but the realizable bandwidth might be only 800MB/s per lane, or 6.4GB/s per x8 slot. A 2-socket system could therefore in theory drive 64GB/s. The storage system comprises 2 SPs, each SP being a 2-socket E5-2600 system.

To support 112GB/s, each SP must be able to simultaneously move 56GB/s on the storage side and 56GB/s on the host-side ports, for a total of 112GB/s through each SP. In addition, suppose the 112GB/s bandwidth is for reads and that the write bandwidth is 56GB/s. Then it is also necessary to support 56GB/s over the inter-SP link to guarantee write-cache coherency (unless it has been decided that write caching flash on the SP is stupid).
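
The same arithmetic as a short Python sketch; the 2-SP split, the read/write assumption and the mirrored write cache are exactly the assumptions stated above, not published VNX Future specifications:

```python
# Back-of-envelope bandwidth budget per storage processor (SP) for the
# speculated 112 GB/s target: 2 SPs, reads at the full 112 GB/s, writes at
# half that, write cache mirrored across the inter-SP link.

TARGET_READ_GB_S  = 112
TARGET_WRITE_GB_S = 56          # assumption: half of the read target
NUM_SP            = 2

host_side_per_sp    = TARGET_READ_GB_S / NUM_SP   # 56 GB/s out to hosts
storage_side_per_sp = TARGET_READ_GB_S / NUM_SP   # 56 GB/s in from the drives
inter_sp_mirror     = TARGET_WRITE_GB_S           # write-cache mirroring traffic

total_per_sp = host_side_per_sp + storage_side_per_sp
print(f"per SP: {host_side_per_sp:.0f} + {storage_side_per_sp:.0f} "
      f"= {total_per_sp:.0f} GB/s through SP memory")
print(f"inter-SP link for mirrored writes: {inter_sp_mirror} GB/s")

# For comparison: 2 x 40 PCIe gen3 lanes per 2-socket E5-2600 SP at a
# realizable ~0.8 GB/s per lane is roughly 64 GB/s, which is why the
# 112 GB/s figure looks out of reach for a 2-socket SP.
```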

Is it possible the VNX Future has more than 2 SPs? Perhaps each SP is a 2-socket E5-4600 system, with the 2 SPs linked via QPI? Basically this would be a 4-socket system, but running as 2 separate nodes, each node having its own OS image. Or is it simply a 4-socket system? Later this year, Intel should be releasing Ivy Bridge-EX, which might have more bandwidth? Personally I am inclined to prefer a multi-SP system over a 4-socket SP.

Never mind, I think Haswell-EP will have 64 PCIe gen4 lanes at 16GT/s. That is 2GB/s per lane raw and 1.6GB/s per lane net, or 12.8GB/s per x8 slot and about 100GB/s per socket. I still think it would be a good trick if one SP could communicate with the other over QPI instead of PCIe. Write caching SSD at the SP level is probably stupid if the flash controller is already doing this? Perhaps the SP memory should be used for SSD metadata? In any case, there should be coordination between what each component does.

Summary

It is good to know that EMC is finally getting serious about IO bandwidth. I was of the opinion that the reason Oracle got into the storage business was that they were tired of hearing complaints from customers resulting from bad IO performance on the multi-million dollar SAN.

My concern is that the SAN vendor field engineers have been so thoroughly indoctrinated in the SaaS notion that only capacity matters, while having zero knowledge of bandwidth, that they will not be able to properly implement the IO bandwidth capability of the existing VNX, not to mention the even higher bandwidth of VNX2 and VNX Future.

unsorted misc

[Figures: EMC VNX]

EMC VNX Early 2011?

VNX came out in early 2011 or late 2010? All VNX models use Xeon 5600 processors, ranging from 2.13 to 2.8GHz with four to six cores (actually from 1.6GHz and 2 cores?). The 5100, 5300 and 5500 consist of two Disk-processor enclosures (DPE) that house both the storage processors and the first tray of disks. The 5700 and 7500 models consist of Storage-processor enclosures (SPE) that house only the storage processors. Two DPEs or SPEs comprise an array.

                          VNX 5100       VNX 5300       VNX 5500       VNX 5700       VNX 7500
Max Drives                75             125            250            500            1000
Enclosure                 3U Disk + SP   3U Disk + SP   3U Disk + SP   2U SP          2U SP
DAE Options               25 x 2.5″-2U   25 x 2.5″-2U   25 x 2.5″-2U   25 x 2.5″-2U   25 x 2.5″-2U
                          15 x 3.5″-3U   15 x 3.5″-3U   15 x 3.5″-3U   15 x 3.5″-3U   15 x 3.5″-3U
                                                                       60 x 3.5″-4U   60 x 3.5″-4U
Memory per Array          8GB            16GB           24GB           36GB           48GB
Max UltraFlex IO          0              4              4              10             10
  Modules per Array
Embedded IO Ports         8 FC & 4 SAS   8 FC & 4 SAS   8 FC & 4 SAS   -              -
  per Array
Max FC Ports per Array    8              16             16             24             32
SAS Buses (to DAEs)       2              2              2 or 6         4              4 or 8
Freq                      1.6GHz         1.6GHz         2.13GHz        2.4GHz         2.8GHz
Cores                     2              4              4              4              6
Mem/DPE                   4GB            8GB            12GB           18GB           24GB

The VNX front-end is FC with options for iSCSI. The back-end is all SAS (in the previous generation, this was FC). The 5100-5500 models have 4 FC ports and 1 SAS bus (2 ports) embedded per DPE, for 8 FC ports and 2 SAS buses per array. The 5100 does not have IO expansion capability. The 5300 and 5500 have 2 IO expansion slots per DPE, for 4 total in the array. The 5300 only allows front-end IO expansion, while the 5500 allows expansion of both front-end and back-end IO.

The 5300 and higher VNX models can also act as file servers with X-Blades in the IO expansion slots. This capability is not relevant to high-performance database IO and is not considered here.

Disk-array enclosure (DAE) options are now 25 x 2.5″ in 2U, 15 x 3.5″ in 3U, or 60 x 3.5″ in 4U. In the 60-disk DAE the hard disk long dimension is vertical, with the 1″ disk height aligned along the width of the DAE for 12 across, and this arrangement is 5 deep.

An UltraFlex IO Module attaches to a PCI-E slot? (x8?). Module options are quad-port FC (up to 8Gbps), dual-port 10GbE, quad-port 1GbE, or 2 SAS buses (4 ports). So a module could be 1 SAS bus (2 ports), 4 FC ports, etc.?

Each SAS port is 4 lanes at 6Gbps, and 2 ports make 1 “bus” with redundant paths.
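
A small sketch of the raw back-end bandwidth this implies; the 8b/10b payload factor and the decision not to count the redundant second port of each bus as additive bandwidth are my assumptions, not figures from the EMC documents:

```python
# Raw bandwidth of one back-end SAS port: 4 lanes at 6 Gbps each, with 8b/10b
# encoding giving roughly 600 MB/s of payload per lane. The second port of a
# "bus" is treated here as a redundant path, not additional bandwidth.

LANES_PER_PORT = 4
GBPS_PER_LANE  = 6
MB_S_PER_LANE  = GBPS_PER_LANE * 1000 / 10      # 8b/10b: 6 Gbps -> ~600 MB/s

per_port_mb_s = LANES_PER_PORT * MB_S_PER_LANE  # ~2400 MB/s per port/bus
print(f"per SAS bus: ~{per_port_mb_s:.0f} MB/s")

for buses in (2, 4, 8):   # back-end bus counts from the table above
    print(f"{buses} buses: ~{buses * per_port_mb_s / 1000:.1f} GB/s aggregate")
# 8 buses works out to ~19 GB/s raw on the back end, comfortably above the
# 10,000 MB/s DW bandwidth listed for the VNX 7500.
```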

The first 4 physical disks reserve 192GB per disk.

There is an EMC document, h8929-vnx-high-bandwidth-apps-ep, with useful information; more later. The diagram below is from that EMC VNX document. The VNX storage engine core is a Westmere processor (1 or 2?) with 3 memory channels and IO adapters on PCI-E gen 2. The back-end is SAS, and the front-end to the host can be FC, FCoE or iSCSI.

Below is the VNX system architecture from an EMC slide deck titled “Introducing VNX Series”. Note the EMC copyright, so I suppose I should get permission to use it? In block mode, only the VNX SP is required. In file mode, up to four X-Blades can be configured?

Below from: Introducing VNX Series, Customer Technical Update

[Figure: EMC VNX]

Below from h8929

[Figure: EMC VNX]

As far as I can tell, the VNX models have a single Xeon 5600 processor (socket). While it may not take much CPU compute capability to support a SAN storage system, there is a significant difference in IO capability with 2 processor sockets populated (6 memory channels instead of 3), noting that IO must be written to memory on the inbound side, then read from memory on the outbound side.

                                VNX 5100  VNX 5300  VNX 5500  VNX 5500*  VNX 5700  VNX 7500
Backend SAS Buses               2         2         2         6*         4         4 or 8
Max Frontend FC Ports           8         16        16        16         24        32
DSS Bandwidth (MB/s)            2300      3600      4200      4200       6400      7300
DW Bandwidth (MB/s)             2000      3200      4200      6400       6400      10000
Backup BW, cache bypass (MB/s)  700       900       1700      1900       3300      7500
Rich Media BW (MB/s)            3000      4100      5700      5700       6200      9400

Note: * VNX 5500 Hi-BW option consumes all the flex-IO modules, and bandwidth is limited by FC front-end?

[Figure: EMC VNX]

The SAS IO expansion module adds 4 SAS ports for a total of 6 ports, since 2 are integrated. However, only a total of 4 ports (2 buses) are used per DPE.

Full Data Warehouse bandwidth requires at least 130 x 15K SAS drives.
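
As a sanity check on the 130-drive figure, a small sketch of how many drives are needed to sustain a given data-warehouse scan rate at an assumed per-drive sequential rate; the 50 and 100 MB/s per-drive figures are the targets discussed elsewhere in this post, and the choice of the 6,400MB/s DW figure as the reference point is mine:

```python
import math

def drives_needed(target_mb_s, per_drive_mb_s):
    """Drives required to sustain target_mb_s at per_drive_mb_s each."""
    return math.ceil(target_mb_s / per_drive_mb_s)

for per_drive in (50, 100):
    n = drives_needed(6400, per_drive)   # 6,400 MB/s DW figure from the table
    print(f"at {per_drive} MB/s per drive: {n} drives for 6,400 MB/s")
# At ~50 MB/s per drive this works out to 128 drives, in line with the
# "at least 130 x 15K SAS drives" statement above.
```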

h8177

h8177 says:
VNX5100 can support 1,500MB/s per DPE or 3,000MB/s for the complete 5100 unit.
VNX5300 can support 3,700MB/s for the complete unit.
VNX5500 can support 4,200MB/s for the complete unit on integrated back-end SAS ports, and 6,000MB/s with additional back-end SAS ports.

Flare 31.5 discusses the VNX5500 supporting additional front-end and back-end ports in the 2 expansion slots. To achieve the full 6,000MB/s, the integrated 8Gbps FC ports must be used in combination with additional front-end ports in one of the expansion slots.

h8297

The Cisco/EMC version of the SQL Server Fast Track Data Warehouse employs one VNX 5300 with 75 disks and two 5300s with a total of 150 disks, each VNX with 4 x 10Gb FCoE connections. The throughput is 1,985MB/s for the single 5300 and 3,419MB/s for two. This is well below the list bandwidth of 3GB/s+ for the VNX5300, which may have required 130 disks?

Of the 75 disks on each 5300, only 60 are allocated to data, so the 1,985MB/s bandwidth corresponds to 33MB/s per disk, well below the Microsoft FTDW reference of 100MB/s per disk. The most modern 15K 2.5in hard drives are rated for 202MB/s on the outer tracks (Seagate Savvio 15K.3), and the Seagate Savvio 10K.5 is rated for 168MB/s on the outer tracks. Considering that perfect sequential placement is difficult to achieve, the Microsoft FTDW target of 100MB/s per disk, or even a lower target of 50MB/s per disk, is reasonable, but 33MB/s per disk is rather low.
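
The per-disk arithmetic, as a short Python sketch; the 120-data-disk figure for the two-array configuration is my assumption that the same 60-of-75 data-disk split applies to each array:

```python
# Per-disk throughput implied by the Cisco/EMC Fast Track reference
# configuration (h8297): 60 of 75 disks per VNX 5300 allocated to data.

configs = {
    "1 x VNX 5300": (1985, 60),    # measured MB/s, data disks
    "2 x VNX 5300": (3419, 120),   # assumes 60 data disks per array
}

for name, (mb_s, disks) in configs.items():
    print(f"{name}: {mb_s} MB/s / {disks} disks = {mb_s / disks:.0f} MB/s per disk")
# Roughly 28-33 MB/s per data disk, far below the 100 MB/s per disk used in
# the Microsoft FTDW reference architecture.
```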

[Figure: EMC VNX]

Data warehouse IO is 512KB random read.
DSS is 64KB sequential read.
Backup is 512KB sequential write.
Rich media is 256KB sequential read.

EMC documents referenced:
h8177: Deploying EMC VNX Unified Storage Systems for Data Warehouse Applications (Dec 2011), h8177-deploying-vnx-data-warehouse-wp
h8217: Introduction to the EMC VNX Series, A Detailed Review (Sep 2011), h8217-introduction-vnx-wp
h8046: h8046-clariion-celerra-unified-fast-cache-wp
h8297: Cisco Reference Configurations for Microsoft SQL Server 2008 R2 Fast Track Data Warehouse 3.0 with EMC VNX5300 Series Storage Systems
h8929: VNX: Storage Technology High Bandwidth Applications



Choosing the right HP server

With more than 30 million sold, HP ProLiant servers are the undisputed market share leaders. This high level of market acceptance is rooted in part in our commitment to setting the infrastructure standard for the server industry, a standard that delivers trust and confidence.
Whether you need a departmental server, an enterprise data center, or anything in between, HP can meet your needs precisely. You can choose the right level of performance, availability, expandability and manageability. HP offers ProLiant servers in four distinct families to meet the full range of customer needs.

We currently offer the industry's broadest portfolio of servers, from blades (BL), rack-optimized (DL) and tower servers (ML) through to extreme HyperScale (SL).

To help you identify your current equipment model, choose the best server for your environment and find the best fit for your needs, please see the HP ProLiant Gen8 Model guide.

 

HP ProLiant ML family
Expandable tower servers, ideal for remote offices and growing businesses.

HP ProLiant ML family servers
HP ProLiant ML servers are flexible, expandable tower servers. They are an ideal choice for remote branch offices, data centers or SMBs that need a server able to deliver the highest performance with the equipment on hand, while remaining expandable for future growth.

The ProLiant ML300 series is available in both tower and rack models. These servers are ideal for everything from remote branch office applications to data centers, and for SMBs that need large amounts of internal memory and I/O, essential computing and expandable capacity. The ML300 series includes the latest Gen8 technology in memory capacity and management.

The HP ProLiant Gen8 ML family includes:
- HP ProLiant ML300p series: standard rack or tower servers for flexible data centers, with leading, adaptable performance.
- HP ProLiant ML300e series: tower servers that redefine ease of use.

 

HP ProLiant DL family
Versatile rack servers, optimized to balance efficiency and manageability.

HP ProLiant DL family servers
HP ProLiant DL servers are versatile, rack-optimized servers that balance manageability, performance and efficiency. HP DL servers represent decades of engineering know-how, combining that experience with current work to accelerate the adoption of new business computing technology.

HP ProLiant DL family servers are powerful rack servers in form factors from 1U to 8U, ideal for compute-intensive workloads, with a range of internal capacity options in a compact rack package. For Gen8, servers with more processor cores, memory and internal capacity deliver upgraded capability along with the next generation of HP Smart Array technology.

The HP ProLiant Gen8 DL family includes:
- HP ProLiant DL100 series: high compute performance in a dense, cost-effective design.
- HP ProLiant DL300p series: standard data center rack servers with outstanding, adaptable performance.
- HP ProLiant DL300e servers: rack servers that redefine essential, easy-to-use computing.
- HP ProLiant DL500 series: scale-up servers for heavy compute workloads.

 

HP ProLiant SL family
Purpose-built for the world's most demanding data centers.

HP ProLiant SL family servers
HP ProLiant SL servers are purpose-built systems for the most demanding HyperScale environments and are ideal for web, hosting and cloud service providers and for high-performance computing environments. The SL server family enables rapid deployment, greater agility and lower operating costs.

The HP ProLiant Gen8 SL family includes:
- HP ProLiant SL6500 servers: purpose-built for high-performance computing using modular servers.
- HP ProLiant SL6500 servers: include high-performance features such as FDR InfiniBand and integrated GPUs, with shared infrastructure innovation that dramatically reduces costs and increases energy efficiency.

