Jul 09, 2015: Hey, has anyone got the LSI SMIS provider working with ESXi 6? The LSI support site only lists downloads for 5.0 and 5.x. I tried both, but when I try to discover the host with MegaRAID Storage Manager, I'm unable to find it.

PowerCLI Script to get ESXi Network/Storage Firmware and Driver Version (posted by fgrehl on March 11, 2017). For a healthy vSphere virtualization infrastructure, and to receive support from VMware, it is important to verify that IO devices are listed in VMware's Compatibility Guide/Hardware Compatibility List (HCL) with their correct driver and firmware versions.
I've been running ESXi for a while now on several small hosts (5.x/6.0, 3 to 6 VMs each). I'm very happy with it, but two issues have always bugged me, and now that I'm growing towards larger, more complex installs I want to learn how to address them. Typically with bare-metal installs you have a piece of software that talks to your LSI/Adaptec RAID controller so you can see the health of the array and drives.
With ESXi this is not the case. In searching the web I see there is PCI passthrough, which I think is what I need to do, but the warnings are very ominous. Assuming I boot off an on-board USB stick and I only have a single RAID controller in the host, can I pass it through to one of the VMs and install the software to monitor it? Will doing this affect the ability of the host to boot? Will the other VMs on this host still be able to see the array as a datastore?
My second question is similar, just less dangerous. I've read a lot and tried several solutions, but I have never been able to get a VM to have access to my UPS. I have several CyberPower units with USB/serial connections. I see on the CyberPower site they have some kind of VM appliance now. Does anyone have experience with it, or know of a better/simpler way to shut down a host (or hosts) in the event of a power outage?
Thanks, Ralph.

Thanks for the quick replies! All the hosts I've built myself, all on Supermicro motherboards. Specifically on the RAID side I have:
1 host: LSI MegaRAID SATA/SAS 9260-4i card
2 hosts: Supermicro MBD-X10SL7-F-O with onboard LSI 2308
2 hosts: Adaptec RAID 5805 card
I'm working on the procedure in the LSI link, thanks!
Sounds like it will find and manage LSI controllers on any host in my domain (same subnet). I'm hoping the 5.5 version works with 6.0. My two hosts with onboard LSI RAID are in very basic sites (2012 R2 Essentials plus two Windows 7 Pro VMs) where I don't currently have vCenter. They both have Essentials licenses, so I can install it, but I just haven't felt the need to add the complexity. Any thoughts on whether I should install vCenter or use the vCSA? The two 5805s are in my main site with vCenter, but being an older card I'm not sure if the 5.0 driver (aacraidvmwaredrivers1.1.7-29100.tgz) from 2012 will work with v6.0.
Will check it out.
It's older in terms of generation: in the Dell world, the H700 is at least two generations back. It can support SSDs, but I have never done it, and I'm not sure whether the difference between enterprise and consumer-grade SSDs matters. I had a long, drawn-out post, but thought twice about posting it. Bottom line is you may want to do some testing on your own with the SSDs you have on the LSI 2308 and see how they perform for you before buying anything. Your 8-10 VM load is half of my 20 VM lab, but my lab is not IO intensive and my storage is a NAS.
I have seen my PERC H700 do 400-600MB/s reads with 8 7200RPM SATA drives in RAID10. The drives sit in enclosures/adapters, and I never tested direct connect from the H700 to the drives to know if it would be faster. I don't remember the writes, but of course they were slower. So yes, the controller will do the 200-300MB/sec you asked about, but it depends on your config (RAID level + number of spindles used). I forgot to mention: my testing was on Windows 2008 R2 and bare-metal FreeNAS 8 or early 9.
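The point about RAID level plus spindle count can be put into rough numbers. Here is a back-of-the-envelope sketch; the 75 MB/s per-drive figure is an assumption for illustration (real 7200 RPM drives and controllers vary widely), and controller, cache, and bus overhead are ignored.

```python
# Rough sequential-throughput ceilings for simple RAID levels.
# Per-drive speed is an assumed illustrative figure, not a measurement.

def raid_seq_read_estimate(drives: int, level: str, per_drive_mb_s: float = 75.0) -> float:
    """Upper-bound sequential read estimate in MB/s.

    RAID0, RAID1 and RAID10 can all stream reads from every spindle
    (mirrors can serve reads from each copy), so reads scale with
    drive count for all three.
    """
    if level not in {"RAID0", "RAID1", "RAID10"}:
        raise ValueError(f"unsupported level: {level}")
    return drives * per_drive_mb_s

def raid_seq_write_estimate(drives: int, level: str, per_drive_mb_s: float = 75.0) -> float:
    """Upper-bound sequential write estimate in MB/s.

    Mirrored levels write every block twice, so only half the
    spindles contribute to write throughput.
    """
    if level == "RAID0":
        return drives * per_drive_mb_s
    if level in {"RAID1", "RAID10"}:
        return (drives // 2) * per_drive_mb_s
    raise ValueError(f"unsupported level: {level}")

# 8 x 7200 RPM SATA drives in RAID10, as on the PERC H700 above:
print(raid_seq_read_estimate(8, "RAID10"))   # 600.0 MB/s upper bound
print(raid_seq_write_estimate(8, "RAID10"))  # 300.0 MB/s upper bound
```

The 600 MB/s ceiling lines up with the 400-600 MB/s reads observed above, which is why the 200-300 MB/s target is realistic with enough spindles.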
Yeah, my next post was going to outline what I was going to do next before purchasing anything!
The plan is to format the server and install ESXi on a USB key. Then I was going to test the following with my SSD drives (the Samsung Pro 840 128GB drives) and HDDs (1TB and 2TB drives) on the LSI 2308:
1) Single SSD
2) Mirrored SSD
3) Single HDD
I was just going to set up a couple of test VMs and copy files between them to see what sort of read/write speed I get from the LSI 2308. I was also going to flash the firmware back to IR mode to test the mirroring. I'm really hoping ESXi sees the mirrored volume and not the individual drives. If the speeds are decent then GREAT!
I don't need a RAID card. If the speeds aren't great (which is what I am expecting, since ESXi doesn't do caching) then I am keen on the LSI MegaRAID SAS 9260-8i RAID card. When setting up the system for production use, the drives will probably be set up as follows:
1) Mirrored 128GB Samsung Pro
2) Mirrored 512GB Samsung Pro
3) Single 1TB HDD
4) Single 2TB HDD
So will I get the 200-300MB/sec speed with the 9260-8i RAID card with the above config for the SSD drives?
Yes, but this is going to be too much: on eBay they start around £350-400. I'm still quite keen on the 9271-8i (feel free to recommend another card if I am wrong!). It has PCI Express 3.0 and 1GB of cache memory.
I have found one online for £250, and it includes the LSICVM01 CacheVault & battery. Looking at the accessories on the LSI website for the 9271-8i, the one that caught my eye and interests me is the MegaRAID FastPath software. I contacted the seller of the 9271-8i card and asked him if I needed any license keys to unlock the full potential of the card, and he said all of the features are available without a further license (I emailed him again asking about FastPath).

Does anyone know what kind of performance I can expect with SSD drives connected to this card in RAID 1 (mirroring) when used in an ESXi host? Is MegaRAID FastPath needed to fully unlock the potential of this card? I'm also interested to hear anyone's experience with or thoughts on the CacheVault & battery that this card comes with!

Just to expand on my previous post, I started looking at Adaptec cards and found this amazing card. It's more than my budget allowed, but oh well! About £380 on eBay. It has great specs and has VMware-certified drivers for ESXi 5.5.
It also comes with the backup battery built in. Nice! The one thing that caught my eye was this: operating temperature 0°C to 50°C (with 200 LFM airflow). 200 LFM is the first time I have come across that term, so I looked into it. I use the following fans in my case: two of these in the front of the case and one at the rear (LFM rating is about 570 each), and one of these at the top of my case (LFM rating is about 687).
(The CPU fans do about 650 LFM.) The reason I mention this is that I have been reading about some people's experience with RAID cards that have overheated (the RAID chip gets hot!). Do you think that with the cooling in my case I'll have sufficient airflow for this Adaptec RAID card? Anyone have any experience with this RAID card in an ESXi 5.5 host?

Today I finally managed to install ESXi 5.5 Update 2 on my server. I basically had the following in the server to test the disk performance:
1) The LSI is in IT mode with firmware version 16
2) I set up two datastores in ESXi, one on each of the Samsung Pro 840 SSDs
3) I installed one Windows Server 2012 R2 virtual machine on disk 1 and another on disk 2
I then did some big file copies between the two machines and copied files between different folders in the same VM. The performance surprised me!
It was better than I thought. I was seeing speeds of 100MB/s and more. In some cases I was getting 200-300MB/s.
Sorry I don't have more info yet but I really rushed this test before having to pop out. I am hoping to do some more tests tomorrow. I still want to test flashing the firmware to IR mode again and upgrading to firmware version 19.
OK, I ran HD Tune on each server and here are the results.
Server 1: minimum 227MB/s, maximum 2399MB/s, average 466MB/s, burst rate 257MB/s
Server 2: minimum 237MB/s, maximum 3181MB/s, average 743MB/s, burst rate 238MB/s
How does this look? The maximum speeds were a bit surprising. I'm only using firmware version 16, since that is what was available when I flashed the LSI 2308 to IT mode in January. It does need updating. I can't remember: do you have to unplug your SATA drives before upgrading the firmware?
I have been thinking about the heat issue too. Will a RAID card run OK in my case? I have lots of high-quality fans in the case (3 x 120mm and 1 x 140mm) and the CPU has dual 120mm fans. I was thinking of maybe getting a 'PCI card' fan to put next to the RAID card?

Those numbers look better for local storage via HD Tune. I flashed mine to version 19 of the IT firmware from Supermicro's site.
I do not see a reason, or know of a requirement, to detach drives before updating the firmware. I do perform a complete power down (not just a reset) after a successful flash. As far as heat goes, it will depend on the equipment in use: case, fans, devices generating heat, etc.
In my situation, my storage box is an NZXT Source 210 case with 5 fans. I only run 8 3.5" 7200RPM drives and nothing else. I was seeing my CPU temp stay around 55°C while ambient temps were 80°F. I found out I had made a rookie mistake: my top fan (the exhaust sitting just above the CPU) was backwards and pulling air into the case.
Once I flipped that around I now see CPU temps around 40°C with the ambient temp currently around 75°F. A 15°C difference; I was pretty amazed myself.
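On the 200 LFM airflow spec mentioned earlier: LFM (linear feet per minute) is just CFM divided by the cross-sectional area the air moves through, so a fan's CFM rating can be sanity-checked against it. A minimal sketch, where the case cross-section is an assumed example you would replace with your own measurements:

```python
# Convert fan airflow (CFM) to linear air velocity (LFM) over a given
# cross-section. By definition, CFM (cubic feet/min) spread across an
# area in square feet gives LFM (linear feet/min).

MM2_PER_FT2 = 92_903.04  # 1 ft^2 = 92,903.04 mm^2

def lfm(cfm: float, area_mm2: float) -> float:
    """Linear feet per minute of air moving `cfm` through `area_mm2`."""
    return cfm / (area_mm2 / MM2_PER_FT2)

# Example (assumed numbers): a 60 CFM fan pushing air through a
# 200 mm x 400 mm case cross-section.
area = 200 * 400  # mm^2
print(round(lfm(60, area)))  # ~70 LFM
```

The takeaway is that airflow spread across a whole case cross-section is far slower than the fan's face velocity, which is why cards with an LFM requirement often need directed airflow (a shroud or a nearby fan) rather than just good case fans.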
Heat is a concern for any performance add-in cards (RAID or 10Gb NICs), so you do want to make sure there is airflow around them. I am not using any add-in cards in mine at this time. My old storage box (a PowerEdge T110 with a PERC H700) had a shroud that helped direct fan flow over the add-in cards, memory and CPU, so I never saw any issues.

With LSI it is very important that your drivers and firmware are on the same 'phase', as they call it (we would just call it a version); LSI won't even support you unless they are on the same phase. A phase mismatch can result in everything from performance and stability issues to outright data loss. The version of FreeBSD that the current release of FreeNAS is based on has Phase 16 drivers, so it is best to stick with Phase 16 firmware. The next FreeNAS release will presumably be 9.3.0 and will be on Phase 17 firmware, requiring a re-flash. In future revisions they have written a script to warn you about these firmware-to-driver mismatches. For right now, just read the release notes; they will say which phase the included drivers are at.
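The check that warning script performs can be sketched very simply: the phase is the leading number of the version string (e.g. 16 in "16.00.00.00"), and firmware and driver just need to agree on it. The version-string format here is an assumption for illustration; check your controller's actual reporting before relying on it.

```python
# Minimal sketch of an LSI firmware/driver phase match check.
# Assumes version strings of the form "16.00.00.00", where the
# leading numeric field is the phase; adapt the parsing to whatever
# your controller and driver actually report.

def phase(version: str) -> int:
    """Extract the phase (leading numeric field) from a version string."""
    return int(version.split(".")[0])

def check_phase_match(firmware: str, driver: str) -> str:
    fw, drv = phase(firmware), phase(driver)
    if fw == drv:
        return f"OK: firmware and driver are both on Phase {fw}"
    return (f"WARNING: firmware Phase {fw} does not match driver "
            f"Phase {drv}; reflash to matching phases")

print(check_phase_match("16.00.00.00", "16.00.01.00"))
print(check_phase_match("19.00.00.00", "16.00.00.00"))
```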
I don't use SSDs on the RAID controller on my lab machine; those are hooked up directly to SATA motherboard ports, so I can't help you there. I just have slow SATA drives on the RAID. I'm also not sure of the pros/cons of the firmware.
I've used both long-term, with no issues either way. I've seen some people say you shouldn't use the M5015 with the IBM firmware in non-IBM systems, but at the client site it's running in an HP server with retail WD Red and WD RE4 drives in different arrays, and it's been rock solid for 18 months or so. Since my home server is a lab machine it gets rebooted a bit more often (5-6 times a year), so I went with the LSI firmware. Also rock solid for probably 2.5 years now.

I had a read through, and my Samsung 840 Pro 128GB SSD drives are listed there. I was planning on getting two Samsung Pro 850 512GB SSD drives, but I am concerned that they will not work with this RAID card since they are not on the list.
Would it be better to get the Samsung Pro 840 512GB drives instead, since they are on the interoperability list? I know they are 'older' but they are still excellent SSD drives. The specs between the two drives are almost identical; I just don't want to have compatibility issues! Also, are these the correct cables to use from the IBM RAID card (mini-SAS to 4 SATA drives)? There are so many different types of these cables, so I just want to make sure!

I ended up ordering the IBM ServeRAID M5015 SAS/SATA controller today. I was having a look through my motherboard manual (Supermicro X10SL7-F) and it has two slots on it:
PCI Express 2.0 x4 in an x8 slot (= 2000MB/sec)
PCI Express 3.0 x8 in an x16 slot (= 8000MB/sec)
I know the IBM ServeRAID M5015 has an x8 PCI Express 2.0 host interface (= 4000MB/sec), so which slot should I use on the motherboard?
Is the first one (PCI Express 2.0 x4 in an x8 slot) too slow for the IBM card? Will I get a boost in speed if I use the PCI Express 3.0 x8 in x16 slot?

For drive compatibility, I would guess that the 850s would work if the 840s do. But getting drives you know are compatible will potentially save you time and energy. If you won't see any practical difference in performance between the drives, you might as well get the cheaper drives that are compatible (assuming the 840s are cheaper). You'll need SFF-8087 FORWARD breakout cables for your card, since you're connecting the single plug on your RAID card to four individual drives. I can't tell if the cable on eBay is that exact cable type.
There are REVERSE cables, for when you want to connect four SATA ports to an SFF-8087 connector; these are less commonly used internally. I believe that putting your RAID card in the x8 PCI-e 3.0 slot will get the full speed of the card's interface, as the PCI-e 3.0 slot will downgrade itself to PCI-e 2.0.
That will eliminate that interface as a bottleneck to the largest extent possible. Unless you have another device that can make better use of the bandwidth, such as a GPU, you might as well use the best interface you have.

Does anyone know if the IBM ServeRAID M5015 card supports 6Gb/sec for SATA drives? I have read the specs on the IBM site: "The ServeRAID M5015 and ServeRAID M5014 adapter cards have the following specifications:
Eight internal 6 Gbps SAS/SATA ports, with 6 Gbps throughput per port." But then further up the page it says: "6 Gbps SAS 2.0 technology has been introduced to address data off-load bottlenecks in the direct-access storage environment. This new throughput doubles the transfer rate of the previous generation. SAS 2.0 is designed for backward compatibility with 3 Gbps SAS as well as with 3 Gbps SATA hard drives." So will a 6 Gbps SSD connected to this card run at 6 Gbps or 3 Gbps?
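Whatever rate the link negotiates, the payload ceiling is easy to compute, since SAS 2.0 and SATA links at these speeds use 8b/10b encoding (10 bits on the wire per byte of data). A quick sketch of the arithmetic, useful for judging whether the link rate would actually bottleneck a given drive:

```python
# Payload throughput ceiling for an 8b/10b-encoded SAS/SATA link.
# 8b/10b means 10 wire bits carry 8 data bits, i.e. 10 wire bits
# per byte of payload.

def link_ceiling_mb_s(gbps: float) -> float:
    """Max payload throughput in MB/s for an 8b/10b-encoded link."""
    return gbps * 1000 / 10  # raw Gbit/s -> MB/s of payload

print(link_ceiling_mb_s(6))  # 600.0 MB/s ceiling on a 6 Gbps link
print(link_ceiling_mb_s(3))  # 300.0 MB/s ceiling on a 3 Gbps link
```

So the distinction matters mainly for SSDs: a fast SATA SSD can push past the 300 MB/s ceiling of a 3 Gbps link, while a single spinning SATA drive generally cannot.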