So my little ESXi white-boxes are still going strong! They started out on ESXi 5.1 (here), were updated to 5.5 (here), ran happily on 6.0 (I missed writing that post, but it was quite an uneventful upgrade) and have now taken to ESXi 6.5 after a few small changes.
The first issue I had was that the on-board Realtek NICs are, of course, not supported. I believe you can still inject the appropriate VIBs to get them going, but it was cheap enough to pick up some Intel PRO/1000 PT dual-port PCIe network cards (which are supported natively). I found these on eBay for about £20 for two (and ended up with three, as the seller kindly threw in an extra).
The other, more serious issue I noticed was that disk throughput had become insanely slow! Whilst ESXi 6.5 is numbered to sound like a 'step' upgrade, it's definitely more like a major release (e.g. 6.0 to 7.0). It's had some big under-the-hood changes from what I understand – this particular speed issue is due to one such change, the introduction of a new native AHCI driver (vmw_ahci). Fortunately you can disable it, so if you're also suffering from slow disk speeds head over to NXHut for a solution (here). The brief version is below –
- Connect to the ESXi box via SSH (don’t forget to start/enable SSH/Shell services)
- Run the following – esxcli system module set --enabled=false --module=vmw_ahci (a reboot is needed for the change to take effect)
- You can check the module using – esxcli system module list | grep ahci
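The steps above as a complete SSH session – a sketch only, assuming ESXi 6.5 with the stock vmw_ahci module; output formatting may differ slightly between builds:

```shell
# On the ESXi host over SSH.
# Disable the new native AHCI driver so the host falls back to the
# older legacy ahci driver, which avoids the slow-disk issue.
esxcli system module set --enabled=false --module=vmw_ahci

# Confirm the change -- vmw_ahci should now show as not enabled.
esxcli system module list | grep ahci

# A reboot is required before the fallback driver takes over.
reboot
```

If you later upgrade to a build where the native driver behaves, the same command with --enabled=true re-enables it.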
That was about it – everything then settled down and 6.5 was in action. I was, however, progressing with my VCP studies and wanted to move away from local storage to shared storage (via iSCSI, to play with HA/FT/DRS etc.). This change actually works really well for me now as it removes even more of the hardware support requirements, stretching the boxes' life a little further (along with opening up more of the flexibility/features VMware offers).
To do this I removed all the local SSDs I'd accumulated in the ESXi hosts and put them in a small Windows Server 2016 box. This 'server' is another white-box I built up a while ago with the same specs as the ESXi hosts – the only difference is it runs a small Gigabyte board instead of the ASRock. To save this post getting too long I'll split the actual setup stages out into 'Part 2'. As an overview: I used Windows to configure a RAID (I know, I know – ewww, software RAID) and presented the volumes as iSCSI targets. I then connected up the ESXi hosts, deployed the vCenter Server Appliance, and can now shift bits between the boxes really quickly and easily.
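On the ESXi side, attaching a host to those targets can be done from the same SSH shell. This is just a sketch – the adapter name (vmhba64) and target address (192.168.0.50) below are placeholder examples, not values from my setup:

```shell
# Enable the software iSCSI initiator on the ESXi host.
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the Windows iSCSI target.
# vmhba64 and 192.168.0.50 are example values -- check
# "esxcli iscsi adapter list" for your actual adapter name.
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba64 --address=192.168.0.50:3260

# Rescan so the new LUNs show up and can be formatted as VMFS.
esxcli storage core adapter rescan --all
```

The same can of course be done through the vSphere web client, which is what I'll walk through in Part 2.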