I’ve been running a small NAS server for several years using the excellent HP MicroServer N40L upgraded to 8GB RAM (officially its max, although I know some people have got it to work with 16GB), and with five WD Red 2TB drives, running FreeNAS v9.
The N40L only has 4 warm swap bays, or “Non-Hot Plug” as HP calls them, but also has a 5¼” optical bay into which I had put a 3½” bay adaptor tray to run a 5th drive. I had it configured as a 5x2TB RAID-Z array giving 8TB of filestore.
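As a sanity check on that 8TB figure: single-parity RAID-Z uses one drive's worth of capacity for parity, so usable space is roughly (drives − 1) × size per drive (slightly less in practice once ZFS overheads are accounted for). A quick shell sketch:

```shell
# Single-parity RAID-Z sacrifices one drive's capacity to parity,
# so usable space is (drives - 1) * size per drive.
drives=5
size_tb=2
usable=$(( (drives - 1) * size_tb ))
echo "${usable}TB usable"   # 5x2TB RAID-Z -> 8TB usable
```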
However, with FreeNAS now needing a minimum of 8GB RAM, and the 5th bay dragging the speed of the SATA ports down to 1.5 Gb/s, I felt I had outgrown it.
I have a very old tower case spare which has four internal 3½” HDD bays, two 3½” front panel bays, and four 5¼” front panel bays, so this seemed like a good starting point.
Obviously I needed a server motherboard that would take ECC RAM, so desktop boards were out of the question. That was a shame, as I had an Intel Core2 Quad Q6600 CPU and a corresponding LGA775 motherboard which would otherwise have been fine, so instead I put them up for sale on eBay. I put together a bundle of an old Core2 Duo E6320, the motherboard, and 4GB RAM, which fetched a measly £12.50. The Q6600 only fetched £5. So not off to a great start on the budget. I did get £17.66 for an old GTX 460 that I no longer needed, though. The real money would come from selling the N40L.
After insufficient research, my first purchase was something of a disaster: a SuperMicro X7DBE+, which turned out to be a proprietary “EE-ATX” size and simply didn’t fit my case. Fortunately the seller was a business who took returns, although it was a bit of a saga to finally get the refund.
EE-ATX is unusual in that, unlike almost all other ATX-family boards, it is taller as well as deeper. It turns out that the only cases that can take this board are proprietary SuperMicro ones.
I then did a lot more research. A LOT more. I learned all about the different motherboard sizes, what my case would take, what motherboards were available and suitable, and where the best value for money lay, because I was on a budget but didn’t want to buy too old either.
Finally, based on suitability and availability, I decided that any of the motherboards in SuperMicro’s X8DTL or X8DAL range would be suitable as they are long ATX / short E-ATX (305mm x 254mm), all take Xeon 5500/5600 series CPUs, and all take up to 96GB of Registered ECC DDR3 memory in 6 slots. All of those components are reasonably priced on the used market but still modern enough to be worth using.
The X8DAL motherboards have audio but no graphics, whilst the X8DTL boards have no audio but do have graphics, so the latter seemed like the better choice for a server. The top choice would have been the X8DTL-6F as this has a SAS 2.0 6 Gb/s controller on board, but I realised that spinning HDDs simply don’t come anywhere near even 3 Gb/s, so money could be saved by going for something with only 3 Gb/s ports (SAS or SATA). The only SATA devices that really use 6 Gb/s are SSDs, and I wasn’t planning on using one. Should I need one at a later date I can always add a SATA-III PCIe adapter card.
After trawling eBay it became clear that a cost-effective way of getting what I wanted was to buy a used SuperMicro 1U server with an X8DTL-3F motherboard in it, since these are in plentiful supply and hence keenly priced. The X8DTL-3F has six SATA-II 3 Gb/s (aka SATA300) ports on board, plus an on-board SAS 3 Gb/s controller (presented as eight SATA-II connectors), and also has IPMI. Buying a whole server also meant being able to sell off the bits I didn’t want – I figured that at the very least the drive caddies and PSU would be worth something.
The one I ended up buying was a SuperMicro 1026T-M3F with an X8DTL-3F, a single Xeon E5645 CPU, and 12GB (3x4GB) of RAM, which seemed like a good starter system for me. The E5645 is a 6-core CPU with HyperThreading, giving 12 threads of execution – comfortably more than the 2-core N40L. Likewise 12GB RAM was more than the minimum for FreeNAS, so I knew it would have me up and running immediately. Also, buying a board with minimal RAM meant there would be less wastage, as the plan was to slowly populate it with 16GB RDIMMs up to the maximum of 96GB, meaning any memory it came with would eventually be discarded.
[Update: If you go down a similar route, and are planning to use two 5600-series CPUs, be sure that the X8DAL/DTL motherboard is a Rev 2.01 – see this post for why]
When the server arrived, I thought I would fire it up “as is” to start with, simply to see it boot and verify it was working. It did, and it was, but my goodness it was a noisy bugger! So that got stripped down in pretty short order.
Before I started the build, though, I thought I would address the issue of cooling on the case.
The case had two bog-standard 3-pin 120mm fans in it – one at the front and one at the back. It had never been designed for a front fan of that size, though; I’d modded it in the past to take a 120mm fan instead of the two small side-by-side fans it was designed for. That mod had only involved drilling 4 mounting holes, however, and airflow through the front was still severely restricted.
I decided to address this by modifying the case to make it a full fan opening.
Because so much of the metal was perforated I went to work with heavy duty snippers and, where there was too much metal, an electric drill.
I forgot to take a “before” pic, and only managed a “partway” pic. Before I started all of the front was perforated, and the pic shows it after I had snipped out each grille. I then opened it up further into a full 120mm opening.
Then I filed the jagged edges with a large metal file.
Finally, to smooth the airflow a little, I edged it with gaffer tape. It’s not pretty, but can’t be seen once the front of the case is attached.
So, with the case ready, it was time to commence the build.
The motherboard went into the case easily, but the first fly in the ointment was the PSU. The old PSU in the tower case had a single 4-pin EPS CPU power connector (in addition to the standard 24-pin ATX connector), whilst this server board needs two 8-pin EPS connectors (plus the 24-pin connector). Fortunately my spare desktop PC had a 600W semi-modular PSU in it with both an 8-pin EPS and a 4+4-pin EPS connector, plus the 24-pin, so I grabbed that and ordered a like-for-like replacement.
With the motherboard mounted in the case, and the 600W PSU connected, it was time to fire it up and it worked first time – the POST succeeded and I was able to go into BIOS. Yay!
Next step was to connect a HDD and install FreeNAS v11 on it to get a working system. I chose a WD Black 160GB 2.5″ SATA HDD that had come out of a laptop I had upgraded to an SSD, the reasoning being that a drive that small could easily be stashed somewhere in the case without using up a drive bay. I could instead have used a USB drive connected to the on-board internal USB port, as I did with the N40L, but chose not to.
This went without a hitch, which was a pleasant surprise.
I got a little carried away on spending whilst doing the research and accumulated four WD Red 4TB drives – one that I already had, a used one which I won on an eBay auction, and two that I just bought brand new as the prices they were going for on eBay weren’t far off the ex-VAT prices new (and I can claim the VAT back). So the next task was to fit these into the case.
I knew that the motherboard would drive up to four 4-pin PWM fans, and I also knew that I had a pair of Corsair SP120 Quiet Edition PWM fans in stock, so with the case now modified (or should that be mutilated?), one of those went in the front to blow air over the lower HDD bays and into the case.
The plan had been to rsync the files on the old NAS onto the new, over the network, but instead I decided to just go ahead and mount the old drives into the new case, using the four lower bays plus one of the 3½” front panel bays. I also decided “what the hell”; I’d keep those drives rather than selling them and would have two separate RAID-Z arrays.
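Had I gone the network route, the transfer would have been a one-liner. A sketch, assuming the old NAS answers SSH as root at a hostname of oldnas and both pools are mounted under /mnt/tank (all names here are illustrative):

```shell
# Pull everything from the old NAS, preserving permissions, ownership,
# timestamps, and symlinks (-a), with human-readable progress output.
# The trailing slash on the source copies its *contents*, not the
# directory itself.
rsync -avh --progress root@oldnas:/mnt/tank/ /mnt/tank/
```

It’s worth doing a first pass with `-n` (dry run) added, to sanity-check what would be transferred before committing to hours of copying.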
The four 4TB drives would therefore need to go in the upper bays, but of course these would need 5¼” to 3½” bay adaptors. There are plenty of options here, with hot-swap caddies available that fit 4 or even 5 drives into 3 bays. Another option was a pair of Evercool Armour HD-AX “3-in-2” (i.e. three 3½” drives in two 5¼” bays) cages, which I liked because they would maximise my four front panel bays, and they had fans. However, I had concerns about the small 80mm fans causing noise, plus they would work out quite expensive when buying two.
But they did get me thinking about fan cooling, and cages that take up three bays are the right size to use a 120mm fan, which would mean being able to use the other SP120.
In the end I decided on the Lian-Li EX-36B1. I did consider the Icy Dock Vortex MB074SP-B, but went for the Lian-Li for three reasons: firstly it was much cheaper, secondly I think the Icy Dock looks hideous, but the clincher was that the Lian-Li can mount three 3½” HDDs plus two 2½” HDDs, thus letting me conveniently mount the boot drive.
Granted, the drives need to be bolted into the Lian-Li, rather than having caddies, rails, or any other quick-change mechanism, but I didn’t feel that justified the extra cost, especially as the lower drives aren’t removable either. I figured I could cope with the extra downtime in the event of a drive failure.
It came with a 3-pin 120mm fan but I wasn’t fussed by this as I was going to swap it out for an SP120 anyway.
On the subject of fans, I realised that all of the 120mm fans I had in stock, apart from the two SP120s, were 3-pin so I ordered a general purpose Arctic 120mm PWM fan for the case exhaust.
Whilst I was waiting for parts to arrive, I borrowed the detachable auxiliary cage from my Cooler Master case and used it as a convenient way to temporarily stack the four 4TB drives. It made for a bit of a frankencomputer temporarily, but worked well enough for bench testing.
I connected the four 4TB drives to the SAS controller’s SATA ports and booted, but noticed that the controller only recognised them as 2TB drives. A quick Google search revealed that the on-board SAS controller on this motherboard is the ancient LSI/Broadcom SAS1068E, which has a known 2TB-per-drive limit. So I instead connected them to the SATA300 ports, where they were recognised as 4TB drives, and all was fine.
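A quick way to confirm what capacity FreeBSD (and hence FreeNAS) actually sees for a given drive is diskinfo from the console; the device name below is illustrative:

```shell
# Print verbose info for a disk; the "mediasize" lines show the raw
# capacity in bytes (and a human-readable size) as reported through
# whichever controller the drive is attached to.
diskinfo -v da0 | grep -i mediasize
```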
I connected the five 2TB drives to the SAS controller’s SATA ports since, being 2TB drives, they wouldn’t be affected by this limit, and indeed they were not.
I set the 4TB drives up as a new RAID-Z pool in FreeNAS, and then imported the old 5x2TB RAID-Z pool from the old NAS, which mounted without a hitch. This was all going too easily!
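Under the hood, the FreeNAS GUI is doing the equivalent of the following ZFS commands. This is only a sketch: the pool and device names are illustrative, and in practice FreeNAS partitions the disks and refers to them by gptid label rather than raw device name, so the GUI is the right tool for the job.

```shell
# Create a new single-parity RAID-Z pool from the four 4TB drives
# (pool name and device names are illustrative).
zpool create tank4 raidz ada0 ada1 ada2 ada3

# List pools available for import, then import the old 5x2TB pool by
# name; -f forces the import if the pool was last used on another host,
# as it will have been when moving drives from the old NAS.
zpool import
zpool import -f tank2
```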
I did a config backup of the system, and then tried importing the old config from the old FreeNAS v9 machine, to see if I could get a quick win on restoring all the shares, jails, user accounts, etc. Hopes weren’t high, and indeed it messed things up so thoroughly that I needed to do a fresh install of FreeNAS v11 again. But that was OK as it didn’t take long.
I think that, with hindsight, what I should have done was upgrade FreeNAS v9 on the N40L system to FreeNAS v11, backup the config, and then import it onto the new system with only the 5x2TB drives fitted, and it might have had a far better chance of just working “as is”.
By now the Lian-Li cage had arrived, so I took it apart to fit the SP120 fan.
Once reassembled, I fitted the drives and the whole assembly fouled the case’s drive bays. I realised that the cage was asymmetric top to bottom, and that I had mounted the drives the wrong way up. In fairness, I had mounted them the right way up, but they were designed to be mounted upside down! I decided I didn’t like this, so simply rotated the mesh front panel through 180° to allow me to mount it the other way up and everything fitted beautifully.
I was pleasantly surprised to see the Lian-Li cage came with rubber shock mounts for all drives, plus all the drive screws, which was a nice bonus. It’s a lovely bit of kit – all metal, much of it aluminium.
Since I had a spare 5¼” front panel bay, I decided to fit a DVD R/W drive in there. It can always come out at a later date, but I figured that it was lying unused in a drawer, I had enough spare power and SATA connectors to feed it, and it might as well sit there instead of a blanking plate.
I also had a 3½” front panel bay free, and happened to have a connector box lying around which gives two USB2 ports and two USB3 ports by attaching to headers on the motherboard, so I fitted that. Since the motherboard doesn’t have USB3 I ordered a cheap PCIe board that gives two rear connectors and a header.
I was concerned about cooling on the CPU, and sure enough after several hours of test running I got an overheat alarm on the CPU.
This was mitigated somewhat by attaching a 70mm PWM fan over the heatsink with cable ties.
This enabled the CPU to run hot but stable, however it was still running hotter than desirable so a more permanent solution was needed.
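In the meantime, since the board has IPMI, the temperatures can be watched directly rather than waiting for the alarm. A sketch (sensor names vary by board, so the output will differ):

```shell
# Read the BMC's temperature sensors over IPMI. Run on the host itself,
# or add -H <bmc-ip> -U <user> -P <password> to query the BMC remotely.
ipmitool sdr type temperature

# FreeBSD also exposes the CPU's own on-die sensor via sysctl once the
# coretemp kernel module is loaded.
sysctl dev.cpu.0.temperature
```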
Although the motherboard uses LGA1366, and hence you would think that any cooler designed for LGA1366 would work, that’s not the case. Server motherboards use a different mounting from consumer boards: the latter often use push pins, whilst with server boards you screw directly into an M3-threaded mounting plate.
There is an informative post on servethehome that pointed me in the right direction on this.
As that post mentions, the Noctua NH-U12DX and NH-U12DX i4 are specifically designed for server motherboards; however, they are expensive. A much more cost-effective solution is the Cooler Master Hyper 212+ / EVO / 212X. These three coolers all use a bolt-through design with their own backing plate. For a Xeon server motherboard, this backing plate is discarded and, instead, you use standoffs to bridge the gap between the screws on the cooler and the threaded sockets on the heatsink mounting plates.
I decided to go for the 212 EVO as it is more readily available than the new 212X. Again, used prices on eBay weren’t far off the ex-VAT price of buying new, and I managed to save a little by finding a B-grade one on Amazon Warehouse.
In the meantime, I was keeping an eye out on eBay for memory. The motherboard specs state that it will drive DDR3 memory at 1333MHz, but note that with quad-rank RDIMMs (i.e. the 16GB modules) this is reduced to 1066MHz, so PC3-8500R is sufficient. The three specific parts on SuperMicro’s memory compatibility list are the Hynix HMT42GR7BMR4C-G7, Samsung M393B2K70CM0-CF8, and Micron MT72JSZS2G72PZ-1G1D1. I searched on those part numbers on eBay and managed to pick up a Samsung module for £30 including delivery from a VAT-registered seller, and another for £23 including delivery from a seller who is not VAT-registered.
The first one arrived very quickly, and I swapped out one of the 4GB RDIMMs already in there for it; upon powering up it was instantly recognised by the BIOS and by FreeNAS, giving 24GB in total. The second took far longer to arrive, but when it finally did it too worked fine, giving 36GB in total. The two 4GB RDIMMs that came out can’t be used unless I fit an extra CPU, as the 2nd bank of 3 memory slots is not available with only one CPU fitted.
[Update: I subsequently managed to source a Hynix HMT42GR7BMR4C-G7 for £20, and put that in as well. It worked quite happily alongside the two Samsungs and gave a total of 48GB which is the maximum for one CPU]
By now the cooler had arrived, so I decided to do a dry fit on the unpopulated CPU socket on the motherboard rather than disturbing the fitted CPU, so that I could evaluate the fitting method. If it didn’t work I could then send it back in an unused condition.
Examining the thread size of the CPU cooler mounts on the motherboard showed them to be standard metric M3, and the screws on the cooler were also M3, but nowhere near long enough to reach.
The standoffs that I had were too short and also not quite the right thread at the socket end (I suspect they are not actually M3), so I abandoned that idea and destructively removed the circlips on the cooler mounting plate so I could replace the screws with much longer M3 bolts. Then I found that the M3 bolts I had were too short – the ones I thought were long enough were actually M4. So it was off to Amazon to see what was available on next-day delivery. Whilst looking, however, I found an assortment of M3 hex spacers for sale at £9.99, so I bought those instead. Of course, I only found those after I had destructively removed the circlips – but that is partially the reason for this blog post: so others can learn from my mistakes.
The assortment came with 6mm, 10mm, 15mm, and 20mm standoffs, so I felt sure that one of those sizes would fit.
The USB 3.0 card had also arrived by now, so I fitted that and FreeNAS recognised it with no additional drivers needed, which was nice.
Whilst all this had been going on, the N40L had been up for sale on eBay, and it eventually sold for £79. Not as much as I had hoped for, but not a bad result I suppose, given it only cost £129+VAT (after cash back) in 2012. I’d been hoping for something closer to £100 though.
The next day, the M3 spacers arrived, so I continued the dry fit of the 212 EVO cooler.
I tried the 20mm standoffs first, but these sat flush with the bracket so that was no good. Likewise the 15mm didn’t give enough space for the cooler’s screws to have any tightening room. The sweet spot seemed to be the 10mm ones. So I went ahead and removed the heatsink on the CPU and fitted it for real.
Since I had removed the circlips, I substituted M3 washers for them instead. This made the fitting process way more fiddly than it needed to be, and made me regret my impetuousness in removing the circlips unnecessarily. After several attempts, with springs going ping (fortunately found again) and washers falling off, and much cursing, I hit on the idea of screwing the standoffs onto the captive screws first, which kept the springs and washers in place, and then screwing the whole lot into the motherboard sockets. This worked far better.
There was enough play in the springs that I think the 6mm standoffs would have worked equally well, if not better, but I did not feel like stripping it down again.
[Update: having now fitted another 212 EVO on another build with the same motherboard, I can confirm that 6mm is a little too short and that 10mm is the ideal length]
Finally it was all in place and tightened up, and I re-fitted the fan.
Upon booting, there was a RAM failure, but powering off, reseating the memory modules, and powering on again fixed this. I guess they got disturbed during the fitment of the cooler.
So it was all finished and working; there was just some final configuration to do on FreeNAS. I needed to re-create the users and shares, but this wasn’t too much of a hardship. The main issue was the Plex server that had been running in a jail as a plugin on the old machine. I found a useful post that showed how to manually install the plugin and associate it with the existing jail; this worked fine and I could see all my Plex media immediately when I browsed to the Plex server. Upon reboot, though, it all disappeared and I was left with an empty library. On a subsequent reboot it came back again, so I will need to keep an eye on that.
[Update: It seems that after a reboot, the jail needs to be manually restarted for the Plex server to work. Perhaps it is possible to automate that task, but since the NAS server is rarely rebooted this isn’t a priority for me]
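For anyone who does want to automate it: assuming the Plex jail is an iocage jail named plex (names and timings here are illustrative – FreeNAS 11 also supports warden-style jails, so adjust accordingly), a post-init task in the GUI (Tasks → Init/Shutdown Scripts) could run something like:

```shell
# Restart the Plex jail once the system is up. The sleep gives the
# pools time to finish mounting before the jail's storage is needed.
sleep 60 && iocage restart plex
```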
Finally, the costings on this project…
[Note: For sales, Gross is what the buyer paid, whilst Net is what I was left with after fees and postage costs]
As you can see, the costs spiralled a little, but I think it’s a pretty good system for the money. If I had not bought the four 4TB drives then it would of course have been a lot cheaper. And, honestly, I don’t think I needed as much RAM either.
I recouped some of these costs by selling the parts from the server – the PSU, the HDD caddies, the rails, the optical drive, the fans & fan housing, and the case itself. I also recouped some by selling the N40L and some other parts I had lying around.
So, we are done and I have a new NAS server. I shall call him George, and I shall love him and squeeze him and pet him and store files on him.
A full report on the new build is available here.
Since the new build was cash positive, it mitigated the costs of this NAS build somewhat. In fact, had I not bought the new HDDs, the two projects together would have been cash positive overall.