My Home Server/NAS Build

I needed to share files with my PC, my laptop, and Allyson's laptop. How did I do that? With a custom-built home server, of course!

The problem

If you have multiple computers with files that you want readily available on each machine, you know the struggle of keeping them synced. Things such as music, movies, pictures, or documents that you want to pick up at a moment's notice; program installation files; or more personal items such as papers, perhaps that novel you're working on, or maybe some finance documents. How do you keep these all up to date and easily accessible?

The answer could be to use cloud storage and sync files, but if you are media heavy, your internet speed and storage limit just won't cut it. And if you have sensitive data, sending it to the cloud may not be the best idea either. External hard drives? Not exactly convenient to move between machines. So, what should you do?

My solution

I spent the better part of three years getting my home server from concept to fully working and running 24×7. I tried using old spare parts and a number of Linux distros, and finally decided to scrap my spare-part build and do it properly. I also wanted to master Linux and Samba shares before getting things running, but I found that was taking ages and wasting time on details that didn't matter. So the first lesson of building your own home server is to just get something up and running. Worry about the details later. If you are working with storage like I am, you can easily reformat and redo the entire OS later without affecting your data. Just don't get bogged down in the details; get things started.

Ok, so with that out of the way, let's talk about parts and configuration. My final server is built in a mid-tower chassis, which is rather large compared to most desktops these days. Why such a large build? I want plenty of room for hard drives: as NautilusMODE continues to generate content, and more importantly actively creates videos for our YouTube channel, we have a new need for storage. My philosophy is that you don't really have a file on your PC unless you have two copies of it. It's even better to have three. So, as Allyson and I have separate computers and need access to the same files, a tower server was the answer: vast storage plus a local backup.

I realize this is larger than most people would want to deal with; however, I feel it's a good compromise compared to the alternative of going full rack server. I don't have the space or the money to buy real server gear, so I tried to use the best cheap consumer gear available.

Parts List

So what parts did I use in my build? I have listed each below with some information on why I picked it and how it fit the overall need. Low cost and high storage expandability were my must-haves, and this is what I came up with.

CPU (APU) – AMD Athlon 5350 (AM1)
(~$50)

The heart of any build is really two things: the CPU and motherboard. Both determine your maximum performance and expandability. As my home server will be running 24×7 to have my files available at all times, I chose to go with a power-sipping processor. I am sure some people think that AMD and power-sipping are not two words that go together; however, this APU is rated at 25W TDP.

To top it off, this is a quad core at 2.05GHz with integrated Radeon R3 graphics (it is an APU, after all). So there's no need for a graphics card to output to the console or, in my case, a remote X session in Linux. Being a quad core also means I don't really have to worry about the APU causing bottlenecks; it should handle any load I give it. I think it should be able to serve files and act as a media center at the same time. Sadly, I haven't had time to build out the media center side to test that yet. Perhaps someday I can really push this APU to its limit. I certainly am not doing so at the moment.
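
Since the APU can drive a display without a dedicated card, working on the server remotely is simple too. A minimal sketch of a remote X session over SSH (the user and hostname are placeholders, and X11 forwarding must be enabled in the server's sshd_config):

    # Connect with X11 forwarding enabled ("user" and "homeserver" are placeholders):
    ssh -X user@homeserver

    # Any X application launched in that session now renders on the local display:
    xterm &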

CPU Heatsink/Fan – Arctic Alpine M1
(~$10)

After my initial assembly last fall, I found that living in a small space meant any noise from this server could be heard clearly in the living room where it sits. That's where the Alpine M1 comes in. The APU's stock fan was the loudest thing in the case and was easily heard anywhere in the room, so I looked for a replacement. This is one of the few available for the AM1 platform. It's very quiet; honestly, the only noise it makes is from the air it's moving. It's near silent for an 80mm fan.

Important Note for MSi AM1M

One issue I ran into on my MSi board is that the fan sometimes fails to start when the server is booted; it takes a nudge to start it spinning. I ordered a replacement and found that it not only did the same thing, but also clicked while spinning. So I kept the original and sent back the defective second unit. From additional testing I found that my other low-RPM fans do the same: they are unable to start spinning on their own, but run just fine after a gentle nudge.

It seems I either have bad luck or a defective motherboard, as the unit itself is great and can't be beat at this price. To work around this, I discovered that my hotswap cage fan (standard RPM) starts fine from the CPU header, so I have the fans' connections switched: the CPU fan runs fine on direct power, and the cage fan runs fine on the CPU header. Odd, but it works. So, case closed, I guess?

RAM – Crucial Ballistix Tactical LP (8GB)
(~$50)

This was the cheapest single stick of 8GB RAM I could find, and it runs at 1600MHz with a CAS latency of 8-8-8-24. A solid choice, as the motherboard supports up to 32GB of 1600MHz RAM. It's also low voltage (1.35V) and low profile.

The motherboard only has two slots, so I wanted a single 8GB stick to ensure I could upgrade if needed. However, I would love to know what I would be doing with this machine if I maxed it out. As I am not serving dozens of devices, I don't think I will ever need more than 8GB of RAM.

No ECC

You may have noticed, and are curious, as to why I did not build with ECC RAM. I mean, if you are building a server, that's the go-to found in any build list on any website or forum. The first reason is that although the APU can support ECC, no current AM1 platform motherboards support it. (This is a shame really, as these APUs are awesome for small servers.) Secondly, I determined from hours of research that ECC really isn't that important for what I am doing; standard RAM is very reliable and not worth the increased cost of parts. Going with ECC in my case would more than likely double the cost of my build, needing a more expensive CPU, motherboard, and RAM (keep reading for additional links to ECC facts below).

So, what is ECC RAM? ECC stands for error-correcting code, which describes exactly how it works: ECC RAM is designed to detect errors within the data in RAM and correct them when they occur. Traditionally, ECC RAM is used in mission-critical devices such as servers, where errors cannot be tolerated.

With that explanation though, after hours of research I found that standard RAM should be just fine, and this extra feature isn't really needed (for my needs anyway). Still not too sure? After coming to this conclusion myself, I found the same answer from Jeff Atwood over at Coding Horror in this post: To ECC or Not To ECC. It does a great in-depth dive into the topic, so go check it out! It goes into much more depth than I researched and tells you where he uses and omits ECC in his servers. Basically, I am not running a commercial service with a database server, so ECC isn't really needed or worth the increase in price.
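
If you're curious whether a machine you already own is running ECC memory, dmidecode can tell you. A quick check (requires root):

    # Ask the DMI tables what error correction the installed memory uses:
    sudo dmidecode --type memory | grep -i "error correction"
    # Non-ECC systems typically report: "Error Correction Type: None"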

Motherboard – MSi AM1M
(~$35)

The MSi AM1M is a micro-ATX motherboard. This means a larger case, but that's exactly what I want. I picked this motherboard for its three available PCIe slots: one PCIe 2.0 x16 and two PCIe 2.0 x1. A PCIe 2.0 x1 slot supports up to 500MB/s, which is great, because as long as I run mechanical drives I shouldn't be able to flood the connection. Even a fully saturated gigabit pipe maxes out at 125MB/s, and four of those together is only 500MB/s. The odds of having the perfect load to get sustained reads and writes at that speed are very low, so unless I switch to all SSDs, this shouldn't be a bottleneck. Even then, I think I could stand to wait the few extra seconds to read or write a file. This makes the expandability perfect for me.

Additionally, this board has a Gigabit Ethernet port, perfect for my gigabit home network. There are also two available USB 3 ports for which I could buy additional gigabit adapters if I really needed to flood my network with data. I doubt I ever will, but it's nice to know the option exists in case I want redundant connections in the future.
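
If you want to verify that a gigabit link actually delivers, iperf3 is the usual tool. A quick sketch ("homeserver" is a placeholder hostname):

    # On the server, start iperf3 in listen mode:
    iperf3 -s

    # On a client machine, run a TCP throughput test against it:
    iperf3 -c homeserver
    # A healthy gigabit link should report somewhere around 940 Mbit/s.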

PSU – Corsair CS450M (80+ Gold)
(~$70)

Being concerned about power consumption, I wanted to be sure I had enough power, but also that it was delivered efficiently. To ensure this, I decided to get a Gold-rated power supply so there would be minimal power waste.

I then calculated my power needs. Each HDD uses about 8 watts on average during use, so 8 watts x 12 bays = 96 watts. The CPU uses only 25 watts, but we'll double that for good measure. That means ~100 watts plus 50 watts; toss in another 50 for miscellaneous draw and you have about 200 watts required at maximum load. Ok, there must be something I am missing, so let's just say 300 watts total at maximum load. Not wanting to cut myself short, I opted for a 450W power supply. I also wanted it to be modular, as I have been burned by non-modular PSUs and their cable management in the past. The cheapest option that I felt would be quality was this model.

The only issue is that each SATA cage requires a Molex connector, and this PSU has only a single cable devoted to Molex, with three connectors in sequence. To properly distribute the electrical load I will need to separate this out so I don't melt cables if I fill all 12 bays. So I will need to buy SATA-to-Molex adapters for future expansions. However, this would be needed with any PSU.

Storage

SanDisk 64GB SSD
(~$50 – Reused)

I purchased this SSD for my home server about three years ago, so technically it's both new and reused. Although the OS drive doesn't need to be an SSD, the fast response time is nice when it's needed (although it's completely overkill), and 64GB is plenty for a bare-bones Linux install. 64GB should also be enough for Windows if you need it; just don't expect to get a lot of programs installed. But hey, this is a network storage server. If you want it to do more, buy your OS drive accordingly.

Seagate Barracuda (1TB x 2)
(~$50 x 2)

Currently these are the main storage drives, with more to come in the near future (although per-drive capacity may increase). The idea is that one is used for primary storage and the other as a backup. Of course, why did I pick Seagate? I know there is a preference war, but really it all came down to price. I snagged these on sale over two years ago for about $50 apiece, which is also their going rate at the time of writing.

Seagate?

But Seagate drives fail often, right? What about that Backblaze report about Seagate being the worst (arstechnica)? Well, that is something to consider; however, Backblaze uses consumer drives in an environment they aren't exactly designed for (lots of heat and vibration in a data center). But what about all the reviews that also claim high failure rates? Yes, those are alarming, but at the time I also found that the WD Red drives I was considering had a similar number of reviews reporting issues. So I took this to mean that all was pretty much equal and bought the more cost-effective option. Also, I am pretty sure there are biases that create this kind of perception, and I want to find out for myself instead of following what's already been stated. I mean, if Seagate was that bad, they wouldn't still be in business. Right?

Possible Issues

However, once I did get my home server online, I noticed that the HDDs were making a noise once every 2 minutes (exactly). It turns out they have very aggressive head parking, also known as load/unload cycles. The rate is very high, somewhere around the same level as the WD Green drives (the last time I researched them). The Greens save power with the same aggressive head parking, which led to high failure rates when they first launched, although I haven't looked at them since, so I am not sure if that is still a common problem.

Explanation

What does this mean? Drives are rated for a maximum number of load/unload cycles, which is a good indicator of drive life. So I took a look: after they had been powered on for only a short while, they already had over 600 load/unloads. According to its fact sheet, the drive can sustain 300,000 load/unload cycles. At one park per 2 minutes, that works out to 600,000 minutes, or roughly 417 days: a bit over a year of 24×7 use before the drive exceeds its rating. Granted, this drive is only rated for 8 hours a day, 5 days a week, for two years, so I am pushing its specs regardless. However, from my research online, these load/unload cycles are most often the killer of your drive.
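
You can watch this counter yourself with smartmontools. A quick sketch (/dev/sda is a placeholder; identify your drive with lsblk first):

    # Read the drive's SMART attributes and pull out the head-park counter:
    sudo smartctl -A /dev/sda | grep -i load_cycle
    # Attribute 193 (Load_Cycle_Count) is the raw number of load/unload
    # cycles; compare it against the drive's rated maximum.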

Possible Solution

So how does one counter this? Well, basically, you have to tell the OS not to apply power saving to the drive. On Linux this meant finding a command to run at boot which turns APM (Advanced Power Management) off. This does increase the power usage, but it prevents the drives from parking, which should, in theory, make the drives last much longer than they would otherwise. That remains to be seen.
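
The command in question is hdparm. A minimal sketch of disabling APM, assuming the drive supports it (/dev/sda and the udev rule below are illustrative, not my exact setup):

    # Turn APM off entirely so the heads stop parking
    # (-B 255 disables APM; -B 254 is max performance with APM still on):
    sudo hdparm -B 255 /dev/sda

    # The setting doesn't survive a power cycle, so re-apply it at boot.
    # One option is a udev rule, e.g. /etc/udev/rules.d/69-hdparm.rules:
    #   ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/sbin/hdparm -B 255 /dev/%k"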

As I said before, I don't feel I truly have a file unless it's in at least three locations, so hopefully when a drive dies it won't be an issue at all. The length of time I've had it running will then determine whether I use these drives again or something else (even a different Seagate model properly rated for server use). So use whatever you are comfortable with, and please don't take my advice as the final say. I am risking my own data; do this only at your own risk.
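
Keeping those copies in sync is the other half of the battle. A minimal sketch with rsync, assuming placeholder mount points (/srv/primary and /srv/backup are not my actual paths):

    #!/bin/bash
    # Mirror the primary drive onto the backup drive.
    # /srv/primary and /srv/backup are placeholder mount points.
    rsync -a --delete /srv/primary/ /srv/backup/

    # Run it nightly from cron, e.g.:
    #   0 3 * * * /usr/local/bin/mirror.sh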

SYBA PCI Express SATA III 4 Port Card
(~$31 x 2)

As I am going for maximum storage, I needed more than the 2 SATA ports available on the motherboard. These cards each have 4 ports, and with a third card I will have the 12 SATA ports needed for the maximum of 12 drives I can fit in the 3 possible hotswap cages. This leaves the two onboard SATA ports for internal storage: one for the OS, and the other could host another SSD for storing VMs if I ever feel like it.

Why two cards? Well, once I get 4 drives installed, I plan to split the drives between the PCIe slots to ensure I don't hit a bottleneck early on. No other reason for it.
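
Add-in SATA cards like these are usually plug-and-play under Linux, but it's worth confirming the kernel sees both the cards and the drives. A quick check:

    # Confirm the SATA controllers are visible on the PCIe bus:
    lspci | grep -i sata

    # List every detected drive with its size and model:
    lsblk -o NAME,SIZE,MODEL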

Rosewill SATA Hotswap Cage
(~$50)

To fit as many drives into my case as possible, and to make future drive replacements easy, I opted for a SATA hotswap cage. It converts 3 x 5.25″ bays into 4 x 3.5″ bays that are accessible externally. Although it's an additional upfront cost, I know the ease of swapping drives will help temper my mood once a drive dies and needs replacing. The last thing I will want to do is open my server and fight a mess of cables to get to the dead drive.

This model has plastic trays with a metal cage and a 120mm fan to pull air over the drives for cooling. It’s powered by Molex as well so be sure to have at least one free. It works very well, and feels durable so long as you don’t abuse the plastic HDD caddies.
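
One note on hotswapping under Linux: the kernel won't always notice a swap on its own. A hedged sketch of the manual way (the device and host numbers are placeholders; check yours with lsblk first):

    # Before pulling a live drive, tell the kernel to release it
    # (/dev/sdb is a placeholder):
    echo 1 | sudo tee /sys/block/sdb/device/delete

    # After inserting the replacement, rescan the SATA host so the
    # new drive shows up without a reboot (host1 is a placeholder):
    echo "- - -" | sudo tee /sys/class/scsi_host/host1/scan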

Case – Zalman MS800
(~$85)

With HDD space being the most important part of this build, I was thrilled to find this mid-tower case. It has 10 (YES, TEN!) 5.25″ drive bays. This is perfect because I wanted to use hotswap bays, so when a drive dies I don't need to power the server off and take it apart to replace it. Each hotswap cage takes up 3 x 5.25″ bays and holds 4 x 3.5″ (or 2.5″) drives. This means the case, when maxed out, can hold a total of 12 drives and still have a free 5.25″ bay for the internal OS drive or 2 SSDs.

Twelve drives multiplied by the current maximum of 10TB per drive means it could hold up to 120TB of data. I don't have that kind of need or cash, but it's nice to know that if I really need it, this case can provide some serious capacity down the line. Most likely I will use RAID and cheap drives to max out the case and still have more TB than I know what to do with.
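
When that day comes, Linux's mdadm is one way to pool cheap drives. A hedged sketch of a 4-drive RAID 5 array (the device names are placeholders, and this wipes those drives, so triple-check them first):

    # Build a RAID 5 array from four drives (THIS DESTROYS ANY DATA
    # on /dev/sdb through /dev/sde):
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

    # Put a filesystem on the new array and mount it:
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /mnt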

Case Fans

Enermax Everest Advance 120mm
(~$15 x 2 – Reused)

I purchased these fans a few years ago to replace the noisy case fans in my desktop and Allyson's. When we got rid of her PC and she upgraded to a MacBook, I set these aside. I love these fans: they are practically silent and have their own thermal sensor to adjust fan speed. They also have a push-button switch to enable and disable the LED lights, so they don't have to be flashy if you don't want them to be.

Basically, because I already had them, I used them to help save on cost. However, if you are looking for near-silent fans, these are certainly ones to consider. I picked them after extensive research, and they are totally worth the premium price.

They ended up fitting the build's color scheme in an unexpected way as well. The hotswap cage has bright blue LEDs to show HDD status (powered or not), which creates some piercing light at night in our living room. Since the Zalman case has a mesh fan mount on top of the tower, I put these fans there to create additional points of light, so now it offers a soft blue glow from all openings. This certainly isn't a look for everyone, but since I integrated the server into our media center, it acts as a night light as well as a backlight next to the TV. Not bad, but not ideal, and again, certainly not for everyone.

Standard Zalman 120mm
(Free)

The other fans I am using came with the case, and they are actually very nice stock fans; the best I have seen ship with any case, though I have never splurged on an expensive fan for my PC. They are very quiet, and their black-and-white styling matches my aftermarket CPU fan. They are nice enough that I removed the 120mm Rosewill fan from my hotswap cage and replaced it with one of these. The Rosewill fan isn't too loud, but it's certainly audible. The other 120mm fan came pre-installed in the back, pulling air out of the case. It makes a nice glow with the white blades reflecting the inner blue lights :P.

OS – openSUSE Leap 42.1 (XFCE)
(Free!)

You can find openSUSE here: opensuse.org

This is one of my favorite Linux distros, mainly because it's a server OS acting as a desktop. That means it's ready to be a server with minimal effort: right after installing, you can set up a Samba file share without much fuss, and it ships with a ton of other server functions as well. I know there are debates over whether you should run a GUI on your server, but because I want simple administration and I don't plan on exposing it to the world like a web server, I installed the lightweight XFCE desktop environment to work in. Since I have an APU, there isn't any real issue rendering it either.
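
To give a sense of how little fuss is involved, here is a minimal sketch of a Samba share, assuming a placeholder share name, path, and user (your layout will differ):

    # Append a minimal share definition to /etc/samba/smb.conf
    # ("shared" and /srv/shared are placeholders):
    sudo tee -a /etc/samba/smb.conf <<'EOF'
    [shared]
        path = /srv/shared
        read only = no
        valid users = @users
    EOF

    # Give a user a Samba password and start the service
    # ("username" is a placeholder; openSUSE's Samba service is "smb"):
    sudo smbpasswd -a username
    sudo systemctl enable --now smb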

Putting it all together

(Photo: the fully assembled server.)

No, this post is long enough, so we won't get into how to assemble it. Although, you can watch me do it in high speed here 😀 :

You may notice it's slightly different from what's written here: no aftermarket CPU heatsink/fan, no extra case fans, and extra SSDs? Changes have been made since then to bring it to my current setup, which is how it should stay, aside from additional drives and the cages for them to sit in.

Cost

What if you want to build one yourself? Adding up the costs listed above, it comes to ~$560 USD. Not bad, considering standalone NAS devices cost about $200 and then you still need to buy the disks to go in them, so getting the same storage would cost between $300 and $400. Cheaper? Yes, but you will need to fork out an additional ~$200 for every 4 bays you add. You could go even cheaper and just use your router's HDD-sharing feature, if available. However, there have been issues with routers making files available to the internet. I just don't trust consumer routers, and I want a dedicated box I can update and apply additional firewall rules on.

So, lower long-run cost and flexibility are what drove me down this path. It also allows me to solve potential issues manually instead of relying on a manufacturer's firmware (HDD APM, anyone?). This level of control certainly isn't for everyone, but I prefer flexibility over convenience here.

Power Consumption

After purchasing a Kill-A-Watt meter, I tested this server's power consumption. Currently, with two 1TB drives and one 2TB drive, it idles around 35 watts, and peak draw at startup is around 45 watts. This is fantastic, as my desktop idles at twice that (around 80-90 watts). Although I haven't tested whether repurposing an old PC would have saved more energy, this is certainly low enough that I don't have to worry about it drastically contributing to my power bill.
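
For a rough sense of what that means on the bill (the $0.12/kWh rate below is an assumption; your rate will differ):

    # Annual energy at 35 W idle: 35 W x 24 h x 365 d = 306.6 kWh/year
    echo "scale=1; 35 * 24 * 365 / 1000" | bc    # prints 306.6

    # At an assumed $0.12/kWh, that's about $37/year:
    echo "scale=2; 306.6 * 0.12" | bc            # prints 36.79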

What do you have?

Have you built or bought a home server or NAS? If so, I would love to hear how you solved this problem! A full tower running Linux is certainly only one way to do it, and I am fully aware that many opt for FreeNAS and other options instead of rolling their own. So let us all know in the comments what you have used or are currently using!


Amazon Parts List:

Looking to build this yourself? I used a mixture of Newegg and Amazon to gather my parts, but I have compiled the entire list for Amazon below. The links will take you to each product's page on Amazon.com.
