Hello, I’ve been using an old laptop as my home server for a year, and I think it’s now a good time to upgrade to something better, since it’s starting to feel a bit too slow.

I was thinking of buying a Synology, but I’d prefer something custom because I hate that manufacturers sometimes abandon support or change all their terms of service.

My budget is about $1000 USD. I’m looking for at least 20 TB of storage, and the option to add a graphics card later would be nice.

What hardware do you recommend? What software do you recommend? And could an N100 mini PC handle this?

I’ve been running Ubuntu Server with Docker containers for several services, but I mainly use it for Nextcloud.
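
For reference, my current stack is roughly this shape (a trimmed sketch rather than my exact config; image tags, volume names, and passwords are placeholders):

    # create a network so the app container can reach the database by name
    docker network create nextcloud-net
    # MariaDB backend for Nextcloud
    docker run -d --name nextcloud-db --network nextcloud-net \
      -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=nextcloud \
      -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=changeme \
      -v nextcloud-db:/var/lib/mysql \
      mariadb:11
    # Nextcloud itself, published on port 8080
    docker run -d --name nextcloud --network nextcloud-net -p 8080:80 \
      -e MYSQL_HOST=nextcloud-db -e MYSQL_DATABASE=nextcloud \
      -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=changeme \
      -v nextcloud-data:/var/www/html \
      nextcloud:29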

  • lorentz@feddit.it · 1 day ago

    I got a TerraMaster NAS and I’m super happy with it: https://www.terra-master.com/global/f4-5067.html

    The main reason to choose it is that it’s just a PC in the form factor of a NAS. You can boot it from a pen drive and install your favourite operating system. I had a QNAP before, and while it was great to start with, self-hosting wasn’t the best experience on their OS.

    It’s a small form factor and should have low power consumption (I’ve never measured it to confirm), and it supports both NVMe and SATA drives. Currently I have an NVMe drive for the OS and two SATA drives for storage. The CPU is powerful enough to run Home Assistant, a VPN, Pi-hole, CommaFeed, and a bunch of other Docker images. I just plan to increase the RAM soonish because the stock amount feels a little constrained.

  • iggy@lemmy.world · 1 day ago

    I have a couple of Aoostar R7s (4x in a hyper-converged Ceph + cloud-hypervisor + k0s cluster, but that’s overkill for most people). They have been rock solid. There’s also an N100 version with less storage expansion if you don’t need it. My nodes probably idle at about 20 W fully loaded with drives (2x NVMe, 1x SATA SSD, 1x SATA HDD), running ~15 containers and a VM or two. You should easily be able to get one (plus memory and drives) for $1000. Throw Proxmox and/or some NAS OS on it and you’re good to go.

  • Possibly linux@lemmy.zip · 2 days ago

    The best bang for your buck is used business workstations. $1000 is a fairly big budget and is probably a bit overkill. Get three decently specced workstations and put storage and fast networking in them. Cluster them and then set up high availability. Depending on your setup, you could also modify one to double as a NAS: get a SATA or SAS card and put some drives in the chassis. You may need to get your hands dirty, but that’s the fun part.
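
    One common way to do the clustering is Proxmox (just one option, it's not the only one); joining the machines takes only a few commands, and the IP below is a placeholder:

      # on the first workstation
      pvecm create homelab
      # on each additional workstation, join via the first node's IP
      pvecm add 192.168.1.10
      # verify membership and quorum
      pvecm status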

  • scholar@lemmy.world · 3 days ago

    I built a server a few years ago in a Fractal Design Node case (a big square box) with four 6 TB drives in RAID 5, which gives (4 − 1) × 6 TB = 18 TB of usable storage, and a 6-core AMD CPU. It cost around £1200, and half of that was the hard drives.

    It’s been really good, so if you’re looking to build one yourself, I’d recommend having a look at that case and at the price of drives.

    • JustEnoughDucks@feddit.nl · 1 day ago

      This is a good way to do it.

      I went one size smaller with the Node 304, which can only fit four HDDs with a GPU installed. Buying used consumer desktop parts is the most powerful play for the money, I think.

      This is a good path forward, OP, for a pretty powerful server:

      • Node 804 case
      • Used AM4 motherboard (micro-ATX B550; can be around €150)
      • Used Ryzen 5700X or similar (seen as low as €100)
      • New 500 W power supply
      • 32 GB of DDR4-3200 RAM in 16 GB sticks
      • WD Red Plus 10 TB, helium-filled, for a balance of noise, performance, and price. My 10 TB drives are as quiet as my 4 TB ones. My scheme is a ZFS mirror of two 4 TB drives for important docs, plus the 10 TB drives for non-critical data (a rough sketch of this layout follows the list). Drives are by far the most expensive part unless you find good second-hand ones.
      • If you want to run a Jellyfin media server, pick up an Intel Arc A310 for hardware transcoding
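
      A rough sketch of that ZFS layout (device names are placeholders, and the 10 TB pool is shown as a mirror even though the bullet above doesn’t specify its layout):

        # important documents: two 4 TB drives in a ZFS mirror
        zpool create docs mirror /dev/sda /dev/sdb
        # bulk, non-critical data on the 10 TB drives
        zpool create bulk mirror /dev/sdc /dev/sdd
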
  • lemming741@lemmy.world · 3 days ago

    I think the N100-type CPUs are limited on PCIe lanes. You end up with fewer NVMe slots, fewer SATA ports, and usually no expansion slots.

    You can find X570 AM4 boards for less than $100 now: two NVMe slots, eight SATA ports, two full-size PCIe slots, and two small ones.

    But all of that flexibility and expandability is going to cost you in power. My 7700X with an A380 and three HDDs draws 125 watts 24/7, which is $10 a month on my power bill. I think those N100 mini PCs only come with a 35 W brick and idle at less than 15 W.
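
    The math on that checks out, assuming roughly $0.11/kWh:

      125 W × 24 h × 30 days = 90 kWh/month
      90 kWh × ~$0.11/kWh ≈ $10/month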

  • phucyall@lemmynsfw.com · 2 days ago

    I have a Synology and I love it, but if you’re on a budget, build one server and use it both for storage and for hosting all your stuff.

    Use PCPartPicker and build yourself a full desktop tower, something like https://pcpartpicker.com/list/gHLHxg. You can get a lot for your money on the used market, but used gear will draw more power and be slower.

    For the above build I picked low- to mid-range components, but you can adjust based on what matters most to you. Maybe get a CPU with more cores and less storage to start, and add more storage later. Or do the opposite if you don’t care about CPU but want more storage now.

    Some hardware notes: do get an AMD CPU and stay away from Intel; the last two years of their CPUs have been plagued with major issues. Also get DDR5 RAM and whatever motherboard supports it. Get a fast NVMe drive for your OS; 1 TB should be plenty.

    Finally, don’t install Ubuntu directly on it. Two options for the OS: if you want to use it mainly as a NAS, use TrueNAS SCALE; otherwise, use Proxmox. You can then create a virtual machine on either of those and install Ubuntu in it if you still want to. You can also run containers on both.
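
    For example, creating that Ubuntu VM on Proxmox can be done in the web UI or from the CLI; a minimal sketch (the VM ID, storage name, and ISO path are placeholders):

      qm create 100 --name ubuntu-docker --memory 4096 --cores 2 \
        --net0 virtio,bridge=vmbr0 \
        --scsi0 local-lvm:32 \
        --cdrom local:iso/ubuntu-24.04-live-server-amd64.iso
      qm start 100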

    • Nutbolt@lemmy.world · 1 day ago

      You mention getting an AMD CPU, and I’ve heard similar stories about Intel quality lately. However, I’ve also heard in the past that AMD CPUs aren’t very good at idling at low power. Electricity is expensive and I want it to idle as low as possible. Plus, for my build, I’d certainly make use of Quick Sync on an Intel CPU.

      https://uk.pcpartpicker.com/user/Nutbolt/saved/#view=rrchkL

      Any thoughts? I’m looking for opinions on Intel vs. AMD, but also on my proposed build. Thanks!

  • Friend of DeSoto@startrek.website · 2 days ago

    I purchased a SilverStone CS382 8-bay case, around $200-225.

    Bought used parts off eBay:

    • Asus P8Z77-M LGA 1155 DDR3 motherboard: $75
    • 32 GB DDR3-1333 RAM: $35
    • LSI 6 Gb/s SAS HBA 9200-8i (IT mode, P20 firmware): $35
    • Nvidia Quadro P620 2 GB GDDR5 (4x Mini DisplayPort): $70

    I have six 12 TB drives (Seagate Exos), purchased refurbished from serverpartdeals.com. I’ve had great luck with them and their support; I found them via the DataHoarder sub on Reddit.

    I run TrueNAS: four drives for the primary pool and two drives for backing up the first four. I also have a QNAP 4-bay dumb RAID box with old drives as a third backup. That’s my paranoia, not really related to the NAS itself.
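
    (TrueNAS handles that backup through its replication tasks in the UI; under the hood it’s ZFS snapshots plus send/receive. A hand-rolled equivalent, with hypothetical pool and dataset names:)

      # snapshot the primary pool's dataset
      zfs snapshot primary/data@2024-06-01
      # copy it to the backup pool; later runs can use `zfs send -i` for incrementals
      zfs send primary/data@2024-06-01 | zfs receive backup/data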

    Anyway, it’s possible, and I enjoy what I built. One warning: that case is loud, so get a fan controller too.

      • Friend of DeSoto@startrek.website · 2 days ago

        I was limited by the processor and some existing RAM, which basically dictated my purchases to save money.

        You’re completely right though, a more modern system would be similar in price and more capable.

        I blew my budget on drives and a hot-swap case. The rest is easy to upgrade when the time comes.

  • thirdBreakfast@lemmy.world · 3 days ago

    There are lots of ways to skin this particular cat. My current approach is a low-powered Synology (j series) for mass storage, plus 1-litre PCs running Proxmox for compute, using their NVMe drives for VM storage, all backed up to the Synology.
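
    For the backup leg, Proxmox can point straight at an NFS share on the Synology; a sketch with a placeholder IP, export path, and VM ID:

      # register the Synology share as backup storage
      pvesm add nfs synology-backup --server 192.168.1.20 \
        --export /volume1/proxmox-backups --content backup
      # one-off backup of VM 100 (or schedule it under Datacenter > Backup)
      vzdump 100 --storage synology-backup --mode snapshot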

    • pezhore@infosec.pub · 2 days ago

      This is basically my homelab: a Synology 1618 plus 3x Lenovo M920q systems with 1 TB NVMe drives. I upgraded to a 10 Gb fibre switch, so they run Proxmox + Ceph, with the Synology offering additional storage over its add-on 10 Gb fibre card.

      That’s probably a few steps up from what the OP is asking for.

      Splitting out storage and compute is definitely a good first step to improve performance and failure resiliency.

      • ddh@lemmy.sdf.org · 2 days ago

        I’m interested in how you like Ceph.

        My setup is similar: a DS1522+ volume serves as shared block storage over iSCSI for three Proxmox nodes. Two nodes are micro PCs and the third runs on the 1522+ itself. There’s a DS216j for backups.
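
        On the Proxmox side, hooking up that iSCSI LUN is a single storage definition; a sketch with a placeholder portal IP and IQN:

          pvesm add iscsi synology-san --portal 192.168.1.20 \
            --target iqn.2000-01.com.synology:ds1522.target-1
          # shared LVM is typically layered on top of the iSCSI LUN for VM disks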

        • pezhore@infosec.pub · 1 day ago

          Ceph is… fine. I feel like I don’t know it well enough to properly maintain it. I only went with 10GbE because I was basically told on a homelab subreddit that Ceph will fail in unpredictable ways unless you give it crazy speeds for its storage and network. And yet, it has perpetually complained about having too many placement groups:

          1 pools have too many placement groups
          
          Pool tank has 128 placement groups, should have 32
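
          (For what it’s worth, that specific warning can usually be cleared by letting the autoscaler manage the pool, or by lowering pg_num to the suggested value; shrinking pg_num requires Nautilus or newer:)

            # let Ceph pick and adjust PG counts itself
            ceph osd pool set tank pg_autoscale_mode on
            # or set it manually to what the health warning suggests
            ceph osd pool set tank pg_num 32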
          

          Aside from that, and the occasional monitor falling over, it’s been relatively quiet? I’m tempted to use the Synology for all the storage and carve the 10GbE network up for VM traffic instead. Right now I’m using bonded USB 1GbE copper and it’s kind of sketchy.

          • nickwitha_k (he/him)@lemmy.sdf.org · 1 day ago

            I maintained a Ceph cluster a few years back. I can verify that speeds under 10GbE will cause a lot of weird issues. Ideally, you’ll even want a dedicated 10GbE link purely for Ceph to do its automatic maintenance without impacting storage clients.

            The PGs is a separate issue. Each PG is like a disk partition. There’s some funky math and guidelines to calculate the ideal number for each pool, based upon disks, OSDs, capacity, replicas, etc. Basically, more PGs means that there are more (but smaller) places for CEPH to store data. This means that balancing over a larger number of nodes and drives is easier. It also means that there’s more metadata to track. So, really, it’s a bit of a balancing act.