_______ __ _______ | |
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----. | |
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --| | |
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____| | |
on Gopher (unofficial) | |
COMMENT PAGE FOR: | |
Mini NASes marry NVMe to Intel's efficient chip | |
jhancock wrote 5 hours 29 min ago: | |
thanks for the article. | |
I'm dreaming of this: a mini NAS connected directly to my TV via HDMI or | |
USB. I think I'd want HDMI and let the NAS handle streaming/decoding, | |
but if my TV can handle enough formats, maybe USB will do. | |
anyone have experience with this? | |
I've been using a combination of media server on my Mac with client on | |
Apple TV and I have no end of glitches. | |
deanc wrote 2 hours 58 min ago: | |
Just get an Nvidia Shield. It plays pretty much anything still, even | |
though it's a fairly old device. Your aim should not be to transcode but | |
to just send data when it comes to video. | |
dwood_dev wrote 4 hours 12 min ago: | |
I've been running Plex on my AppleTV 4k for years with few issues. | |
It gets a lot of use in my household. I have my server (a headless | |
Intel iGPU box) running it in docker with the Intel iGPU encoder | |
passed through. | |
I let the iGPU default encode everything realtime, and now that plex | |
has automatic subtitle sync, my main source of complaints is gone. I | |
end up with a wide variety of formats as my wife enjoys obscure | |
media. | |
One of the key things that helped a lot was segregating anime to its | |
own TV collection so that anime-specific defaults can be applied | |
there. | |
You can also run a client on one of these machines directly, but then | |
you are dealing with desktop Linux. | |
aesh2Xa1 wrote 5 hours 18 min ago: | |
Streaming (e.g., Plex or Jellyfin or some UPnP server) helps you send | |
the data to the TV client over the network from a remote server. | |
As you want to bring the data server right to the TV, and you'll | |
output the video via HDMI, just use any PC. There are plenty of them | |
designed for this (usually they're fanless for reducing noise)... | |
search "home theater PC." | |
You can install Kodi as the interface/organizer for playing your | |
media files. It handles all the formats... the TV is just the | |
output. | |
A USB CEC adapter will also allow you to use your TV remote with | |
Kodi. | |
jhancock wrote 5 hours 8 min ago: | |
thanks! | |
I've tried Plex, Jellyfin etc on my Mac. I've tried three different | |
Apple TV apps as streaming client (Infuse, etc). They are all | |
glitchy. Another key problem: if I want to bypass the streaming | |
server on my Mac and have Infuse on the Apple TV just read files | |
from the Mac, the option is the Windows NFS protocol... which gives | |
away too much by providing the Infuse app with a Mac | |
id/password. | |
waterhouse wrote 6 hours 42 min ago: | |
> Testing it out with my disk benchmarking script, I got up to 3 GB/sec | |
in sequential reads. | |
To be sure... is the data compressible, or repeated? I have | |
encountered an SSD that silently performed compression on the data I | |
wrote to it (verified by counting its stats on blocks written). I | |
don't know if there are SSDs that silently deduplicate the data. | |
(An obvious solution is to copy data from /dev/urandom. But beware of | |
the CPU cost of /dev/urandom; on a recent machine, it takes 3 seconds | |
to read 1GB from /dev/urandom, so that would be the bottleneck in a | |
write test. But at least for a read test, it doesn't matter how long | |
the data took to write.) | |
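A minimal Python sketch of one way to generate incompressible, non-repeating write-test data without paying the /dev/urandom cost for every byte; the target path and sizes are placeholders:

# Reuse one urandom buffer and cheaply vary it per block, so a drive that
# compresses or dedupes internally is unlikely to shrink the writes, while
# avoiding a fresh /dev/urandom read for every megabyte.
import os

BLOCK = 1 << 20            # 1 MiB per write
TOTAL = 4 * (1 << 30)      # 4 GiB test file (adjust to taste)

buf = bytearray(os.urandom(BLOCK))   # one expensive random read up front

with open("/mnt/nas/testfile.bin", "wb", buffering=0) as f:
    for i in range(TOTAL // BLOCK):
        buf[0:8] = i.to_bytes(8, "little")   # make every block unique
        f.write(buf)
    os.fsync(f.fileno())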
ksec wrote 6 hours 44 min ago: | |
Something that Apple should have done with TimeCapsule iOS but they | |
were too focused on service revenue. | |
getcrunk wrote 6 hours 55 min ago: | |
Whenever these things come up I have to point out that most of these | |
manufacturers don't do BIOS updates. Since Spectre/Meltdown we see CPU | |
and BIOS vulnerabilities every few months to a year. | |
I know you can patch microcode at runtime/boot, but I don't think that | |
covers all vulnerabilities. | |
transpute wrote 6 hours 34 min ago: | |
Hence the need for coreboot support. | |
ezschemi wrote 7 hours 4 min ago: | |
I was about to order that GMKtek G9 and then saw Jeff's video about it | |
on the same day. All those issues, even with the later fixes he showed, | |
are a big no-no for me. Instead, I went with a Odroid H4-Ultra with an | |
Intel N305, 48GB Crucial DDR5 and 4x4TB Samsung 990 Evo SSDs (low-power | |
usage) + a 2TB SATA SSD to boot from. Yes, the SSDs are way overkill | |
and pretty expensive at $239 per Samsung 990 Evo (got them with a deal | |
at Amazon). It's running TrueNAS. | |
I am somewhat space-limited with this system, didn't want spinning | |
disks (as the whole house slightly shakes when pickup or trash trucks | |
pass by), wanted a fun project and I also wanted to go as small as | |
possible. | |
No issues so far. The system is completely stable. Though, I did add a | |
separate fan at the bottom of the Odroid case to help cool the NVMe | |
SSDs. Even with the single lane of PCIe, the 2.5gbit/s networking gets | |
maxed out. Maybe I could try bonding the 2 networking ports but I don't | |
have any client devices that could use it. | |
I had an eye on the Beelink ME Mini too, but I don't think the NVMe | |
disks are sufficiently cooled under load, especially on the outer side | |
of the disks. | |
wpm wrote 3 hours 23 min ago: | |
> (as the whole house slightly shakes when pickup or trash trucks | |
pass by) | |
I have the same problem, but it is not a problem for my Seagate X16s, | |
that have been going strong for years. | |
kristianp wrote 2 hours 54 min ago: | |
How does this happen? Wooden house? Only 2-3 metres from the road? | |
atmanactive wrote 5 hours 26 min ago: | |
Which load, 250MB/s? Modern NVMes are rated for ~20x speeds. Running | |
at such a low bandwidth, they'll stay at idle temperatures at all | |
times. | |
riobard wrote 10 hours 15 min ago: | |
I've always been puzzled by the strange choice of RAIDing multiple | |
small-capacity M.2 NVMe drives in these tiny low-end Intel boxes with | |
severely limited PCIe lanes, using only one lane per SSD. | |
Why not a single large-capacity M.2 SSD using 4 full lanes, and proper | |
backup with a cheaper, larger-capacity and more reliable spinning | |
disk? | |
tiew9Vii wrote 10 hours 6 min ago: | |
The latest small M.2 NASes make very good consumer-grade, small, | |
quiet, power-efficient storage you can put in your living room, next | |
to the TV for media storage and light network-attached storage. | |
It'd be great if you could fully utilise the M.2 speed, but they are | |
not about that. | |
Why not a single large M.2? Price. | |
riobard wrote 10 hours 0 min ago: | |
Would four 2TB SSDs be more or less expensive than one 8TB SSD? And | |
also counting power efficiency and RAID complexity? | |
adgjlsfhk1 wrote 8 hours 14 min ago: | |
4 small drives+raid gives you redundancy. | |
foobiekr wrote 3 hours 18 min ago: | |
Given the write patterns of RAID and the wear issues of flash, | |
it's not obvious at all that 4xNVME actually gives you | |
meaningful redundancy. | |
geerlingguy wrote 7 hours 59 min ago: | |
And often are about the same price or less expensive than the | |
one 8TB NVMe. | |
I'm hopeful 4/8 TB NVMe drives will come down in price someday | |
but they've been remarkably steady for a few years. | |
FloatArtifact wrote 10 hours 30 min ago: | |
I think the N100 and N150 suffer the same weakness for this type of use | |
case in the context of SSD storage and 10Gb networking. We need a next- | |
generation chip that can leverage more PCIe lanes with roughly the same | |
power efficiency. | |
I would remove points for a built-in non-modular standardized power | |
supply. It's not fixable, and it's not comparable to Apple in quality. | |
monster_truck wrote 10 hours 48 min ago: | |
These are cute, I'd really like to see the "serious" version. | |
Something like a Ryzen 7745, 128gb ecc ddr5-5200, no less than two | |
10gbe ports (though unrealistic given the size, if they were sfp+ | |
that'd be incredible), drives split across two different nvme raid | |
controllers. I don't care how expensive or loud it is or how much power | |
it uses, I just want a coffee-cup sized cube that can handle the kind | |
of shit you'd typically bring a rack along for. It's 2025. | |
windowsrookie wrote 7 hours 6 min ago: | |
The Mac Studio is pretty close, plus silent and power efficient. But it | |
isn't cheap like an N100 PC. | |
varispeed wrote 8 hours 41 min ago: | |
Best bet is probably the Flashstor FS6812X [1]. Not "cube" sized, but | |
still surprisingly small. I've got one under the desk, so I don't | |
even register it is there. Stuffed it with 4x 4TB drives for now. | |
[1]: https://www.asustor.com/en-gb/product?p_id=91 | |
Palomides wrote 10 hours 27 min ago: | |
the minisforum devices are probably the closest thing to that | |
unfortunately most people still consider ECC unnecessary, so options | |
are slim | |
gorkish wrote 11 hours 32 min ago: | |
NVMe NAS is completely and totally pointless with such crap | |
connectivity. | |
What in the WORLD is preventing these systems from getting at least | |
10gbps interfaces? I have been waiting for years and years and years | |
and years and the only thing on the market for small systems with good | |
networking is weird stuff that you have to email Qotom to order direct | |
from China and _ONE_ system from Minisforum. | |
I'm beginning to think there is some sort of conspiracy to not allow | |
anything smaller than a full size ATX desktop to have anything faster | |
than 2.5gbps NICs. (10gbps nics that plug into NVMe slots are not the | |
solution.) | |
windowsrookie wrote 6 hours 54 min ago: | |
You can order the Mac mini with 10gbps networking and it has 3 | |
thunderbolt 4 ports if you need more. Plus it has an internal power | |
supply making it smaller than most of these mini PCs. | |
geerlingguy wrote 5 hours 53 min ago: | |
That's what I'm running as my main desktop at home, and I have an | |
external 2TB TB5 SSD, which gives me 3 GB/sec. | |
If I could get the same unit for like $299 I'd run it like that for | |
my NAS too, as long as I could run a full backup to another device | |
(and a 3rd on the cloud with Glacier of course). | |
lmz wrote 8 hours 3 min ago: | |
Not many people have fiber at home. Copper 10gig is power hungry and | |
demands good cabling. | |
zerd wrote 8 hours 23 min ago: | |
It's annoying, around 10 years ago 10gbps was just starting to become | |
more and more standard on bigger NAS, and 10gbps switches were | |
starting to get cheaper, but then 2.5GbE came out and they all | |
switched to that. | |
atmanactive wrote 5 hours 10 min ago: | |
That's because 10GbE tech is not there yet. Everything overheats | |
and drops out all the time, while 2.5GbE just works. Several | |
years from now, this will all change, of course. | |
wpm wrote 3 hours 13 min ago: | |
Speak for yourself. I have AQC cards in a PC and a Mac, Intel | |
gear in my servers, and I can easily sustain full speed. | |
PhilipRoman wrote 9 hours 51 min ago: | |
It especially sucks when even low end mini PCs have at least multiple | |
5Gbps USB ports, yet we are stuck with 1Gbps (or 2.5, if manufacturer | |
is feeling generous) ethernet. Maybe IP over Thunderbolt will finally | |
save us. | |
9x39 wrote 10 hours 39 min ago: | |
>What in the WORLD is preventing these systems from getting at least | |
10gbps interfaces? | |
Price and price. Like another commenter said, there is at least one | |
10Gbe mini NAS out there, but it's several times more expensive. | |
What's the use case for the 10GbE? Is ~200MB/sec not enough? | |
I think the segment for these units is low price, small size, shared | |
connectivity. The kind of thing you tuck away in your house invisibly | |
and silently, or throw in a bag to travel with if you have a few | |
laptops that need shared storage. The thinking is probably that people | |
with high-performance needs already have fast local NVMe storage. | |
wpm wrote 3 hours 14 min ago: | |
> What's the use case for the 10GbE? Is ~200MB/sec not enough? | |
When I'm talking to an array of NVMe? Nowhere near enough, not | |
when each drive could do 1000MB/s of sequential writes without | |
breaking a sweat. | |
CharlesW wrote 11 hours 26 min ago: | |
> What in the WORLD is preventing these systems from getting at least | |
10gbps interfaces? | |
They definitely exist, two examples with 10 GbE being the QNAP | |
TBS-h574TX and the Asustor Flashstor 12 Pro FS6712X. | |
QuiEgo wrote 11 hours 30 min ago: | |
Consider the terramaster f8 ssd | |
sorenjan wrote 11 hours 39 min ago: | |
Is it possible (and easy) to make a NAS with harddrives for storage and | |
an SSD for cache? I don't have any data that I use daily or even | |
weekly, so I don't want the drives spinning needlessly 24/7, and I | |
think an SSD cache would stop having to spin them up most of the time. | |
For instance, most reads from a media NAS will probably be biased | |
towards both newly written files, and sequentially (next episode). This | |
is a use case CPU cache usually deals with transparently when reading | |
from RAM. | |
Nursie wrote 7 hours 10 min ago: | |
I used to run a zfs setup with an ssd for L2ARC and SLOG. | |
Can't tell you how it worked out performance-wise, because I | |
didn't really benchmark it. But it was easy enough to set up. | |
These days I just use SATA SSDs for the whole array. | |
QuiEgo wrote 11 hours 25 min ago: | |
[1] I do this. One mergerfs mount with an ssd and three hdds made to | |
look like one disk. Mergerfs is set to write to the SSD if it's not | |
full, and read from the SSD first. | |
A cron job moves the oldest files off the SSD once per night to | |
the hdds (via a second mergerfs mount without the ssd) if the ssd is | |
getting full. | |
I have a fourth hdd that uses snap raid to protect the ssd and other | |
hdds. | |
[1]: https://github.com/trapexit/mergerfs/blob/master/mkdocs/docs... | |
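A rough Python sketch of the nightly mover described above; the mount points and the free-space threshold are made-up placeholders, not the actual setup:

# Move the oldest files off the SSD tier into a second mergerfs mount
# that excludes the SSD branch, until enough free space is recovered.
import os
import shutil

SSD = "/mnt/ssd"                  # fast tier (placeholder path)
HDD_POOL = "/mnt/pool-hdd-only"   # mergerfs mount without the SSD branch
FREE_TARGET = 0.25                # keep at least 25% of the SSD free

def free_fraction(path):
    st = os.statvfs(path)
    return st.f_bavail / st.f_blocks

def files_oldest_first(root):
    found = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            found.append((os.path.getmtime(p), p))
    return [p for _, p in sorted(found)]

for path in files_oldest_first(SSD):
    if free_fraction(SSD) >= FREE_TARGET:
        break
    dest = os.path.join(HDD_POOL, os.path.relpath(path, SSD))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(path, dest)   # mergerfs now serves this file from the HDDs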
QuiEgo wrote 11 hours 16 min ago: | |
Also, [1] which moves files between disks based on their state in a | |
Plex DB | |
[1]: https://github.com/bexem/PlexCache | |
op00to wrote 11 hours 39 min ago: | |
Yes. You can use dm-cache. | |
sorenjan wrote 11 hours 32 min ago: | |
Thanks. I looked it up and it seems that lvmcache uses dm-cache and | |
is easier to use, I guess putting that in front of some kind of | |
RAID volume could be a good solution. | |
guerby wrote 11 hours 58 min ago: | |
Related question: does anyone know of a USB-C power bank that can be | |
effectively used as UPS? That is to say is able to be charged while | |
maintaining power to load (obviously with rate of charge greater by a | |
few watts than load). | |
Most models I find reuse the most powerful USB-C port as... the | |
recharging port, so they're unusable as a DC UPS. | |
Context: my home server is my old [1] motherboard running proxmox VE | |
with 64GB RAM and 4 TB NVME, powered by usb-c and drawing ... 2 Watt at | |
idle. | |
[1]: https://frame.work | |
j45 wrote 4 hours 26 min ago: | |
Any reason you can't run a USB-C brick attached to a UPS? Some UPSes | |
likely have USB plugs in them too. | |
1oooqooq wrote 12 hours 26 min ago: | |
I will wait until they have an efficient AMD chip for one very simple | |
reason: AMD graciously allows ECC on some* CPUs. | |
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much | |
money Intel was making and joined in. Now you must get a "PRO" CPU to | |
get ECC support, even on mobile (but good luck finding ECC SODIMMs). | |
wpm wrote 3 hours 8 min ago: | |
And good luck finding a single fucking computer for sale that even | |
uses these "Pro" CPUs, because they sure as hell don't sell them to | |
the likes of Minisforum and Beelink. | |
There was some stuff in DDR5 that made ECC harder to implement | |
(unlike DDR4 where pretty much everything AMD made supported | |
unbuffered ECC by default), but it's still ridiculous how hard it is | |
to find something that supports DDR5 ECC that doesn't suck down 500W | |
at idle. | |
bhouston wrote 12 hours 56 min ago: | |
I am currently running an 8x 4TB NVMe NAS via OpenZFS on TrueNAS Linux. | |
It is good but my box is quite large. I made this via a standard AMD | |
motherboard with both built-in NVMe slots as well as a bunch of | |
expansion PCIe cards. It is very fast. | |
I was thinking of replacing it with an Asustor FLASHSTOR 12, a much more | |
compact form factor, and it fits up to 12 NVMes. I will miss TrueNAS | |
though, but it would be so much smaller. | |
layer8 wrote 11 hours 9 min ago: | |
You can install TrueNAS on it: | |
[1]: https://www.jeffgeerling.com/blog/2023/how-i-installed-truen... | |
moondev wrote 11 hours 25 min ago: | |
You can install TrueNAS Linux on the Flashstor 12. It has no GPU or | |
video out, but I installed an M.2 GPU to attach an HDMI monitor. | |
archagon wrote 13 hours 8 min ago: | |
These look compelling, but unfortunately, we know that SSDs are not | |
nearly as reliable as spinning rust hard drives when it comes to data | |
retention: [1] (I assume M.2 cards are the same, but have not | |
confirmed.) | |
If this isn't running 24/7, I'm not sure I would trust it with my | |
most precious data. | |
Also, these things are just begging for a 10Gbps Ethernet port, since | |
you're going to lose out on a ton of bandwidth over 2.5Gbps... though I | |
suppose you could probably use the USB-C port for that. | |
[1]: https://www.tomshardware.com/pc-components/storage/unpowered-s... | |
ac29 wrote 10 hours 44 min ago: | |
Your link is talking about leaving drives unpowered for years. That | |
would be a very odd use of a NAS. | |
archagon wrote 10 hours 41 min ago: | |
True, but it's still concerning. For example, I have a NAS with | |
some long-term archives that I power on maybe once a month. Am I | |
going to see SSD data loss from a usage pattern like that? | |
adgjlsfhk1 wrote 8 hours 13 min ago: | |
no. SSD data loss is in the ~years range | |
ozim wrote 13 hours 40 min ago: | |
So Jeff is a really decent guy that doesn't keep terabytes of Linux | |
ISOs. | |
attendant3446 wrote 13 hours 45 min ago: | |
I was recently looking for a mini PC to use as a home server with | |
extendable storage. After comparing different options (mostly Intel), I | |
went with the Ryzen 7 5825U (Beelink SER5 Pro) instead. It has an M.2 | |
slot for an SSD and I can install a 2.5" HDD too. The only downside is | |
that the HDD is limited by height to 7 mm (basically 2 TB storage | |
limit), but I have a 4 TB disk connected via USB for "cold" storage. | |
After years of using different models with Celeron or Intel N CPUs, | |
Ryzen is a beast (and TDP is only 15W). In my case, AMD now replaced | |
almost all the compute power in my home (with the exception of the | |
smartphone) and I don't see many reasons to go back to Intel. | |
miladyincontrol wrote 13 hours 56 min ago: | |
Still think it's highly underrated to use fs-cache with NASes (usually | |
configured with cachefilesd) for some local dynamically scaling | |
client-side nvme caching. | |
Helps a ton with response times with any NAS that's primarily spinning | |
rust, especially if dealing with a decent amount of small files. | |
sandreas wrote 13 hours 56 min ago: | |
While it may be tempting to go "mini" and NVMe, for a normal use case I | |
think this is hardly cost effective. | |
You give up so much by using an all in mini device... | |
No Upgrades, no ECC, harder cooling, less I/O. | |
I have had a Proxmox Server with a used Fujitsu D3417 and 64gb ecc for | |
roughly 5 years now, paid 350 bucks for the whole thing and upgraded | |
the storage once from 1tb to 2tb. It draws 12-14W in normal day use and | |
has 10 docker containers and 1 windows VM running. | |
So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over | |
these toy boxes... | |
However, Jeff's content is awesome like always | |
layoric wrote 3 hours 24 min ago: | |
No ECC is the biggest trade-off for me, but the C236 Express chipset | |
has very little choice for CPUs; they are all 4-core 8-thread. I've | |
got multiple X99 platform systems, and for a long time they were the | |
king of cost efficiency, but lately the Ryzen laptop chips are | |
becoming too good to pass up, even without ECC. E.g. Ryzen 5825U minis. | |
mytailorisrich wrote 40 min ago: | |
For a home NAS, ECC is as needed as it is on your laptop. | |
ndiddy wrote 10 hours 34 min ago: | |
Another thing is that unless you have a very specific need for SSDs | |
(such as heavily random access focused workloads, very tight space | |
constraints, or working in a bumpy environment), mechanical hard | |
drives are still way more cost effective for storing lots of data | |
than NVMe. You can get a manufacturer refurbished 12TB hard drive | |
with a multi-year warranty for ~$120, while even an 8TB NVMe drive | |
goes for at least $500. Of course for general-purpose internal | |
drives, NVMe is a far better experience than a mechanical HDD, but my | |
NAS with 6 hard drives in RAIDz2 still gets bottlenecked by my | |
2.5GBit LAN, not the speeds of the drives. | |
throw0101d wrote 5 hours 41 min ago: | |
> […] mechanical hard drives are still way more cost effective | |
for storing lots of data than NVMe. | |
Linux ISOs? | |
acranox wrote 9 hours 57 min ago: | |
Don't forget about power. If you're trying to build a low-power | |
NAS, those HDDs idle around 5W each, while the SSD is closer | |
to 5mW. Once you've got a few disks, the HDDs can account for | |
half the power or more. The cost penalty for 2TB or 4TB SSDs is | |
still big, but not as bad as at the 8TB level. | |
markhahn wrote 9 hours 14 min ago: | |
such power claims are problematic - you're not letting the HDs | |
spin down, for instance, and not crediting the fact that an SSD | |
may easily dissipate more power than an HD under load. (in this | |
thread, the host and network are slow, so it's not relevant that | |
SSDs are far faster when active.) | |
1over137 wrote 8 hours 27 min ago: | |
Letting hdds spin down is generally not advisable in a NAS, | |
unless you access it really rarely perhaps. | |
sandreas wrote 2 hours 50 min ago: | |
Is there any (semi-)scientific proof of that (serious | |
question)? I did search a lot on this topic but found | |
nothing... | |
(see above, same question) | |
philjohn wrote 8 hours 27 min ago: | |
There's a lot of "never let your drive spin down! They need to | |
be running 24/7 or they'll die in no time at all!" voices in | |
the various homelab communities sadly. | |
Even the lower tier IronWolf drives from Seagate specify 600k | |
load/unload cycles (not spin down, granted, but gives an idea | |
of the longevity). | |
sandreas wrote 2 hours 50 min ago: | |
Is there any (semi-)scientific proof of that (serious | |
question)? I did search a lot on this topic but found | |
nothing... | |
espadrine wrote 47 min ago: | |
Here is someone that had significant corruption until they | |
stopped: [1] There are many similar articles. | |
[1]: https://www.xda-developers.com/why-not-to-spin-dow... | |
cyanydeez wrote 11 hours 10 min ago: | |
I've had a synology since 2015. Why, besides the drives themselves, | |
would most home labs need to upgrade? | |
I don't really understand the general public, or even most usages, | |
requiring upgrade paths beyond get a new device. | |
By the time the need to upgrade comes, the tech stack is likely | |
faster and you're basically just talking about gutting the PC and | |
doing everything over again, except maybe power supply. | |
dragontamer wrote 9 hours 14 min ago: | |
> except maybe power supply. | |
Modern Power MOSFETs are cheaper and more efficient. 10 Years ago | |
80Gold efficiency was a bit expensive and 80Bronze was common. | |
Today, 80Gold is cheap and common and only 80Platinum reaches into | |
the exotic level. | |
sandreas wrote 1 hour 58 min ago: | |
An 80Bronze 300W can still be more efficient than a 750W | |
80Platinum at mainly low loads. Additionally, some of the devices | |
are way more efficient than they are certified for. A well known | |
example is the Corsair RM550x (2021). | |
If your peak power draw is <200W, I would recommend an efficient | |
<450W power supply. | |
Another aspect: Buying a 120 bucks power supply that is 1.2% more | |
efficient than a 60 bucks one is just a waste of money. | |
sandreas wrote 11 hours 1 min ago: | |
Understandable... Well, the bottleneck for a Proxmox Server often | |
is RAM - sometimes CPU cores (to share between VMs). This might not | |
be the case for a NAS-only device. | |
Another upgrade path is to keep the case, fans, cooling solution | |
and only switch Mainboard, CPU and RAM. | |
I'm also not a huge fan of non x64 devices, because they still | |
often require jumping through some hoops regarding boot order, | |
external device boot, or behavior after power loss. | |
fnord77 wrote 12 hours 27 min ago: | |
these little boxes are perfect for my home | |
My use case is a backup server for my macs and cold storage for | |
movies. | |
6x2TB drives will give me a 9TB RAID-5 for $809 ($100 each for the | |
drives, $209 for the nas). | |
Very quiet so I can have it in my living room plugged into my TV. < | |
10W power. | |
I have no room for a big noisy server. | |
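For reference, the arithmetic behind those figures: RAID-5 gives up one drive's worth of capacity to parity, and the "9TB" roughly lines up with the usable space expressed in TiB.

# Six 2 TB drives in RAID-5: one drive's worth of capacity goes to parity.
drives, size_tb, parity = 6, 2.0, 1
usable_tb = (drives - parity) * size_tb         # 10 TB (decimal)
usable_tib = usable_tb * 1e12 / 2**40           # ~9.1 TiB, the "9TB" above
cost = drives * 100 + 209                       # $809 for drives + NAS
print(usable_tb, round(usable_tib, 1), cost)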
UltraSane wrote 9 hours 16 min ago: | |
Storing backups and movies on NVMe ssds is just a waste of money. | |
sandreas wrote 2 hours 5 min ago: | |
Absolutely. I don't store movies at all, but if I did, I would | |
add a USB-based solution that could be turned off via shelly plug | |
/ tasmota remotely. | |
sandreas wrote 11 hours 18 min ago: | |
While I get your point about size, I'd not use RAID-5 for my | |
personal homelab. I'd also say that 6x2TB drives are not the | |
optimal solution for low power consumption. You're also missing out on | |
a server-quality BIOS, design/stability/x64 and remote management. | |
However, not bad. | |
While my Server is quite big compared to a "mini" device, it's | |
silent. No CPU fan, only 120mm case fans spinning around 500rpm, | |
maybe 900rpm under load - hardly noticeable. I've also got a completely | |
passive backup solution with a Streacom FC5, but I don't really | |
trust it for the chipsets, so I also installed a low rpm 120mm fan. | |
How did you fit 6 drives in a "mini" case? Using Asus Flashstor or | |
beelink? | |
j45 wrote 4 hours 32 min ago: | |
I agreed with this generally until learning the long way that RAID | |
5 at minimum is the only way to have some peace of mind, and to always | |
get a NAS with at least 1-2 more bays than you need. | |
Storage is easier as an appliance that just runs. | |
Dylan16807 wrote 5 hours 39 min ago: | |
> I'd not use RAID-5 for my personal homelab. | |
What would you use instead? | |
ZFS is better than raw RAID, but 1 parity per 5 data disks is a | |
pretty good match for the reliability you can expect out of any | |
one machine. | |
Much more important than better parity is having backups. Maybe | |
more important than having any parity, though if you have no | |
parity please use JBOD and not RAID-0. | |
timc3 wrote 2 hours 30 min ago: | |
I would run 2 or more parity disks always. I have had disks | |
fail and rebuilding with only one parity drive is scary (have | |
seen rebuilds go bad because a second drive failed whilst | |
rebuilding). | |
But agree about backups. | |
Dylan16807 wrote 1 hour 45 min ago: | |
Were those arrays doing regular scrubs, so that they | |
experience rebuild-equivalent load every month or two and | |
it's not a sudden shock to them? | |
If your odds of disk failure in a rebuild are "only" 10x | |
normal failure rate, and it takes a week, 5 disks will all | |
survive that week 98% of the time. That's plenty for a NAS. | |
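A quick back-of-the-envelope check of that 98% figure, assuming roughly a 2% annual failure rate per disk (the AFR is an assumption; real rates vary by model and age):

afr = 0.02                  # assumed annual failure rate of one disk
weekly = afr / 52           # chance one disk fails in an ordinary week
rebuild = 10 * weekly       # "10x normal failure rate" during the rebuild
disks = 5                   # disks that must survive the week-long rebuild

survive_all = (1 - rebuild) ** disks
print(f"{survive_all:.3f}")   # ~0.981, i.e. roughly the 98% quoted above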
sandreas wrote 2 hours 38 min ago: | |
I'd almost always use RAID-1 or if I had > 4 disks, maybe | |
RAID-6. RAID-5 seems very cost effective at first, but if you | |
lose a drive, the probability of losing another one in the | |
restoring process is pretty high (I don't have the numbers, but | |
I researched that years ago). The disk-replacement process | |
produces very high load on the non defective disks and the more | |
you have the riskier the process. Another aspect is that 5 | |
drives draw way more power than 2 and you cannot (easily) | |
upgrade the capacity, although ZFS offers a feature for | |
RAID5-expansion. | |
Since RAID is not meant for backup, but for reliability, losing | |
a drive while restoring will kill your storage pool and having | |
to restore all the data from a backup (e.g. from a cloud | |
drive) is probably not what you want, since it takes time during which | |
the device is offline. If you rely on RAID5 without having a | |
backup you're done. | |
So I have a RAID1, which is simple, reliable and easy to | |
maintain. Replacing 2 drives with higher capacity ones and | |
increasing the storage is easy. | |
epistasis wrote 10 hours 9 min ago: | |
I'm interested in learning more about your setup. What sort of | |
system did you put together for $350? Is it a normal ATX case? I | |
really like the idea of running proxmox but I don't know how to | |
get something cheap! | |
sandreas wrote 2 hours 32 min ago: | |
My current config: | |
Fujitsu D3417-B12 | |
Intel Xeon 1225 | |
64GB ecc | |
WD SN850x 2TB | |
mATX case | |
Pico PSU 150 | |
For backup I use a 2TB enterprise HDD and ZFS send | |
For snapshotting I use zfs-auto-snapshot | |
So really nothing recommendable for buying today. You could go | |
for this [1], or an old Fujitsu Celsius W580 workstation with a | |
Bojiadafast ATX power supply adapter, if you need hard disks. | |
Unfortunately there is no silver bullet these days. The old | |
stuff is... well, too old or no longer available, and the new | |
stuff is either too pricey, lacks features (ECC and 2.5G mainly) | |
or is too power hungry. | |
A year ago there were bargains on the Gigabyte MC12-LE0 board | |
available for < 50 bucks, but nowadays these cost about 250 | |
again. These boards also had the problem of drawing too much | |
power for an ultra low power homelab. | |
If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 | |
with a gaming board (like ASUS ROG Strix B550-F Gaming) with | |
ECC RAM, which is supported on some boards. | |
[1]: https://www.aliexpress.com/item/1005006369887180.html | |
samhclark wrote 13 hours 26 min ago: | |
I think you're right generally, but I wanna call out the ODROID H4 | |
models as an exception to a lot of what you said. They are mostly | |
upgradable (SODIMM RAM, SATA ports, M.2 2280 slots), and it does | |
support in-band ECC which kinda checks the ECC box. They've got a | |
Mini-ITX adapter for $15 so it can fit into existing cases too. | |
No IPMI and not very many NVME slots. So I think you're right that a | |
good mATX board could be better. | |
geek_at wrote 12 hours 40 min ago: | |
Not sure about the ODROID but I got myself the NAS kit from | |
FriendlyElec. With the largest RAM it was about 150 bucks and | |
comes with 2.5G Ethernet and 4 NVMe slots. No fan, and it keeps fairly | |
cool even under load. | |
Running it with encrypted ZFS volumes and even with a 5-bay 3.5 inch | |
HDD dock attached via USB. | |
[1]: https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Ki... | |
sandreas wrote 12 hours 44 min ago: | |
Well, if you would like to go mini (with ECC and 2.5G) you could | |
take a look at this one: [1] Not totally upgradable, but at least | |
pretty low cost and modern with an optional SATA + NVMe combination | |
for Proxmox. Shovel in an enterprise SATA and a consumer 8TB WD | |
SN850x and this should work pretty good. Even Optane is supported. | |
IPMI could be replaced with NanoKVM or JetKVM... | |
[1]: https://www.aliexpress.com/item/1005006369887180.html | |
a012 wrote 2 hours 47 min ago: | |
That looks pretty slick with a standard hsf for the CPU, thanks | |
for sharing | |
herf wrote 14 hours 8 min ago: | |
Which SSDs do people rely on? Considering PLP (power loss protection), | |
write endurance/DWPD (no QLC), and other bugs that affect ZFS | |
especially? It is hard to find options that do these things well for | |
<$100/TB, with lower-end datacenter options (e.g., Samsung PM9A3) | |
costing maybe double what you see in a lot of builds. | |
privatelypublic wrote 13 hours 41 min ago: | |
QLC isn't an issue for a consumer NAS - are 'you' seriously going to | |
write 160GB/day, every day? | |
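For a sense of scale, here is where a number like 160 GB/day can come from; the 300 TBW rating and 5-year warranty below are assumptions, but typical for a 2 TB consumer QLC drive:

# Daily writes needed to exhaust a drive's rated endurance within warranty.
tbw = 300                   # assumed rated terabytes written
warranty_years = 5

gb_per_day = tbw * 1000 / (warranty_years * 365)
print(f"{gb_per_day:.0f} GB/day")   # ~164 GB/day, every day, for 5 years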
magicalhippo wrote 13 hours 26 min ago: | |
QLC have quite the write performance cliff though, which could be | |
an issue during use or when rebuilding the array. | |
Just something to be aware of. | |
dwood_dev wrote 4 hours 22 min ago: | |
Writing over the 2.5GbE network against a RAID-Z1 config of 4 drives | |
puts the sustained write speed below that of most QLC drives. | |
Recovery from a lost drive would be slower, for sure. | |
nightfly wrote 13 hours 57 min ago: | |
ZFS isn't more affected by those, you're just more likely to notice | |
them with ZFS. You'll probably never notice write endurance issues on | |
a home NAS | |
7e wrote 14 hours 12 min ago: | |
These need remote management capabilities (IPMI) to not be a huge PITA. | |
geerlingguy wrote 7 hours 55 min ago: | |
A JetKVM, NanoKVM, or the like is useful if you want to add on some | |
capability. | |
bongodongobob wrote 11 hours 49 min ago: | |
I haven't even thought about my NAS in years. No idea what you're | |
talking about. | |
yonatan8070 wrote 13 hours 49 min ago: | |
How often do you use IPMI on a server? I have a regular desktop | |
running Proxmox, and I haven't had to plug in a monitor since I first | |
installed it like 2 years ago | |
al_borland wrote 14 hours 56 min ago: | |
I've been thinking about moving from SSDs for my NAS to solid state. | |
The drives are so loud, all the time; it's very annoying. | |
My first experience with these cheap mini PCs was with a Beelink and it | |
was very positive and makes me question the longevity of the hardware. | |
For a NAS, that's important to me. | |
leptons wrote 12 hours 28 min ago: | |
> moving from SSDs for my NAS to solid state. | |
SSD = Solid State Drive | |
So you're moving from solid state to solid state? | |
al_borland wrote 10 hours 58 min ago: | |
That should have been HDD. Typo. Seems too late to edit. | |
chime wrote 14 hours 37 min ago: | |
I've been using a QNAP TBS-464 [1] for 4 years now with excellent | |
results. I have 4x 4TB NVMe drives and get about 11TB usable after | |
RAID. It gets slightly warm but I have it in my media cabinet with a | |
UPS, Mikrotik router, PoE switches, and ton of other devices. Zero | |
complaints about this setup. | |
The entire cabinet uses under 1 kWh/day, costing me under $40/year | |
here, compared to my previous Synology and home-made NAS which used | |
300-500W, costing $300+/year. Sure, I paid about $1500 in total when I | |
bought the QNAP and the NVMe drives but just the electricity savings | |
made the expense worth it, let alone the performance, features etc. | |
1. | |
[1]: https://www.qnap.com/en-us/product/tbs-464 | |
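Those electricity figures roughly check out if you assume an energy price around $0.10/kWh (the rate is an assumption; it isn't stated above):

rate = 0.10                           # assumed $/kWh

cabinet_kwh_year = 1.0 * 365          # "under 1 kWh/day"
print(cabinet_kwh_year * rate)        # ~$36.5/year -> "under $40/year"

old_watts = 400                       # middle of the 300-500 W range
old_kwh_year = old_watts * 24 * 365 / 1000
print(old_kwh_year * rate)            # ~$350/year -> "$300+/year"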
al_borland wrote 13 hours 54 min ago: | |
Thanks, I'll give it a look. I'm running a Synology right now. | |
It only has 2 drives, so just swapping those out for SSDs would | |
cost as much as a whole 4xNVMe setup, as I have 8TB HDDs in there | |
now. | |
jbverschoor wrote 14 hours 41 min ago: | |
HDD -> SSD I assume | |
For me it's more about random access times. | |
whatever1 wrote 15 hours 0 min ago: | |
Question regarding these mini pcs: how do you connect them to plain old | |
hard drives? Is Thunderbolt / USB these days reliable enough to run | |
24/7 without disconnects, like onboard SATA? | |
TiredOfLife wrote 14 hours 27 min ago: | |
I have been running usb hdds 24/7 connected to raspberry pi as a nas | |
for 10 years without problems | |
blargthorwars wrote 14 hours 35 min ago: | |
I've run a massive farm (2 petabytes) of ZFS on FreeBSD servers with | |
Zraid over consumer USB for about fifteen years and haven't had a | |
problem: directly attaching to the motherboard USB ports and using | |
good but boring controllers on the drives like the WD Elements | |
series. | |
michaelt wrote 14 hours 35 min ago: | |
Would you not simply buy a regular NAS? | |
Why buy a tiny, m.2 only mini-NAS if your need is better met by a | |
vanilla 2-bay NAS? | |
x0x0 wrote 13 hours 12 min ago: | |
power regularly hits 50 cents a kilowatt hour where I live. Most | |
of those seem to treat power like it's free. | |
projektfu wrote 14 hours 27 min ago: | |
Good question. I imagine for the silence and low power usage | |
without needing huge amounts of storage. That said, I own an n100 | |
dual 3.5 bay + m.2 mini PC that can function as a NAS or as | |
anything and I think it's pretty neat for the price. | |
indemnity wrote 10 hours 16 min ago: | |
Noise is definitely an issue. | |
I have an 8 drive NAS running 7200 RPM drives, which is on a wall | |
mounted shelf drilled into the studs. | |
On the other side of that wall is my home office. | |
I had to put the NAS on speaker springs [1] to not go crazy from | |
the hum :) | |
[1]: https://www.amazon.com.au/Nobsound-Aluminum-Isolation-Am... | |
asalahli wrote 13 hours 43 min ago: | |
This sounds exactly like what I'm looking for. Care to share the | |
brand & model? | |
projektfu wrote 11 hours 28 min ago: | |
AOOSTAR R1 | |
monster_truck wrote 14 hours 37 min ago: | |
The last sata controller (onboard or otherwise) that I had with known | |
data corruption and connection issues is old enough to drive now | |
jeffbee wrote 14 hours 48 min ago: | |
I've never heard of these disconnects. The OWC ThunderBay works well. | |
jkortufor wrote 14 hours 31 min ago: | |
I have experienced them - I have a B650 AM5 motherboard and if I | |
connect an Orico USB HDD enclosure to the fastest USB ports, the | |
ones coming directly from the AMD CPU (yes, it's a thing now), | |
after 5-10 min the HDD just disappears from the system. Doesn't | |
happen on the other USB ports. | |
jeffbee wrote 14 hours 24 min ago: | |
Well, AMD makes a good core but there are reasons that Intel is | |
preferred by some users in some applications, and one of those | |
reasons is that the peripheral devices on Intel platforms tend to | |
work. | |
layer8 wrote 14 hours 32 min ago: | |
For that money it can make more sense to get a UGreen DXP4800 with | |
built-in N100: [1] You can install a third-party OS on it. | |
[1]: https://nas.ugreen.com/products/ugreen-nasync-dxp4800-nas-... | |
devwastaken wrote 15 hours 4 min ago: | |
I want a NAS I can put 4TB NVMe drives in and a 12TB HDD running backup | |
every night, with the ability to shove a 50Gbps SFP card in it so I can | |
truly have a detached storage solution. | |
gorkish wrote 11 hours 22 min ago: | |
The lack of highspeed networking on any small system is completely | |
and totally insane. I have come to hate 2.5gbps for the hard stall it | |
has caused on consumer networking with such a passion that it is | |
difficult to convey. You ship a system with USB5 on the front and | |
your networking offering is 3.5 orders of magnitude slower? What good | |
is the cloud if you have to drink it through a straw? | |
lostlogin wrote 13 hours 23 min ago: | |
10gbps would be a good start. The lack of wifi is mentioned as a | |
downside, but do many people want that on a NAS? | |
jbverschoor wrote 14 hours 39 min ago: | |
Yeah, that's what I want too. | |
I don't necessarily need a mirror of most data, some I do prefer, | |
but that's small. | |
I just want a backup (with history) of the data-SSD. The backup can | |
be a single drive + perhaps remote storage | |
lostlogin wrote 13 hours 22 min ago: | |
Would you really want the backup on a single disk? Or is this | |
backing up data that is also versioned on the SSDs? | |
dwood_dev wrote 15 hours 4 min ago: | |
I love reviews like these. I'm a fan of the N100 series for what they | |
are in bringing low power x86 small PCs to a wide variety of | |
applications. | |
One curiosity for @geerlingguy, does the Beelink work over USB-C PD? I | |
doubt it, but would like to know for sure. | |
geerlingguy wrote 14 hours 24 min ago: | |
That, I did not test. But as it's not listed in specs or shown in any | |
of their documentation, I don't think so. | |
moondev wrote 13 hours 3 min ago: | |
Looks like it only draws 45W, which could allow this to be powered | |
over PoE++ with a splitter, but it has an integrated AC input and | |
PSU - that's impressive regardless considering how small it is, but | |
it's not set up for PD or PoE. | |
amelius wrote 15 hours 14 min ago: | |
What types of distributed/network filesystem are people running | |
nowadays on Linux? | |
sekh60 wrote 13 hours 6 min ago: | |
I use Ceph. 5 nodes, 424TiB of raw space so far. | |
geerlingguy wrote 15 hours 11 min ago: | |
Ceph or MooseFS are the two that I've seen most popular. All | |
networked FS have drawbacks, I used to run a lot of Gluster, and it | |
certainly added a few grey hairs. | |
bee_rider wrote 15 hours 24 min ago: | |
Should a mini-NAS be considered a new type of thing with a new design | |
goal? He seems to be describing about a desktop worth of storage (6TB), | |
but always available on the network and less power consuming than a | |
desktop. | |
This seems useful. But it seems quite different from his previous | |
(80TB) NAS. | |
What is the idle power draw of an SSD anyway? I guess they usually have | |
a volatile ram cache of some sort built in (is that right?) so it must | |
not be zero… | |
jeffbee wrote 12 hours 34 min ago: | |
> less power consuming than a desktop | |
Not really seeing that in these minis. Either the devices under test | |
haven't been optimized for low power, or their Linux installs have | |
non-optimal configs for low power. My NUC 12 draws less than 4W, | |
measured at the wall, when operating without an attached display and | |
with Wi-Fi but no wired network link. All three of the boxes in the | |
review use at least twice as much power at idle. | |
privatelypublic wrote 13 hours 39 min ago: | |
With APSD the idle draw of a SSD is in the range of low tens of | |
milliwatts. | |
CharlesW wrote 13 hours 51 min ago: | |
> Should a mini-NAS be considered a new type of thing with a new | |
design goal? | |
Small/portable low-power SSD-based NASs have been commercialized | |
since 2016 or so. Some people call them "NASbooks", although I don't | |
think that term ever gained critical MAS (little joke there). | |
Examples: [1], [2], [3] | |
[1]: https://www.qnap.com/en/product/tbs-464 | |
[2]: https://www.qnap.com/en/product/tbs-h574tx | |
[3]: https://www.asustor.com/en/product?p_id=80 | |
layer8 wrote 14 hours 19 min ago: | |
HDD-based NASes are used for all kinds of storage amounts, from as | |
low as 4TB to hundreds of TB. The SSD NASes aren't really much | |
different in use case, just limited in storage amount by available | |
(and affordable) drive capacities, while needing less space, being | |
quieter, but having a higher cost per TB. | |
transpute wrote 14 hours 32 min ago: | |
> Should a mini-NAS be considered a new type of thing with a new | |
design goal? | |
- Warm storage between mobile/tablet and cold NAS | |
- Sidecar server of functions disabled on other OSes | |
- Personal context cache for LLMs and agents | |
transpute wrote 15 hours 29 min ago: | |
Intel N150 is the first consumer Atom [1] CPU (in 15 years!) to include | |
TXT/DRTM for measured system launch with owner-managed keys. At every | |
system boot, this can confirm that immutable components (anything from | |
BIOS+config to the kernel to immutable partitions) have the expected | |
binary hash/tree. | |
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with | |
Windows IoT and hopefully future support from other operating systems. | |
It would be a valuable feature addition to Proxmox, FreeNAS and | |
OPNsense. | |
Some (many?) N150 devices from Topton (China) ship without Bootguard | |
fused, which _may_ enable coreboot to be ported to those platforms. | |
Hopefully ODROID (Korea) will ship N150 devices. Then we could have | |
fanless N150 devices with coreboot and DRTM for less-insecure [2] | |
routers and storage. | |
[1] Gracemont (E-core): [1] | [2] (Intel Austin architect, 2021) | |
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, [3] | |
[1]: https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-cor... | |
[2]: https://youtu.be/agUwkj1qTCs | |
[3]: https://news.ycombinator.com/item?id=44426726#44427986 | |
reanimus wrote 11 hours 17 min ago: | |
Where are you seeing devices without Bootguard fused? I'd be very | |
curious to get my hands on some of those... | |
transpute wrote 7 hours 46 min ago: | |
As a Schrödinger-like property, it may vary by observer and not be | |
publicly documented... One could start with a commercial product | |
that ships with coreboot, then try to find identical hardware from | |
an upstream ODM. A search for "bootguard" or "coreboot" on | |
servethehome forums, odroid/hardkernel forums, phoronix or even HN, | |
may be helpful. | |
jauntywundrkind wrote 15 hours 35 min ago: | |
Would be nice to see what those little N100 / N150 (or big brother N305 | |
/ N350) can do with all that NVMe. Raw throughput is pretty whatever | |
but hypothetically if the CPU isn't too gating, there's some | |
interesting IOps potential. | |
Really hoping we see 25/40GBASE-T start to show up, so the lower market | |
segments like this can do 10Gbit. Hopefully we see some embedded Ryzens | |
(or other more PCIe-willing contenders) in this space, at a value- | |
oriented price. But I'm not holding my breath. | |
dwood_dev wrote 15 hours 21 min ago: | |
The problem quickly becomes PCIe lanes. The N100/150/305 only have 9 | |
PCIe 3.0 lanes. 5GbE is fine, but to go to 10GbE you need x2. | |
Until there is something in this class with PCIe 4.0, I think we're | |
close to maxing out the IO of these devices. | |
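Rough lane math behind that point, assuming about 985 MB/s of usable bandwidth per PCIe 3.0 lane after encoding overhead:

PCIE3_LANE_MBPS = 985                       # approx. MB/s per PCIe 3.0 lane

for nic_gbps in (2.5, 5, 10):
    nic_mbps = nic_gbps * 1000 / 8          # NIC line rate in MB/s
    lanes = -(-nic_mbps // PCIE3_LANE_MBPS) # ceiling division
    print(f"{nic_gbps} GbE ~ {nic_mbps:.0f} MB/s -> {int(lanes)}x PCIe 3.0 lane(s)")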
geerlingguy wrote 15 hours 9 min ago: | |
Not only the lanes, but putting through more than 6 Gbps of IO on | |
multiple PCIe devices on the N150 bogs things down. It's only a | |
little faster than something like a Raspberry Pi, there are a lot | |
of little IO bottlenecks (for high speed, that is, it's great for | |
2.5 Gbps) if you do anything that hits CPU. | |
lostlogin wrote 13 hours 24 min ago: | |
This is what baffles me - 2.5gbps. | |
I want smaller, cooler, quieter, but isn't the key attribute of | |
SSDs their speed? A raid array of SSDs can surely achieve vastly | |
better than 2.5gbps. | |
jauntywundrkind wrote 9 hours 44 min ago: | |
Even if the throughput isn't high, it sure is nice having the | |
instant response time & amazing random access performance of a | |
ssd. | |
2TB ssd are super cheap. But most systems don't have the | |
expandability to add a bunch of them. So I fully get the | |
incentive here, being able to add multiple drives. Even if | |
you're not reaping additional speed. | |
jrockway wrote 11 hours 50 min ago: | |
2.5Gbps is selected for price reasons. Not only is the NIC | |
cheap, but so is the networking hardware. | |
But yeah, if you want fast storage just stick the SSD in your | |
workstation, not on a mini PC hanging off your 2.5Gbps network. | |
p_ing wrote 11 hours 54 min ago: | |
A single SSD can (or at least NVMe can). You have to question | |
whether or not you need it -- what are you doing that would run at | |
line speed a large portion of the time, such that the time savings are | |
worth it? Or it's just a toy, which is totally cool too. | |
4 7200 RPM HDDs in RAID 5 (like WD Red Pro) can saturate a | |
1Gbps link at ~110MBps over SMB 3. But that comes with the heat | |
and potential reliability issues of spinning disks. | |
I have seen consumer SSDs, namely Samsung 8xx EVO drives have | |
significant latency issues in a RAID config where saturating | |
the drives caused 1+ second latency. This was on Windows Server | |
2019 using either a SAS controller or JBOD + Storage Spaces. | |
Replacing the drives with used Intel drives resolved the issue. | |
lostlogin wrote 9 hours 39 min ago: | |
My use is a bit into the cool-toy category. I like having VMs | |
where the NAS has the VMs and the backups, and like having | |
the server connect to the NAS to access the VMs. | |
Probably a silly arrangement but I like it. | |
dwood_dev wrote 14 hours 53 min ago: | |
The CPU bottleneck would be resolved by the Pentium Gold 8505, | |
but it still has the same 9 lanes of PCIe 3.0. | |
I only came across the existence of this CPU a few months ago, it | |
is nearly the same price class as an N100, but has a full Alder | |
Lake P-Core in addition. It is a shame it seems to only be | |
available in six port routers, then again, that is probably a | |
pretty optimal application for it. | |
Havoc wrote 15 hours 36 min ago: | |
I've been running one of these quad nvme mini-NAS for a while. They're | |
a good compromise if you can live with no ECC. With some DIY | |
shenanigans they can even run fanless | |
If you're running on consumer nvmes then mirrored is probably a better | |
idea than raidz though. Write amplification can easily shred consumer | |
drives. | |
turnsout wrote 14 hours 17 min ago: | |
I'm a TrueNAS/FreeNAS user, currently running an ECC system. The | |
traditional wisdom is that ECC is a must-have for ZFS. What do you | |
think? Is this outdated? | |
seltzered_ wrote 10 hours 22 min ago: | |
[1] has an argument for it with an update from 2024. | |
[1]: https://danluu.com/why-ecc/ | |
Havoc wrote 12 hours 59 min ago: | |
Ultimately comes down to how important the data is to you. It's not | |
really a technical question but one of risk tolerance | |
matja wrote 13 hours 8 min ago: | |
ECC is a must-have if you want to minimize the risk of corruption, | |
but that is true for any filesystem. | |
Sun (and now Oracle) officially recommended using ECC ever since it | |
was intended to be an enterprise product running on 24/7 servers, | |
where it makes sense that anything that is going to be cached in | |
RAM for long periods is protected by ECC. | |
In that sense it was a "must-have", as business-critical functions | |
require that guarantee. | |
Now that you can use ZFS on a number of operating systems, on many | |
different architectures, even a Raspberry Pi, the | |
business-critical-only use-case is not as prevalent. | |
ZFS doesn't intrinsically require ECC but it does trust that the | |
memory functions correctly which you have the best chance of | |
achieving by using ECC. | |
magicalhippo wrote 13 hours 12 min ago: | |
Been running without ECC for 15+ years on my NAS boxes, built using my | |
previous desktop hardware fitted with NAS disks. | |
They're on 24/7 and run monthly scrubs, as well as monthly checksum | |
verification of my backup images, and I've not noticed any issues so | |
far. | |
I had some correctable errors which got fixed when changing SATA | |
cable a few times, and some from a disk that after 7 years of 24/7 | |
developed a small run of bad sectors. | |
That said, you got ECC so you should be able to monitor corrected | |
memory errors. | |
Matt Ahrens himself (one of the creators of ZFS) had said there's | |
nothing particular about ZFS: | |
There's nothing special about ZFS that requires/encourages the use | |
of ECC RAM more so than any other filesystem. If you use UFS, EXT, | |
NTFS, btrfs, etc without ECC RAM, you are just as much at risk as | |
if you used ZFS without ECC RAM. Actually, ZFS can mitigate this | |
risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY | |
flag (zfs_flags=0x10). This will checksum the data while at rest in | |
memory, and verify it before writing to disk, thus reducing the | |
window of vulnerability from a memory error. | |
I would simply say: if you love your data, use ECC RAM. | |
Additionally, use a filesystem that checksums your data, such as | |
ZFS. | |
[1]: https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&... | |
stoltzmann wrote 13 hours 21 min ago: | |
That traditional wisdom is wrong. ECC is a must-have for any | |
computer. The only reason people think ECC is mandatory for ZFS is | |
because it exposes errors due to inherent checksumming and most | |
other filesystems don't, even if they suffer from the same | |
problems. | |
HappMacDonald wrote 11 hours 54 min ago: | |
I'm curious if it would make sense for write caches in RAM to | |
just include a CRC32 on every block, to be verified as it gets | |
written to disk. | |
doubled112 wrote 11 hours 46 min ago: | |
Don't you have to read that data into RAM before you can | |
generate the CRC? Which means without ECC it could get | |
silently corrupted on the way to the cache? | |
adgjlsfhk1 wrote 8 hours 18 min ago: | |
that's just as true with ecc as without | |
evanjrowley wrote 14 hours 0 min ago: | |
One way to look at it is ECC has recently become more affordable | |
due to In-Band ECC (IBECC) providing ECC-like functionality for a | |
lot of newer power efficient Intel CPUs. [1] Not every new CPU has | |
it, for example, the Intel N95, N97, N100, N200, i3-N300, and | |
i3-N305 all have it, but the N150 doesn't! | |
It's kind of disappointing that, of the low-power NAS devices reviewed | |
here, the only one with support for IBECC had a limited BIOS that | |
most likely was missing this option. The ODROID H4 series, CWWK NAS | |
products, AOOSTAR, and various N100 ITX motherboards all support | |
it. | |
[1]: https://www.phoronix.com/news/Intel-IGEN6-IBECC-Driver | |
cuu508 wrote 16 hours 0 min ago: | |
What are the non-Intel mini NAS options for lower idle power? | |
I know of FriendlyElec CM3588, are there others? | |
transpute wrote 7 hours 28 min ago: | |
QNAP TS435XeU 1U short-depth NAS based on Marvell CN913x (SoC | |
successor to Armada A388) with 4xSATA, 2xM.2, 2x10GbE, optional ECC | |
RAM and upstream Linux kernel support, | |
[1]: https://news.ycombinator.com/item?id=43760248 | |
koeng wrote 16 hours 4 min ago: | |
Are there any mini NASes with ECC RAM nowadays? I recall that being my | |
personal limiting factor. | |
qwertox wrote 15 hours 26 min ago: | |
Minisforum N5 Pro NAS has up to 96 GB of ECC RAM [1] [2] | |
no RAM: 1.399€ | |
16GB RAM: 1.459€ | |
48GB RAM: 1.749€ | |
96GB RAM: 2.119€ | |
96GB DDR5 SO-DIMM costs around 200€ to 280€ in Germany. [3] I | |
wonder if that 128GB kit would work, as the CPU supports up to 256GB | |
[4] I can't force the page to show USD prices. | |
[1]: https://www.minisforum.com/pages/n5_pro | |
[2]: https://store.minisforum.com/en-de/products/minisforum-n5-n5... | |
[3]: https://geizhals.de/?cat=ramddr3&xf=15903_DDR5~15903_SO-DIMM... | |
[4]: https://www.amd.com/en/products/processors/laptop/ryzen-pro/... | |
lmz wrote 8 hours 8 min ago: | |
Note the RAM list linked above doesn't show ECC SODIMM options. | |
wyager wrote 14 hours 36 min ago: | |
Is this "full" ECC, or just the baseline improved ECC that all DDR5 | |
has? | |
Either way, on my most recent NAS build, I didn't bother with a | |
server-grade motherboard, figuring that the standard consumer DDR5 | |
ECC was probably good enough. | |
qwertox wrote 11 hours 59 min ago: | |
This is full ECC, the CPU supports it (AMD Pro variant). | |
DDR5 ECC is not good enough. What if you have faulty RAM and ECC | |
is constantly correcting it without you knowing it? There's no | |
value in that. You need the OS to be informed so that you are | |
aware of it. It also does not protect against errors which occur between | |
the RAM and the CPU. | |
This is similar to HDDs using ECC. Without SMART you'd have a | |
problem, but part of SMART is that it allows you to get a count | |
of ECC-corrected errors so that you can be aware of the state of | |
the drive. | |
True ECC takes the role of SMART in regard to RAM; it's just | |
that it only reports that: ECC-corrected errors. | |
On a NAS, where you likely store important data, true ECC does | |
add value. | |
layer8 wrote 14 hours 11 min ago: | |
The DDR5 on-die ECC doesnât report memory errors back to the | |
CPU, which is why you would normally want ECC RAM in the first | |
place. Unlike traditional side-band ECC, it also doesnât | |
protect the memory transfers between CPU and RAM. DDR5 requires | |
the on-die ECC in order to still remain reliable in face of its | |
chip density and speed. | |
Havoc wrote 15 hours 35 min ago: | |
One of the arm ones is yes. Can't for the life of me remember which | |
though - sorry - either something in the Banana Pi or LattePanda part of | |
the universe, I think. | |
vbezhenar wrote 15 hours 47 min ago: | |
HP Microservers. | |
dontlaugh wrote 15 hours 40 min ago: | |
I got myself a gen8, they're quite cheap. They do have ECC RAM | |
and take 3.5" hard drives. | |
At some point though, SSDs will beat hard drives on total price | |
(including electricity). I'd like a small and efficient ECC | |
option for then. | |
brookst wrote 15 hours 52 min ago: | |
The Aoostar WTR max is pretty beefy, supports 5 nvme and 6 hard | |
drives, and up to 128GB of ECC RAM. But it's $700 bare bones, much | |
more than these devices in the article. | |
Takennickname wrote 15 hours 30 min ago: | |
Aoostar WTR series is one change away from being the PERFECT home | |
server/nas. Passing the storage controller IOMMU to a VM is finicky | |
at best. Still better than the vast majority of devices that don't | |
allow it at all. But if they do that, I'm in homelab heaven. | |
Unfortunately, the current iteration cannot due to a hardware | |
limitation in the AMD chipset they're using. | |
brookst wrote 15 hours 11 min ago: | |
Good info! Is it the same limitation on WTR pro and max? The max | |
is an 8845hsv versus the 5825u in the pro. | |
amluto wrote 15 hours 56 min ago: | |
Yes, but not particularly cheap: | |
[1]: https://www.asustor.com/en/product?p_id=89 | |
MarkSweep wrote 15 hours 22 min ago: | |
Asustor has some cheaper options that support ECC. Though not as | |
cheap as those in the OP article. | |
FLASHSTOR 6 Gen2 (FS6806X) $1000 - [1] | |
LOCKERSTOR 4 Gen3 (AS6804T) $1300 - [2] | |
[1]: https://www.asustor.com/en/product?p_id=90 | |
[2]: https://www.asustor.com/en/product?p_id=86 | |