Playing around with ZFS (not the FUSE version) and I need to adjust the maximum memory size that the ARC uses so it doesn't consume everything. Where can I find the zfs.conf file? (Mainly thinking about ZFS to speed up RAID 6 rebuild times, since it only copies the sectors of the drives that contain data.)
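For what it's worth, on ZFS on Linux the ARC cap is a kernel module parameter rather than a daemon config file, so zfs.conf usually has to be created by hand under /etc/modprobe.d/. A rough sketch (the 8 GiB figure is just an example value, not a recommendation):

```shell
# Cap the ARC at 8 GiB. zfs_arc_max takes a value in bytes.
ARC_MAX=$((8 * 1024 * 1024 * 1024))

# Persist across reboots via a modprobe options file:
echo "options zfs zfs_arc_max=${ARC_MAX}" | sudo tee /etc/modprobe.d/zfs.conf

# Apply to the running module without a reboot:
echo "${ARC_MAX}" | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```

You can check the value currently in effect with `cat /sys/module/zfs/parameters/zfs_arc_max`.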
Or on my non-ECC machine, should I just stick with mdadm? (Or an LSI RAID card? I'll be using twelve 4 TB drives in RAID 6 on my upgraded Plex server.)
Where is zfs.conf? And general RAID question
Forum rules
There are no such things as "stupid" questions. However if you think your question is a bit stupid, then this is the right place for you to post it. Stick to easy to-the-point questions that you feel people can answer fast. For long and complicated questions use the other forums in the support section.
Before you post read how to get help. Topics in this forum are automatically closed 6 months after creation.
Last edited by LockBot on Wed Dec 28, 2022 7:16 am, edited 1 time in total.
Reason: Topic automatically closed 6 months after creation. New replies are no longer allowed.
- catweazel
- Level 19
- Posts: 9763
- Joined: Fri Oct 12, 2012 9:44 pm
- Location: Australian Antarctic Territory
Re: Where is zfs.conf? And general RAID question
gene0915 wrote:Or on my non-ECC machine, should I just stick with mdadm? (Or an LSI RAID card? Will be using 12, 4TB drives in RAID 6 on my upgraded Plex server.)

I highly recommend going with the RAID card. I suffered a major data loss two weeks ago with mdadm. I accidentally unplugged the server. One disk out of a RAID 0 set ended up in a RAID 10 set, and two disks from the RAID 10 set ended up in the RAID 0 set. I lost the lot; it was completely unrecoverable, garbage was scribbled everywhere. Fortunately I had a RAID 10 set on my workstation as a backup. I changed the server to hardware RAID using two Adaptec 6805T RAID cards, which btw required zero configuration in linux; the cards were instantly recognised and I was up and running again in a very short time. So, short story long, avoid mdadm if you can afford hardware RAID.
"There is, ultimately, only one truth -- cogito, ergo sum -- everything else is an assumption." - Me, my swansong.
Re: Where is zfs.conf? And general RAID question
catweazel wrote:I highly recommend going with the RAID card. I suffered a major data loss two weeks ago with mdadm. [...] So, short story long, avoid mdadm if you can afford hardware RAID.

The only thing that worries me about hardware RAID is that if the card dies, I need to scour eBay, find another one ASAP, and wait a week or so for it to get to me. Or buy a second card now, and if the first one fails (and doesn't kill my array), slap in the new one and be back up and running (making sure it's running the exact same BIOS). My old PERC H700 (knock on wood) has been humming along for over a year with no issues. (And I don't have a cold spare lying around.)
With software RAID, if my motherboard fails, I can run over to Microcenter, grab a new one, move everything to it, and get back up and running. (Assuming that the RAID array is still intact when the motherboard fails, of course.)
The card I was looking at is around $350. Sure, I could probably find something cheaper, so let's say I get a $200 card, or heck, keep my existing PERC H700, use a SAS expander to feed all 12 drives to it, and buy a second PERC H700? Point is, if a hardware RAID 6 card doesn't offer me anything better than I could get with mdadm, why spend the money on it? I've been doing lots of reading lately, and I saw something about how mdadm, when scrubbing, assumes the data is correct if it finds a parity error and just overwrites the parity bits. Is this where ZFS would come in handy, because it verifies checksums on reads/writes(?) and, if a discrepancy is found, 'votes' on what is correct by looking at the data and both parity bits?
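The two scrub behaviours can be seen from the command line. A rough sketch, where `md0` and `tank` are placeholder names for an md array and a ZFS pool:

```shell
# mdadm: a "check" pass counts parity mismatches but cannot tell
# which copy is right, since md stores no per-block checksums.
echo check | sudo tee /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt

# ZFS: every block carries a checksum, so a scrub (or any read)
# can identify the bad copy and repair it from redundancy.
sudo zpool scrub tank
zpool status tank   # shows scrub progress and repaired/CKSUM errors
```

So ZFS doesn't so much "vote" as know: the block checksum tells it which of the data/parity copies is wrong, which is exactly what mdadm's scrub lacks.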
If I pulled the power cord out of my server, I'd probably end up killing my array too, because my current controller doesn't have a battery backup and neither does the card I was looking at getting. I'm 100% reliant on my UPS to save my bacon in case of a power outage. I also do nightly backups, so if the RAID array died, I could recover it all. (But transferring nearly 15 TB back to the array may take a little time.)