Rich Mingin (PLUG) via plug on 20 Jul 2020 06:49:28 -0700
Re: [PLUG] RAID-1 mdadm vs. Mobo H/W = moot
On 7/16/20 9:21 PM, JP Vossen wrote:
> I'm building 2 new PCs and I am wondering about RAID-1. I've been using
> mdadm RAID-1 for at least a decade with excellent results. But I
> haven't had to deal with (U)EFI up to now, and that's a mess, along with
> the fact that the Linux Mint/Ubuntu `Ubiquity` installer doesn't do
> RAID. :-(
>
> I only care about RAID-1 (mirror) and Linux. I would not even consider
> Mobo RAID for anything more complicated than a mirror, but since a
> mirror is *so* simple...
<snip>
The short version is, the Asus ROG STRIX B450-F GAMING ATX AM4
motherboard [0] is *not* capable of using both M.2 slots if you are
*also* using the Radeon Vega graphics processor [1]! :-(
So we fell back to plan B, which is to install some old HDDs I had
lying around; I'll get around to writing some kind of `rsync` script
(after I take a look at Brent's code).
Thanks for the thoughts; I was reasonably certain about not doing the
hardware RAID, and that was certainly confirmed!
I will look into getting my code up to my GitHub page one of these
weeks; there are actually some good details in there.
Specific replies:
Keith, I think we've talked about this before, but I still don't see
where LVM does RAID. Maybe PLUG needs a talk?
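(For reference, this may be what Keith meant: recent LVM can create a
raid1-type LV directly, built on the same MD kernel code. A minimal
sketch, with hypothetical device names and sizes:)
    # Create a mirrored LV across two PVs using LVM's built-in raid1 type
    pvcreate /dev/sda2 /dev/sdb2
    vgcreate vg0 /dev/sda2 /dev/sdb2
    lvcreate --type raid1 -m1 -L 100G -n root vg0
    lvs -a -o name,copy_percent,devices vg0   # watch the mirror sync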
You do need to install GRUB2 to both disks, and it's different for old
BIOS and new EFI!

BIOS:
    grub-install /dev/sda
    grub-install /dev/sdb

EFI (grub-install silently ignores "/dev/sd?"):
    mountpoint -q /boot/efi1 || mount $DISK1$EFI_PART /boot/efi1
    grub-install --bootloader-id="$BOOTLOADER_ID" \
        --recheck --no-uefi-secure-boot \
        --efi-directory='/boot/efi1'
then (with /boot/efi already mounted):
    grub-install --bootloader-id="$BOOTLOADER_ID" \
        --recheck --no-uefi-secure-boot \
        --efi-directory='/boot/efi'
NOTE: I mounted BOTH EFI partitions, then ran grub-install for the
"second" disk ($DISK1) first, and for the one I normally want to use
($DISK0) second, because "last one wins."
There was a lot of old, broken, and overly complicated information
about this out on the web. I didn't actually see my much simpler
solution anywhere, but it works in my VM testing. I can't test it in
real life as noted, and VMware ESXi will not let me "unplug" any of my
virtual test hard drives because I have (many) snapshots. :-/
Brent, very cool! I will check out
https://git.square-r00t.net/OpTools/tree/sys/BootSync soon, and also
see if it might work for my larger sync job. I suspect not, since
that's out of scope and cron+rsync is probably the way. Still, neat stuff.
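(Conceptually the ESP-sync part is a small job; a minimal sketch,
assuming both ESPs are mounted at /boot/efi and /boot/efi1 as above,
and not necessarily how BootSync does it:)
    # Mirror the live ESP onto the backup ESP
    rsync -a --delete /boot/efi/ /boot/efi1/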
Adam & Rich^2, ZFS? "Run away!" In my very limited looks at ZFS it
seems much too complicated with way too much overhead for what I want.
Heck, I ended up with ext4 because I told the Mint (Ubuntu Ubiquity)
installer to "just do it" and it created:
$ df -hlT
Filesystem     Type  Size  Used  Avail Use% Mounted on
/dev/nvme0n1p2 ext4  228G   56G  161G   26% /
/dev/nvme0n1p1 vfat  511M  7.8M  504M    2% /boot/efi
/dev/sda2      ext4  457G   62G  373G   15% /data
Note also, not shown or mounted: /dev/sda1 vfat 511M 7.8M 504M 2%
/boot/efi
Bhaskar, I recall you've talked about your /spare/ process before. I
think perhaps some of my concerns with mobo H/W RAID came from similar
discussions with you and/or wider PLUG. Plan B is going to be similar,
but actually a bit simpler.
nvme0n1p2 is the main storage. sda2 *will be* a periodic `rsync` clone
of nvme0n1p2, with some hack I haven't figured out yet for the
`/data/etc/fstab` (see the sketch below). It will also house Mint's
"TimeShift" (think Mac Time Machine). So if nvme0n1p2 unexpectedly
dies [2], I can just boot from sda and keep going, possibly having to
do a bit of a restore from either TimeShift or my other backups.
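(A minimal sketch of what that clone job might end up looking like; the
flags, excludes, and especially the fstab hack are assumptions, not my
final script:)
    #!/bin/bash
    # Hypothetical periodic clone: root FS (nvme0n1p2) -> /data (sda2)
    rsync -aHAXSx --delete \
        --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*"} \
        --exclude={"/mnt/*","/media/*","/data/*","/lost+found"} \
        / /data/
    # The clone needs its own fstab so it mounts sda2 as /, not nvme0n1p2;
    # one possible hack: swap the source UUID for the clone's UUID
    SRC_UUID=$(blkid -s UUID -o value /dev/nvme0n1p2)
    DST_UUID=$(blkid -s UUID -o value /dev/sda2)
    sed -i "s/$SRC_UUID/$DST_UUID/" /data/etc/fstab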
Interestingly, the GRUB menu lists the Mint install on sda as a possible
boot candidate. I didn't do anything related to GRUB or EFI for that;
it just happened. What I *did* do was:
1. Unplug nvme0n1
2. Plug in sda
3. Install Mint-20.0
4. Test boot
5. Unplug sda
6. Plug in nvme0n1
7. Install Mint-20.0
8. Test boot
9. Power down and re-plug sda
10. Boot
11. GRUB just magically saw them both
12. Hack /etc/fstab
I'm pretty sure that the EFI BIOS just goes looking for any/all vfat
ESPs (EFI System Partitions) it can find, so that's how they both show
up. And yeah, booting with a LiveUSB and tweaking bits is a last resort.
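(One way to sanity-check what the firmware and GRUB's os-prober can
see; a sketch, output will vary:)
    lsblk -o NAME,SIZE,FSTYPE,PARTLABEL,MOUNTPOINT   # both ESPs show as vfat
    sudo os-prober                                   # other installs GRUB can pick up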
Rich^1, it sounds like my plan B is pretty much the same as your scheme.
I agree about not putting /boot/ on something really out there like
ZFS, but I've screwed myself keeping it separate, as I discussed on the
list recently [3]. Ironically, the reason it became a problem was
install longevity, which was in part due to RAID-1!
As you may recall, I created a /boot/ that was adequate at the time but
became too small. I never reinstalled those systems (and thus never got
a bigger /boot/) because whenever a disk failed I just replaced it and
kept going, thanks to the `mdadm` RAID-1. Then, as you note, the fact
that it was on `mdadm` caused other problems I talked about elsewhere
[4]. I suppose disks have gotten big enough that even a /boot/ 4-5x
bigger "than needed" is still too small a fraction of the disk to be
noticeable (e.g., a 2G /boot/ on a 250G drive is under 1%), but it's
still a point to mention.
Thanks again everyone,
JP
[0] Build list: https://pcpartpicker.com/list/TGsDRk
[1] See table at the top of page 1-8 in
https://dlcdnets.asus.com/pub/ASUS/mb/SocketAM4/ROG_STRIX_B450_F_GAMING/E14401_ROG_STRIX_B450-F_GAMING_UM_WEB.pdf?_ga=2.106556743.536717031.1594655660-2139132852.1593803160
[2] It wasn't clear to me if NVMe SSDs worked with SMART, but:
# smartctl -a /dev/nvme0n1
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-40-generic] (local build)
...
Model Number: WDC WDS250G2B0C-00PXH0
...
Total NVM Capacity: 250,059,350,016 [250 GB]
...
Local Time is: Mon Jul 20 02:02:06 2020 EDT
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero
...
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
...
Error Information (NVMe Log 0x01, max 256 entries)
No Errors Logged
[3] http://lists.netisland.net/archives/plug/plug-2020-04/msg00036.html
[4] http://lists.netisland.net/archives/plug/plug-2020-06/msg00103.html
-- -------------------------------------------------------------------
JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug