JP Vossen via plug on 13 Jan 2021 09:27:10 -0800



Re: [PLUG] I need help with configuring Raid on Ubuntu server 20.04


On 1/13/21 9:57 AM, Rich Freeman via plug wrote:
On Wed, Jan 13, 2021 at 9:48 AM H Mottaleb via plug
<plug@lists.phillylinux.org> wrote:

I’m confused after reading the comments advising against the use of the RAID in the BIOS in the event the motherboard fails.

What is the difference between the two, and would I be able to configure the software RAID without setting up the hardware RAID, or vice-versa? Should I not configure the RAID settings in the BIOS and run the bash script as Rich stated?


So, based on your private email you're a little new to all of this,
and so this might feel a bit like diving into the deep end.  There are
advocates of both, but I suspect more in favor of software RAID here.
When all is working fine there are no problems with either, and if
anything hardware has some advantages IF you have battery backup, and
it might even be a bit faster with a decent card.  The issue is that
it is usually less flexible if you want to reconfigure things later,
and if that card ever dies then your drives are useless unless you
obtain a compatible card.  Software RAID is more flexible and the same
drives are readable on any hardware (you could attach them all to a
Raspberry Pi somehow and still read them).
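For illustration, Linux software RAID usually means mdadm, and the portability is easy to see in practice.  A minimal sketch (the /dev/sdX names and the RAID level are assumptions, not a recommendation for this server):

  # Create a RAID-5 array from three raw drives:
  sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  # Move the same drives to any other Linux box and re-assemble the array:
  sudo mdadm --assemble --scan

The array metadata lives on the drives themselves, not on a controller card, which is why any Linux machine can read them.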

If you wanted to use software RAID then you'd need to configure the
hardware RAID card to just expose the drives to the OS directly.
Ideally this is just as a raw drive pass-through (sometimes called IT
mode), but some cards don't support this and you'd expose them as a
bunch of single-drive volumes.  That approach might make the drives
harder to read without the card, but would maintain the flexibility
aspect.

If you want to use hardware RAID then you just configure it on the
card and the OS just sees whatever drives you have the card configured
to present as if they were physical drives.  At that point the OS part
is the same as a non-RAID install.
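Either way, once the OS is up you can check what it actually sees.  A quick sanity check (lsscsi may need installing first; exact output varies):

  lsblk -d -o NAME,SIZE,MODEL   # one big virtual disk vs. several physical drives
  lsscsi                        # a PERC virtual disk typically shows up with vendor DELL and a 'PERC ...' model string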

You mentioned starting over in email.  If you do that, I'd suggest
getting a screenshot of your RAID config in hardware, and also get a
screenshot of what the partitioning screen looks like.  Then once your
OS is set up, before you spend a lot of time messing around with your
application, run df/lvs/pvs/vgs/blkid to get a sense of what
you're working with.  Then set up any mounts the way you want them
before you go installing software so that everything doesn't end up on
root if you don't want it there.  You probably could also configure
Ubuntu to give you a really big root - that isn't a best practice but
it would work.
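To spell out that survey (these commands only read state; nothing here changes anything, though the LVM tools want root):

  df -h        # what's mounted where, and how full
  sudo pvs     # LVM physical volumes
  sudo vgs     # LVM volume groups
  sudo lvs     # LVM logical volumes
  sudo blkid   # filesystem types and UUIDs per partition
  lsblk        # the whole block-device tree at a glance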


What Rich said.  To elaborate...

My warnings about motherboard RAID are about cheap/crappy *motherboard* RAID in consumer PCs!  If/when the MB dies:
* *Can* you replace it?  How long will that take?
* If you do, will the array still work?

Since the hardware in this case is a server class Dell R510 with a PERC (LSI/MegaRAID/Whatever) and drives are probably hot-swap SAS drives, that concern is moot.  I've run PE2950, R710, and R720 all with PERC RAID at home for over a decade with no problems, and it Just Works.  My home ESXi is R720 with PERC and 8x 4TB SAS drives in RAID-5+HS.  I've replaced at least 3 drives since I migrated to that hardware and it's great.

Per the pics, in this case the RAID card is set up as all one virtual disk, but NOTE that it is RAID-0, which means when (not if) you lose 1 drive you lose it all!  That's probably NOT what you want.  See https://en.wikipedia.org/wiki/RAID.

Folks on the list will have different thoughts, but for "typical" (if that word has any meaning in IT) use I'd say RAID-5, or RAID-6 if the PERC supports it.  Or RAID-5 + 1 additional hot spare (which is sort-of RAID-6); even if the PERC doesn't do RAID-6, it *will* let you do that.

RAID-5 gives you N-1 capacity (~5TB), RAID-6 or 5+HS give N-2 (~4TB).  Of course real capacity will be about 15% less, give or take, no matter what, because of the games vendors play with capacity numbers.
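To make that concrete, here's the arithmetic, assuming 6x 1TB drives (my inference from the ~5TB/~4TB figures; adjust for what's actually in the box):

  RAID-5:     (6 - 1) x 1TB = 5TB usable
  RAID-6:     (6 - 2) x 1TB = 4TB usable
  RAID-5+HS:  (6 - 1 - 1) x 1TB = 4TB usable (one drive sits idle as the spare)

And part of the vendor games: a "1TB" drive is 10^12 bytes, which is only about 0.91 TiB the way the OS counts, before filesystem overhead takes its cut.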

If the PERC is set up as shown in the pics, I can't see how the OS would see more than 1 drive.  I may be missing something; I'm a bit swamped and haven't read this whole thread as carefully as I'd like.

Bottom line, that server and RAID card should Just Work very well for Ubuntu.

Note, once it's all running, look into the SMART monitoring tools and their RAID/LSI/Mega switches, and the (really terrible, very bad, no good) MegaCLI tool.  It sucks, but I have a cron job that monitors it for bad things.  Even with RAID-5/6/HS, you want to replace failed drives ASAP.
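For example (hedged: the megaraid,N numbers and device names vary per system, and the MegaCli binary often lives at /opt/MegaRAID/MegaCli/MegaCli64 on 64-bit Linux):

  # SMART data for a physical drive behind the PERC:
  sudo smartctl -a -d megaraid,0 /dev/sda
  # MegaCLI: virtual/logical drive state -- look for 'State: Optimal':
  sudo /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL
  # Physical drives -- look for 'Firmware state: Online, Spun Up':
  sudo /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

A cron job can grep that output and mail you when anything isn't Optimal/Online.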

VERY IMPORTANT!!!  RAID is NOT a backup!!!  You also have to have backups, ideally off-site.  The ONLY thing RAID does for you is keep you running when (not if) a drive fails.  VERY IMPORTANT!!!

One other thought.  With that hardware and those resources you might consider running the free VMware ESXi (or maybe even better run Proxmox) on that "bare metal" and then you can create lots of VMs to play with.  No clue what the final use-case is, so maybe this doesn't work.

Later,
JP
--  -------------------------------------------------------------------
JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug