md(adm) ... Re: Next meeting July 26th 2020, Tomorrow!
- To: BerkeleyLUG <berkeleylug@googlegroups.com>
- Subject: md(adm) ... Re: Next meeting July 26th 2020, Tomorrow!
- From: "Michael Paoli" <Michael.Paoli@cal.berkeley.edu>
- Date: Sun, 26 Jul 2020 03:33:04 -0700
From: "tom r lopes" <tomrlopes@gmail.com>
Subject: Next meeting July 26th 2020, Tomorrow!
Date: Sat, 25 Jul 2020 14:00:49 -0700
4th Sunday virtual meeting 11 am
meet.jit.si/berkeleylug
(no typo this time :-)
I'm hoping to work on a file server running on an SBC.
The plan was to work on this last week for the Pi meeting, but
I couldn't find the SATA hat for my NanoPi. Now I have it.
So I will install Armbian, add two 1 TB drives, and combine them
in md RAID.
Hope to see you there,
Thomas
Let me know if you need any md(adm) assistance.
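For the two 1 TB drives + md RAID plan, a rough sketch of the mdadm
side might look about like the following (illustrative only - I'm
assuming the drives show up as /dev/sda and /dev/sdb with one partition
each for the array; adjust device names, partitioning, and filesystem
to taste):

  # partition the drives first (fdisk/parted), then:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  cat /proc/mdstat                  # watch the initial resync
  mkfs.ext4 /dev/md0                # or filesystem of choice
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf  # so it assembles at boot
  mount /dev/md0 /srv/files         # or wherever the file server data lives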
I quite recently had need/occasion to snag a copy of (the very top bit):
file (large, about 5GiB) on
filesystem on
LVM LV on VG on PV on
partition on
VM raw format disk image file on
filesystem on
md raid1 on
(pair of) LVM LV on (each their own) VG on PV on
(pair of) partitions (one each) on
2 physical drives on
physical host
and without network (only virtual console) access to the VM.
The topmost bit was a file on a filesystem within a Virtual Machine (VM),
where that VM's drive storage was the aforementioned VM raw format disk
image file, and I needed to snag a copy of that topmost referenced (and
large - ~5GiB) file from within the VM - with no network (only virtual
serial console) access to the VM. And, "of course", to make it more
interesting, the copy had to be consistent/recoverable, conflict with
neither the ongoing use of the VM nor the physical host, and all while
the VM and physical host remained up and running.
So, among other bits, to do that, I took an LV snapshot of the
lowest-level LV, which then gave a point-in-time snapshot of one of the
two md raid1 constituent member devices under the lowest raid1 shown in
that stack.
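Roughly, and with purely hypothetical VG/LV names (the real ones were
whatever the appliance used):

  # snapshot the lowest-level LV backing one raid1 member; the snapshot
  # size only needs to cover changes made while the snapshot exists
  lvcreate --snapshot --name membersnap --size 1G /dev/vg0/member_lv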
"Of course" that immediately has UUID conflict potential - so wiped that
metadata to eliminate that hazard, then to be able to make use of the
data, took that snapshot, and turned it into an md raid1 device - being
careful to use the same metadata format - notably so it would be same
size of earlier metadata and not stomp on any data that would be within
the md device at the md device level. Also, to make it the same(ish),
and not complain about missing device, created it as md raid1 ... but
with single member device and configured for just one device. Once that
was done, had recoverable (point-in-time snapshot from live) filesystem.
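Something along these lines - device names hypothetical, and the
metadata version has to match whatever the original array used (check
with mdadm --examine first; depending on mdadm versions, matching the
data offset may also matter):

  mdadm --examine /dev/vg0/membersnap          # note metadata version, e.g. 1.2
  mdadm --zero-superblock /dev/vg0/membersnap  # drop the conflicting md UUID
  mdadm --create /dev/md100 --metadata=1.2 --level=1 --raid-devices=1 \
    --force /dev/vg0/membersnap                # single-member raid1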
Again to thwart potential conflicts, I changed the UUID of that
filesystem, then mounted it nosuid,nodev. It needed to be mounted rw,
due to some bits needing a teensy bit 'o write further up the chain to
metadata.
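e.g., presuming ext[234] there (whatever the filesystem actually is, the
equivalent steps apply):

  e2fsck -f /dev/md100            # replay/repair; tune2fs wants a clean fs
  tune2fs -U random /dev/md100    # new UUID, so no collision with the live fs
  mount -o rw,nosuid,nodev /dev/md100 /mnt/snapfs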
Then, once that was mounted, losetup and partx -a got me to the
applicable partition within the VM disk image file on that filesystem.
Was then able to bring the VG from that PV (activate it) onto the
physical host (were the UUID and/or VG name conflicting with any on the
physical host, there would've been some other steps needed too).
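i.e., something about like (paths and names hypothetical):

  losetup --show --find /mnt/snapfs/path/to/vm-disk.raw  # prints e.g. /dev/loop0
  partx -a /dev/loop0                                    # expose its partitions
  pvscan                                                 # spot the PV on /dev/loop0pN
  vgchange -ay innervg                                   # activate the VM's VG on the host
  # (had that VG name/UUID collided with one already on the host,
  # vgimportclone and/or vgrename would've been among the extra steps)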
From there, I mounted the filesystem provided by that LV
ro(,nosuid,nodev) (but with the device under it again rw - needed, as
the filesystem state was recoverable but not clean). Was then able to
access and copy the desired file from that filesystem - now seen, via
the snapshot and some metadata mucking about, on the physical host,
whereas before it was effectively only accessible from within the VM -
and all that with the VM and the physical host still up and running
throughout.
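Roughly (names and the file's path made up for illustration):

  # mounted ro, but the device under it stays writable so journal
  # recovery can happen
  mount -o ro,nosuid,nodev /dev/innervg/root_lv /mnt/vmroot
  cp -p /mnt/vmroot/path/to/the/big/file /somewhere/with/space/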
Yeah, I didn't design it like that. That's the way some particular
vendor's "appliance" devices structure things and manage their VMs on
the device.
Had another occasion some while back to fix rather a mess on quite the
same type of device. There were two physical hard drives ... lots of
RAID-1.
So far so good. But, no backups ("oops"). And, one of the two hard
drives had failed long ago ("oops"), and not been replaced ("oops").
And now the one hard drive that wasn't totally dead was giving
hard errors - notably unrecoverable read errors on a particular sector
... uh oh.
Well, the vendor and their support, and the appliance were too
stupid(/smart?) to be able to fix/recover that mess. But I didn't give
up so easily. I drilled all the way down to isolate exactly
where the failed sector was, and exactly what it was/wasn't being used
by. It turned out it wasn't holding any data proper, but just
recoverable/rewritable metadata - or allocated but unused data.
So I did an operation to rewrite that wee bit 'o data.
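The mechanics of that, very roughly (device and LBA here hypothetical;
working out exactly which sector it is, and what, if anything, uses it,
is the part that takes the real care):

  # find the failing LBA from kernel logs and/or SMART
  dmesg | grep -i 'medium error'
  smartctl -a /dev/sdb | grep -iE 'pending|offline_uncorrectable'
  # confirm it really is unreadable
  hdparm --read-sector 123456789 /dev/sdb
  # then, only once certain nothing irreplaceable lives in that sector,
  # rewrite it so the drive can remap it
  hdparm --yes-i-know-what-i-am-doing --write-sector 123456789 /dev/sdb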
The drive, being "smart enough", upon getting a write to that
unrecoverable-read sector, automagically remapped it and wrote the data
out. At that point the drive was operational (enough) again - the entire
drive could be read with no errors - and I was then able to successfully
mirror to a good replacement for the other, long-failed drive (before
that, all such attempts had failed, notably due to the hard read error).
Anyway, successfully and fully recovered what the vendor's appliance and
the vendor's support could not, where they were saying it would have to
be reinstalled from scratch. Oh, and also, after the successful
remirroring, the drive that was having the hard sector read error got
replaced too, then remirrored onto its replacement, thus ending up fully
recovered onto two newly replaced good drives.

Not the first time I've recovered RAID-1 where the problems were only
discovered when the 2nd drive started failing, after the 1st drive had
long since totally died and not been replaced. "Of course" it's highly
preferable to not get into such situations ... have good (and validated)
backups, and replace failed drives in redundant arrays as soon as
feasible - especially before things start to hard fail without
redundancy.
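The remirroring side of that is, at the md level, basically just
(hypothetical names; repeat per md device as applicable):

  # partition the replacement drive to match, then add it into the array
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat          # watch the rebuild progress
  mdadm --detail /dev/md0   # and check periodically for degraded arrays,
                            # well before the remaining drive starts to fail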