Rich Freeman on 11 Nov 2014 09:48:19 -0800



Re: [PLUG] Restructuring home network and building a storage server


On Tue, Nov 11, 2014 at 11:00 AM, Paul L. Snyder <plsnyder@drexel.edu> wrote:
>
> Regarding Rich's comment on the potential problems with traffic coming
> into the network outside the VPN having its replies sent back out over
> the VPN...I ran into this with my current config. I put a nasty little
> hack in place to deal with this, rather than sorting out proper routing
> rules: a decommissioned OpenWRT box in the DMZ serves as the destination
> when, e.g., SSHing in. The external firewall forwards a port to that
> box, and that box forwards to the actual destination. Then, since the
> returning packets' next hop is on the local network, they do not get
> sent through the VPN.  I wouldn't recommend this as a real solution,
> but, once again, it got things working fast.

When you're ready, let me know and I can post the iproute/iptables
rules I used to accomplish this.  They are probably in the list
archives as well, though buried in a long discussion, since I was
commenting as I figured things out.
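
In the meantime, here's a rough sketch of the usual connmark-based
approach (the interface name, gateway address, and table name here are
placeholders, not my actual config):

    # Separate routing table whose default route is the real WAN
    # gateway rather than the VPN
    echo "100 novpn" >> /etc/iproute2/rt_tables
    ip route add default via 192.0.2.1 dev eth0 table novpn
    ip rule add fwmark 1 table novpn

    # Mark connections that arrive on the WAN interface, and copy the
    # mark back onto locally generated reply packets so the fwmark
    # rule routes them out eth0 instead of over the VPN
    iptables -t mangle -A PREROUTING -i eth0 -m conntrack \
        --ctstate NEW -j CONNMARK --set-mark 1
    iptables -t mangle -A OUTPUT -m conntrack \
        --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark

You may also need to relax reverse-path filtering on the WAN interface
(net.ipv4.conf.eth0.rp_filter=2) so the kernel doesn't drop the
asymmetric traffic.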

>
> I'm not keen on locking myself into a RAID solution, due to the
> requirement of equal-sized drives, which makes it much less desirable
> as a long-term solution.

Back when I was running RAID I would often get around this with
partitioning.  Suppose you have 2x3TB drives and 3x1TB drives.  Create
a 1TB partition on all five drives and combine those partitions into a
RAID5/6.  Then partition the remaining 2TB of space on the 3TB drives
and create a RAID1 across those two partitions.  Add the two RAIDs as
PVs to your LVM VG.  Obviously seeks across the two arrays will
compete for the shared drives, but it should be no worse than what
you'd get with 5x3TB drives in a traditional RAID5.
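
Roughly, with hypothetical device names (sda/sdb as the 3TB drives,
sdc-sde as the 1TB drives, partition 1 being the 1TB slice on each):

    # RAID5 across the 1TB partitions on all five drives
    mdadm --create /dev/md0 --level=5 --raid-devices=5 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # RAID1 across the leftover 2TB partitions on the big drives
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        /dev/sda2 /dev/sdb2

    # Both arrays become PVs in a single volume group
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_storage /dev/md0 /dev/md1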

This kind of arrangement makes it more straightforward to expand
storage, since you can always buy the best price/performance drive and
get the benefit of all its space (well, the first drive of a larger
capacity ends up partially unused until you get another drive of equal
or larger capacity).

It does require a bit of planning, but with auto-assembly it is no big
deal for day-to-day operation.
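
For example, adding a hypothetical sixth drive (sdf) later would look
something like:

    # Give the new drive a matching 1TB partition, add it to the
    # striped array, and reshape to use it
    mdadm --add /dev/md0 /dev/sdf1
    mdadm --grow /dev/md0 --raid-devices=6

    # Once the reshape finishes, let LVM claim the extra space
    pvresize /dev/md0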

>
> Any thoughts on Greyhole?
>
>   https://www.greyhole.net/

First I've heard of it.  Being RAID1-like, it obviously won't be as
space-efficient as parity RAID.  If I were going to go this route I'd
also consider a clustered filesystem, but the most promising options
there were still immature last time I checked.  They do mention that
it isn't quite as seamless as RAID when a drive fails.

The other problem with all of these RAID-like solutions is that they
probably all suffer from the write hole problem and silent corruption.
The only solutions to that I've seen are ZFS and btrfs.  Even the
newest cluster filesystems tend not to checksum data on disk (just in
transit), which means that if you end up with some kind of conflict in
the RAID, the filesystem has no idea which version is right.  Worrying
about this stuff is a fairly new trend, so hopefully we'll see more
solutions that offer this kind of security.

Even btrfs has some gaps here - the data is all checksummed, but the
auto-assembly of btrfs filesystems can't always tell that a particular
volume is out-of-date (such as when you re-introduce a disk to an
array that was mounted degraded, or when the scanner spots an old LVM
snapshot of a volume).  I suspect that this will eventually be fixed,
but for now the btrfs advice is to not take LVM snapshots below the
btrfs layer (they aren't really needed anyway), and, if you mount a
filesystem read-write in degraded mode, to ensure that the missing
disk(s) are never re-introduced without wiping them.  The filesystem
can sometimes detect these failure modes, but not reliably.
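
To make those two bits concrete (the mount point and device name below
are just placeholders): btrfs can verify - and on redundant profiles
repair - checksums via a scrub, and wipefs is one way to clean a disk
before letting it anywhere near the array again:

    # Verify checksums on data and metadata; on raid1/raid10 profiles
    # bad copies are rewritten from a good mirror
    btrfs scrub start /mnt/pool
    btrfs scrub status /mnt/pool

    # Wipe filesystem signatures from a stale disk so btrfs
    # auto-assembly won't pick it up
    wipefs -a /dev/sdf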

--
Rich
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug