Rich Kulawiec on 9 Aug 2017 05:40:51 -0700


Re: [PLUG] Firewall/security philosophy [was: SSH Hardening : Request for Best Practices]


On Fri, Aug 04, 2017 at 02:26:29PM -0400, bergman@merctech.com wrote:
> Sure, if you can enumerate those rules, in sufficient detail and accuracy,
> and be responsive to changes. There will be changes in what should
> be permitted for the business need. 

Of course.  There always are.  That's why one uses revision control and
scripting: so that they can be made easily, so that they can be tracked,
so that they can be backed out when wrong, and so that they're made
uniformly across the environment.  Done right, this makes changes easy
and their impact predictable.
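
As a rough sketch of what I mean -- the path here is made up, and this
assumes iptables and a rules file kept under git -- the apply step can
be as small as:

    #!/usr/bin/env python3
    # Sketch: load a git-tracked iptables ruleset atomically, keeping a
    # snapshot of the running rules so a bad change can be backed out.
    # The path is illustrative.
    import subprocess
    import sys

    RULES = "/etc/fw/rules.v4"   # kept in git; 'git log' shows who changed what

    def apply_rules():
        # Snapshot what's running now, in case the new rules are bad.
        backup = subprocess.run(["iptables-save"], check=True,
                                capture_output=True, text=True).stdout
        try:
            # iptables-restore replaces the entire ruleset in one step,
            # so the host never runs a half-applied policy.
            with open(RULES) as f:
                subprocess.run(["iptables-restore"], stdin=f, check=True)
        except (OSError, subprocess.CalledProcessError):
            # New rules didn't load: put the old ones back and complain.
            subprocess.run(["iptables-restore"], input=backup,
                           text=True, check=True)
            sys.exit("rules failed to load; previous ruleset restored")

    if __name__ == "__main__":
        apply_rules()

Because iptables-restore swaps the whole ruleset in one step, every
host is always running either the old policy or the new one, never
something in between; and because the rules file lives in git,
"git revert" backs a bad change out.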

> Security policies have very different meanings in different environments,
> but it sounds like you're making sweeping statements about policies that
> should be enabled by 'everyone'.

Yes.  I am absolutely doing that.  "Default permit" is not just a
known-failed approach, it's a well-known known-failed approach.

> That is not an operational plan.

*shrug* I've used it in operations of all shapes and sizes for years.
It works.  It's not a matter of whether or not it should be deployed,
it's only a matter of the details.  And those, as I've pointed out
elsewhere in this discussion, vary quite a bit but can always be
accommodated by a judicious combination of means.  It's just a matter
of studying your own traffic in sufficient detail and then deliberately
restricting it to what it needs to be -- no more.

Sometimes that's ten firewall rules.  Sometimes it's five hundred.
Sometimes it also involves netflows and DNS and proxies and other things.
It all depends.  But the end goal never changes, just the details.
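
To make "ten firewall rules" concrete, here's a minimal default-deny
skeleton in iptables-restore format.  The addresses and ports are
illustrative, not a recommendation for any particular site:

    *filter
    # Default deny: anything not explicitly permitted below is dropped.
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT DROP [0:0]
    # Loopback and replies to traffic we initiated.
    -A INPUT -i lo -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # What this host actually needs: SSH in from the admin net,
    # DNS and HTTPS out.  Everything else falls through to DROP.
    -A INPUT -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT
    -A OUTPUT -p udp --dport 53 -j ACCEPT
    -A OUTPUT -p tcp --dport 443 -j ACCEPT
    COMMIT

Enumerate what's needed, permit it, and let everything else hit the
default policy: that's the whole philosophy in fifteen lines.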

If this isn't clear, then think of it this way: every single thing that
you're allowing your network to do above-and-beyond what it *needs*
to do is a 100% loss for you and a 100% win for your adversaries.
You reap no benefit from it at all: it's superfluous.  The only people
who might gain from it are the people who are going to attack you.

So why are you expending your resources supporting them?

> And, you didn't quote his next sentence:
> 
> 	A CTO isn't going to know detail about every application on
> 	the network, but if you haven't got a vague idea what's going
> 	on it's impossible to do [...] virtually any of the things in a
> 	CTO's charter.
> 
> That directly contradicts your assertion that "it requires that you
> know EXACTLY what network traffic you need, why you need it, when you
> need it, where it's going, how much of it there should be, etc." 

I think you're missing the point and quibbling about his rhetoric,
but let me address that.

First, that's why CTOs have people who work for them who do know -- in the
aggregate -- exactly what's going on.  Presumably they tell their boss.

Second, yes, it can be done in every real-world environment.  Of course
in complex/large ones it may take people with considerable expertise to
do it, but security is much too important to be left to novices.

Third, note that in the twelve years since Ranum wrote that article
the tools available to us have gotten a LOT better, and there are a
lot more of them.  It's gotten way easier since the days (late '90s)
when I started doing this using things like tcpdump and argus.  So while
he was being mildly hyperbolic at the time, NOW there really is no excuse
for a CTO not to have all of this information at his/her fingertips in
as much excruciating detail as desired.
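
Even the old tools go a long way.  A crude first pass at "studying your
own traffic" is just a capture and a tally of who talks to whom (the
interface and filename here are illustrative):

    # capture packet headers for a while (^C to stop), then read them back
    tcpdump -nn -i eth0 -w sample.pcap
    # tally the busiest source/destination pairs
    tcpdump -nn -r sample.pcap | awk '{print $3, $5}' | sort | uniq -c | sort -rn | head -25

The modern flow-analysis tools do this far better, but even that is
enough to start asking "why is that machine talking to that one?"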

> Having "an idea what your technology is doing" may lead to a security
> philosophy that is very different than what you are advocating.

Then it was wrong before it left the whiteboard stage.

It's only a matter of when, not if, those responsible for it will figure
that out.  Sometimes that lesson is taught quietly and cheaply; sometimes
it's taught loudly and expensively.  If you read the "breachexchange"
mailing list for any length of time, you'll notice that nearly every
day brings fresh examples of the latter -- usually accompanied by
excuses ("nobody could have foreseen"), by damage to reputation,
by expensive remediation, and sometimes by far more expensive litigation.

So I invite everyone to go look at the breachexchange archive:

	http://lists.riskbasedsecurity.com/pipermail/breachexchange/

Peruse, let's say, the last 90 days of breach notices.  After reading each
one, ask yourself if the incident in question could have been prevented
or at least mitigated by eschewing default permit and embracing default
deny as a philosophy.

	[ "Mitigation" is important.  The idea is that if you can't stop
	an attack completely, maybe you can (1) slow it down and (2) make
	it noisy.  This gives you a fighting chance of detecting it
	and stopping it before it's over.  Yeah, you may still get hit.
	Yeah, you may still lose some data.  But would you rather lose
	PII for 4.2M people or 126K people?  Ask your legal counsel
	that question. ]
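
In firewall terms, "slow it down and make it noisy" can be as cheap as
two rules in the ruleset sketched above (the numbers are illustrative):

    # Slow down brute-forcing: admit at most 3 new inbound SSH
    # connections per minute; the rest fall through to the default DROP.
    -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m limit --limit 3/min --limit-burst 3 -j ACCEPT
    # At the end of the INPUT chain: log (rate-limited) whatever is
    # about to hit the default drop, so attacks show up in the logs.
    -A INPUT -m limit --limit 5/min -j LOG --log-prefix "fw-denied: "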

Keep in mind, when you're done skimming breachexchange, that this is
just one mailing list that attempts to track some of the incidents that
happen.  And keep in mind as well that security incidents follow the
iceberg model: for every one we find out about, there are N more that
will never be public.  And for every one that is known only inside a
single operation, there are N more that they don't know about.  Yet.

> Certainly, computer security is necessary to keep the machines available,
> and may sometimes be intrusive, but the bottom line is that computers
> should be tools to enable people to do things.

No, that's not the bottom line.  The bottom line is that if you can't
compute securely, you can't compute.

---rsk
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug