Fred Stluka via plug on 20 May 2020 09:18:46 -0700



Re: [PLUG] download from WHERE?


Rich,

On 5/13/20 10:30 AM, Rich Freeman wrote:
This is more of a
change in paradigms in software development which is relatively
universal.

True!  And it's a very bad thing.  Writing sloppy software, with no
regard for security, privacy, reliability, backward-compatibility,
etc., has become the new normal, but it's still not acceptable.

I tell everyone I can to use Linux instead of Windows.  And to lock
it down properly, since Linux makes that possible.  See my Unix
security tips about logwatch, fail2ban, and tripwire to log breakin
attempts, prevent them, and detect and report any that somehow
still succeed.  And that you should disallow root logins, set up
careful sudo rules, require SSH keys instead of just passwords,
consider port knocking, honeypots, etc:
- http://bristle.com/Tips/Unix.htm#unix_security
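
As a concrete sketch of a couple of those lockdowns (paths, the
account name, and the sudo command are illustrative; adjust for your
distro):

```
# Excerpt for /etc/ssh/sshd_config -- disallow root logins and
# passwords, requiring SSH keys instead (then reload sshd):
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

# Excerpt for /etc/sudoers.d/fred (edit with visudo -f) -- a careful
# sudo rule granting one user one specific command, not blanket root:
fred ALL=(root) /usr/sbin/service apache2 restart
```

Tools like fail2ban and tripwire then take their own config files
in the same spirit.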


Honestly, I don't think this is really an MS thing.  I think it is
more of a Facebook and general internet-speed thing.

No, you're letting MS off the hook too easily.  Before MS, things
were generally secure.  But MS has lowered the bar, hijacking the
well-respected words "computer" and "operating system" to
include things that were really not much more than toys and
fancy typewriters.  From there, the world progressed, with much
lowered expectations, to the state we're in now.  We now find it
an acceptable excuse for any problem that "the computer is down".
That includes mission critical systems like stock markets, banking,
air traffic control, medical instruments, etc.

NONE of those types of systems should ever be allowed to use
Windows.  And no programmer who thinks Windows is a real
operating system should ever be allowed to work on any of them.


A perfect solution delivered in two years is going to end up with zero
market share when faced with a cobbled-together solution delivered in
six months.

In the early days there was a lot of talk about Facebook facing
architecture problems.  They were growing so fast that their initial
design just couldn't scale.  They had to throw TONS of money at
keeping the old system on life support while basically having to
refactor the whole thing.

This was touted by many as evidence that Facebook did things the wrong
way.  In reality, it was evidence that they did things the right way.
Yes, doing it over probably cost 10x what doing it "right" the first
time cost.  However, the first time out they were paying for the whole
thing with very scarce capital while their staff was living like
college students.  When they had to redo everything they had money
flooding in from every direction, and thus their biggest problem
wasn't coming up with the money, but spending it fast enough to keep
growing.  If they hadn't been in the market at the right time they
might never have succeeded at all.  It is better to spend $10M to make
$1B than to spend $100k and lose it all.  It is better still to spend
that $10M in installments of $25k before you're making money and
$9.975M when you're making $10M per month.

VERY good point!  The whole concept of "Internet speed" has become
unavoidable these days.  If you can't get there fast, don't even bother.
You'll have missed your market window, and no one will ever use your
product.  I can't argue against that.  It's unfortunate, but true, and I
don't know if it's possible to ever change.  People generally prefer
cheaper/sooner over better/later.  They're too impatient to put much
value on secure/private/reliable/etc.

So, better/later cannot compete with cheaper/sooner.  The only
answer is to switch to better/sooner.  That's what Agile software
development does, if you follow the spirit of Agile and don't get
bogged down in any Agile "methodologies".  See my article:
- http://bristle.com/Tips/Agile.htm#spirit_of_agile

It makes the point that you MUST be quick, but also MUST respect
the architecture at all times.  Because the architecture provides the
security, reliability, scalability, and all the other "ilities". Yes, you
sometimes swap out one architectural layer for a better version.
And you very occasionally re-architect the layers.  But mostly you
respect the existing layers.

If you build up the right reusable parts, libraries and layers, and
use them in your app, it's actually FASTER to do it right than to hack
together something that takes a bunch of sloppy shortcuts.


First, modern windows is MUCH more secure than what people were using
in the 90s.

Really?  Windows is more secure than mainframes, VAXes, Unix,
Linux?  Oh, wait a minute...  You're just saying that Windows now
is more secure than Windows then, right?  That may be true, but
I've still NEVER had any trouble breaking into any Windows system
where a client needed access that someone had decided he wasn't
supposed to have.

It almost always comes down to writing a simple REG file to edit the
part of the registry that's blocking him, putting that REG file at a
web site, and having him click on its URL.  But what if some part of
the registry is "locked down" so the REG file can't edit it?  Easy.
Just add more stuff to the REG file to first edit the part of the
registry that says it should be "locked down".

There's no security, only obscurity.  And the registry isn't really all
that obscure.  Go to a machine that's not locked down, dump the
registry to a textual REG file, make a change to "lock something
down", dump the registry again, and use diff to compare the two REG
files.  Cut and paste the unlocked snippet into a third REG file and
give it to your user as a key to unlock things.  Easy peasy!  I've done
it lots of times, to the delight of lots of users who had valid reasons
for needing access.

None of this can be fixed until Microsoft does one of the following:

1. Abandon the Windows registry for a Unix/Linux-style set of
    separate config files (in /etc or wherever).  Give different
    access to different config files by using robust Unix/Linux-style
    file system permissions, including as necessary, "setuid" bits
    and perhaps ACLs.

or:

2. Re-implement the whole file permission mechanism, but
    applied to branches and keys of the registry, instead of
    directories and files in the file system.  That is, treat the single
    registry as an embedded file system, and bake the access
    rules into the OS.  And then spend lots of time in the future
    maintaining that entirely unnecessary duplicated effort.

Until then, I'll quite easily hack into any Windows system I need to,
disregarding policies (stored in the registry), profiles (stored in the
registry), accounts (still stored in the registry? or worse moved to
"Active Directory"), etc.


Second, you're comparing multi-user setups with single-user setups.

OK.  Had to read ahead quite a bit, but I think you're saying that
the secure ones (mainframes, VAXes, Unix/Linux) are typically
multi-user systems.  Servers, not desktops.  So security matters.
So they have security.  Right?

And that Windows systems, Macs, and Linux desktops are typically
desktops with a single user.  So all security is abandoned in favor
of convenience.  So they are "toys" as I called them above.  Right?

But what about Windows servers?  Shouldn't they be able to be
locked down?  I haven't ever seen it done effectively.  It's like trying
to do high-quality cabinet-making carpentry projects with the
plastic rounded "saw" that you give a 2-year-old to play with.
The tools just aren't there.

And why would a Mac or Linux user loosen all the permissions
when it's so easy to sudo as needed?  My Mac laptop IS basically
single-user, but I don't just log in as root.  I log in as the much
more restricted fred.  That allows the file permissions to prevent
me from doing something stupid via a simple typo.  And I think
carefully whenever I use sudo.  I treat my Mac laptop like a
server.  And I'd treat a Linux laptop like a server too.  Why not?


Unless you're using a linux desktop with SELinux/etc and a LOT of
tailoring those operating systems are not actually all that secure in
a single-user paradigm, and could arguably be less secure than windows
against remote intrusion in some ways.

This has already been argued to death so I'll be brief.  Suppose you
are hit by a zero-day on Windows vs Linux in your browser, which is
probably any desktop user's single biggest vulnerability window (that
and their MUA I guess).  Now some remote code has the ability to
execute arbitrary commands using your UID.  Unless you're
containerizing your browser/etc that code can already read all your
personal info in both scenarios - the only thing it can't do on either
platform is modify the core of the OS.  I'd argue that on Windows it
can probably tamper with fewer of your settings/etc due to the whole
UAC mechanism, while on linux pretty-much anything in
.config/.whateverrc and so on is editable without priv escalation.

Wow!  That's a lot of assumptions.  You're basically saying that every
desktop user logs in as root and runs a browser.  So, if the browser
is hacked, root access is gained, and it's "game over".  Right?

And with all true security now bypassed, the Windows user is better
off because of Windows obscurity?  Before you answer, keep in mind
that the UAC rules and the very fact of whether UAC is enabled at all
are stored in (you guessed it!) the registry!

And as I said, the registry is not even that obscure.  Not having a
Windows box handy to play with, I did a quick Google and found
LOTS of articles like:

- How to Run Program without Admin Privileges and to Bypass UAC Prompt?
http://woshub.com/run-program-without-admin-password-and-bypass-uac-prompt

- How to turn off User Account Control in Windows
https://knowledge.autodesk.com/search-result/caas/sfdcarticles/sfdcarticles/How-to-turn-off-User-Account-Control-in-Windows.html

Obscurity is never a substitute for security, especially when a
rogue can automate the "obscure" steps required to do something.
And when anyone can Google up the answers as easily as I just did.


Now, I will concede that Linux has more tools available to lock this
stuff down like containerizing applications, or SELinux with
fine-grained permissions so that random processes can't just go
editing your .bashrc or whatever.  However, most of this stuff is not
configured in a typical desktop environment, and even distros that use
SELinux by default probably don't lock it down to that degree - it
would require a lot more conventions around what goes where in a
user's home directory and so on.

Huh?  By default on a Linux desktop, you log in as root?  Or all the
files in the whole tree are chmod 777?  Or what?  Why not leave all
of a user's files at 755 or less?  Then no rogue process can edit my
.bashrc.  What conventions are needed about what goes where?
Just lock down the user's tree so only the user can edit it.  That's
been the default on every Unix/Linux system I've ever used.

Am I missing something?
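
For reference, a quick sketch of those defaults (the directory name
is illustrative):

```shell
# A home tree with typical default permissions: the owner can write,
# everyone else can at most read -- so no other non-root process can
# edit this user's .bashrc
mkdir -p demo_home
chmod 755 demo_home
touch demo_home/.bashrc
chmod 644 demo_home/.bashrc

# Show the octal modes: 755 on the directory, 644 on the file
stat -c '%a %n' demo_home demo_home/.bashrc
```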


Now, one thing users do have a lot of exposure to is mobile operating
systems, and this is an environment where these sorts of controls
actually are fairly routine.  Perhaps they're still not as extensive
as might be desirable, but something like Android or iOS does a LOT
more to sandbox application and user configuration data than your
typical desktop Linux distro or windows.

Finally, you also have to consider physical security.  VAX and
Mainframe systems typically store all their data in secured
facilities.  Modern desktop users keep a ton of personal data on
phones/laptops/etc.  Now, Android runs Linux and is generally
configured to have a pretty high level of physical security, and I
suspect that in practice iOS is more secure.  Windows is often not so
secure by default but it actually has a number of tools for full-disk
encryption and so on available, often with check-box-level
configurability assuming you have the right version of Windows.  Most
Linux distros lag in this area.  Many do offer home directory
encryption these days, but none that I'm aware of back it with a TPM
so that it is impossible to break if the drive is separated from the
computer.  Almost no distros do any kind of verification of the OS
itself to prevent tampering.  Windows does most of that out of the
box, as do most mobile operating systems.

Lots of claims that I seriously doubt, and no evidence cited.  This
is getting a little wacky.  You suspect that iOS is more secure than
Linux?  Why?

Full-disk encryption is available and easy on Windows, but much
harder or not available at all on Linux?  Huh?

You're starting to sound like a "Microsoft Evangelist" (yes, a real
job title at Microsoft).  Seems a little odd for someone who's
typically a Linux advocate to be posting to a Linux mailing list.

Rich, is this really you?  Can someone who knows him well ask
him some security questions (hopefully something less easily
discoverable than mother's maiden name) and confirm it's
really him?

You seem to put a lot of faith in TPM (hardware support for
encrypted disks).  Does it worry you that I can so easily Google
things like:
- https://www.nextofwindows.com/how-to-clear-and-manage-tpm-on-windows-10

It talks about how to clear the TPM.  And says you should always
do so before disposing of a computer because people can steal
your keys out of the TPM itself!  Even if you follow that advice,
what about stolen laptops?  No one warned you it was about to
be stolen, so you didn't get a chance to clear the keys from the
TPM.  Doh!


So, I think on the security front you have a fairly complex situation,
with various options offering various security protections out of the
box, and with others available if an administrator deploys them.

Not that complex.  Pretty much all systems other than Windows
assume multi-user.  So they have file permissions, etc.  And on
Linux, you have lots of tools like logwatch, fail2ban, tripwire,
sudo, SSH keys, etc.  Windows, on the other hand, suggests you
use strong passwords and change them often because they're
inevitably going to be cracked.  Doh!


So, having seen some of the stuff in at least one area of healthcare I
do have concerns, but you also have to consider that the controls go
way beyond the software.
Typically these sorts of processes and systems have tiers of
procedures and processes around them that together make it relatively
difficult for an attacker to have a serious impact on life/etc.

Think "social engineering".  Need I say more...


Now, for stuff that is of obvious strategic significance that is
likely to be targeted by a state actor I completely support the idea
that we probably need to be doing a lot more.  These sorts of systems
need multiple lines of defense from the applications to the OSes to
the networks to the processes and so on.

I will note though that these sorts of areas are the one place you
won't see many of the modern programming paradigms we were talking
about at the start of this email.  It is almost always
highly-waterfalled development paradigms with layers of change and
configuration management moving at a glacial pace.

Not true.  They pay a lot of lip service to Waterfall, CM, policies,
"validation", Six Sigma black belts, CMM levels, etc.  But in house,
everyone ignores all that whenever they get a chance.  It just gets
in the way of getting their work done.

I successfully did Agile for 5 years at a large international pharma
company.  Wrote their entire clinical trial supply system that
decided and tracked which patients got which meds.  Patients and
doctors were not allowed to know -- double blind studies.

Someone once pushed us to use the newly-mandated "iQMS" that
had been adopted as the company-wide standard for SDLC on all
projects.  I showed him the superior Agile approach we were taking,
the audit trail and docs it automatically created, etc.  And I casually
mentioned that senior management had calculated that we were
saving the company $15 million/year.  He left without another
word.

Waterfall NEVER works.  No one ever actually does Waterfall.  They
just claim to.  Behind the scenes, they're all doing Agile.  Or they're
having their projects cancelled.  Here's why:
- http://bristle.com/Tips/Agile.htm#waterfall


So, I'll agree that this may have come across a bit personally when it
was more directed at the general MS pile-on attitude that is prevalent
in this thread.  It has always been fashionable to bash MS, but IMO a
lot of the issues they had in the 90s are not the same issues they or
other vendors have today.

Always a possibility.  I haven't seen any evidence, but I haven't
really looked that hard.  I just go along, continuing to expect MS
to be incompetent, waltzing around their "security" and getting
my work done.

Do you have specific examples of where they've improved?  Any
where they've improved enough to be ahead of the crowd?
Creating a really stupid bug or security hole, and later fixing or
partially fixing it, does not count as significant improvement.  Not
when compared to an OS that never had such a bug or hole to start
with.  That's just playing catch up.


I don't think that you personally should be called out on that - it is
actually a fairly prevalent attitude in the community.  My words may
have been a bit harsh in that regard.

OK.  No problem.


And that is my point.  I didn't say the information was incorrect.  I
said it was DATED.

You can't really assign a reputation to anything that lasts 20 years,
but that is especially true of a company.

You're still hung up on the 1991 "Original Version" date on my
article, when I was first exposed to MS software and began to
suspect there was a problem.  Ignore that date.  Focus on the
2020 "Last Updated" date.  That's when I last re-considered the
validity of the reputation I'd assigned, and found it to be still
true.  (BTW, I did the arithmetic wrong in my earlier post.  The
original version was 29 years ago, not 19.)  This is not a "dated"
opinion from 29 years ago.  It's a well-deserved and accurate
reputation based on 29 years of continuous evidence.


People can change over
time.  Companies change people ALL the time.  Change the CEO and
suddenly the company can have a completely different personality.
Obviously there is some inertia but you have to be careful about
applying judgements 20 years later.

True.  Satya Nadella seems to be better than Gates and Ballmer.
But there's a LOT of inertia.  It's VERY hard to overcome a culture
of dishonesty.  Consider the mindsets of:
- Lawyers -- if legal then "ethical"
- Salesmen -- sell it even if the customer doesn't need it
- Credit card companies -- sell more services, charge more interest,
   encourage customers to fall behind on payments, pile on late fees,
   and find a way to get rid of "bad" (unprofitable) customers who
   pay their bills on time in favor of "good" (profitable) customers
   who always pay late, with lots of interest and late fees

Do you really believe a CEO could waltz into any of these types of
situations and completely change the corporate culture?  As Mark
Twain said:

"It ain't what you don't know that gets you into trouble.
It's what you know for sure that just ain't so."

Getting people to "unlearn" bad behavior is hard.  Especially when
they were rewarded for it for so long by Gates and Ballmer.


Of course not.  Hence the reason I used the word "dated."  I don't
think you're making things up - they were different back then.  Maybe
if they had more market power they'd be still doing that stuff today.
Maybe if RedHat had that kind of market power today they'd be doing
that stuff too.

So you trust them because they're weaker and wouldn't dare?
Doesn't inspire me with confidence...


3. Do you claim they did NOT lower the bar in software quality
      compared to their Unix and mainframe predecessors, as I
      described?  Or that Linux and FOSS are NOT better and
      safer alternatives?
So, I already shared my thoughts on that above.  And MS software in
the 90s was very different from what it is today.

Again, specifics would strengthen your argument.


Obviously I'm a fan of Linux in general and prefer it for a lot of
solutions for a lot of reasons, and security can be one of them.
However, I don't think that you're automatically more secure because
you're using an Ubuntu desktop running Firefox instead of a Windows
desktop running Firefox.

That's just because you're assuming that the Ubuntu user is
browsing the web while logged in as root.  Why would anyone
ever do that?


So, I think that is more a result of who operates those machines and
their level of network access.  I've heard tales from family members
who got scammed into giving somebody remote access to their Windows
boxes and paying them for the privilege.

True.  It's hard to protect against ID-10-T, PEBCAK, PMAC and IBM
errors.  (Google them.)   Social engineering is tough to beat. That's
why you need to secure every system to limit the ability of users to
do foolish or accidental things.


I'm not sure that they'd
have been any more secure if they were using most conventional Linux
distros.

Huh?  Do most conventional distros have the single user log in as
root?  Were these produced by the kids of the Microsoft generation?
Somewhere along the line they accepted that "toys" are real
computers, and didn't aspire to better.  Bad news...


Specialized ones like Android/ChromeOS/etc can be more
secure because they basically aim to protect the user from themselves
with almost no way to override that which doesn't involve flipping
switches, attaching USB cables to other computers, and wiping the
device in the process and getting hit with security nag screens on
every boot.

Even then I'd think we'd see a LOT more old mobile phones targeted for
botnets if it wasn't for the fact that mobile networks are fairly
locked down.  You can't get a worm spreading between mobile phones
because they're all completely firewalled from incoming connections,
often behind a NAT as well.

Good!  Sounds like someone still knows how to do it.


No moreso than anybody really.  You're talking about a company with
tens of thousands of employees.  Most of them are going to be just
like you or I.  Often you get some really scummy ones at all levels.
The ones at the bottom usually are leaches on the company, and the
ones at the top tend to be leaches on all of society, but often are
leaches on the company too.

See my comment above about a "culture of dishonesty".  You've
been in this field for a while.  I'm sure there are some entire
companies that you'd rate as more ethical, more competent,
better at security, etc. than others.  No, not every single individual
has the same problems.  But the average can be significantly
higher for one company than another.  And MS has a LOT of
inertia to overcome.


I just think that a bit of nuance is necessary.  When evaluating
security you need to look at the entire ecosystem, especially the
user.

Right.  Social engineering makes all systems more vulnerable
than if no people were involved.  So limit the access rights of
each person to mitigate the damage.


When you look at companies you need to look at what they're
DOING today, and not really go too much on reputation one way or
another.

OK.  So what is Microsoft DOING that has impressed you enough
to take on the gargantuan task of defending them in a Linux
forum?  Or do you just like the taste of rotten tomatoes?  :-)

Do you have specific examples of them being ethical, honorable,
competent, security conscious, focused on quality, etc?  Can
you demonstrate a trend in that direction?  Or are you only
noticing such examples because they're so unexpected ("the
exception that proves the rule")?

Do you trust them to have your best interests at heart and the
skills to protect you from those who don't?  Especially in
absolute terms, not just relative to other vendors who've had
to lower their standards to compete after Microsoft lowered the
bar so much?


One big advantage of FOSS is that you're less beholden to any
company's reputation, because you get the source, and you can see for
yourself what is going on, and pay anybody you choose to do so as well
and to maintain it if your relationship with the original vendor
sours.  Now, that isn't always a reasonably-priced option, but it is
still an option.

Exactly!  We can "trust, but verify".


However, with the lower barrier to entry you don't have to be
thoroughly indoctrinated in the ways of change management in order to
get access to the COBOL interpreter.  That creates both opportunity
and danger, and it is important to use the right programming paradigms
in the right places.

True.  You either have to review all the Linux source code, or trust
that Linus and others did.  Long live the Git pull REQUEST!  Not just
a Git PUSH of random low-quality code into a project.  A request
for someone to officially review your code for correctness, compliance
with standards, lack of bugs, lack of side effects, presence of test
cases added to the automated test suite, etc.  And then PULL it into
their beloved project if it passes muster.
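
A minimal sketch of that review-then-pull flow with plain git (all
branch, file, and identity names are illustrative; real projects layer
review tooling on top):

```shell
# A toy repo with a proposed "feature" branch awaiting review
git init -q -b main demo-repo
git -C demo-repo config user.email "reviewer@example.com"
git -C demo-repo config user.name "Reviewer"
git -C demo-repo commit -q --allow-empty -m "initial"
git -C demo-repo checkout -q -b feature
echo "fix" > demo-repo/fix.txt
git -C demo-repo add fix.txt
git -C demo-repo commit -q -m "add fix"

# The maintainer reviews exactly what the request would pull in...
git -C demo-repo checkout -q main
git -C demo-repo log --oneline main..feature
git -C demo-repo diff --stat main..feature

# ...and only PULLs (merges) it once it passes muster
git -C demo-repo merge -q --no-ff feature -m "merge reviewed feature"
```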

--Fred
------------------------------------------------------------------------
Fred Stluka
Bristle Software, Inc.
http://bristle.com 		#DontBeATrump #SadLittleDonny
#ShakeOffTheTrumpStink
#MakeAmericaHonorableAgain
http://MakeAmericaHonorableAgain.us/Trump/Countdown.htm

------------------------------------------------------------------------


___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug