JP Vossen on 12 Jun 2017 14:16:17 -0700



Re: [PLUG] PI being targeted for malware


On 06/12/2017 03:23 PM, Thomas Delrue wrote:
On 06/12/2017 02:36 PM, JP Vossen wrote:
+1.  It's not the end-user's fault even if it's the end-user's
fault. Our computing industry has failed, in large part, to provide
secure solutions.  A lot of that is because the market wants
race-to-the-bottom prices and I don't see that changing easily or
without a lot of regulations, which gets tricky fast.

I guess what you're trying to say is: You can lead a horse to water but
you can't force it to drink.

No, I'm saying vendors pay no attention to security--largely because it's expensive and they have no incentive to do so--and release horribly dangerous products that *will* fail in *predictable and preventable* ways.


Perpetuating that falsehood is bad! The customer is NOT always right!
I think this attitude is wrong and it is damaging to the goal of secure
software and/or computing.

Funny, that's pretty much word for word what I thought when I read your position, except s/customer/vendor/!


It is NOT wrong to demand a level of
education on the object you will be working with. Especially if that
object has the ability to influence others, which computing devices
frequently/almost always do.

You certainly have something there, but flip it around. If the product is fundamentally unsafe to use, and *will* fail in *predictable and preventable* ways, should it be available?

(Spoiler alert: I'll say "no." :)


I find this attitude wrong and detrimental because it gives end-users an
excuse to act or continue to act (even worse) irresponsibly. I don't see
any reasonable person taking this same approach with (e.g.) driving(*).

That's another great point, but products have safety features for a reason; no one is perfect all the time, and reducing the dangers of events that *can clearly be predicted* is a requirement. (Yes, this can and does go too far sometimes.)

So you are arguing that if the average person (or at least intended consumer) reads the manual, it's OK to drive a car with lots of sharp edges, no seat belts, no safety glass, etc. In the most egregious cases (many/most IoT devices?), perhaps no brakes?

Nope, I don't buy that. Now, the customer *is* going to buy that car, because...wait for it...it's cheap! You can make an edge-case argument for specialty and antique cars, but that *is* specialty and not "mass market", and there are all sorts of mitigating factors there.


If I tell you to not do X and you still do X, or if I tell you to do Y
and you don't do Y, then that's on you. It is NOT my fault and I flat
out refuse to take responsibility for that.

If you sell the car I describe, you can and would be held accountable. And then you'd get sued into oblivion by the survivors. Back in the computer/IoT context, it's 100% predictable that people won't read the manual, and that they will just connect it to the internet.


Just /imagine/ using the same laxness in other fields as the level of
laxness that we allow in software usage because this or that user
"doesn't like adhering to instructions". Lemme know how that's working
out for ya when you get (for instance) OSHA'ed.

Just imagine if the *real* Engineering world (not the BS "software/computer/whatever" pretend-engineering we have) had the same laxness we're talking about here. Holy crap, civilization would have ended centuries ago. Software used to not be life and death, the way a bridge or building or plane is. That is NOT true anymore, but we've got probably 30+ years of bad habits and market pressure to correct. (Not to mention "sales" folks who can't use the same damn term with the same meaning twice in a row. There's a *reason* for jargon, and it has to be accurate and unambiguous or you get...some of the mess that we're in.)

And you just made my point again: none of the stuff that OSHA exists for would have been fixed until everyone was regulated into fixing it. You can't create a knowingly unsafe working environment (known shoddy materials, etc...), but that's *exactly* what happens in the software world!

No one can possibly argue that attaching a device with a well-known password to the internet is not an "unsafe working environment." The risks are real, well-known, 100% predictable, and at least some of them are largely preventable.

I don't *like* the idea of a lot of regulation, and there are huge problems with it, not the least of which is objective criteria. But until there are consequences where the cost of failure exceeds the expense of doing it right...things aren't going to change. For F/OSS that's even trickier and there are all kinds of slippery slopes to fall down. :-(


When I lead that horse to water and tell it to drink, that's because
maybe I know that we won't encounter water for a long while. The horse
should bloody well drink the water!
A horse that refuses to drink, could die from dehydration when it's in
the middle of the desert and there's no oasis around for miles.
(You know what I call a horse like that? I call it "my previous horse"
or "glue" - depending on my mood!)

Yeah, but we call them end-users, and they are just like horses: some will drink and some will not. This is 100% predictable, and there are ways to mitigate the problem. Your dead horse probably killed you too when you failed to cross the desert--unless you mitigated the problem before it happened. Sure, you have to think ahead a bit and do a little extra work, but the end result turns out to be worth it...when you have incentive.


You are entirely correct about the race-to-the-bottom and I'll just add
the following: "If you think hiring a professional is expensive, wait
until you hire an amateur".

Love that!


In this case, yes--nothing, but nothing, should ever ship with a
"default password."  If you can't figure out a way to provide or set
a decent password on first use, you have no business providing
whatever it is.

Entirely agreed. Providing documentation on how to set up a strong
password is also "a way to provide or set a decent password".

No, it *demonstrably* is not.


Could it be simpler? Sure, but that doesn't give you a pass on not doing it.

I think the answer to this has already been covered, not gonna beat that dead horse anymore. ;-)


I get why the rPi works the way it does, but they need to do better.
Use the serial number of the device, print a default password on the
board, use part of a MAC address...something.  Figure it out...
Maybe now they will...

Agreed, they can afterwards update their docs (that the user would need
to read because it'll tell them how to know the password), which -as
we've learned- the user will ignore anyway and will set up a password
"password1" and that's just hunky-bleepin-dory. But, hey, that's MY
fault and not theirs!

Right, you just agreed with me! Everyone **knows** the end-user will a) not read the manual and b) do it wrong **if you let them!** So you can't use those arguments anymore; they're done. Move on and fix the product so that it works in the real world!
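
To be concrete, here's a rough sketch of the kind of thing I mean--deriving a per-device default password from hardware identifiers instead of shipping one shared password. This is Python, and the /sys and /proc paths plus the hashing scheme are just my assumptions for illustration, not anything the rPi folks actually do:

#!/usr/bin/env python3
# Sketch: derive a per-device default password from hardware
# identifiers (MAC address + CPU serial) instead of shipping one
# shared "default password."  Paths are Linux/Raspberry Pi-ish and
# the derivation is only an example, not anyone's actual scheme.

import hashlib

def read_mac(iface="eth0"):
    """Return the MAC address of iface, e.g. 'b8:27:eb:12:34:56'."""
    with open(f"/sys/class/net/{iface}/address") as f:
        return f.read().strip()

def read_cpu_serial():
    """Return the CPU serial number from /proc/cpuinfo, if present."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.lower().startswith("serial"):
                return line.split(":", 1)[1].strip()
    return ""

def default_password(length=12):
    """Hash the hardware IDs and use part of the digest as the
    per-device default password (unique per board, so it could be
    printed on a label or shown on the boot console)."""
    seed = (read_mac() + read_cpu_serial()).encode()
    return hashlib.sha256(seed).hexdigest()[:length]

if __name__ == "__main__":
    print("Per-device default password:", default_password())

It's not bulletproof--anyone who knows the derivation and the MAC can compute it--but at least every board ships different, and the real fix is still forcing a password change on first login.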


(*) As I'm sure you're aware, this is /one/ of the issues with
self-driving cars, namely "who is liable"? The operator or the
manufacturer? If everyone has self-driving cars, who do car-insurers go
to to collect the premium? If I have zero control over the car, do *I*
have a level of liability?

Yup, that's already a fun one with Tesla, and this stuff hasn't even really gotten started yet.

Later,
JP
--  -------------------------------------------------------------------
JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug