Thomas Delrue on 12 Jun 2017 16:12:18 -0700


Re: [PLUG] PI being targeted for malware

I'll preface this with declaring that I'm not going against you as an
individual. It's more about how incentive systems are set up wrong and
how, sometimes, people ("general people", not you) are just dumb...

On 06/12/2017 05:16 PM, JP Vossen wrote:
>>> +1.  It's not the end-user's fault even if it's the end-user's 
>>> fault. Our computing industry has failed, in large part, to 
>>> provide secure solutions.  A lot of that is because the market 
>>> wants race-to-the-bottom prices and I don't see that changing 
>>> easily or without a lot of regulations, which gets tricky fast.
>> I guess what you're trying to say is: You can lead a horse to water
>> but you can't force it to drink.
> No, I'm saying vendors pay no attention to security--largely because
>  its expensive and they have no incentive to do so--and release 
> horribly dangerous products that *will* fail in *predictable and 
> preventable* ways.

You are correct, and this is a problem. I will address this below.

>> Perpetuating that falsehood is bad! The customer is NOT always 
>> right! I think this attitude is wrong and it is damaging to the 
>> goal of secure software and/or computing.
> Funny, that's pretty much word for word what I thought when I read 
> your position, except s/customer/vendor/!

For sure, the vendor is not always right either. Not at all! And I'm not
claiming that. I'm saying that the customer bears a large amount of
responsibility in this too, and that sometimes it is more beneficial to
point out to the customer that they too bear said responsibility.

And this brings me back to an earlier point: I think we can demand a bit
more from our end-users. I don't think it is wrong to require a certain
level of competence before they are allowed to play with the toys.
Because right now, we demand very little from our users.

If anything, *that* is what I'd like to convey in this entire thread.

>> I find this attitude wrong and detrimental because it gives 
>> end-users an excuse to act or continue to act (even worse) 
>> irresponsibly. I don't see any reasonable person taking this same 
>> approach with (e.g.) driving(*).
> That's another great point, but products have safety features for a 
> reason; no one is perfect all time, and reducing the dangers of 
> events that *can clearly be predicted* is a requirement.  (Yes, this
>  can and does go too far sometimes.)

And yet we deliberately weaken those safety features time and time again.
Why do we keep doing this? We get beaten over the head with the arguments
of "convenience", "security is a spectrum", and "total security doesn't
exist", and when we relent and give in, and something then goes wrong
*specifically because* of that convenience we relented on, we get
singled out again and reprimanded: "why didn't you tell us that what we
were trying to do was dumb?"... And all we do is cower in the corner
going "Dobby is sorry, master, Dobby is sorry".

This is the other side of the coin: when we do build secure and solid
software, that's not good either, because now it's not user friendly,
"it goes too far", or "it's annoying to use auth scheme X or Y".

In a way, it's just a reflection of human nature: you tell them that
eating unhealthily will have repercussions and yet they/we do it...
I guess it's an illusion to think this will change :(

> So you are arguing that if the average person (or at least intended 
> consumer) reads the manual, it's OK to drive a car with lots of
> sharp edges, no seat belts, no safety glass, etc.  In the most
> egregious cases (many/most IoT devices?), perhaps no brakes?

That is not at all what I said, not at all. If you read on, what I
advocate for is making sure that when you use something in a serious
fashion, you first become competent in using it.
I deliberately said "competence"; I am not saying at all that just
reading the manual suffices.

You're spot on about the IoT devices!
Whenever IoT comes up, I remind people that in IoT, the S stands for
security and the P for privacy.
First they give me a weird look, then they respond with "haha, funny,
there's no S or P in there", and finally the penny drops (for most of
them) and they begin to see the larger issue with, among other things,
IoT...

> Nope, I don't buy that.  Now, the customer *is* going to buy that 
> car, because...wait for's cheap!  You can make an edge 
> argument for specialty and antique cars, but that *is* specialty and
>  not "mass market" and there are all sorts of mitigating factors 
> there.

I don't see how the customer is right in this case, nor how this is a
good thing. The customer is wrong.
I thought everyone was aware of the Project Management/Iron Triangle
(good, fast, cheap: pick any two).

>> If I tell you to not do X and you still do X, or if I tell you to 
>> do Y and you don't do Y, then that's on you. It is NOT my fault and
>> I flat out refuse to take responsibility for that.
> If you sell the car I describe, you can and would be held 
> accountable. And then you'd get sued into oblivion by the survivors.
>  Back in computer/IoT context, it's 100% predictable that people 
> won't read the manual, and that they will just connect it to the 
> internet.

"AS IS" clause...
In software, one cowardly covers one's arse with the AS IS clause (see,
for instance, section 11 in or section 15 in ...).

Thanks, lawyers... :P

I ask again: why is software treated differently from the rest of the
world, in that *we* get an AS IS "don't go to jail" card?

>> Just /imagine/ using the same laxness in other fields as the level
>>  of laxness that we allow in software usage because this or that 
>> user "doesn't like adhering to instructions". Lemme know how that's
>> working out for ya when you get (for instance) OSHA'ed.
> Just imagine if *real* Engineering world (not the BS 
> "software/computer/whatever" pretend-engineering we have) had the 
> same laxness what we're talking about has?  Holy crap, civilization 
> would have ended centuries ago.  Software used to not be life and 
> death, the way a bridge or building or plane is.  That is NOT true 
> anymore, but we've got probably 30+ years of bad habits and market 
> pressure to correct.  (Not to mention "sales" folks who can't use the
> same damn term with the same meaning twice in a row.  There's a 
> *reason* for jargon, and it has to be accurate and unambiguous or you
> get...some of the mess that we're in.)
> And you just made my point again, none of the stuff that OSHA exists
>  for would have been fixed until everyone was regulated into fixing 
> it.  You can't create a knowing unsafe working environment (known 
> shoddy materials, etc...) but that's *exactly* what happens in the 
> software world!

Intonation does not come across in written text. I meant to say the same
thing you are saying: that we DON'T (for good reason) allow the same
level of laxness in -for instance- bridge engineering as we do in
software. Which bridge you've driven over or plane you've flown in comes
with an "AS IS" clause?
I think we're on the same page here...

> No one can possibly argue that attaching a device with a well-known 
> password to the internet is not an "unsafe working environment." The 
> risks are real, well-known, 100% predictable and at least some of
> them are largely preventable.
> I don't *like* the idea of a lot of regulation, and there are huge 
> problems with it, not the least of which is objective criteria.  But 
> until there are consequences where the costs exceed the 
> expense...things are not going to change.  For F/OSS that's even trickier
> and there are all kinds of slippery slopes to fall down. :-(

I'm not singling you out here, but the thing that is wrong with The
System (not you) is this:
"But until there are consequences where the costs exceed the
expense...things are not going to change."
There *are* consequences and no-one seems to be taking responsibility.
When someone points out that X and Y will go wrong, the response all too
frequently is "but it is cheaper to allow it to go kabloo-ey than it is
to fix it ahead of time". This is indeed the major issue and is not
unique to the software business, mind you!

We teach our kids that "sharing is caring" and "do diligent work now to
prevent problems in the future" and then we unleash them into the world
where they suddenly realize that "hey, these adults were lying: no-one
is doing what I was taught to do... no one is sharing and everyone is
cutting corners".

You're absolutely right that this is the mentality in the world, but
that kind of mentality drives me bonkers and up multiple walls. It's
really that part that I am so, annoyingly loudly, against.

>> When I lead that horse to water and tell it to drink, that's 
>> because maybe I know that we won't encounter water for a long 
>> while. The horse should bloody well drink the water! A horse that 
>> refuses to drink, could die from dehydration when it's in the 
>> middle of the desert and there's no oasis around for miles. (You 
>> know what I call a horse like that? I call it "my previous horse" 
>> or "glue" - depending on my mood!)
> Yeah, but we call them end-users and they are just like horses, some 
> will drink and some will not.  This is 100% predictable and there are
> ways to mitigate the problem.  Your dead horse probably killed you
> too when you failed to cross the desert.  Unless you mitigated the
> problem before it happened.  Sure, you have to think ahead a bit and
> do a little extra work, but the end result turns out to be worth 
> it...when you have incentive.

My mitigation was listed: I call that horse "glue".
I.e. (sticking with the analogy) I don't use that horse to go through
the desert, exactly because it will kill itself, and me in the process.

>> You are entirely correct about the race-to-the-bottom and I'll just
>> add the following: "If you think hiring a professional is 
>> expensive, wait until you hire an amateur".
> Love that!
>>> I get why the rPi works the way it does, but they need to do 
>>> better. Use the serial number of the device, print a default 
>>> password on the board, use part of a MAC address...something. 
>>> Figure it out... Maybe now they will...
>> Agreed, they can afterwards update their docs (that the user would
>>  need to read because it'll tell them how to know the password), 
>> which -as we've learned- the user will ignore anyway and will set 
>> up a password "password1" and that's just hunky-bleepin-dory. But,
>>  hey, that's MY fault and not theirs!
> Right, you just agreed with me!  Everyone **knows** the end-user will
> a) not read the manual and b) do it wrong **if you let them!** So you
> can't use those arguments anymore, they're done.  Move on and fix the
> product so that it works in the real world!

Low, my sarcasm-fu was...
What I'm saying is that one of the biggest lies in history is "I have
read and understood the terms and conditions". What I'm saying is that
we should not let them loose without a verification of competence, just
like we don't allow just anyone to drive a semi or fly a plane.
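The per-device default that JPV suggests (serial number, fragment of the
MAC address) is easy enough to sketch. Here's a minimal, hypothetical
example in Python; the serial and MAC values are made up, and any real
scheme would need a vendor-secret salt, since a password derivable
purely from values printed on the board or broadcast on the network is
no secret at all:

```python
import hashlib

def default_password(serial: str, mac: str, salt: bytes = b"vendor-secret") -> str:
    """Derive a per-device default password from the serial number and MAC.

    The salt must be a secret held by the vendor; without it, anyone who
    can read the serial/MAC off the board (or sniff the MAC on the LAN)
    could recompute the password.
    """
    digest = hashlib.sha256(salt + serial.encode() + mac.encode()).hexdigest()
    return digest[:12]  # short enough to print on a sticker on the board

# Hypothetical device identifiers:
print(default_password("000000003d1d1c36", "b8:27:eb:1d:1c:36"))
```

Even this beats a fleet-wide "raspberry", because a leaked password then
compromises one device instead of all of them.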

I think that sometimes a bit of arm-twisting is what is needed in order
to fix this particular part of the problem at hand because continuing to
cater to it is only making it worse. This is not to say at all that the
vendors have zero responsibility in fixing their stuff, but it most
certainly is also not a one-way street where the customer is allowed to
continually claim innocence (I refer to the quote by Graham Greene).

>> (*) As I'm sure you're aware, this is /one/ of the issues with 
>> self-driving cars, namely "who is liable"? The operator or the 
>> manufacturer? If everyone has self-driving cars, who do 
>> car-insurers go to to collect the premium? If I have zero control 
>> over the car, do *I* have a level of liability?
> Yup, that's already a fun one with Tesla and this stuff hasn't even 
> really got started yet.

Right there with you!
*sits back and hands JPV some of his popcorn* ;)

Attachment: signature.asc
Description: OpenPGP digital signature

Philadelphia Linux Users Group         --
Announcements -
General Discussion  --