Philip Rushik via plug on 21 Sep 2019 19:51:31 -0700


Re: [PLUG] The lock down?! Uhh.. why?

On 9/21/19, Drew DeVault <> wrote:
> Even with HTTP 1.1, I'm sure timing attacks are trivial for separating
> out the individual requests.

I have my doubts about this. If the same HTTP/TLS connection is shared
for multiple downloads, how could one determine which two or more files
were downloaded? Even if the plaintext sizes were easy to recover from
the ciphertext stream (which they wouldn't be unless you knew the length
of all the headers, cookies, and user-agent strings, which is only
possible under very specific circumstances), not knowing the _number_
of requests per connection makes this nearly impossible.
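To make the point concrete, here's a toy sketch (package names, sizes,
and the per-request overhead are all made up for the example) of what an
observer is up against: given one total byte count on a persistent
connection, recovering the download set is a subset-sum style matching
problem, and multiple answers can fit.

```python
# Toy illustration of the size-fingerprinting problem: an observer sees
# one total transfer size on a persistent connection and must decide
# which combination of known package sizes could have produced it.
from itertools import combinations

# Hypothetical mirror contents (name -> payload size in bytes).
packages = {"foo": 4200, "bar": 13000, "baz": 4200, "qux": 9100}

def candidate_sets(observed_total, packages, overhead_per_request=350):
    """Return every subset of packages whose sizes (plus a guessed
    per-response header overhead) sum to the observed total. A real
    traffic analyst would have to estimate that overhead too."""
    matches = []
    names = list(packages)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            total = sum(packages[n] + overhead_per_request for n in combo)
            if total == observed_total:
                matches.append(set(combo))
    return matches

# One observed byte count, two equally plausible explanations:
print(candidate_sets(4550, packages))  # [{'foo'}, {'baz'}]
```

And that's with the number of requests and the exact overhead already
being unknowns the attacker has to brute-force over.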

That being said, if you can prove me wrong, that would be awesome. I
would _love_ to see an attack like this in action.

> It would be more difficult with HTTP 2 but
> I think we're still several years out from seeing broad adoption across
> mirrors.

Isn't HTTP 2 a big mess with basically no real benefits? I doubt it's
ever going to be widely adopted.

> Other distros don't have a herd of starry-eyed programmers implementing
> every RFC they can get their grubby hands on, either. Outside of Debian
> I would be surprised to see HTTP 1.1 persistent connections being used.
> My distro of choice, Alpine Linux, definitely does not use them. A quick
> survey of pacman shows that it shells out to curl for every request,
> which is convenient because you can replace curl with an arbitrary
> command to fetch over some other transport.
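(For context, if I remember right, the shell-out hook in question is
pacman's XferCommand option in /etc/pacman.conf; the man page's example
looks roughly like this, though the exact flags vary by setup:)

```
# /etc/pacman.conf -- delegate all downloads to an external command.
# %u is replaced with the URL, %o with the output file path.
XferCommand = /usr/bin/curl -L -C - -f -o %o %u
```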

Yeah, true. However, I imagine most would just use curl, and [lib]curl
most definitely is capable of this. Depending on the exact situation it
might take some effort to enable the behavior, but a LOT less effort
than implementing it without libcurl; implementing HTTP 1.1 from
scratch is surprisingly complex.
Philadelphia Linux Users Group         --