Open Source and information security mailing list archives
Message-ID: <CAGXu5jJken1qWxayxUFs59kBgDzSkkdUZm8BEkcVAof7mJHMbA@mail.gmail.com>
Date:   Mon, 20 Nov 2017 16:42:46 -0800
From:   Kees Cook <keescook@...omium.org>
To:     Matthew Garrett <mjg59@...f.ucam.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        David Windsor <dave@...lcore.net>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] usercopy whitelisting for v4.15-rc1

On Mon, Nov 20, 2017 at 3:29 PM, Matthew Garrett <mjg59@...f.ucam.org> wrote:
> From a practical perspective this does feel like a completely reasonable
> request - when changing the semantics of kernel APIs in ways that aren't
> amenable to automated analysis, doing so in a way that generates
> warnings rather than triggering breakage is pretty clearly a preferable
> approach. But these features often start off seeming simple and then
> devolve into rounds of "ok just one more fix and we'll have
> everything", and by then it's easy to have lost track of the amount of
> complexity that's developed as a result. Formalising the Right Way of
> approaching these problems would possibly help avoid this kind of
> problem in future - I'll try to write something up for
> Documentation/process.

I'm always trying to balance the requests from both ends of the
security defense spectrum. One of the most common requests I get from
people who are strongly interested in the defenses is "can this please
be enabled by default?" And then I have to explain that it sometimes
takes time for code to shake out, and it sometimes takes time for
developers to trust it, etc. This is rarely a comfort to them, but
they tend to be glad they can turn some config knob to enable the
"strongest" version, etc, because for them, a false positive is no big
deal. At the other end is the requirement that new stuff should not
break the system. Both are reasonable perspectives, but if we violate
the latter, the defense will never end up in the kernel in the first
place.

The implicit process tends to be: land a defense that is optional,
distros enable it by default, kernel enables it by default. My sense
is that the span for these steps is about 3 years. There are countless
examples, but one of the more recent is x86 KASLR. Optional (Jun 2014,
v3.14), then distro default (e.g. Ubuntu 16.10, Oct 2016), then kernel
default (Jul 2017, v4.12). (And, in my opinion, this takes way too
long. But I'd rather have the defense late than not at all.)

The trouble tends to be even in the "optional" phase, where it may be
optional but still very disruptive. If you were using KASLR but there
was a bug, it was basically going to be fatal to the system. So, my
historical perspective for other less disruptive defenses was "oh
good, it's only BUG(), that's WAY better than taking out the entire
system", but as Linus has pointed out in several threads, BUG() can
frequently lead to that too, which is not acceptable.
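The distinction above can be sketched in plain userspace C. These are simplified stand-ins for the kernel's real WARN_ON()/BUG_ON() macros (which additionally capture file/line, taint the kernel, and dump a backtrace); the point is only that a WARN()-style check reports and continues, while a BUG()-style check ends execution — and in-kernel that can mean the whole machine, not just one task:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static int warn_count = 0;

/* WARN()-style: report the condition, bump a counter, keep running.
 * Evaluates to nonzero when the condition fired. */
#define MY_WARN_ON(cond) \
	((cond) ? (warn_count++, fprintf(stderr, "warning: %s\n", #cond), 1) : 0)

/* BUG()-style: report and abort. In the kernel this can take out the
 * entire system rather than just the offending task. */
#define MY_BUG_ON(cond) \
	do { if (cond) { fprintf(stderr, "bug: %s\n", #cond); abort(); } } while (0)

/* Hypothetical defense check: warn on a bad state, recover, and only
 * then apply the fatal check. Returns the running warning count. */
static int check_and_recover(int bad_state)
{
	/* WARN()-only phase: flag the problem but let the system limp along. */
	if (MY_WARN_ON(bad_state != 0))
		bad_state = 0;	/* recover to a safe state */

	/* Promoted-to-BUG() phase: would abort here if recovery had failed. */
	MY_BUG_ON(bad_state != 0);
	return warn_count;
}
```

Everything here (`MY_WARN_ON`, `check_and_recover`) is hypothetical naming for illustration, not kernel API.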

A variation on "new stuff should not break the system" comes from
things that LOOK like they can't break the system (i.e. disallowing
some pathological condition that would seem to be entirely reasonable
to disallow), and then discovering that stuff does, actually, use a
pathological condition in the real world (e.g. refcount underflows,
memcpy beyond the end of a short source string, 3rd party drivers
reading directly from userspace memory). And then there's just plain
bugs in defenses (e.g. missed usercopy whitelist for SCTPv6). Handling
these conditions means that in addition to being optional, a new
defense frequently has to also be WARN()-only initially.
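One common shape for that "optional and WARN()-only initially" phase is a report-only mode behind a knob. This is a minimal sketch with a hypothetical `defense_enforce` flag (the kernel's real knobs are Kconfig options and boot parameters; nothing here is actual kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical knob: false = report-only phase, true = enforce.
 * Mirrors the pattern of shipping a new defense as WARN()-only first. */
static bool defense_enforce = false;

/* Returns true if the operation should be refused. In report-only
 * mode a violation is logged but allowed through, so a false positive
 * cannot break a running system. */
static bool check_violation(bool violates)
{
	if (!violates)
		return false;
	if (!defense_enforce) {
		fprintf(stderr, "defense: would have blocked this (report-only)\n");
		return false;	/* let it through; nothing breaks yet */
	}
	fprintf(stderr, "defense: blocked\n");
	return true;
}
```

Once the report-only phase has shaken out the false positives, flipping the knob (or its default) makes the same check enforcing without changing any call sites.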

Though, even then, there is a spectrum of defense capabilities. KASLR
couldn't WARN() initially since it's an architectural state: the
layout is either randomized or it isn't. The refcount_t API doesn't
really need to be stronger than WARN() because it can defend against
overflow while leaving everything running fine.
And here, as already done, usercopy whitelisting needs to WARN() for
its initial implementation to shake out any hard-to-find cases before
becoming more aggressive.
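The refcount_t behavior described above — warn and saturate rather than wrap — can be sketched in plain C. This is a simplified model of the idea, not the kernel's actual implementation (which saturates at a different threshold and integrates with the real WARN() machinery):

```c
#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* Simplified model of a saturating refcount: on overflow it pins the
 * counter at UINT_MAX and warns, instead of wrapping around to zero.
 * A wrapped count is what attackers exploit (premature free, then
 * use-after-free); a saturated count merely leaks the object. */
struct my_refcount {
	unsigned int refs;
};

static void my_refcount_inc(struct my_refcount *r)
{
	if (r->refs == UINT_MAX) {
		/* Already saturated: warn, keep running, never wrap. */
		fprintf(stderr, "refcount saturated, leaking object\n");
		return;
	}
	r->refs++;
	if (r->refs == UINT_MAX)
		fprintf(stderr, "refcount overflow detected, saturating\n");
}

/* Returns nonzero when the caller should free the object.
 * (Underflow checking omitted for brevity.) */
static int my_refcount_dec_and_test(struct my_refcount *r)
{
	if (r->refs == UINT_MAX)
		return 0;	/* saturated objects are never freed */
	r->refs--;
	return r->refs == 0;
}
```

The design point is exactly the one above: the defended-against condition (overflow) is neutralized while the system keeps running, so WARN() is all the severity the defense ever needs.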

With all this in mind, it can be tricky to find a way to provide the
right level of initial defense behavior that also has a knob for the
users that want to accept the risk of false positives. I'll keep
trying to get it right and keep helping others find the right path.
We'll get there, steady progress, etc, etc. :)

-Kees

-- 
Kees Cook
Pixel Security
