Message-ID: <202405031126.CEAB079A1A@keescook>
Date: Fri, 3 May 2024 12:06:38 -0700
From: Kees Cook <keescook@...omium.org>
To: jvoisin <julien.voisin@...tri.org>
Cc: Matteo Rizzo <matteorizzo@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
"GONG, Ruiqi" <gongruiqi@...weicloud.com>,
Xiu Jianfeng <xiujianfeng@...wei.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Kent Overstreet <kent.overstreet@...ux.dev>,
Jann Horn <jannh@...gle.com>, Thomas Graf <tgraf@...g.ch>,
Herbert Xu <herbert@...dor.apana.org.au>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-hardening@...r.kernel.org
Subject: Re: [PATCH v3 0/6] slab: Introduce dedicated bucket allocator
On Fri, May 03, 2024 at 03:39:28PM +0200, jvoisin wrote:
> On 4/28/24 19:02, Kees Cook wrote:
> > On Sun, Apr 28, 2024 at 01:02:36PM +0200, jvoisin wrote:
> >> On 4/24/24 23:40, Kees Cook wrote:
> >>> [...]
> >>> While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
> >>> against these kinds of "type confusion" attacks, including for fixed
> >>> same-size heap objects, we can create a complementary deterministic
> >>> defense for dynamically sized allocations that are directly user
> >>> controlled. Addressing these cases is limited in scope, so isolating these
> >>> kinds of interfaces will not become an unbounded game of whack-a-mole. For
> >>> example, most of them pass through memdup_user(), making isolation there
> >>> very effective.
> >>
> >> What does "Addressing these cases is limited in scope, so isolating
> >> these kinds of interfaces will not become an unbounded game of
> >> whack-a-mole." mean exactly?
> >
> > The number of cases where there is a user/kernel API for size-controlled
> > allocations is limited. They don't get added very often, and most are
> > (correctly) using memdup_user() as the basis of their allocations. This
> > means we have a relatively well-defined set of criteria for finding
> > places where this is needed, and most newly added interfaces will use
> > the existing (memdup_user()) infrastructure, which will already be covered.
>
> A simple CodeQL query returns 266 of them:
> https://lookerstudio.google.com/reporting/68b02863-4f5c-4d85-b3c1-992af89c855c/page/n92nD?params=%7B%22df3%22:%22include%25EE%2580%25803%25EE%2580%2580T%22%7D
These aren't filtered for being long-lived, for userspace reachability,
or for userspace control of size and contents. Take, for example,
cros_ec_get_panicinfo(): the size is controlled by a device, the
allocation doesn't last beyond the function, and the function itself is
part of device probing.
> Is this number realistic and coherent with your results/own analysis?
No, I think it's 1, possibly 2, orders of magnitude too high. Thank you
for the link, though: we can see what the absolute upper bound is with it,
but that's not an accurate count of cases that would need to explicitly
use this bucket API. But even if it were, 300 instances is still small:
we converted more VLAs than that, more switch statement fallthroughs
than that, and fixed more array bounds cases than that.
And, again, while this series does close a bunch of methods today,
it's a _prerequisite_ for doing per-call-site allocation isolation,
which obviates the need for doing per-site analysis. (We can and will
still do such analysis, though, since there's a benefit to it for folks
that can't tolerate the expected per-site memory overhead.)
> [...]
> >>> Memory allocation pinning[2] is still needed to plug the Use-After-Free
> >>> cross-allocator weakness, but that is an existing and separate issue
> >>> which is complementary to this improvement. Development continues for
> >>> that feature via the SLAB_VIRTUAL[3] series (which could also provide
> >>> guard pages -- another complementary improvement).
> >>>
> >>> Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1]
> >>> Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2]
> >>> Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3]
> >>
> >> To be honest, I think this series is close to useless without allocation
> >> pinning. And even with pinning, it's still routinely bypassed in the
> >> KernelCTF
> >> (https://github.com/google/security-research/tree/master/pocs/linux/kernelctf).
> >
> > Sure, I can understand why you might think that, but I disagree. This
> > adds the building blocks we need for better allocation isolation
> > control, and stops existing (and similar) attacks today.
> > But yes, given attackers with sufficient control over the entire system,
> > all mitigations get weaker. We can't fall into the trap of "perfect
> > security"; real-world experience shows that incremental improvements
> > like this can strongly impact the difficulty of mounting attacks. Not
> > all flaws are created equal; not everything is exploitable to the same
> > degree.
>
> It's not about "perfect security", but about wisely spending the
> complexity/review/performance/churn/… budgets in my opinion.
Sure, that's an appropriate analysis to make, and it's one I've done. I
think this series is well within those budgets: it abstracts the "bucket"
system into a distinct object that we've needed to have extracted for
other things, it's a pretty trivial review (since the abstraction makes
the other patches very straightforward), using the new API is a nearly
trivial drop-in replacement, and it immediately closes several glaring
exploit techniques, which has real-world impact. This is, IMO, a total
slam dunk of a series.
> >> Do you have some particular exploits in mind that would be completely
> >> mitigated by your series?
> >
> > I link to like a dozen in the last two patches. :P
> >
> > This series immediately closes 3 well used exploit methodologies.
> > Attackers exploiting new flaws that could have used the killed methods
> > must now choose methods that have greater complexity, and this drives
> > them towards cross-allocator attacks. Robust exploits there are more
> > costly to develop as we narrow the scope of methods.
>
> You linked exploits that were making use of the two structures that you
> isolated; making them use different structures would likely mean a
> couple of hours.
I think you underestimate what it would take to provide such a flexible
replacement. As I noted earlier, the techniques have several requirements:
- reachable from userspace
- long-lived allocation
- userspace controllable size
- userspace controllable contents
I'm not saying there aren't other interfaces that provide this, but it's
not common, and each (like these) will have their own quirks and
limitations. (e.g. the msg_msg exploit can't use the start of the
allocation since the contents aren't controllable, and has a minimum
size for the same reason.)
This series kills the 3 techniques with _2_ changes. 2 of the techniques
depend on the same internal helper (memdup_user()) that gets protected,
which implies that it will cover other things both now and in the future.
> I was more interested in exploits that are effectively killed; as I'm
> still not convinced that elastic structures are rare, and that manually
> isolating them one by one is attainable/sustainable/…
I don't agree with your rarity analysis, but it doesn't matter, because
we will be taking the next step and providing per-call-site isolation
using this abstraction.
> But if you have some proper analysis in this direction, then yes, I
> completely agree that isolating all of them is a great idea.
I don't need to perform a complete reachability analysis for all UAPI
because I can point to just memdup_user(): it is the recommended way
to get long-lived data from userspace. It has been and will be used by
interfaces that meet all 4 criteria for the exploit technique.
Converting other APIs to it or using the bucket allocation API can
happen over time as those are identified. This is standard operating
procedure for incremental improvements in Linux.
> > Bad analogy: we're locking the doors of a house. Yes, some windows may
> > still be unlocked, but now they'll need a ladder. And it doesn't make
> > sense to lock the windows if we didn't lock the doors first. This is
> > what I mean by complementary defenses, and comes back to what I mentioned
> > earlier: "perfect security" is a myth, but incremental security works.
> >
> >> Moreover, I'm not aware of any ongoing development of the SLAB_VIRTUAL
> >> series: the last sign of life on its thread is from 7 months ago.
> >
> > Yeah, I know, but sometimes other things get in the way. Matteo assures
> > me it's still coming.
> >
> > Since you're interested in seeing SLAB_VIRTUAL land, please join the
> > development efforts. Reach out to Matteo (you, he, and I all work for
> > the same company) and see where you can assist. Surely this can be
> > something you can contribute to while "on the clock"?
>
> I left Google a couple of weeks ago unfortunately,
Ah! Bummer; I didn't see that happen. :(
> and I won't touch
> anything with email-based development for less than a Google salary :D
LOL. Yes, I can understand that. :) I do want to say, though, that
objections carry a lot more weight when counter-proposal patches are
provided. "This is the way." :P
-Kees
--
Kees Cook