Message-ID: <95a9f679-93d9-548a-fc26-985ec605e7f8@suse.cz>
Date: Tue, 14 Jun 2022 10:23:30 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Jann Horn <jannh@...gle.com>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slub: add missing TID updates on slab deactivation
On 6/8/22 20:22, Jann Horn wrote:
> The fastpath in slab_alloc_node() assumes that c->slab is stable as long as
> the TID stays the same. However, two places in __slab_alloc() currently
> don't update the TID when deactivating the CPU slab.
>
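For readers following the thread: the invariant being relied on is roughly
the following (my simplified sketch of the slab_alloc_node() fastpath, not
the verbatim mm/slub.c code):

redo:
	tid = READ_ONCE(c->tid);
	object = c->freelist;
	slab = c->slab;
	if (!object || !slab || !node_match(slab, node)) {
		object = __slab_alloc(s, gfpflags, node, addr, c);
	} else if (!this_cpu_cmpxchg_double(s->cpu_slab->freelist,
					    s->cpu_slab->tid,
					    object, tid,
					    get_freepointer(s, object),
					    next_tid(tid))) {
		/* tid changed under us: the cpu slab state moved, retry */
		goto redo;
	}

So any slow path that changes c->slab is expected to bump c->tid, which
makes a concurrent fastpath's cmpxchg fail and retry.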
> If multiple operations race the right way, this could lead to an object
> getting lost; or, in an even more unlikely situation, it could even lead to
> an object being freed onto the wrong slab's freelist, messing up the
> `inuse` counter and eventually causing a page to be freed to the page
> allocator while it still contains slab objects.
>
> (I haven't actually tested these cases though, this is just based on
> looking at the code. Writing testcases for this stuff seems like it'd be
> a pain...)
>
> The race leading to state inconsistency is (all operations on the same CPU
> and kmem_cache):
>
> - task A: begin do_slab_free():
>   - read TID
>   - read pcpu freelist (==NULL)
>   - check `slab == c->slab` (true)
> - [PREEMPT A->B]
> - task B: begin slab_alloc_node():
>   - fastpath fails (`c->freelist` is NULL)
>   - enter __slab_alloc()
>   - slub_get_cpu_ptr() (disables preemption)
>   - enter ___slab_alloc()
>     - take local_lock_irqsave()
>     - read c->freelist as NULL
>     - get_freelist() returns NULL
>     - write `c->slab = NULL`
>     - drop local_unlock_irqrestore()
>     - goto new_slab
>     - slub_percpu_partial() is NULL
>     - get_partial() returns NULL
>   - slub_put_cpu_ptr() (enables preemption)
> - [PREEMPT B->A]
> - task A: finish do_slab_free():
>   - this_cpu_cmpxchg_double() succeeds
>   - [CORRUPT STATE: c->slab==NULL, c->freelist!=NULL]
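To make the last step explicit: do_slab_free()'s cmpxchg only revalidates
the (freelist, tid) pair, not c->slab, which was checked before the
preemption window. Roughly (simplified sketch, not verbatim):

	/* do_slab_free(): slab == c->slab was checked before task B ran;
	 * only (freelist, tid) is revalidated by the cmpxchg itself, and
	 * without the tid bump neither has changed. */
	freelist = READ_ONCE(c->freelist);		/* still NULL */
	set_freepointer(s, tail_obj, freelist);
	if (this_cpu_cmpxchg_double(s->cpu_slab->freelist, s->cpu_slab->tid,
				    freelist, tid,	/* stale, but matches */
				    head, next_tid(tid)))
		return;	/* "success": object pushed onto the freelist of a
			 * cpu slab that ___slab_alloc() already tore down */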
>
>
> From there, the object on c->freelist will get lost if task B is allowed
> to continue: it will proceed to the retry_load_slab label, set c->slab,
> then jump to load_freelist, which clobbers c->freelist.
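The clobbering spot, for reference (simplified from the tail of
___slab_alloc()):

retry_load_slab:
	local_lock_irqsave(&s->cpu_slab->lock, flags);
	...
	c->slab = slab;

load_freelist:
	c->freelist = get_freepointer(s, freelist);	/* overwrites whatever
							 * do_slab_free() pushed */
	c->tid = next_tid(c->tid);
	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
	return freelist;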
>
>
> But if we instead continue as follows, we get worse corruption:
>
> - task A: run __slab_free() on an object from another struct slab:
>   - CPU_PARTIAL_FREE case (slab was on no list, is now on pcpu partial)
> - task A: run slab_alloc_node() with NUMA node constraint:
>   - fastpath fails (c->slab is NULL)
>   - call __slab_alloc()
>   - slub_get_cpu_ptr() (disables preemption)
>   - enter ___slab_alloc()
>   - c->slab is NULL: goto new_slab
>   - slub_percpu_partial() is non-NULL
>   - set c->slab to slub_percpu_partial(c)
>   - [CORRUPT STATE: c->slab points to slab-1, c->freelist has objects
>     from slab-2]
>   - goto redo
>   - node_match() fails
>   - goto deactivate_slab
>   - existing c->freelist is passed into deactivate_slab()
>   - inuse count of slab-1 is decremented to account for the object from
>     slab-2
>
> At this point, the inuse count of slab-1 is 1 lower than it should be.
> This means that if we free all allocated objects in slab-1 except for one,
> SLUB will think that slab-1 is completely unused, and may free its page,
> leading to use-after-free.
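Right, and the discard decision on the free path is made purely from the
inuse count, so the undercount is all it takes (simplified sketch of the
relevant check in __slab_free()):

	/* a slab whose inuse dropped to zero and which isn't needed to
	 * keep the partial list populated gets discarded */
	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
		goto slab_empty;
	...
slab_empty:
	...
	discard_slab(s, slab);	/* page returns to the page allocator */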
>
> Fixes: c17dda40a6a4e ("slub: Separate out kmem_cache_cpu processing from deactivate_slab")
> Fixes: 03e404af26dc2 ("slub: fast release on full slab")
> Cc: stable@...r.kernel.org
Hmm, these are old commits, and currently the oldest LTS is 4.9, so
backporting this will be fun. Worth double-checking whether it wasn't
actually a recent change that introduced the bug... but seems not, AFAICS.
> Signed-off-by: Jann Horn <jannh@...gle.com>
> ---
> mm/slub.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index e5535020e0fdf..b97fa5e210469 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2936,6 +2936,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>
> if (!freelist) {
> c->slab = NULL;
> + c->tid = next_tid(c->tid);
> local_unlock_irqrestore(&s->cpu_slab->lock, flags);
So this immediate unlock after setting c->slab to NULL is new from the 5.15
preempt-rt changes. However, even in older versions we could goto new_slab
and then reach new_slab_objects() -> new_slab() -> allocate_slab(), which
does local_irq_enable() if gfpflags_allow_blocking(), and there's no extra
disabled preemption besides the irq disable. So I'd say the bug was
possible before too, just less often?
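For reference, the bump itself is just a counter increment; IIRC next_tid()
is defined in mm/slub.c as:

#ifdef CONFIG_PREEMPTION
/* tids start at the cpu number and advance in strides of at least the
 * number of cpus, so a tid from one cpu can never be reproduced by an
 * operation on another cpu */
#define TID_STEP  roundup_pow_of_two(CONFIG_NR_CPUS)
#else
#define TID_STEP 1
#endif

static inline unsigned long next_tid(unsigned long tid)
{
	return tid + TID_STEP;
}

so adding it to these paths costs essentially nothing.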
> stat(s, DEACTIVATE_BYPASS);
> goto new_slab;
> @@ -2968,6 +2969,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> freelist = c->freelist;
> c->slab = NULL;
> c->freelist = NULL;
Previously these assignments were part of deactivate_slab(), which did them
at the very end, but also without bumping the tid.
I just wonder if this second bump is necessary too, because IIUC the
scenario you described relies on the missing bump above; this one alone
doesn't cause the c->slab vs c->freelist mismatch, does it?
But I guess it won't hurt to just bump the tid on each c->freelist
assignment. In backports we would just add it to deactivate_slab() instead,
as sketched below.
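I.e. for the older trees roughly something like this (untested sketch
against the pre-5.15 code, where deactivate_slab() still took the
kmem_cache_cpu pointer and cleared the fields itself):

static void deactivate_slab(struct kmem_cache *s, struct page *page,
			    void *freelist, struct kmem_cache_cpu *c)
{
	...
	c->page = NULL;
	c->freelist = NULL;
	c->tid = next_tid(c->tid);	/* added: fail racing fastpath cmpxchgs */
}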
Thanks. Applying to slab/for-5.19-rc3/fixes branch.
> + c->tid = next_tid(c->tid);
> local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> deactivate_slab(s, slab, freelist);
>
>
> base-commit: 9886142c7a2226439c1e3f7d9b69f9c7094c3ef6