Message-ID: <228295be-2e40-3c91-68a0-2715e66486d1@suse.cz>
Date: Mon, 8 Jun 2020 12:51:42 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Kees Cook <keescook@...omium.org>
Cc: Vegard Nossum <vegard.nossum@...cle.com>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Robert Moore <robert.moore@...el.com>,
Erik Kaneda <erik.kaneda@...el.com>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Marco Elver <elver@...gle.com>,
Waiman Long <longman@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
Len Brown <lenb@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Roman Gushchin <guro@...com>
Subject: Re: slub freelist issue / BUG: unable to handle page fault for
address: 000000003ffe0018
On 6/5/20 8:46 PM, Kees Cook wrote:
>>
>> Hmm, I have a different idea. The whole cache_from_obj() was added because
>> of kmemcg (commit b9ce5ef49f00d), where the per-memcg cache can be different
>> from the root one. And I just realized this use case can go away with
>> Roman's series [1]. But cache_from_obj() also kept the original SLUB
>> consistency check case, and you added the freelist hardening case. If the
>> kmemcg use case went away, it would be nice to again avoid the
>> virt_to_cache() and the check completely, unless on a debugging or hardened
>> kernel.
>
> Is it that expensive? (I'm fine with it staying behind debug/hardening,
> but if we can make it on by default, that'd be safer.)
Well, it's a fast path, and e.g. the networking folks did a lot of work to
optimize SLUB. If we decide to stop trusting the supplied cache pointer
completely, we could deprecate kmem_cache_free() and use kfree() everywhere
(SLOB would need some adjustments to store the size with each object, as it
does for kmalloc), but it would have to be a conscious decision.
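To illustrate what that would mean in SLUB, compare the two free paths (a
simplified sketch, not the exact mainline code; kfree()'s handling of large
!PageSlab allocations is omitted here):

/*
 * kmem_cache_free() trusts the cache pointer supplied by the caller;
 * cache_from_obj() only double-checks it when debugging or hardening is
 * enabled, so the fast path avoids any per-object lookup:
 */
void kmem_cache_free(struct kmem_cache *s, void *x)
{
	s = cache_from_obj(s, x);
	if (!s)
		return;
	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
}

/*
 * kfree() cannot trust anything and has to derive the cache from the
 * object itself, via the struct page of the underlying slab:
 */
void kfree(const void *x)
{
	struct page *page;

	if (unlikely(ZERO_OR_NULL_PTR(x)))
		return;

	page = virt_to_head_page(x);
	slab_free(page->slab_cache, page, (void *)x, NULL, 1, _RET_IP_);
}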
>> Furthermore, the original SLUB debugging case was an unconditional pr_err()
>> plus WARN_ON_ONCE(1), which was kept by commit b9ce5ef49f00d. With freelist
>> hardening, this all changed to WARN_ONCE. So the second and later
>> occurrences are not reported at all with hardening, and also not with
>> explicitly enabled debugging as in this case, which is IMHO not ideal.
>
> Oh, I have no problem with WARN vs WARN_ONCE -- there's no reason to
> split this. And I'd love the hardening side to gain the tracking call
> too, if it's available.
>
> I had just used WARN_ONCE() since sometimes it can be very noisy to keep
> warning for some condition that might not be correctable.
OK.
>> So I propose the following - the freelist hardening case keeps the
>> WARN_ONCE, but also gets a one-line pr_err() for each occurrence so they
>> are not silent. The SLUB debugging case is always a full warning, printing
>> the tracking info if enabled and available. The pure kmemcg case does
>> virt_to_cache() for now (until hopefully removed by Roman's series), but
>> with no checking at all. Would that work for everyone?
>> [...]
>> @@ -520,9 +528,18 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>> return s;
>>
>> cachep = virt_to_cache(x);
>> - WARN_ONCE(cachep && !slab_equal_or_root(cachep, s),
>> - "%s: Wrong slab cache. %s but object is from %s\n",
>> - __func__, s->name, cachep->name);
>> + if (unlikely(s->flags & SLAB_CONSISTENCY_CHECKS)) {
>> + if (WARN(cachep && !slab_equal_or_root(cachep, s),
>> + "%s: Wrong slab cache. %s but object is from %s\n",
>> + __func__, s->name, cachep->name))
>> + slab_print_tracking(cachep, x);
>> + } else if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED)) {
>> + if (unlikely(cachep && !slab_equal_or_root(cachep, s))) {
>> + pr_err("%s: Wrong slab cache. %s but object is from %s\n",
>> + __func__, s->name, cachep->name);
>> + WARN_ON_ONCE(1);
>> + }
>> + }
>
> How about just this (in addition to your slab_print_tracking() refactor):
That could work; I will send a proper patch.
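(For reference, the slab_print_tracking() used below is from the refactor
discussed earlier in the thread; a minimal sketch of the idea - the names may
still change in the proper patch - is a wrapper that makes SLUB's existing
print_tracking() available to cache_from_obj():)

/* mm/slab.h - sketch only */
#ifdef CONFIG_SLUB_DEBUG
void slab_print_tracking(struct kmem_cache *s, void *object);
#else
static inline void slab_print_tracking(struct kmem_cache *s, void *object) { }
#endif

/* mm/slub.c */
void slab_print_tracking(struct kmem_cache *s, void *object)
{
	/* print_tracking() itself bails out unless SLAB_STORE_USER is set */
	print_tracking(s, object);
}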
> diff --git a/mm/slab.h b/mm/slab.h
> index 207c83ef6e06..107b7f6db3c3 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -520,9 +520,10 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> return s;
>
> cachep = virt_to_cache(x);
> - WARN_ONCE(cachep && !slab_equal_or_root(cachep, s),
> + if (WARN(cachep && !slab_equal_or_root(cachep, s),
> "%s: Wrong slab cache. %s but object is from %s\n",
> - __func__, s->name, cachep->name);
> + __func__, s->name, cachep->name))
> + slab_print_tracking(cachep, x);
> return cachep;
> }
>
>
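For completeness, the slab_equal_or_root() check that both versions rely on
currently boils down to roughly the following (the !CONFIG_MEMCG_KMEM variant
just returns true); once Roman's series removes the separate per-memcg
caches, the memcg half of it - and the reason for the lookup in the common
case - goes away:

static inline bool slab_equal_or_root(struct kmem_cache *s,
				      struct kmem_cache *p)
{
	return p == s || p == s->memcg_params.root_cache;
}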