Message-ID: <CACT4Y+awrz-j8y5Qc8OS9qkov4doMnw1V=obwp3MB_LTvaUFXw@mail.gmail.com>
Date: Thu, 10 Sep 2020 19:11:41 +0200
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Alexander Potapenko <glider@...gle.com>
Cc: Marco Elver <elver@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Mark Rutland <mark.rutland@....com>,
Pekka Enberg <penberg@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Andrey Konovalov <andreyknvl@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Eric Dumazet <edumazet@...gle.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Ingo Molnar <mingo@...hat.com>, Jann Horn <jannh@...gle.com>,
Jonathan Corbet <corbet@....net>,
Kees Cook <keescook@...omium.org>,
Peter Zijlstra <peterz@...radead.org>, Qian Cai <cai@....pw>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
kasan-dev <kasan-dev@...glegroups.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH RFC 01/10] mm: add Kernel Electric-Fence infrastructure
On Thu, Sep 10, 2020 at 6:19 PM Alexander Potapenko <glider@...gle.com> wrote:
>
> On Thu, Sep 10, 2020 at 5:43 PM Dmitry Vyukov <dvyukov@...gle.com> wrote:
>
>
> > > + /* Calculate address for this allocation. */
> > > + if (right)
> > > + meta->addr += PAGE_SIZE - size;
> > > + meta->addr = ALIGN_DOWN(meta->addr, cache->align);
> >
> > I would move this ALIGN_DOWN under the (right) if.
> > Do I understand correctly that it works as is, but is expected to be
> > a no-op for !right? If the cache's align is > PAGE_SIZE, nothing good
> > will happen anyway, right?
> > The previous 2 lines look like part of the same calculation -- "figure
> > out the addr for the right case".
>
> Yes, makes sense.
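
I was thinking of something like this (untested, just to illustrate,
reusing the names from this patch):

	/* Calculate address for this allocation. */
	if (right) {
		/*
		 * Only right-aligned allocations need to account for the
		 * object size and the cache's alignment; for !right,
		 * meta->addr is already page-aligned and ALIGN_DOWN would
		 * be a no-op.
		 */
		meta->addr += PAGE_SIZE - size;
		meta->addr = ALIGN_DOWN(meta->addr, cache->align);
	}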
>
> > > +
> > > + schedule_delayed_work(&kfence_timer, 0);
> > > + WRITE_ONCE(kfence_enabled, true);
> >
> > Can toggle_allocation_gate run before we set kfence_enabled? If yes,
> > it can break. If not, it's still somewhat confusing.
>
> Correct, the schedule_delayed_work() call should go after we enable
> KFENCE. We'll fix that in v2.
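
Ack. I.e. just swap the two lines, something like (untested):

	/*
	 * Enable KFENCE before scheduling the work, so that the first
	 * toggle_allocation_gate() already sees kfence_enabled == true.
	 */
	WRITE_ONCE(kfence_enabled, true);
	schedule_delayed_work(&kfence_timer, 0);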
>
> > > +void __kfence_free(void *addr)
> > > +{
> > > + struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
> > > +
> > > + if (unlikely(meta->cache->flags & SLAB_TYPESAFE_BY_RCU))
> >
> > This may deserve a comment as to why we apply rcu on object level
> > whereas SLAB_TYPESAFE_BY_RCU means slab level only.
>
> Sorry, what do you mean by "slab level"?
> SLAB_TYPESAFE_BY_RCU means we have to wait for possible RCU accesses
> in flight before freeing objects from that slab - that's basically
> what we are doing here below:

Exactly! You see, it is confusing :)
SLAB_TYPESAFE_BY_RCU does not mean that. RCU-freeing only applies to
whole pages; that's what I mean by "slab level" (whole slabs are freed
via RCU).
> > > + call_rcu(&meta->rcu_head, rcu_guarded_free);
> > > + else
> > > + kfence_guarded_free(addr, meta);
> > > +}
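
Maybe a comment along these lines would help here (just a sketch of my
understanding, please correct the wording if it is off):

	/*
	 * SLAB_TYPESAFE_BY_RCU only guarantees that whole slab pages are
	 * not freed/reused until after a grace period; it does not delay
	 * freeing of individual objects. KFENCE objects do not live on
	 * regular slab pages and their slots can be reused for a different
	 * cache, so we provide an equivalent guarantee per object by
	 * deferring the actual free via call_rcu().
	 */
	if (unlikely(meta->cache->flags & SLAB_TYPESAFE_BY_RCU))
		call_rcu(&meta->rcu_head, rcu_guarded_free);
	else
		kfence_guarded_free(addr, meta);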