Message-ID: <CAG_fn=UwMgXJkgKhSa6Qsr_2jqQi8exZj7b8eoe+WK-_7aD5cA@mail.gmail.com>
Date: Tue, 16 Feb 2016 19:37:58 +0100
From: Alexander Potapenko <glider@...gle.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: kasan-dev@...glegroups.com, Christoph Lameter <cl@...ux.com>,
linux-kernel@...r.kernel.org, Dmitriy Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <ryabinin.a.a@...il.com>, linux-mm@...ck.org,
Andrey Konovalov <adech.fo@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v1 5/8] mm, kasan: Stackdepot implementation. Enable
stackdepot for SLAB
On Mon, Feb 1, 2016 at 3:55 AM, Joonsoo Kim <iamjoonsoo.kim@....com> wrote:
> On Thu, Jan 28, 2016 at 02:27:44PM +0100, Alexander Potapenko wrote:
>> On Thu, Jan 28, 2016 at 1:51 PM, Alexander Potapenko <glider@...gle.com> wrote:
>> >
>> > On Jan 28, 2016 8:40 AM, "Joonsoo Kim" <iamjoonsoo.kim@....com> wrote:
>> >>
>> >> Hello,
>> >>
>> >> On Wed, Jan 27, 2016 at 07:25:10PM +0100, Alexander Potapenko wrote:
>> >> > The stack depot will allow KASAN to store allocation/deallocation stack
>> >> > traces for memory chunks. The stack traces are stored in a hash table and
>> >> > referenced by handles which reside in the kasan_alloc_meta and
>> >> > kasan_free_meta structures in the allocated memory chunks.
>> >>
>> >> Looks really nice!
>> >>
>> >> Could it be generalized so that other features that need to store
>> >> stack traces, such as tracepoints or page owner, can use it?
>> > Certainly yes, but see below.
>> >
>> >> If it could be, there is one more requirement.
>> >> I understand that never removing entries from the depot makes things
>> >> much simpler, but for general use cases it would be better to use
>> >> reference counting and allow removal. Is that possible?
>> > For our use case reference counting is not really necessary, and it would
>> > introduce unwanted contention.
>
> Okay.
>
>> > There are two possible options, each having its advantages and drawbacks: we
>> > can let the clients store the refcounters directly in their stacks (more
>> > universal, but harder to use for the clients), or keep the counters in the
>> > depot but add an API that does not change them (easier for the clients, but
>> > potentially error-prone).
>> > I'd say it's better to actually find at least one more user for the stack
>> > depot in order to understand the requirements, and refactor the code after
>> > that.
>
> I re-thought the page owner case, and it may not need refcounting either.
> For now, just moving this code to /lib would be helpful for future users.
I agree this code may need to move to /lib someday, but I wouldn't
hurry with that.
Right now it is quite KASAN-specific, and it is not yet clear whether
anyone else is going to use it.
I suggest we keep it in mm/kasan for now and factor the common parts
out into /lib when the need arises.
> BTW, is there any performance number? I guess that it could affect
> the performance.
I've compared the performance of KASAN with the SLAB allocator on a small
synthetic benchmark in two modes: with the stack depot enabled, and with
kasan_save_stack() unconditionally returning 0.
In the former case 8% more time was spent in the kernel than in the latter.
If I am not mistaken, for the SLUB allocator the bookkeeping (enabled
with the slub_debug=UZ boot option) takes only 1.5% of the time, so the
difference is worth looking into (at least before we switch SLUB to
the stack depot).
> Thanks.
--
Alexander Potapenko
Software Engineer
Google Germany GmbH
Erika-Mann-Straße, 33
80636 München
Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
This e-mail is confidential. If you are not the right addressee please
do not forward it, please inform the sender, and please erase this
e-mail including any attachments. Thanks.