Message-ID: <CAG_fn=Xby+PJtMQtZ68gPkSPCyxbF=RsOCVavYew7ZVDx25yow@mail.gmail.com>
Date:	Tue, 8 Mar 2016 12:42:32 +0100
From:	Alexander Potapenko <glider@...gle.com>
To:	Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc:	Dmitry Vyukov <dvyukov@...gle.com>,
	Andrey Konovalov <adech.fo@...il.com>,
	Christoph Lameter <cl@...ux.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	JoonSoo Kim <js1304@...il.com>,
	Kostya Serebryany <kcc@...gle.com>,
	kasan-dev <kasan-dev@...glegroups.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v4 5/7] mm, kasan: Stackdepot implementation. Enable
 stackdepot for SLAB

On Tue, Mar 1, 2016 at 12:57 PM, Andrey Ryabinin <ryabinin.a.a@...il.com> wrote:
>
>
> On 02/29/2016 08:12 PM, Dmitry Vyukov wrote:
>
>>>> diff --git a/lib/Makefile b/lib/Makefile
>>>> index a7c26a4..10a4ae3 100644
>>>> --- a/lib/Makefile
>>>> +++ b/lib/Makefile
>>>> @@ -167,6 +167,13 @@ obj-$(CONFIG_SG_SPLIT) += sg_split.o
>>>>  obj-$(CONFIG_STMP_DEVICE) += stmp_device.o
>>>>  obj-$(CONFIG_IRQ_POLL) += irq_poll.o
>>>>
>>>> +ifeq ($(CONFIG_KASAN),y)
>>>> +ifeq ($(CONFIG_SLAB),y)
>>>
>>> Just try to imagine that another subsystem wants to use stackdepot. How is this going to look?
>>>
>>> We have Kconfig to describe dependencies. So, this should be under CONFIG_STACKDEPOT.
>>> So any user of this feature can just do 'select STACKDEPOT' in Kconfig.
>>>
>>>> +     obj-y   += stackdepot.o
>>>> +     KASAN_SANITIZE_slub.o := n
>                         _stackdepot.o
>
>
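For illustration, the Kconfig arrangement suggested above might look roughly like this (a sketch of the suggestion, not the actual patch; the STACKDEPOT entry and the select line are hypothetical):

```
config STACKDEPOT
	bool
	select STACKTRACE

config KASAN
	bool "KASan: runtime memory debugger"
	select STACKDEPOT
```

Any other subsystem wanting stack deduplication could then add its own `select STACKDEPOT` instead of patching lib/Makefile conditionals.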
>>>
>>>> +
>>>> +     stack->hash = hash;
>>>> +     stack->size = size;
>>>> +     stack->handle.slabindex = depot_index;
>>>> +     stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
>>>> +     __memcpy(stack->entries, entries, size * sizeof(unsigned long));
>>>
>>> s/__memcpy/memcpy/
>>
>> memcpy should be instrumented by asan/tsan, and we would like to avoid
>> that instrumentation here.
>
> KASAN_SANITIZE_* := n already takes care of this.
> __memcpy() is a special thing solely for kasan internals and some assembly code.
> It's not generally available.
>
>
>>>> +     if (unlikely(!smp_load_acquire(&next_slab_inited))) {
>>>> +             if (!preempt_count() && !in_irq()) {
>>>
>>> If you're trying to detect atomic context here, then this doesn't work. E.g. you can't know
>>> about held spinlocks in a non-preemptible kernel.
>>> And I'm not sure why we need this. You know the gfp flags here, so allocation in atomic context shouldn't be a problem.
>>
>>
>> We don't have gfp flags for kfree.
>> I wonder how CONFIG_DEBUG_ATOMIC_SLEEP handles this. Maybe it has the answer.
>
> It hasn't. It doesn't guarantee that atomic context will always be detected.
>
>> Alternatively, we can always assume that we are in atomic context in kfree.
>>
>
> Or do this allocation in a separate context, e.g. put it on a work queue.
>
>>
>>
>>>> +                     alloc_flags &= (__GFP_RECLAIM | __GFP_IO | __GFP_FS |
>>>> +                             __GFP_NOWARN | __GFP_NORETRY |
>>>> +                             __GFP_NOMEMALLOC | __GFP_DIRECT_RECLAIM);
>>>
>>> I think blacklist approach would be better here.
>>>
>>>> +                     page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
>>>
>>> STACK_ALLOC_ORDER = 4 - that's a lot. Do you really need that much?
>>
>> Part of the issue is the atomic context mentioned above. When we can't allocate
>> memory we still want to save the stack trace. When we have less than
>> STACK_ALLOC_ORDER memory left, we try to preallocate another
>> STACK_ALLOC_ORDER in advance. So in the worst case, we have
>> STACK_ALLOC_ORDER memory and that should be enough to handle all
>> kmalloc/kfree in atomic context. 1 page does not look like enough. I
>> think Alex did some measuring of the failure rate (when we are out of
>> memory and can't allocate more).
>>
>
> A lot of order-4 pages will lead to high fragmentation. You don't need physically contiguous memory here,
> so try to use vmalloc(). It is slower, but fragmentation won't be a problem.
I've tried using vmalloc(), but it turned out to call the KASAN hooks
again. Dealing with reentrancy in this case sounds like overkill.
Given that we only require 9 MB most of the time, is allocating
physical pages still a problem?

> And one more thing. Take a look at mempool, because it's generally used to solve the problem you have here
> (guaranteed allocation in atomic context).
As far as I understand the docs, mempools have the drawback of
setting aside memory that won't be available for any other use.
O'Reilly's "Linux Device Drivers" even suggests avoiding mempools
whenever it's easier to deal with allocation failures (that advice
is for device drivers; I'm not sure whether it holds for other
subsystems).
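For context, the kernel's mempool API (mempool_create()/mempool_alloc()/mempool_free()) preallocates a minimum number of objects and falls back to that reserve when the underlying allocator fails. A rough userspace sketch of the idea, and of the trade-off mentioned above (all names here are illustrative, not the kernel API; alloc_ok simulates whether the normal allocator may be used, e.g. whether we can sleep):

```c
#include <stdlib.h>

#define POOL_MIN 4		/* objects kept in reserve, cf. mempool's min_nr */
#define OBJ_SIZE 64

static void *reserve[POOL_MIN];
static int reserved;

/* Fill the reserve up front; this is exactly the memory that then
 * sits idle, unavailable for any other use. */
static int pool_init(void)
{
	while (reserved < POOL_MIN) {
		void *p = malloc(OBJ_SIZE);
		if (!p)
			return -1;
		reserve[reserved++] = p;
	}
	return 0;
}

static void *pool_alloc(int alloc_ok)
{
	void *p = alloc_ok ? malloc(OBJ_SIZE) : NULL;	/* normal path */

	if (!p && reserved > 0)
		p = reserve[--reserved];	/* dip into the reserve */
	return p;
}

static void pool_free(void *p)
{
	if (reserved < POOL_MIN)
		reserve[reserved++] = p;	/* refill the reserve first */
	else
		free(p);
}
```

Even when pool_alloc(0) can never call malloc(), the first POOL_MIN requests still succeed; the cost is POOL_MIN * OBJ_SIZE bytes held back permanently, which is the drawback the LDD advice is about.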


-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
