Date:	Fri, 4 Mar 2016 15:52:21 +0100
From:	Alexander Potapenko <glider@...gle.com>
To:	Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc:	Dmitry Vyukov <dvyukov@...gle.com>,
	Andrey Konovalov <adech.fo@...il.com>,
	Christoph Lameter <cl@...ux.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	JoonSoo Kim <js1304@...il.com>,
	Kostya Serebryany <kcc@...gle.com>,
	kasan-dev <kasan-dev@...glegroups.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v4 5/7] mm, kasan: Stackdepot implementation. Enable
 stackdepot for SLAB

On Tue, Mar 1, 2016 at 12:57 PM, Andrey Ryabinin <ryabinin.a.a@...il.com> wrote:
>
>
> On 02/29/2016 08:12 PM, Dmitry Vyukov wrote:
>
>>>> diff --git a/lib/Makefile b/lib/Makefile
>>>> index a7c26a4..10a4ae3 100644
>>>> --- a/lib/Makefile
>>>> +++ b/lib/Makefile
>>>> @@ -167,6 +167,13 @@ obj-$(CONFIG_SG_SPLIT) += sg_split.o
>>>>  obj-$(CONFIG_STMP_DEVICE) += stmp_device.o
>>>>  obj-$(CONFIG_IRQ_POLL) += irq_poll.o
>>>>
>>>> +ifeq ($(CONFIG_KASAN),y)
>>>> +ifeq ($(CONFIG_SLAB),y)
>>>
>>> Just try to imagine that another subsystem wants to use stackdepot. How is that going to look?
>>>
>>> We have Kconfig to describe dependencies, so this should be under CONFIG_STACKDEPOT.
>>> Then any user of this feature can just do 'select STACKDEPOT' in Kconfig.
>>>
>>>> +     obj-y   += stackdepot.o
>>>> +     KASAN_SANITIZE_slub.o := n
>                        ^ should be KASAN_SANITIZE_stackdepot.o
>
>
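For illustration, a minimal sketch of the Kconfig route suggested above
(the STACKDEPOT symbol name is taken from the comment; the exact file
placement is my assumption):

    # lib/Kconfig:
    config STACKDEPOT
    	bool

    # Any user (e.g. KASAN) then does 'select STACKDEPOT' in its own
    # Kconfig entry, and the lib/Makefile hunk reduces to:
    obj-$(CONFIG_STACKDEPOT) += stackdepot.o
    KASAN_SANITIZE_stackdepot.o := n
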
>>>
>>>> +
>>>> +     stack->hash = hash;
>>>> +     stack->size = size;
>>>> +     stack->handle.slabindex = depot_index;
>>>> +     stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
>>>> +     __memcpy(stack->entries, entries, size * sizeof(unsigned long));
>>>
>>> s/__memcpy/memcpy/
>>
>> memcpy should be instrumented by asan/tsan, and we would like to avoid
>> that instrumentation here.
>
> KASAN_SANITIZE_* := n already takes care of this.
> __memcpy() is a special thing meant solely for KASAN internals and some assembly code;
> it's not generally available.
As far as I can see, KASAN_SANITIZE_* := n does not guarantee that.
It just removes the KASAN flags from the GCC command line; it does not
necessarily replace memcpy() calls with some kind of non-instrumented
memcpy().

We see two possible ways to deal with this problem:
1. Define "memcpy" to "__memcpy" in lib/stackdepot.c under CONFIG_KASAN.
2. Create an mm/kasan/kasan_stackdepot.c stub that includes
lib/stackdepot.c and defines "memcpy" to "__memcpy" in that file.
This way we'll still be able to instrument the original stackdepot.c, so
we won't miss reports from it if someone starts using it somewhere else.
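
For concreteness, a rough sketch of option 1 (purely illustrative):

    /* At the top of lib/stackdepot.c: avoid the instrumented memcpy();
     * __memcpy() is the non-instrumented variant used by KASAN. */
    #ifdef CONFIG_KASAN
    #define memcpy __memcpy
    #endif

and of option 2, a wrapper that shadows memcpy before textually
including the original file:

    /* hypothetical mm/kasan/kasan_stackdepot.c */
    #define memcpy __memcpy
    #include "../../lib/stackdepot.c"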

>>>> +     if (unlikely(!smp_load_acquire(&next_slab_inited))) {
>>>> +             if (!preempt_count() && !in_irq()) {
>>>
>>> If you're trying to detect atomic context here, this doesn't work. E.g. you can't know
>>> about held spinlocks in a non-preemptible kernel.
>>> And I'm not sure why you need this. You know the gfp flags here, so allocation in atomic context shouldn't be a problem.
>>
>>
>> We don't have gfp flags for kfree.
>> I wonder how CONFIG_DEBUG_ATOMIC_SLEEP handles this. Maybe it has the answer.
>
> It doesn't. It doesn't guarantee that atomic context will always be detected.
>
>> Alternatively, we can always assume that we are in atomic context in kfree.
>>
>
> Or do this allocation in a separate context, e.g. put it on a workqueue.
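
A hypothetical sketch of that workqueue approach (the names here are
invented, not from the patch; depot_install_new_slab() is a stand-in
for whatever publishes the new slab):

    #include <linux/workqueue.h>
    #include <linux/gfp.h>

    static void depot_prealloc_work_fn(struct work_struct *work)
    {
    	/* Process context: a sleeping allocation is fine here. */
    	void *slab = (void *)__get_free_pages(GFP_KERNEL, STACK_ALLOC_ORDER);

    	if (slab && !depot_install_new_slab(slab))
    		free_pages((unsigned long)slab, STACK_ALLOC_ORDER);
    }
    static DECLARE_WORK(depot_prealloc_work, depot_prealloc_work_fn);

    /* Called from the (possibly atomic) save path instead of
     * allocating directly: */
    static void depot_request_prealloc(void)
    {
    	schedule_work(&depot_prealloc_work);
    }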
>
>>
>>
>>>> +                     alloc_flags &= (__GFP_RECLAIM | __GFP_IO | __GFP_FS |
>>>> +                             __GFP_NOWARN | __GFP_NORETRY |
>>>> +                             __GFP_NOMEMALLOC | __GFP_DIRECT_RECLAIM);
>>>
>>> I think a blacklist approach would be better here.
>>>
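If it helps, a blacklist variant might look roughly like this (which
flags to strip is of course the open question; these are only examples):

    /* Keep the caller's flags, minus the ones known to be unsuitable
     * for the depot's internal allocation. */
    alloc_flags &= ~(__GFP_ZERO | __GFP_COMP | __GFP_HIGHMEM);
    alloc_flags |= __GFP_NOWARN | __GFP_NORETRY;
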
>>>> +                     page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
>>>
>>> STACK_ALLOC_ORDER = 4 - that's a lot. Do you really need that much?
>>
>> Part of the issue is the atomic context mentioned above. When we can't
>> allocate memory, we still want to save the stack trace. When we have
>> less than STACK_ALLOC_ORDER memory left, we try to preallocate another
>> STACK_ALLOC_ORDER block in advance. So in the worst case, we have
>> STACK_ALLOC_ORDER memory, and that should be enough to handle all
>> kmalloc/kfree in atomic context. 1 page does not look like enough. I
>> think Alex did some measuring of the failure rate (when we are out of
>> memory and can't allocate more).
>>
>
> A lot of order-4 pages will lead to high fragmentation. You don't need physically contiguous memory here,
> so try to use vmalloc(). It is slower, but fragmentation won't be a problem.
>
> And one more thing. Take a look at mempool, because it's generally used to solve the problem you have here
> (guaranteed allocation in atomic context).
>
>
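
A minimal sketch of the mempool idea (untested; the pool size and helper
names are made up):

    #include <linux/mempool.h>
    #include <linux/mm.h>

    static mempool_t *depot_pool;

    static int __init depot_pool_init(void)
    {
    	/* Keep two order-STACK_ALLOC_ORDER pages in reserve; mempool
    	 * falls back to this reserve when the page allocator fails. */
    	depot_pool = mempool_create_page_pool(2, STACK_ALLOC_ORDER);
    	return depot_pool ? 0 : -ENOMEM;
    }

    static void *depot_alloc_slab(gfp_t flags)
    {
    	struct page *page = mempool_alloc(depot_pool, flags);

    	return page ? page_address(page) : NULL;
    }

Note that mempool only guarantees forward progress for allocations that
are allowed to sleep; on the atomic path the reserve merely makes
failure less likely.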



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing Directors: Matthew Scott Sucherman, Paul Terence Manicle
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg
