Message-ID: <CACT4Y+YkEMPe3uZmPO+HmpAk6JckdiGhxWq=7i8t2WG2efZgZw@mail.gmail.com>
Date:   Tue, 30 May 2017 10:49:19 +0200
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     Vladimir Murzin <vladimir.murzin@....com>
Cc:     Joonsoo Kim <js1304@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Alexander Potapenko <glider@...gle.com>,
        kasan-dev <kasan-dev@...glegroups.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>, kernel-team@....com
Subject: Re: [PATCH v1 00/11] mm/kasan: support per-page shadow memory to
 reduce memory consumption

On Tue, May 30, 2017 at 10:40 AM, Vladimir Murzin
<vladimir.murzin@....com> wrote:
> On 30/05/17 09:31, Vladimir Murzin wrote:
>>
>> On 30/05/17 09:15, Dmitry Vyukov wrote:
>>> On Tue, May 30, 2017 at 9:58 AM, Vladimir Murzin
>>> <vladimir.murzin@....com> wrote:
>>>> On 29/05/17 16:29, Dmitry Vyukov wrote:
>>>>> I have an alternative proposal. It should be conceptually simpler and
>>>>> also less arch-dependent. But I don't know if I am missing something
>>>>> important that would render it non-working.
>>>>> Namely, we add a pointer to the shadow to the page struct. Then we
>>>>> create a slab allocator for 512B shadow blocks, and attach/detach
>>>>> these shadow blocks to page structs as necessary. It should lead to
>>>>> even smaller memory consumption, because we won't need a whole shadow
>>>>> page when only 1 out of the 8 corresponding kernel pages is used (we
>>>>> will need just a single 512B block). I guess that with some
>>>>> fragmentation we need lots of excess shadow with the currently
>>>>> proposed patch.
>>>>> This does not depend on the TLB in any way and does not require
>>>>> hooking into the buddy allocator.
>>>>> The main downside is that we will need to be careful not to assume
>>>>> that the shadow is contiguous. In particular, this means that this
>>>>> mode will work only with outline instrumentation and will need some
>>>>> ifdefs. It will also be slower due to the additional indirection when
>>>>> accessing the shadow, but this is meant as a "small but slow" mode,
>>>>> as far as I understand.
>>>>>
>>>>> But the main win, as I see it, is that this is basically complete
>>>>> support for 32-bit arches. People do ask about arm32 support:
>>>>> https://groups.google.com/d/msg/kasan-dev/Sk6BsSPMRRc/Gqh4oD_wAAAJ
>>>>> https://groups.google.com/d/msg/kasan-dev/B22vOFp-QWg/EVJPbrsgAgAJ
>>>>> and probably mips32 is relevant as well.
>>>>> Such a mode does not require a huge contiguous address space range,
>>>>> has minimal memory consumption, and requires minimal arch-dependent
>>>>> code. It works only with outline instrumentation, but I think that's
>>>>> a reasonable compromise.
>>>>
>>>> ... or you can just keep the shadow in the page extension. It was
>>>> suggested back in 2015 [1], but it seems that the lack of stack
>>>> instrumentation was a "no-way"...
>>>>
>>>> [1] https://lkml.org/lkml/2015/8/24/573
>>>
>>> Right. It describes basically the same idea.
>>>
>>> How is page_ext better than adding data to the page struct?
>>
>> page_ext is already here along with some other debug options ;)


But page struct is also here. What am I missing?

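For concreteness, a minimal sketch of what I have in mind (the
kasan_shadow field and the kasan_shadow_of() helper are made-up names
for illustration, not existing kernel interfaces):

struct page {
        ...
        /*
         * KASAN: lazily attached 512-byte shadow block covering this
         * 4K page (PAGE_SIZE >> KASAN_SHADOW_SCALE_SHIFT bytes);
         * NULL while no shadow is attached.
         */
        void *kasan_shadow;
        ...
};

/* Outline-mode shadow lookup: one extra dereference via struct page. */
static inline u8 *kasan_shadow_of(const void *addr)
{
        struct page *page = virt_to_page(addr);

        if (!page->kasan_shadow)
                return NULL;    /* page is not tracked */
        return (u8 *)page->kasan_shadow +
               (((unsigned long)addr & ~PAGE_MASK) >>
                KASAN_SHADOW_SCALE_SHIFT);
}

The 512B blocks themselves would come from something like
kmem_cache_create("kasan_shadow", 512, 512, 0, NULL) and would be
attached/detached as the underlying pages are allocated and freed.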

>>> It seems that memory for all page_ext entries is preallocated along
>>> with the page structs; just the lookup is slower.
>>>
>>
>> Yup. Lookup would look like (based on v4.0):
>>
>> ...
>> unsigned long idx = 0;
>>
>> page_ext = lookup_page_ext_begin(virt_to_page(start));
>>
>> do {
>>         page_ext->shadow[idx++] = value;
>> } while (idx < bound);
>>
>> lookup_page_ext_end((void *)page_ext);
>>
>> ...
>
> Correction: please ignore that *_{begin,end} stuff - in mainline only
> lookup_page_ext() is used.
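
For reference, the page_ext variant would presumably just embed the 512
bytes directly in the preallocated extension, something like this (the
shadow field is hypothetical; lookup_page_ext() is the real mainline
helper):

struct page_ext {
        unsigned long flags;
        ...
        /* 512 bytes of shadow per 4K page, preallocated for every page. */
        u8 shadow[PAGE_SIZE >> KASAN_SHADOW_SCALE_SHIFT];
};

u8 *shadow = lookup_page_ext(virt_to_page(addr))->shadow;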


Note that this added code will be executed while handling each and every
memory access in the kernel. Every instruction matters on that path. The
additional indirection via the page struct will also slow it down, but
that's the cost of lower memory consumption and, potentially, 32-bit
support. For page_ext it looks like even more overhead for no gain.
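
To make the cost concrete: in outline mode every instrumented access
ends up in a check like the following (modeled on memory_is_poisoned_1()
in mm/kasan/kasan.c, with the hypothetical kasan_shadow_of() lookup from
above replacing the flat kasan_mem_to_shadow() translation):

static bool memory_is_poisoned_1(unsigned long addr)
{
        /* The extra dereference via struct page happens on every check. */
        s8 *shadow = (s8 *)kasan_shadow_of((void *)addr);
        s8 shadow_value;

        if (!shadow)
                return false;   /* no shadow attached, page not tracked */

        shadow_value = *shadow;
        if (unlikely(shadow_value)) {
                s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;

                return unlikely(last_accessible_byte >= shadow_value);
        }
        return false;
}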
