Date:   Mon, 15 Feb 2021 20:12:17 +0200
From:   Topi Miettinen <toiwoton@...il.com>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     linux-hardening@...r.kernel.org, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andy Lutomirski <luto@...nel.org>,
        Jann Horn <jannh@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Linux API <linux-api@...r.kernel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mike Rapoport <rppt@...nel.org>
Subject: Re: [PATCH v2] mm/vmalloc: randomize vmalloc() allocations

On 15.2.2021 14.51, Uladzislau Rezki wrote:
> On Sat, Feb 13, 2021 at 03:43:39PM +0200, Topi Miettinen wrote:
>> On 13.2.2021 13.55, Uladzislau Rezki wrote:
>>>> Hello,
>>>>
>>>> Is there a chance of getting this reviewed and maybe even merged, please?
>>>>
>>>> -Topi
>>>>
>>> I can review it and help with it. But before that I would like to
>>> clarify if such "randomization" is something that you can not leave?
>>
>> This happens to interest me and I don't mind the performance loss, since I
>> think there's also an improvement in security. I suppose (perhaps wrongly)
>> that others may also be interested in such a feature. For example, `nosmt`
>> can also take away a big part of the CPU's processing capability.
>>
> OK. I was wondering whether this is intended for production systems or for
> specific projects where it is in high demand.
> 
>>
>> Does this answer your question? I'm not sure what you mean by leaving; I
>> hope you would not want me to go away and leave?
>>
> No-no, that was a typo :) Sorry for that. I just wanted to figure out
> who really needs it.

It's not needed. The goal is just to increase address space layout 
randomization, to harden the system against attacks which depend on 
predictable kernel memory layout. This should not be used when 
performance is more important than hardening.

>>> For example, on a 32-bit system the vmalloc space is limited, so such
>>> randomization can slow it down and will also cause allocations to fail much
>>> more often, requiring retries with a different offset.
>>
>> I would not use `randomize_vmalloc=1` on a 32-bit system, because in addition
>> to the slowdown, the address space could become so fragmented that large
>> allocations may not fit anymore. Perhaps the documentation should warn about
>> this more clearly. I haven't tried this on a 32-bit system, though, and there
>> the VM layout is very different.
>>
> For 32-bit systems that would introduce many issues, not limited to fragmentation.
> 
>> __alloc_vm_area() scans the vmalloc space starting from a random address up
>> to the end of the area. If this fails, the scan is restarted from the bottom
>> of the area up to this random address, so the entire area is still scanned.
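
A minimal sketch of that two-pass scan, with a hypothetical find_range()
helper standing in for the real free-range search (this is not the actual
__alloc_vm_area() code):

/* Returns the start of a free range of 'size' bytes in [start, end), or 0. */
unsigned long find_range(unsigned long start, unsigned long end,
                         unsigned long size);

unsigned long alloc_randomized(unsigned long vstart, unsigned long vend,
                               unsigned long size, unsigned long random)
{
        unsigned long addr;

        /* Pass 1: scan from the random offset up to the end of the area. */
        addr = find_range(vstart + random, vend, size);
        if (addr)
                return addr;

        /* Pass 2: restart from the bottom of the area up to the offset. */
        return find_range(vstart, vstart + random, size);
}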
>>
>>> Second, there is a separate space or region for modules. Using various
>>> offsets can waste that memory and thus can lead to module loading failures.
>>
>> The allocations for modules (or BPF code) are also randomized within their
>> dedicated space. I don't think other allocations should affect module space.
>> Within this module space, fragmentation may also be possible because there's
>> only 1.5GB available. The largest allocation on my system seems to be 11M at
>> the moment, others are 1M or below and most are 8k. The possibility of an
>> allocation failing probably depends on the fill ratio. In practice I haven't
>> seen problems with this.
>>
> I think it depends on how many modules your system loads. If it is a big
> system, such fragmentation and wasted module space may lead to module
> loading failures.

# echo 1 > /proc/sys/kernel/kptr_restrict
# grep 0xffffffff /proc/vmallocinfo | \
  awk '{s=s+$2;c++} END {print "total\tcount\tavg\tof 1536MB"; print s,c,s/c,s/1536/1024/1024}'
total   count   avg     of 1536MB
34201600 1022 33465.4 0.0212351

I think that on my system fragmentation shouldn't be a danger since only 
2% (34MB) of the 1536MB available is used for the 1022 module/BPF blocks.

>> It would be possible to have finer control, for example
>> `randomize_vmalloc=3` (1 = general vmalloc, 2 = modules, bitwise ORed) or
>> `randomize_vmalloc=general,modules`.
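
A rough sketch of what such a bitmask parameter could look like (the macro,
variable and function names here are purely hypothetical, not part of the
patch):

#include <linux/bits.h>
#include <linux/init.h>
#include <linux/kernel.h>

#define RANDOMIZE_VMALLOC_GENERAL  BIT(0)  /* randomize vmalloc() space */
#define RANDOMIZE_VMALLOC_MODULES  BIT(1)  /* randomize module/BPF space */

static unsigned int randomize_vmalloc;

/* Parse "randomize_vmalloc=<bitmask>" from the kernel command line. */
static int __init set_randomize_vmalloc(char *str)
{
        return kstrtouint(str, 0, &randomize_vmalloc);
}
early_param("randomize_vmalloc", set_randomize_vmalloc);

static inline bool randomize_vmalloc_enabled(unsigned int which)
{
        return randomize_vmalloc & which;
}

Booting with randomize_vmalloc=3 would then enable both, and a call site
could check e.g. randomize_vmalloc_enabled(RANDOMIZE_VMALLOC_MODULES)
before picking a random offset in the module area.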
>>
>> I experimented by trying to change how the modules are compiled
>> (-mcmodel=medium or -mcmodel=large) so that they could be located in the
>> normal vmalloc space, but instead I found a bug in the compiler (-mfentry
>> produces incorrect code for -mcmodel=large, now fixed).
>>
>>> On the other side there is the per-cpu allocator. Interfering with it
>>> will also increase the rate of failures.
>>
>> I didn't notice the per-cpu allocator before. I'm probably missing
>> something, but it seems to be used for a different purpose (for allocating
>> the vmap_area structure objects instead of the address space range), so
>> where do you see interference?
>>
> 
> 
>     A                       B
>   ---->                   <----
> <---------------------------><--------->
> |   vmalloc address space    |
> |<--------------------------->
> 
> 
> A - vmalloc allocations;
> B - the percpu allocator.

OK, now I get it, thanks. These can be seen in /proc/vmallocinfo as
allocations done by pcpu_get_vm_areas(). Allocating very predictably
downwards from a fixed address is bad for ASLR, so I'll try to randomize
the location of these too. Other allocations by pcpu_populate_chunk() and
pcpu_create_chunk() seem to be randomized already.

-Topi

> 
> --
> Vlad Rezki
> 
