Message-ID: <40bc28e5-c971-055f-eff4-b9d67fe768cc@suse.cz>
Date:   Wed, 20 Sep 2023 10:49:50 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Matteo Rizzo <matteorizzo@...gle.com>,
        "Lameter, Christopher" <cl@...amperecomputing.com>
Cc:     Dave Hansen <dave.hansen@...el.com>, penberg@...nel.org,
        rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, roman.gushchin@...ux.dev,
        42.hyeyoo@...il.com, keescook@...omium.org,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-mm@...ck.org, linux-hardening@...r.kernel.org,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
        corbet@....net, luto@...nel.org, peterz@...radead.org,
        jannh@...gle.com, evn@...gle.com, poprdi@...gle.com,
        jordyzomer@...gle.com, Mike Rapoport <rppt@...nel.org>
Subject: Re: [RFC PATCH 00/14] Prevent cross-cache attacks in the SLUB
 allocator

On 9/18/23 14:08, Matteo Rizzo wrote:
> On Fri, 15 Sept 2023 at 18:30, Lameter, Christopher wrote:
>> Problems:
>>
>> - Overhead due to more TLB lookups
>>
>> - More TLB entries are used for the OS. Currently we try to use the
>> largest mappable page sizes to keep the number of entries down. This
>> presumably means using 4K TLB entries for all slab access.
> 
> Yes, we are using 4K pages for the slab mappings which is going to increase
> TLB pressure. I also tried writing a version of the patch that uses 2M
> pages which had slightly better performance, but that had its own problems.
> For example, most slabs are much smaller than 2M, so we would need to create
> and map multiple slabs at once, and we wouldn't be able to release the
> physical memory until all slabs in the 2M page are unused, which increases
> fragmentation.

At the last LSF/MM [1] we basically dismissed direct map fragmentation
avoidance as solving a problem that turns out to be insignificant, with the
exception of kernel code. Since kernel code is unlikely to be allocated from
kmem caches due to W^X, we can hopefully assume fragmentation is also
insignificant for the virtual slab area.

[1] https://lwn.net/Articles/931406/
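The 2M tradeoff Matteo describes above (the physical 2M page can only be
released once every 4K slab carved out of it is unused) amounts to a simple
refcount on the backing page. A minimal sketch, with hypothetical names not
taken from the patch set:

```c
/* Illustrative sketch only: track how many 4K slabs carved from a 2M
 * backing page are in use, and release the physical 2M page only when
 * the last one goes away. All names here are hypothetical. */
#include <assert.h>
#include <stdbool.h>

#define SLABS_PER_2M 512  /* 2M / 4K */

struct backing_2m {
	unsigned int used;  /* number of in-use 4K slabs */
	bool phys_present;  /* physical 2M page still backing the area */
};

/* A new 4K slab is carved from the 2M page; pin the backing memory. */
static void slab_get(struct backing_2m *b)
{
	b->used++;
}

/*
 * A 4K slab becomes unused. Returns true when this was the last slab,
 * i.e. the whole 2M physical page could now be released. Until then the
 * remaining slabs pin the 2M page, which is the fragmentation cost.
 */
static bool slab_put(struct backing_2m *b)
{
	if (--b->used == 0) {
		b->phys_present = false;  /* release physical memory */
		return true;
	}
	return false;  /* other slabs still pin the 2M page */
}
```

A single long-lived slab thus keeps the entire 2M page resident, which is
the fragmentation problem described above that 4K mappings avoid.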
