Date:   Fri, 12 Aug 2022 15:39:09 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: State of the Page (August 2022)

On Fri, Aug 12, 2022 at 05:33:56PM +0300, Kirill A. Shutemov wrote:
> If you really need info about these pages and reference their memdesc,
> it is likely to be 9 cache lines scattered across memory instead of
> 8 cache lines next to each other in the same page.

Well, hopefully not.  Most allocations should be multiple pages.  That's
already true for slab, netpool and file (for xfs anyway), and hopefully
soon for anon.

> Initially, I thought we could offset the cost by caching memdescs instead
> of struct page/folio. For example, the page cache could store memdescs,
> but that would require memdesc_to_pfn(), which is not possible unless we
> store the pfn explicitly in the memdesc.

I think we do, at least for some memdescs.  File folios definitely want
to store the pfn, but I don't think getting the PFN for a slab is a
common operation (although we'll still need to store the pointer to
the struct page, so it's equivalent).
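
To make the trade-off concrete, here is a minimal sketch of what a memdesc
that stores its pfn explicitly might look like.  The names and layout below
(struct memdesc, memdesc_to_pfn()) are illustrative assumptions only, not
the actual design under discussion:

/*
 * Hypothetical sketch: names and layout are assumptions for
 * illustration, not the proposed implementation.
 */
struct memdesc {
	unsigned long flags;
	unsigned long pfn;	/* stored explicitly, so no reverse lookup needed */
	/* ... type-specific fields (file folio, slab, ...) ... */
};

static inline unsigned long memdesc_to_pfn(const struct memdesc *md)
{
	/* Trivial once the pfn lives in the memdesc itself. */
	return md->pfn;
}

Without such a field, going from a memdesc back to a physical frame would
need a separate reverse mapping, which is why the page cache cannot simply
cache memdescs instead of folios today.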

> I don't want to be a buzzkill, I like the idea a lot, but abstractions
> are often costly.  Getting it upstream without noticeable performance
> regressions is going to be a challenge.

I don't think there's a way to find out whether it'll be a performance
win without actually doing it.  Fortunately, the steps to get to this
point are mostly good cleanups anyway.
