Date:   Mon, 30 May 2022 01:55:14 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     bh1scw@...il.com, Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slub: replace alloc_pages with folio_alloc

On Sun, May 29, 2022 at 04:31:07AM +0100, Matthew Wilcox wrote:
> On Sun, May 29, 2022 at 10:58:18AM +0800, Muchun Song wrote:
> > On Sat, May 28, 2022 at 05:27:11PM +0100, Matthew Wilcox wrote:
> > > On Sun, May 29, 2022 at 12:11:58AM +0800, bh1scw@...il.com wrote:
> > > > From: Fanjun Kong <bh1scw@...il.com>
> > > > 
> > > > This patch will use folio allocation functions for allocating pages.
> > > 
> > > That's not actually a good idea.  folio_alloc() will do the
> > > prep_transhuge_page() step which isn't needed for slab.
> > 
> > You mean folio_alloc() is dedicated to THP allocation?  That is a little
> > surprising to me.  I thought folio_alloc() was just a variant of
> > alloc_page() that returns a folio struct instead of a page.  Seems like
> > I was wrong.  May I ask what made us decide to do this?
> 
> Yeah, the naming isn't great here.  The problem didn't really occur
> to me until I saw this patch, and I don't have a good solution yet.

OK, I have an idea.

None of the sl*b allocators use the page refcount, so the
atomic operations on it are just a waste of time.  If we add an
alloc_unref_page() to match our free_unref_page(), that'll be enough
of a difference to stop people sending "helpful" patches.  Also, it'll
be a (small?) performance improvement for slab.
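To illustrate the idea (not kernel code): a userspace toy showing the difference between an allocator that initialises a page refcount with an atomic operation and a refcount-free variant for callers, like slab, that never touch the refcount.  The names toy_alloc_page() and toy_alloc_unref_page() are hypothetical; the real free_unref_page() lives in mm/page_alloc.c and a matching allocator would look quite different:

```c
/* Toy userspace sketch, NOT kernel code.  All names here are
 * hypothetical stand-ins for illustration only. */
#include <stdatomic.h>
#include <stdlib.h>

struct toy_page {
	atomic_int refcount;	/* stands in for struct page's _refcount */
	void *data;
};

/* Ordinary allocation: sets the refcount with an atomic store,
 * analogous to the refcount initialisation a normal page
 * allocation path performs. */
static struct toy_page *toy_alloc_page(void)
{
	struct toy_page *p = calloc(1, sizeof(*p));

	if (p)
		atomic_store(&p->refcount, 1);	/* the atomic op slab never needs */
	return p;
}

/* Refcount-free variant: the caller promises never to look at the
 * refcount, so we skip the atomic entirely and leave it at zero. */
static struct toy_page *toy_alloc_unref_page(void)
{
	return calloc(1, sizeof(struct toy_page));
}
```

A distinctly named entry point also serves the social purpose described above: it makes it obvious that the refcount-free path is deliberate, not an oversight to be "fixed" by converting it to folio_alloc().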
