Message-ID: <YVr4YXpsPZtoxDtO@casper.infradead.org>
Date:   Mon, 4 Oct 2021 13:49:37 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC] mm: Optimise put_pages_list()

On Mon, Oct 04, 2021 at 10:10:37AM +0100, Mel Gorman wrote:
> On Thu, Sep 30, 2021 at 05:32:58PM +0100, Matthew Wilcox (Oracle) wrote:
> > Instead of calling put_page() one page at a time, pop pages off
> > the list if there are other refcounts and pass the remainder
> > to free_unref_page_list().  This should be a speed improvement,
> > but I have no measurements to support that.  It's also not very
> > widely used today, so I can't say I've really tested it.  I'm only
> > bothering with this patch because I'd like the IOMMU code to use it
> > https://lore.kernel.org/lkml/20210930162043.3111119-1-willy@infradead.org/
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> 
> I see your motivation, but you need to check that all users of
> put_pages_list (current and future) handle destroy_compound_page
> properly, or handle it within put_pages_list itself. For example, the
> release_pages() user of free_unref_page_list calls __put_compound_page
> directly before freeing. put_pages_list as it stands will call
> destroy_compound_page, but free_unref_page_list does not destroy
> compound pages in free_pages_prepare().

Quite right.  I was really only thinking about order-zero pages because
there aren't any users of compound pages that call this.  But of course,
we should be robust against future callers.  So the obvious thing to do
is to copy what release_pages() does:

+++ b/mm/swap.c
@@ -144,6 +144,11 @@ void put_pages_list(struct list_head *pages)
        list_for_each_entry_safe(page, next, pages, lru) {
-               if (!put_page_testzero(page))
+               if (!put_page_testzero(page)) {
                        list_del(&page->lru);
+                       continue;
+               }
+               if (PageCompound(page)) {
+                       list_del(&page->lru);
+                       __put_compound_page(page);
+               }
        }

        free_unref_page_list(pages);

But would it be better to have free_unref_page_list() handle compound
pages itself?

+++ b/mm/page_alloc.c
@@ -3427,6 +3427,11 @@ void free_unref_page_list(struct list_head *list)

        /* Prepare pages for freeing */
        list_for_each_entry_safe(page, next, list, lru) {
+               if (PageCompound(page)) {
+                       list_del(&page->lru);
+                       __put_compound_page(page);
+                       continue;
+               }
                pfn = page_to_pfn(page);
                if (!free_unref_page_prepare(page, pfn, 0)) {
                        list_del(&page->lru);

(and delete the special handling from release_pages() in the same patch)
