Message-ID: <ZmiVy8iE93HGkBWv@casper.infradead.org>
Date: Tue, 11 Jun 2024 19:22:03 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>,
	Andrew Bresticker <abrestic@...osinc.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm/memory: Don't require head page for do_set_pmd()

On Tue, Jun 11, 2024 at 11:06:22AM -0700, Andrew Morton wrote:
> On Tue, 11 Jun 2024 17:33:17 +0200 David Hildenbrand <david@...hat.com> wrote:
> 
> > On 11.06.24 17:32, Andrew Bresticker wrote:
> > > The requirement that the head page be passed to do_set_pmd() was added
> > > in commit ef37b2ea08ac ("mm/memory: page_add_file_rmap() ->
> > > folio_add_file_rmap_[pte|pmd]()").  It prevents pmd-mapping in the
> > > finish_fault() and filemap_map_pages() paths whenever the page to be
> > > inserted is anything but the head page, even for an otherwise suitable
> > > vma and pmd-sized folio.
> > > 
> > > Fixes: ef37b2ea08ac ("mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()")
> > > Signed-off-by: Andrew Bresticker <abrestic@...osinc.com>
> > > ---
> > >   mm/memory.c | 3 ++-
> > >   1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 0f47a533014e..a1fce5ddacb3 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -4614,8 +4614,9 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> > >   	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
> > >   		return ret;
> > >   
> > > -	if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
> > > +	if (folio_order(folio) != HPAGE_PMD_ORDER)
> > >   		return ret;
> > > +	page = &folio->page;
> > >   
> > >   	/*
> > >   	 * Just backoff if any subpage of a THP is corrupted otherwise
> > 
> > Acked-by: David Hildenbrand <david@...hat.com>
> 
> You know what I'm going to ask ;) I'm assuming that the runtime effects
> are "small performance optimization" and that "should we backport the
> fix" is "no".

Without this fix we stop using PMDs to map large folios unless the
fault is within the first 4KiB of the PMD.  No idea how many workloads
that affects, but the fix only needs to be backported as far as v6.8,
so we may as well backport it.
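
To make the effect concrete, here is a minimal userspace sketch of the
old and new checks (illustration only, not kernel code: the folio/page
layout is simplified so the pages of a PMD-sized folio sit contiguously
in an array, and the names merely mirror the kernel's):

#include <stdio.h>

#define HPAGE_PMD_ORDER	9		/* 2MiB PMD with 4KiB pages: order-9 */
#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)

struct page { int index; };
struct folio { struct page page; };	/* head page embedded first */

int main(void)
{
	static struct page pages[HPAGE_PMD_NR];	/* stand-in for one PMD-sized folio */
	struct folio *folio = (struct folio *)&pages[0];

	/* A fault one page into the folio hands do_set_pmd() a tail page. */
	struct page *page = &pages[1];

	/* Old check: reject anything that is not the head page. */
	if (page != &folio->page)
		printf("old: tail page -> bail out, fall back to PTE mapping\n");

	/* New logic: only the folio's order matters; then use the head page. */
	page = &folio->page;
	printf("new: order-%d folio -> map with a PMD from the head page\n",
	       HPAGE_PMD_ORDER);
	return 0;
}

With the old check, any fault past the first 4KiB of the folio handed
do_set_pmd() a tail page and fell through to PTE mapping; the fix keys
only off folio_order() and then normalizes page to the head page itself.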
