lists.openwall.net — Open Source and information security mailing list archives
Message-ID: <341acdcc-1745-436b-a3c7-26916b675175@redhat.com>
Date: Fri, 9 Aug 2024 19:27:27 +0200
From: David Hildenbrand <david@...hat.com>
To: Vincent Donnefort <vdonnefort@...gle.com>, g@...gle.com
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 Andrew Morton <akpm@...ux-foundation.org>,
 "Matthew Wilcox (Oracle)" <willy@...radead.org>,
 Hugh Dickins <hughd@...gle.com>, Ryan Roberts <ryan.roberts@....com>,
 Yin Fengwei <fengwei.yin@...el.com>, Mike Kravetz <mike.kravetz@...cle.com>,
 Muchun Song <muchun.song@...ux.dev>, Peter Xu <peterx@...hat.com>,
 surenb@...gle.com
Subject: Re: [PATCH v2 08/40] mm/memory: page_add_file_rmap() ->
 folio_add_file_rmap_[pte|pmd]()

On 09.08.24 19:13, Vincent Donnefort wrote:
> Hi,
> 
> Sorry, reviving this thread as I have run into something weird:
> 
> On Wed, Dec 20, 2023 at 11:44:32PM +0100, David Hildenbrand wrote:
>> Let's convert insert_page_into_pte_locked() and do_set_pmd(). While at it,
>> perform some folio conversion.
>>
>> Reviewed-by: Yin Fengwei <fengwei.yin@...el.com>
>> Reviewed-by: Ryan Roberts <ryan.roberts@....com>
>> Signed-off-by: David Hildenbrand <david@...hat.com>
>> ---
>>   mm/memory.c | 14 ++++++++------
>>   1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 7f957e5a84311..c77d3952d261f 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
> 
> [...]
> 
>>   vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
>>   {
>> +	struct folio *folio = page_folio(page);
>>   	struct vm_area_struct *vma = vmf->vma;
>>   	bool write = vmf->flags & FAULT_FLAG_WRITE;
>>   	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>> @@ -4418,8 +4421,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
>>   	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
>>   		return ret;
>>   
>> -	page = compound_head(page);
>> -	if (compound_order(page) != HPAGE_PMD_ORDER)
>> +	if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
>>   		return ret;
> 
> Is this `page != &folio->page` expected? I believe this check wasn't there
> before as we had `page = compound_head()`.
> 
> It breaks the installation of a PMD-level mapping for shmem when the fault
> address is in the middle of this block. In its fault path, shmem sets
> 
>    vmf->page = folio_file_page(folio, vmf->pgoff)
> 
> which fails this test above.

Already fixed? :)

commit ab1ffc86cb5bec1c92387b9811d9036512f8f4eb (tag: 
mm-hotfixes-stable-2024-06-26-17-28)
Author: Andrew Bresticker <abrestic@...osinc.com>
Date:   Tue Jun 11 08:32:16 2024 -0700

     mm/memory: don't require head page for do_set_pmd()


-- 
Cheers,

David / dhildenb

