Message-ID: <517e4c23-11f8-4ded-a502-354c482c4e51@redhat.com>
Date: Mon, 26 Feb 2024 14:03:58 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, Lance Yang <ioworker0@...il.com>,
 fengwei.yin@...el.com
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org, mhocko@...e.com, minchan@...nel.org, peterx@...hat.com,
 shy828301@...il.com, songmuchun@...edance.com, wangkefeng.wang@...wei.com,
 zokeefe@...gle.com, 21cnbao@...il.com
Subject: Re: [PATCH 1/1] mm/madvise: enhance lazyfreeing with mTHP in
 madvise_free

On 26.02.24 13:57, Ryan Roberts wrote:
> On 26/02/2024 08:35, Lance Yang wrote:
>> Hey Fengwei,
>>
>> Thanks for taking time to review!
>>
>>> On Mon, Feb 26, 2024 at 10:38 AM Yin Fengwei <fengwei.yin@...el.com> wrote:
>>>> On Sun, Feb 25, 2024 at 8:32 PM Lance Yang <ioworker0@...il.com> wrote:
>> [...]
>>>> --- a/mm/madvise.c
>>>> +++ b/mm/madvise.c
>>>> @@ -676,11 +676,43 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>                 */
>>>>                if (folio_test_large(folio)) {
>>>>                        int err;
>>>> +                     unsigned long next_addr, align;
>>>>
>>>> -                     if (folio_estimated_sharers(folio) != 1)
>>>> -                             break;
>>>> -                     if (!folio_trylock(folio))
>>>> -                             break;
>>>> +                     if (folio_estimated_sharers(folio) != 1 ||
>>>> +                         !folio_trylock(folio))
>>>> +                             goto skip_large_folio;
>>>> +
>>>> +                     align = folio_nr_pages(folio) * PAGE_SIZE;
>>>> +                     next_addr = ALIGN_DOWN(addr + align, align);
>>> There is a possible corner case:
>>> If there is a cow folio associated with this folio and the cow folio
>>> has smaller size than this folio for whatever reason, this change can't
>>> handle it correctly.
>>
>> Thanks for pointing that out; it's very helpful to me!
>> I made some changes. Could you please check if this corner case is now resolved?
>>
>> As a diff against this patch.
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index bcbf56595a2e..c7aacc9f9536 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -686,10 +686,12 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>   			next_addr = ALIGN_DOWN(addr + align, align);
>>   
>>   			/*
>> -			 * If we mark only the subpages as lazyfree,
>> -			 * split the large folio.
>> +			 * If we mark only the subpages as lazyfree, or
>> +			 * if there is a cow folio associated with this folio,
>> +			 * then split the large folio.
>>   			 */
>> -			if (next_addr > end || next_addr - addr != align)
>> +			if (next_addr > end || next_addr - addr != align ||
>> +			    folio_total_mapcount(folio) != folio_nr_pages(folio))
> 
> I still don't think this is correct. I think you were previously assuming that
> if you see a page from a large folio then the whole large folio should be
> contiguously mapped? This new check doesn't validate that assumption reliably;
> you need to iterate through every pte to generate a batch, like David does in
> folio_pte_batch() for this to be safe.
> 
> An example of when this check is insufficient; let's say you have a 4 page anon
> folio mapped contiguously in a process (total_mapcount=4). The process is forked
> (total_mapcount=8). Then each process munmaps the second 2 pages
> (total_mapcount=4). In place of the munmapped 2 pages, 2 new pages are mapped.
> Then call madvise. It's probably even easier to trigger for file-backed memory
> (I think this code path is used for both file and anon?)
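[Editor's note: the fork/munmap scenario above can be sketched as a small userspace model. All names below (`total_mapcount`, `check_fully_mapped_once`, the mapcount array) are illustrative stand-ins, not the kernel API; the point is only that the per-folio mapcount sum can equal the page count even when some subpages are not mapped at all.]

```c
/* Hypothetical model of Ryan's counterexample, NOT kernel code.
 * A 4-page folio's state is modeled as an array of per-subpage
 * mapcounts; the proposed check only looks at their sum. */
#include <assert.h>

#define NR_PAGES 4

static int total_mapcount(const int mapcount[NR_PAGES])
{
	int sum = 0;

	for (int i = 0; i < NR_PAGES; i++)
		sum += mapcount[i];
	return sum;
}

/* The check from the diff: total mapcount == number of subpages. */
static int check_fully_mapped_once(const int mapcount[NR_PAGES])
{
	return total_mapcount(mapcount) == NR_PAGES;
}
```

After fork (every subpage at mapcount 2) and both processes unmapping the last two pages, the mapcounts are {2, 2, 0, 0}: the sum is 4 == NR_PAGES, so the check passes even though half the folio is unmapped.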

What would work here is using folio_pte_batch() to get how many PTEs are 
mapped *here*, then comparing the batch size to folio_nr_pages(). If 
both match, we are mapping all subpages.
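[Editor's note: the batching idea can be modeled in userspace as follows. This is NOT the real folio_pte_batch() (whose signature and semantics live in the kernel's mm code); `struct pte` and `pte_batch` here are invented stand-ins that capture only the core scan: count consecutive PTEs mapping successive subpages of the same folio.]

```c
/* Hypothetical model of PTE batching, NOT the kernel's folio_pte_batch().
 * A "pte" is reduced to (folio_id, subpage index). */
#include <assert.h>
#include <stddef.h>

struct pte {
	int folio_id;	/* which folio this entry maps */
	int subpage;	/* index of the subpage within that folio */
};

/* Count how many consecutive entries, starting at 'start', map
 * successive subpages of the same folio. */
static size_t pte_batch(const struct pte *ptes, size_t start, size_t n)
{
	size_t count = 1;

	while (start + count < n &&
	       ptes[start + count].folio_id == ptes[start].folio_id &&
	       ptes[start + count].subpage ==
			ptes[start].subpage + (int)count)
		count++;
	return count;
}
```

With this, a fully and contiguously mapped 4-page folio yields a batch of 4 (matching folio_nr_pages()), while the counterexample above, where other pages were mapped in place of the unmapped tail, yields a batch of 2, correctly refusing the large-folio fast path.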

-- 
Cheers,

David / dhildenb
