Message-ID: <73bbee97-ff58-4518-8dcf-e1da07906b45@redhat.com>
Date: Mon, 5 Aug 2024 11:00:12 +0200
From: David Hildenbrand <david@...hat.com>
To: Usama Arif <usamaarif642@...il.com>, akpm@...ux-foundation.org,
 linux-mm@...ck.org
Cc: hannes@...xchg.org, riel@...riel.com, shakeel.butt@...ux.dev,
 roman.gushchin@...ux.dev, yuzhao@...gle.com, baohua@...nel.org,
 ryan.roberts@....com, rppt@...nel.org, willy@...radead.org,
 cerasuolodomenico@...il.com, corbet@....net, linux-kernel@...r.kernel.org,
 linux-doc@...r.kernel.org, kernel-team@...a.com,
 Shuang Zhai <zhais@...gle.com>
Subject: Re: [PATCH 3/6] mm: free zapped tail pages when splitting isolated
 thp

On 04.08.24 21:02, Usama Arif wrote:
> 
> 
> On 30/07/2024 16:14, David Hildenbrand wrote:
>> On 30.07.24 14:46, Usama Arif wrote:
>>> From: Yu Zhao <yuzhao@...gle.com>
>>>
>>> If a tail page has only two references left, one inherited from the
>>> isolation of its head and the other from lru_add_page_tail() which we
>>> are about to drop, it means this tail page was concurrently zapped.
>>> Then we can safely free it and save page reclaim or migration the
>>> trouble of trying it.
>>>
>>> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
>>> Tested-by: Shuang Zhai <zhais@...gle.com>
>>> Signed-off-by: Usama Arif <usamaarif642@...il.com>
>>> ---
>>>    mm/huge_memory.c | 26 ++++++++++++++++++++++++++
>>>    1 file changed, 26 insertions(+)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 0167dc27e365..76a3b6a2b796 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2923,6 +2923,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>>        unsigned int new_nr = 1 << new_order;
>>>        int order = folio_order(folio);
>>>        unsigned int nr = 1 << order;
>>> +    LIST_HEAD(pages_to_free);
>>> +    int nr_pages_to_free = 0;
>>>
>>>        /* complete memcg works before add pages to LRU */
>>>        split_page_memcg(head, order, new_order);
>>> @@ -3007,6 +3009,24 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>>>            if (subpage == page)
>>>                continue;
>>>            folio_unlock(new_folio);
>>> +        /*
>>> +         * If a tail page has only two references left, one inherited
>>> +         * from the isolation of its head and the other from
>>> +         * lru_add_page_tail() which we are about to drop, it means this
>>> +         * tail page was concurrently zapped. Then we can safely free it
>>> +         * and save page reclaim or migration the trouble of trying it.
>>> +         */
>>> +        if (list && page_ref_freeze(subpage, 2)) {
>>> +            VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
>>> +            VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
>>> +            VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
>>> +
>>
>> No VM_BUG_*, VM_WARN is good enough.
>>
>>> +            ClearPageActive(subpage);
>>> +            ClearPageUnevictable(subpage);
>>> +            list_move(&subpage->lru, &pages_to_free);
>>
>> Most checks here should operate on new_folio instead of subpage.
>>
>>
> Do you mean that instead of doing the PageLRU, PageCompound and page_mapped checks on the subpage, the checks should be on new_folio?
> If new_folio is a large folio, then couldn't it be that only some of the subpages were zapped?

We do a:

struct folio *new_folio = page_folio(subpage);

Then:

PageLRU() will end up getting translated to 
folio_test_lru(page_folio(subpage))

page_mapped() will end up getting translated to
folio_mapped(page_folio(subpage))

PageCompound() is essentially a folio_test_large() check.
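
(For reference, the page-based helpers are by now little more than
wrappers around the folio variants -- a simplified sketch of what
include/linux/page-flags.h and include/linux/mm.h boil down to, not
the literal macro expansion:)

static inline bool PageLRU(const struct page *page)
{
	/* Resolve to the containing folio, then test the folio flag. */
	return folio_test_lru(page_folio(page));
}

static inline bool page_mapped(const struct page *page)
{
	return folio_mapped(page_folio(page));
}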

So what stops us from doing

VM_WARN_ON_ONCE_FOLIO(folio_test_lru(new_folio), new_folio);
VM_WARN_ON_ONCE_FOLIO(folio_test_large(new_folio), new_folio);
VM_WARN_ON_ONCE_FOLIO(folio_mapped(new_folio), new_folio);

folio_clear_active(new_folio);
folio_clear_unevictable(new_folio);
...

?

The page_ref_freeze() should make sure that we don't have a tail page of
a large folio. Tail pages would have a refcount of 0.
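
(page_ref_freeze() is essentially an atomic cmpxchg of the refcount
from the expected value to 0 -- a simplified sketch of
include/linux/page_ref.h, minus the tracepoint:)

static inline int page_ref_freeze(struct page *page, int count)
{
	/*
	 * Succeeds only if the refcount is exactly @count. A tail page
	 * of a large folio keeps a refcount of 0, so freezing it from
	 * 2 can never succeed.
	 */
	return likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
}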

Or what am I missing?

> 
> Could do the below if operating on subpage makes sense:
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3305e6d0b90e..abfcd4b7cbba 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3041,9 +3041,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>                   * and save page reclaim or migration the trouble of trying it.
>                   */
>                  if (list && page_ref_freeze(subpage, 2)) {
> -                       VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
> -                       VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
> -                       VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
> +                       VM_WARN_ON_ONCE_PAGE(PageLRU(subpage), subpage);
> +                       VM_WARN_ON_ONCE_PAGE(PageCompound(subpage), subpage);
> +                       VM_WARN_ON_ONCE_PAGE(page_mapped(subpage), subpage);
>   
>                          ClearPageActive(subpage);
>                          ClearPageUnevictable(subpage);
> 

-- 
Cheers,

David / dhildenb

