Date:   Mon, 5 Sep 2022 15:29:34 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
CC:     "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru
 movable pages

On 2022/9/5 15:15, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Mon, Sep 05, 2022 at 02:53:41PM +0800, Miaohe Lin wrote:
>> On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
>>> Hi Miaohe,
>>>
>>> On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
>>>> It is preferable to use __PageMovable() to detect non-lru movable
>>>> pages, as it avoids bumping the page refcount via isolate_movable_page()
>>>> for pages that are already isolated from the LRU. Also, if a page becomes
>>>> PageLRU just after it is checked but before we try to isolate it,
>>>> isolate_lru_page() will be called to do the right work.
>>>
>>> Good point, non-lru movable pages are currently handled by isolate_lru_page(),
>>> which always fails for them.  This means that we lose the chance of
>>> soft-offlining any non-lru movable page, so this patch improves the situation.
>>
>> Non-lru movable pages were already handled by isolate_movable_page() before this code
>> change, since they don't have PageLRU set. The current problem is that already-isolated
>> LRU pages are incorrectly passed to isolate_movable_page(). That might not hurt, but with
>> this patch we can seize the chance when pages become un-isolated, and thus available,
>> just after the check.
> 
> OK, thank you for correcting me.
> 
>>
>>>
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
>>>> ---
>>>>  mm/memory-failure.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>>>> index a923a6dde871..3966fa6abe03 100644
>>>> --- a/mm/memory-failure.c
>>>> +++ b/mm/memory-failure.c
>>>> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
>>>>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>>>>  {
>>>>  	bool isolated = false;
>>>> -	bool lru = PageLRU(page);
>>>> +	bool lru = !__PageMovable(page);
>>>
>>> It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
>>> lru becomes true for them.  Then, if isolate_hugetlb() succeeds,
>>> inc_node_page_state() is called for hugetlb pages, which is probably not expected.
>>
>> Yes, that's unexpected. Thanks for pointing this out.
>>
>>>
>>>>  
>>>>  	if (PageHuge(page)) {
>>>>  		isolated = !isolate_hugetlb(page, pagelist);
>>>         } else {
>>>                 if (lru)
>>>                         isolated = !isolate_lru_page(page);
>>>                 else
>>>                         isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>>
>>>                 if (isolated)
>>>                         list_add(&page->lru, pagelist);
>>>         }
>>>
>>>         if (isolated && lru)
>>>                 inc_node_page_state(page, NR_ISOLATED_ANON +
>>>                                     page_is_file_lru(page));
>>>
>>> so, how about moving this if block into the above else block?
>>> Then, the automatic variable lru can be moved into the else block.
>>
>> Do you mean something like below?
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index df3bf266eebf..48780f3a61d3 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
>>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>>  {
>>         bool isolated = false;
>> -       bool lru = !__PageMovable(page);
>>
>>         if (PageHuge(page)) {
>>                 isolated = !isolate_hugetlb(page, pagelist);
>>         } else {
>> +               bool lru = !__PageMovable(page);
>> +
>>                 if (lru)
>>                         isolated = !isolate_lru_page(page);
>>                 else
>>                         isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>
>> -               if (isolated)
>> +               if (isolated) {
>>                         list_add(&page->lru, pagelist);
>> +                       if (lru)
>> +                               inc_node_page_state(page, NR_ISOLATED_ANON +
>> +                                                   page_is_file_lru(page));
>> +               }
>>         }
>>
>> -       if (isolated && lru)
>> -               inc_node_page_state(page, NR_ISOLATED_ANON +
>> -                                   page_is_file_lru(page));
>> -
>>         /*
>>          * If we succeed to isolate the page, we grabbed another refcount on
>>          * the page, so we can safely drop the one we got from get_any_pages().
>>
> 
> Yes, that's exactly what I thought of.
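
For context, __PageMovable() keys off the low bits of page->mapping rather than a
dedicated page flag. A minimal sketch of the relevant definitions (paraphrased from
include/linux/page-flags.h; check your tree for the exact form):

	/* tag bits stored in the low bits of page->mapping */
	#define PAGE_MAPPING_ANON	0x1
	#define PAGE_MAPPING_MOVABLE	0x2
	#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

	/* true only when the MOVABLE bit is set and the ANON bit is clear */
	static __always_inline int __PageMovable(struct page *page)
	{
		return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
					PAGE_MAPPING_MOVABLE;
	}

Hugetlb pages never have PAGE_MAPPING_MOVABLE set, so !__PageMovable() is true for
them, which is why computing lru before the PageHuge() check mis-accounts hugetlb
pages in the NR_ISOLATED_* counters, as discussed above.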

Hi Andrew:

The above code change could be applied to the mm-tree directly, or I can resend it
as a v2 series. Whichever is more convenient for you; either is fine with me. ;)

Many thanks to you both.

Thanks,
Miaohe Lin
