Date:   Mon, 13 Jan 2020 15:21:58 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, akpm@...ux-foundation.org,
        mgorman@...hsingularity.net, tj@...nel.org, hughd@...gle.com,
        daniel.m.jordan@...cle.com, yang.shi@...ux.alibaba.com,
        willy@...radead.org, shakeelb@...gle.com, hannes@...xchg.org
Cc:     yun.wang@...ux.alibaba.com
Subject: Re: [PATCH v7 01/10] mm/vmscan: remove unnecessary lruvec adding



On 2020/1/10 at 4:39 PM, Konstantin Khlebnikov wrote:
> On 25/12/2019 12.04, Alex Shi wrote:
>> We don't have to add a freeable page to the lru and then remove it from it.
>> This change saves a couple of actions and makes the page moving clearer.
>>
>> The SetPageLRU needs to be kept here for list integrity. Otherwise:
>>
>>      #0 move_pages_to_lru                #1 release_pages
>>                                          if (put_page_testzero())
>>      if (put_page_testzero())
>>                                         !PageLRU //skip lru_lock
>>                                                  list_add(&page->lru,);
>>      else
>>          list_add(&page->lru,) //corrupt
>>
>> Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: Johannes Weiner <hannes@...xchg.org>
>> Cc: Hugh Dickins <hughd@...gle.com>
>> Cc: yun.wang@...ux.alibaba.com
>> Cc: linux-mm@...ck.org
>> Cc: linux-kernel@...r.kernel.org
>> ---
>>   mm/vmscan.c | 16 +++++++---------
>>   1 file changed, 7 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 572fb17c6273..8719361b47a0 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1852,26 +1852,18 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> 
> Here is another cleanup: pass only pgdat as argument.
> 
> This function re-evaluates the lruvec for each page under the lru lock.
> Probably this is redundant for now but could be used in the future (or your patchset already uses that).

Thanks a lot for the comments, Konstantin!

Yes, we could pass only pgdat here, but since the pgdat gets removed from this function later in the patchset, maybe it's better to skip this change? :)
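
Just to make sure I read the suggestion right, I guess it would be something like this (untested; the per-page lruvec lookup stays inside the function):

	-static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
	+static unsigned noinline_for_stack move_pages_to_lru(struct pglist_data *pgdat,
							      struct list_head *list)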

> 
>>       while (!list_empty(list)) {
>>           page = lru_to_page(list);
>>           VM_BUG_ON_PAGE(PageLRU(page), page);
>> +        list_del(&page->lru);
>>           if (unlikely(!page_evictable(page))) {
>> -            list_del(&page->lru);
>>               spin_unlock_irq(&pgdat->lru_lock);
>>               putback_lru_page(page);
>>               spin_lock_irq(&pgdat->lru_lock);
>>               continue;
>>           }
>> -        lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> -
> 
> Please leave a comment that we must set PageLRU before dropping our page reference.

Right, I will add a comment here.
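
Maybe something like this (wording to be polished):

	/*
	 * Keep SetPageLRU ahead of dropping our reference: otherwise a
	 * parallel release_pages() may observe !PageLRU, skip the lru_lock,
	 * and put the page on its local pages_to_free list while we also
	 * add it to the lruvec list, corrupting page->lru.
	 */
	SetPageLRU(page);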
 
> 
>>           SetPageLRU(page);
>> -        lru = page_lru(page);
>> -
>> -        nr_pages = hpage_nr_pages(page);
>> -        update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
>> -        list_move(&page->lru, &lruvec->lists[lru]);
>>             if (put_page_testzero(page)) {
>>               __ClearPageLRU(page);
>>               __ClearPageActive(page);
>> -            del_page_from_lru_list(page, lruvec, lru);
>>                 if (unlikely(PageCompound(page))) {
>>                   spin_unlock_irq(&pgdat->lru_lock);
>> @@ -1880,6 +1872,12 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>>               } else
>>                   list_add(&page->lru, &pages_to_free);
>>           } else {
>> +            lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> +            lru = page_lru(page);
>> +            nr_pages = hpage_nr_pages(page);
>> +
>> +            update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
>> +            list_add(&page->lru, &lruvec->lists[lru]);
>>               nr_moved += nr_pages;
>>           }
> 
> IMHO it looks better this way:
> SetPageLRU()
> 
> if (unlikely(put_page_testzero())) {
>  <free>
>  continue;
> }
> 
> <add>

Yes, this looks better!
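
So the loop body would become roughly the following (just a rough, untested sketch on top of this patch; the compound-page freeing path stays as it is and is elided here):

	while (!list_empty(list)) {
		page = lru_to_page(list);
		VM_BUG_ON_PAGE(PageLRU(page), page);
		list_del(&page->lru);
		if (unlikely(!page_evictable(page))) {
			spin_unlock_irq(&pgdat->lru_lock);
			putback_lru_page(page);
			spin_lock_irq(&pgdat->lru_lock);
			continue;
		}

		/* keep SetPageLRU ahead of the reference drop, see above */
		SetPageLRU(page);

		if (unlikely(put_page_testzero(page))) {
			__ClearPageLRU(page);
			__ClearPageActive(page);

			if (unlikely(PageCompound(page))) {
				spin_unlock_irq(&pgdat->lru_lock);
				/* ... free the compound page as before ... */
				spin_lock_irq(&pgdat->lru_lock);
			} else
				list_add(&page->lru, &pages_to_free);
			continue;
		}

		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		lru = page_lru(page);
		nr_pages = hpage_nr_pages(page);
		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
		list_add(&page->lru, &lruvec->lists[lru]);
		nr_moved += nr_pages;
	}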

Thanks
Alex

> 
>>       }
>>
