Date:   Tue, 1 Dec 2020 16:20:30 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     vbabka@...e.cz, Konstantin Khlebnikov <koct9i@...il.com>,
        Hugh Dickins <hughd@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] mm/swap.c: pre-sort pages in pagevec for
 pagevec_lru_move_fn



On 2020/12/1 4:10 PM, Michal Hocko wrote:
> On Tue 01-12-20 16:02:13, Alex Shi wrote:
>> Pages in a pagevec may belong to different lruvecs, so we have to relock in
>> pagevec_lru_move_fn(); a relock can make the current CPU wait a long time
>> on the same lock because of spinlock fairness.
>>
>> Before the per-memcg lru_lock, we had to bear the relock since the spinlock
>> was the only way to serialize a page's memcg/lruvec. Now TestClearPageLRU
>> can be used to isolate pages exclusively and stabilize the page's
>> lruvec/memcg. So it gives us a chance to sort the pages by lruvec before
>> the move action in pagevec_lru_move_fn(). Then we don't suffer from the
>> spinlock's fairness wait.
> Do you have any data to show any improvements from this?
> 

Hi Michal,

Thanks for the quick response.

Not yet. I am collecting data now, but judging from the lru_add results,
there should be a big gain in the multiple-memcgs scenario.
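
To make the idea concrete, here is a rough sketch of the sorting step
(just a sketch, not the actual patch; mem_cgroup_page_lruvec() and
relock_page_lruvec_irqsave() are assumed from the per-memcg lru_lock
series, and I'm glossing over where TestClearPageLRU stabilizes the
page's lruvec):

#include <linux/sort.h>
#include <linux/pagevec.h>
#include <linux/memcontrol.h>

/* Order pages by lruvec pointer so pages sharing a lruvec are adjacent. */
static int cmp_page_lruvec(const void *a, const void *b)
{
	struct page *pa = *(struct page **)a;
	struct page *pb = *(struct page **)b;
	struct lruvec *la = mem_cgroup_page_lruvec(pa, page_pgdat(pa));
	struct lruvec *lb = mem_cgroup_page_lruvec(pb, page_pgdat(pb));

	if (la != lb)
		return la < lb ? -1 : 1;
	return 0;
}

static void pagevec_lru_move_fn(struct pagevec *pvec,
		void (*move_fn)(struct page *page, struct lruvec *lruvec))
{
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	/*
	 * Sort once up front; afterwards the relock below only takes a
	 * new lock at lruvec boundaries instead of bouncing per page.
	 */
	sort(pvec->pages, pagevec_count(pvec), sizeof(struct page *),
	     cmp_page_lruvec, NULL);

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];

		/* No-op when the page's lruvec is the one already held. */
		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
		(*move_fn)(page, lruvec);
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);
	release_pages(pvec->pages, pvec->nr);
	pagevec_reinit(pvec);
}

Whether a sort() over at most PAGEVEC_SIZE entries pays for itself is
exactly what the data should tell us.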

Also, I don't expect a quick acceptance; I'm just sending out the idea for
comments while the thread is still warm. :)

Thanks
Alex
