Date:   Thu, 11 Jun 2020 11:28:47 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Joonsoo Kim <js1304@...il.com>,
        Johannes Weiner <hannes@...xchg.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Michal Hocko <mhocko@...e.com>,
        Minchan Kim <minchan@...nel.org>,
        mm-commits@...r.kernel.org, Rik van Riel <riel@...riel.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [patch 113/131] mm: balance LRU lists based on relative thrashing



On 2020/6/10 at 1:23 PM, Joonsoo Kim wrote:
> On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner <hannes@...xchg.org> wrote:
>>
>> On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
>>>
>>>
>>> On 2020/6/4 at 7:03 AM, Andrew Morton wrote:
>>>>
>>>> +   /* XXX: Move to lru_cache_add() when it supports new vs putback */
>>>
>>> Hi Hannes,
>>>
>>> Sorry, I'm a bit lost here; would you mind explaining your idea a bit more?
>>>
>>>> +   spin_lock_irq(&page_pgdat(page)->lru_lock);
>>>> +   lru_note_cost(page);
>>>> +   spin_unlock_irq(&page_pgdat(page)->lru_lock);
>>>> +
>>>
>>>
>>> What could go wrong here without the lru_lock?

The reason I asked about the lru_lock protection here is that we currently have
5 LRU lists guarded by a single lock, which causes a lot of contention when
different applications are active on a server.
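
To make this concrete, here is the current layout in simplified form; the real
definitions live in include/linux/mmzone.h (I've trimmed the fields and the
enum arithmetic for readability):

enum lru_list {
	LRU_INACTIVE_ANON,
	LRU_ACTIVE_ANON,
	LRU_INACTIVE_FILE,
	LRU_ACTIVE_FILE,
	LRU_UNEVICTABLE,
	NR_LRU_LISTS		/* 5 lists in total */
};

struct lruvec {
	struct list_head lists[NR_LRU_LISTS];
	/* ... */
};

typedef struct pglist_data {
	/* ... */
	spinlock_t lru_lock;	/* one lock guards all five lists of the node */
	/* ... */
} pg_data_t;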

I guess we originally went with a single lru_lock because 5 separate locks
would either bounce cachelines if packed together, or waste a bit of cacheline
space if each were padded out to its own line. But now that we have qspinlock,
each CPU just spins on its own cacheline and doesn't interfere with the others,
which should relieve most of the performance drop from cacheline bouncing.
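
As a very rough sketch of the direction (hypothetical, not a patch; the struct
and field names below are made up for illustration):

/* One lock per LRU list, each padded to its own cacheline. */
struct lru_list_lock {
	spinlock_t lock;
} ____cacheline_aligned_in_smp;

struct node_lru_locks {			/* would hang off pglist_data */
	struct lru_list_lock per_list[NR_LRU_LISTS];
	spinlock_t common;		/* extra lock for shared lruvec data */
};

With qspinlock, contended waiters spin on their own per-CPU MCS node rather
than hammering the shared lock word, so splitting the locks like this should
no longer pay the old cacheline-bouncing penalty.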

And we could use spare bits in page->mapping to record which LRU list a page
is on. As a quick thought, I guess that besides the 5 locks for the 5 lists,
we would still need one more lock for the common lruvec data, and for
everything else that relies on lru_lock today, like mlock, hpage_nr_pages,
etc. That's the reason I want to know everything that is under lru_lock. :)
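
Illustration only: I know the low two bits of page->mapping are already taken
by PAGE_MAPPING_ANON/PAGE_MAPPING_MOVABLE, so treat this as pseudocode for the
idea rather than something that would work as-is (LRU_IDX_SHIFT/MASK and both
helpers are made-up names):

#define LRU_IDX_SHIFT	2	/* assumes spare bits above the existing flags */
#define LRU_IDX_MASK	(0x7UL << LRU_IDX_SHIFT)

static inline void page_set_lru_idx(struct page *page, enum lru_list lru)
{
	unsigned long m = (unsigned long)page->mapping;

	m = (m & ~LRU_IDX_MASK) | ((unsigned long)lru << LRU_IDX_SHIFT);
	page->mapping = (struct address_space *)m;
}

static inline enum lru_list page_lru_idx(struct page *page)
{
	unsigned long m = (unsigned long)page->mapping;

	return (enum lru_list)((m & LRU_IDX_MASK) >> LRU_IDX_SHIFT);
}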

Any comments on this idea? :)

Thanks
Alex


>>
>> It'll just be part of the existing LRU locking in
>> pagevec_lru_move_fn(), when the new pages are added to the LRU in
>> batch. See this older patch for example:
>>
>> https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/
>>
>> I didn't include it in this series to reduce conflict with Joonsoo's
>> WIP series that also operates in this area and does something similar:
> 
> Thanks!
> 
>> https://lkml.org/lkml/2020/4/3/63
> 
> I haven't completed the rebase of my series, but I guess the referenced patch
> "https://lkml.org/lkml/2020/4/3/63" will be removed in the next version.

Thanks a lot for the info, Johannes & Joonsoo! A long history behind an interesting idea. :)
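
If I read the suggestion right, the cost accounting would simply ride along
under the lock that pagevec_lru_move_fn() already holds for the whole batch,
something like this sketch (not Hannes' actual patch; page_is_new() is a
made-up stand-in for the future new-vs-putback distinction, and I've trimmed
the real function's other bookkeeping):

static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
				 void *arg)
{
	enum lru_list lru = page_lru(page);

	/* lru_lock is already held by pagevec_lru_move_fn() */
	SetPageLRU(page);
	add_page_to_lru_list(page, lruvec, lru);
	if (page_is_new(page))	/* hypothetical new-vs-putback check */
		lru_note_cost(page);
}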

> 
> Before the I/O cost model, a new anonymous page contributed to the LRU reclaim
> balance. But now a new anonymous page doesn't contribute to the I/O cost,
> so this adjusting patch would not be needed anymore.
> 
> If anyone wants to change this part,
> "/* XXX: Move to lru_cache_add() when it supports new vs putback */", feel free
> to do it.
