Message-ID: <56e395c6-81e7-7163-0d4f-42b91573289f@linux.alibaba.com>
Date:   Sat, 4 Jul 2020 19:34:59 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Konstantin Khlebnikov <koct9i@...il.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Tejun Heo <tj@...nel.org>, Hugh Dickins <hughd@...gle.com>,
        Константин Хлебников 
        <khlebnikov@...dex-team.ru>, daniel.m.jordan@...cle.com,
        yang.shi@...ux.alibaba.com, Matthew Wilcox <willy@...radead.org>,
        Johannes Weiner <hannes@...xchg.org>, lkp@...el.com,
        linux-mm@...ck.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Cgroups <cgroups@...r.kernel.org>, shakeelb@...gle.com,
        Joonsoo Kim <iamjoonsoo.kim@....com>, richard.weiyang@...il.com
Subject: Re: [PATCH v14 15/20] mm/swap: serialize memcg changes during
 pagevec_lru_move_fn



On 2020/7/3 5:13 PM, Konstantin Khlebnikov wrote:
>> @@ -976,7 +983,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
>>   */
>>  void __pagevec_lru_add(struct pagevec *pvec)
>>  {
>> -       pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
>> +       pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn, true);
>>  }
> It seems better to open-code the version in lru_add than to add a bool
> argument which is true for just one user.

Right, I will rewrite this part as you suggest. Thanks!

> 
> Also, with this new lru protection logic, lru_add could be optimized:
> it could prepare a list of pages and, under lru_lock, do only a list
> splice and a counter bump.
> Since PageLRU isn't set yet, nobody else can touch these pages on the lru.
> After that, lru_add could iterate the pages from first to last without
> lru_lock to set PageLRU and drop the reference.
> 
> So, lru_add will do O(1) operations under lru_lock regardless of the
> count of pages it added.
> 
> Actually, the per-cpu vector for adding could be replaced with per-cpu
> lists and/or a per-lruvec atomic slist.
> Thus incoming pages would already be in a list structure rather than a
> page vector.
> This allows accumulating more pages and offloading the adding to kswapd
> or direct reclaim.
> 

That's a great idea! I guess the new struct we need would look something
like this? I'd like to try it. :)


diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 081d934eda64..d62778c8c184 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -20,7 +20,7 @@
 struct pagevec {
        unsigned char nr;
        bool percpu_pvec_drained;
-       struct page *pages[PAGEVEC_SIZE];
+       struct list_head veclist;
 };
