Date: Sat, 25 May 2019 11:28:32 +0800
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: ying.huang@...el.com, hannes@...xchg.org, mhocko@...e.com,
	mgorman@...hsingularity.net, kirill.shutemov@...ux.intel.com,
	josef@...icpanda.com, hughd@...gle.com, shakeelb@...gle.com,
	akpm@...ux-foundation.org
Cc: yang.shi@...ux.alibaba.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [v5 PATCH 1/2] mm: vmscan: remove double slab pressure by inc'ing sc->nr_scanned

Commit 9092c71bb724 ("mm: use sc->priority for slab shrink targets") broke
the relationship between sc->nr_scanned and slab pressure: sc->nr_scanned
can no longer double slab pressure.  So it makes no sense to keep
incrementing it here.  In fact, doing so may reduce pressure on slab
shrink, since an inflated sc->nr_scanned can prevent sc->priority from
being raised.

A bonnie test shows this does not change the behavior of the slab
shrinkers:

                      w/ patch           w/o patch
                    /sec    %CP        /sec    %CP
Sequential delete: 3960.6   94.6      3997.6   96.2
Random delete:     2518     63.8      2561.6   64.6

The slight increase of "/sec" without the patch is likely caused by the
slight increase of CPU usage.

Cc: Josef Bacik <josef@...icpanda.com>
Cc: Michal Hocko <mhocko@...nel.org>
Acked-by: Johannes Weiner <hannes@...xchg.org>
Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
---
v4: Added Johannes's ack

 mm/vmscan.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7acd0af..b65bc50 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1137,11 +1137,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
 
-		/* Double the slab pressure for mapped and swapcache pages */
-		if ((page_mapped(page) || PageSwapCache(page)) &&
-		    !(PageAnon(page) && !PageSwapBacked(page)))
-			sc->nr_scanned++;
-
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
-- 
1.8.3.1