Date: Mon, 18 Nov 2019 21:10:58 -0500
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	akpm@...ux-foundation.org, mgorman@...hsingularity.net, tj@...nel.org,
	hughd@...gle.com, khlebnikov@...dex-team.ru, daniel.m.jordan@...cle.com,
	yang.shi@...ux.alibaba.com, willy@...radead.org,
	Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
	Vladimir Davydov <vdavydov.dev@...il.com>, Roman Gushchin <guro@...com>,
	Shakeel Butt <shakeelb@...gle.com>, Chris Down <chris@...isdown.name>,
	Thomas Gleixner <tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
	Qian Cai <cai@....pw>, Andrey Ryabinin <aryabinin@...tuozzo.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Jérôme Glisse <jglisse@...hat.com>, Andrea Arcangeli <aarcange@...hat.com>,
	David Rientjes <rientjes@...gle.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>, swkhack <swkhack@...il.com>,
	"Potyra, Stefan" <Stefan.Potyra@...ktrobit.com>,
	Mike Rapoport <rppt@...ux.vnet.ibm.com>,
	Stephen Rothwell <sfr@...b.auug.org.au>,
	Colin Ian King <colin.king@...onical.com>, Jason Gunthorpe <jgg@...pe.ca>,
	Mauro Carvalho Chehab <mchehab+samsung@...nel.org>, Peng Fan <peng.fan@....com>,
	Nikolay Borisov <nborisov@...e.com>, Ira Weiny <ira.weiny@...el.com>,
	Kirill Tkhai <ktkhai@...tuozzo.com>, Yafang Shao <laoar.shao@...il.com>
Subject: Re: [PATCH v3 3/7] mm/lru: replace pgdat lru_lock with lruvec lock

On Sat, Nov 16, 2019 at 11:15:02AM +0800, Alex Shi wrote:
> @@ -192,26 +190,17 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
> 	void *arg)
> {
> 	int i;
> -	struct pglist_data *pgdat = NULL;
> -	struct lruvec *lruvec;
> -	unsigned long flags = 0;
> +	struct lruvec *lruvec = NULL;
> 
> 	for (i = 0; i < pagevec_count(pvec); i++) {
> 		struct page *page = pvec->pages[i];
> -		struct pglist_data *pagepgdat = page_pgdat(page);
> 
> -		if (pagepgdat != pgdat) {
> -			if (pgdat)
> -				spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> -			pgdat = pagepgdat;
> -			spin_lock_irqsave(&pgdat->lru_lock, flags);
> -		}
> +		lruvec = lock_page_lruvec_irqsave(page, page_pgdat(page));
> 
> -		lruvec = mem_cgroup_page_lruvec(page, pgdat);
> 		(*move_fn)(page, lruvec, arg);
> +		spin_unlock_irqrestore(&lruvec->lru_lock, lruvec->irqflags);
> 	}
> -	if (pgdat)
> -		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
> +
> 	release_pages(pvec->pages, pvec->nr);
> 	pagevec_reinit(pvec);
> }

Why can't you keep the locking pattern where we only drop and reacquire
if the lruvec changes?  It'd save a lot of locks and unlocks if most
pages were from the same memcg and node, or the memory controller were
unused.

Thanks for running the -readtwice benchmark, by the way.