Date: Tue, 7 Aug 2018 22:39:50 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Stephen Rothwell <sfr@...b.auug.org.au>
Cc: Kirill Tkhai <ktkhai@...tuozzo.com>,
Andrew Morton <akpm@...ux-foundation.org>,
gregkh@...uxfoundation.org, rafael@...nel.org,
Alexander Viro <viro@...iv.linux.org.uk>,
"Darrick J. Wong" <darrick.wong@...cle.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
josh@...htriplett.org, Steven Rostedt <rostedt@...dmis.org>,
mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
Hugh Dickins <hughd@...gle.com>, shuah@...nel.org,
robh@...nel.org, ulf.hansson@...aro.org, aspriel@...il.com,
vivek.gautam@...eaurora.org, robin.murphy@....com, joe@...ches.com,
heikki.krogerus@...ux.intel.com,
Vladimir Davydov <vdavydov.dev@...il.com>,
Michal Hocko <mhocko@...e.com>,
Chris Wilson <chris@...is-wilson.co.uk>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Matthew Wilcox <willy@...radead.org>,
Huang Ying <ying.huang@...el.com>, jbacik@...com,
Ingo Molnar <mingo@...nel.org>, mhiramat@...nel.org,
LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH RFC 00/10] Introduce lockless shrink_slab()
On Tue, Aug 7, 2018 at 6:12 PM Stephen Rothwell <sfr@...b.auug.org.au> wrote:
>
> Hi Kirill,
>
> On Tue, 07 Aug 2018 18:37:19 +0300 Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
> >
> > After bitmaps of not-empty memcg shrinkers were implemented
> > (see "[PATCH v9 00/17] Improve shrink_slab() scalability..."
> > series, which is already in mm tree), all the evil in perf
> > trace has moved from shrink_slab() to down_read_trylock().
> > As reported by Shakeel Butt:
> >
> > > I created 255 memcgs and 255 ext4 mounts, and made each memcg create a
> > > file containing a few KiB on the corresponding mount. Then, in a separate
> > > memcg with a 200 MiB limit, I ran a fork-bomb.
> > >
> > > I ran "perf record -ag -- sleep 60"; the results are below:
> > > + 47.49% fb.sh [kernel.kallsyms] [k] down_read_trylock
> > > + 30.72% fb.sh [kernel.kallsyms] [k] up_read
> > > + 9.51% fb.sh [kernel.kallsyms] [k] mem_cgroup_iter
> > > + 1.69% fb.sh [kernel.kallsyms] [k] shrink_node_memcg
> > > + 1.35% fb.sh [kernel.kallsyms] [k] mem_cgroup_protected
> > > + 1.05% fb.sh [kernel.kallsyms] [k] queued_spin_lock_slowpath
> > > + 0.85% fb.sh [kernel.kallsyms] [k] _raw_spin_lock
> > > + 0.78% fb.sh [kernel.kallsyms] [k] lruvec_lru_size
> > > + 0.57% fb.sh [kernel.kallsyms] [k] shrink_node
> > > + 0.54% fb.sh [kernel.kallsyms] [k] queue_work_on
> > > + 0.46% fb.sh [kernel.kallsyms] [k] shrink_slab_memcg
> >
> > The patchset continues to improve shrink_slab() scalability and makes
> > it lockless completely. Here are several steps for that:
>
> So do you have any numbers for after these changes?
>
I will do the same experiment as before with these patches sometime
this or next week.
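
For reference, the setup was roughly the following (a sketch, not the exact
script I used: image paths, mount points, and memcg names are illustrative,
and it assumes root, perf, and a cgroup-v1 memory controller mounted at
/sys/fs/cgroup/memory):

```shell
#!/bin/sh
# Sketch of the reproduction (illustrative only).
CG=/sys/fs/cgroup/memory

for i in $(seq 1 255); do
    # One small ext4 image and mount per memcg.
    dd if=/dev/zero of=/tmp/img$i bs=1M count=8 2>/dev/null
    mkfs.ext4 -q /tmp/img$i
    mkdir -p /mnt/ext4-$i "$CG/memcg$i"
    mount -o loop /tmp/img$i /mnt/ext4-$i

    # Create a few-KiB file from within memcg$i so its page cache and
    # inode/dentry objects are charged to that memcg.
    sh -c "echo \$\$ > $CG/memcg$i/cgroup.procs &&
           dd if=/dev/zero of=/mnt/ext4-$i/file bs=1K count=4 2>/dev/null"
done

# Fork-bomb (fb.sh) in a separate memcg limited to 200 MiB, so it hits
# the limit and drives memcg reclaim (and hence shrink_slab) constantly.
mkdir -p "$CG/fb"
echo $((200 * 1024 * 1024)) > "$CG/fb/memory.limit_in_bytes"
sh -c "echo \$\$ > $CG/fb/cgroup.procs && exec ./fb.sh" &

# Profile system-wide for 60 seconds while the fork-bomb runs.
perf record -ag -- sleep 60
perf report --stdio | head
```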

BTW Kirill, thanks for pushing this.

regards,
Shakeel