Message-ID: <CAHbLzkr+EJWgAQ9VhAdeTtMx+11=AX=mVVEvC-0UihROf2J+PA@mail.gmail.com>
Date: Fri, 28 Jun 2019 10:16:13 -0700
From: Yang Shi <shy828301@...il.com>
To: Christopher Lameter <cl@...ux.com>
Cc: Roman Gushchin <guro@...com>, Waiman Long <longman@...hat.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Jonathan Corbet <corbet@....net>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH 2/2] mm, slab: Extend vm/drop_caches to shrink kmem slabs
On Fri, Jun 28, 2019 at 8:32 AM Christopher Lameter <cl@...ux.com> wrote:
>
> On Thu, 27 Jun 2019, Roman Gushchin wrote:
>
> > so that objects belonging to different memory cgroups can share the same page
> > and kmem_caches.
> >
> > It's a fairly big change though.
>
> Could this be done at another level? Put a cgroup pointer into the
> corresponding structures and then go back to just a single kmem_cache for
> the system as a whole? You can still account them per cgroup and there
> will be no cleanup problem anymore. You could scan through a slab cache
> to remove the objects of a certain cgroup, and then the fragmentation
> problem that cgroups create here will be handled by the slab allocators in
> the traditional way. The duplication of the kmem_cache was not designed
> into the allocators but bolted on later.
I'm afraid this may introduce another problem for memcg page reclaim.
When shrinking the slabs, the shrinker may end up scanning a very long
list to find the slab objects that belong to a specific memcg. The
count operation in particular may have to walk the list from the
beginning all the way to the end, so it may take unbounded time.

When I worked on the THP deferred split shrinker problem, I tried this
approach at first, but it turned out that counting the objects on the
list could take milliseconds, while only a few of them actually needed
to be reclaimed.
>