Message-ID: <ZJU708VIyJ/3StAX@dread.disaster.area>
Date: Fri, 23 Jun 2023 16:29:39 +1000
From: Dave Chinner <david@...morbit.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Qi Zheng <zhengqi.arch@...edance.com>, akpm@...ux-foundation.org,
tkhai@...ru, roman.gushchin@...ux.dev, djwong@...nel.org,
brauner@...nel.org, paulmck@...nel.org, tytso@....edu,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-arm-msm@...r.kernel.org, dm-devel@...hat.com,
linux-raid@...r.kernel.org, linux-bcache@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-nfs@...r.kernel.org, linux-xfs@...r.kernel.org,
linux-btrfs@...r.kernel.org
Subject: Re: [PATCH 24/29] mm: vmscan: make global slab shrink lockless
On Thu, Jun 22, 2023 at 05:12:02PM +0200, Vlastimil Babka wrote:
> On 6/22/23 10:53, Qi Zheng wrote:
> > @@ -1067,33 +1068,27 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> > if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
> > return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
> >
> > - if (!down_read_trylock(&shrinker_rwsem))
> > - goto out;
> > -
> > - list_for_each_entry(shrinker, &shrinker_list, list) {
> > + rcu_read_lock();
> > + list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
> > struct shrink_control sc = {
> > .gfp_mask = gfp_mask,
> > .nid = nid,
> > .memcg = memcg,
> > };
> >
> > + if (!shrinker_try_get(shrinker))
> > + continue;
> > + rcu_read_unlock();
>
> I don't think you can do this unlock?
>
> > +
> > ret = do_shrink_slab(&sc, shrinker, priority);
> > if (ret == SHRINK_EMPTY)
> > ret = 0;
> > freed += ret;
> > - /*
> > - * Bail out if someone want to register a new shrinker to
> > - * prevent the registration from being stalled for long periods
> > - * by parallel ongoing shrinking.
> > - */
> > - if (rwsem_is_contended(&shrinker_rwsem)) {
> > - freed = freed ? : 1;
> > - break;
> > - }
> > - }
> >
> > - up_read(&shrinker_rwsem);
> > -out:
> > + rcu_read_lock();
>
> That new rcu_read_lock() won't help AFAIK, the whole
> list_for_each_entry_rcu() needs to be under the single rcu_read_lock() to be
> safe.
Yeah, that's the pattern we've been taught and the one we can look
at and immediately say "this is safe".

This is a different pattern, as has been explained by Qi, and I
think it *might* be safe.

*However.*

Right now I don't have the time to go through a novel RCU list
iteration pattern one step at a time to determine the correctness
of the algorithm. I'm mostly worried about list manipulations that
can occur outside the rcu_read_lock() section bleeding into the RCU
critical section, because rcu_read_lock() by itself is not a memory
barrier.

Maybe Paul has seen this pattern often enough that he could simply
tell us what conditions it is safe under. But for me to work that
out from first principles? I just don't have the time to do that
right now.
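For concreteness, the shape of the iteration being proposed is, as
I read the patch, roughly the following (a condensed sketch; the
shrinker_put() placement is my reading of the series, not quoted
from it):

	rcu_read_lock();
	list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
		if (!shrinker_try_get(shrinker))	/* pin this entry */
			continue;
		rcu_read_unlock();		/* leave the RCU section */

		do_shrink_slab(&sc, shrinker, priority);

		rcu_read_lock();		/* re-enter it... */
		shrinker_put(shrinker);		/* ...and drop the pin */
	}
	rcu_read_unlock();

It's that window between the unlock and the re-lock, where the list
can be modified concurrently, that I'd have to convince myself
cannot feed a stale ->next pointer back into the iteration.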
> IIUC this is why Dave in [4] suggests unifying shrink_slab() with
> shrink_slab_memcg(), as the latter doesn't iterate the list but uses IDR.
Yes, I suggested the IDR route because radix tree lookups under RCU
with reference counted objects are a known safe pattern that we can
easily confirm is correct. Hence I suggested the unification + IDR
route, because it makes the life of reviewers so, so much
easier...
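The lookup side of that pattern is easy to audit at a glance,
something like this (illustrative names only, not the actual
shrinker code):

	struct object *obj;

	rcu_read_lock();
	obj = idr_find(&object_idr, id);
	if (obj && !refcount_inc_not_zero(&obj->refcount))
		obj = NULL;		/* already dying, skip it */
	rcu_read_unlock();

	if (obj) {
		do_something(obj);	/* object pinned by the refcount */
		object_put(obj);	/* final put frees after a grace period */
	}

One RCU section, one try-get, and the object lifetime rules are
obvious to the reader.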
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com