Message-Id: <20250506171051.7c9a4bed6ff799e8060ca0d7@linux-foundation.org>
Date: Tue, 6 May 2025 17:10:51 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Jeongjun Park <aha310510@...il.com>, dennis@...nel.org, tj@...nel.org,
cl@...ux.com, jack@...e.cz, hughd@...gle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] lib/percpu_counter: fix data race in
__percpu_counter_limited_add()
On Tue, 6 May 2025 13:56:13 +0200 Mateusz Guzik <mjguzik@...il.com> wrote:
> > unknown = batch * num_online_cpus();
> > count = __this_cpu_read(*fbc->counters);
> >
> > @@ -344,11 +345,10 @@ bool __percpu_counter_limited_add(struct percpu_counter *fbc,
> > ((amount > 0 && fbc->count + unknown <= limit) ||
> > (amount < 0 && fbc->count - unknown >= limit))) {
> > this_cpu_add(*fbc->counters, amount);
> > - local_irq_restore(flags);
> > - return true;
> > + good = true;
> > + goto out;
> > }
> >
> > - raw_spin_lock(&fbc->lock);
> > count = fbc->count + amount;
> >
> > /* Skip percpu_counter_sum() when safe */
> > --
> >
>
> As this always takes the centralized lock in the fast path this defeats
> the point of using a per-cpu counter in the first place.
Well. It partially "defeats the point" if the client code actually
uses percpu_counter_limited_add(). Only shmem.c does that.
> I noted this thing is buggy almost a year ago:
> https://lore.kernel.org/linux-mm/5eemkb4lo5eefp7ijgncgogwmadyzmvjfjmmmvfiki6cwdskfs@hi2z4drqeuz6/
>
> per the e-mail I don't believe existence of this routine is warranted.
>
> shmem is still the only consumer.
Totally. It would be better to remove percpu_counter_limited_add() and
to implement its intended effect within shmem.c.