Message-ID: <CAJuCfpHyKj0ma_2-TR3i6jFJit8exDTqDF9PHRG4E5yzkNjXLA@mail.gmail.com>
Date: Sat, 11 Jan 2025 01:59:41 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Hillf Danton <hdanton@...a.com>
Cc: akpm@...ux-foundation.org, peterz@...radead.org, willy@...radead.org,
hannes@...xchg.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited
On Fri, Jan 10, 2025 at 10:32 PM Hillf Danton <hdanton@...a.com> wrote:
>
> On Fri, 10 Jan 2025 20:25:57 -0800 Suren Baghdasaryan <surenb@...gle.com>
> > -bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
> > +bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
> > + int limit)
> > {
> > int old = refcount_read(r);
> >
> > do {
> > if (!old)
> > break;
> > +
> > + if (statically_true(limit == INT_MAX))
> > + continue;
> > +
> > + if (i > limit - old) {
> > + if (oldp)
> > + *oldp = old;
> > + return false;
> > + }
> > } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
>
> The acquire version should be used, see atomic_long_try_cmpxchg_acquire()
> in kernel/locking/rwsem.c.
This is how __refcount_add_not_zero() is already implemented; I'm
only adding support for a limit. If you think the existing relaxed
ordering is wrong, then IMHO that should be fixed separately.
>
> Why not use the atomic_long_t without bothering to add this limited version?
The check against the limit is not only for overflow protection but
also to avoid a refcount increment when the writer bit is set. It makes
the locking code simpler if we have a function that prevents
refcounting when the vma is detached (vm_refcnt==0) or when it's
write-locked (vm_refcnt>VMA_REF_LIMIT).
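To make the two failure modes concrete, here is a rough userspace
sketch of the semantics using C11 atomics. The function name, the
struct, and the VMA_REF_LIMIT value are illustrative stand-ins, not
the kernel implementation; error paths (the oldp out-parameter) are
omitted for brevity.

```c
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative limit; the kernel's actual value differs. */
#define VMA_REF_LIMIT 0x00ffffff

struct refcount { atomic_int refs; };

/* Model of the proposed limited add-not-zero: fail if the count is
 * zero (vma detached) or if adding i would push it past limit
 * (which is the case while the vma is write-locked). */
static bool add_not_zero_limited(int i, struct refcount *r, int limit)
{
	int old = atomic_load_explicit(&r->refs, memory_order_relaxed);

	do {
		if (!old)
			return false;  /* detached: count already zero */
		if (limit != INT_MAX && i > limit - old)
			return false;  /* old + i would exceed limit */
	} while (!atomic_compare_exchange_weak_explicit(&r->refs, &old,
							old + i,
							memory_order_relaxed,
							memory_order_relaxed));
	return true;
}
```

With limit == INT_MAX the second check compiles away, matching the
statically_true(limit == INT_MAX) fast path in the patch.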