Message-ID: <20231106111104.GK8262@noisy.programming.kicks-ass.net>
Date: Mon, 6 Nov 2023 12:11:04 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Alexander Aring <aahringo@...hat.com>
Cc: will@...nel.org, gfs2@...ts.linux.dev, boqun.feng@...il.com,
mark.rutland@....com, linux-kernel@...r.kernel.org
Subject: Re: [RFC 1/2] refcount: introduce generic lockptr funcs
On Fri, Nov 03, 2023 at 03:20:08PM -0400, Alexander Aring wrote:
> Hi,
>
> On Fri, Nov 3, 2023 at 2:54 PM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Fri, Nov 03, 2023 at 12:16:34PM -0400, Alexander Aring wrote:
> >
> > > diff --git a/lib/refcount.c b/lib/refcount.c
> > > index a207a8f22b3c..e28678f0f473 100644
> > > --- a/lib/refcount.c
> > > +++ b/lib/refcount.c
> > > @@ -94,6 +94,34 @@ bool refcount_dec_not_one(refcount_t *r)
> > > }
> > > EXPORT_SYMBOL(refcount_dec_not_one);
> > >
> > > +bool refcount_dec_and_lockptr(refcount_t *r, void (*lock)(void *lockptr),
> > > + void (*unlock)(void *lockptr), void *lockptr)
> > > +{
> > > + if (refcount_dec_not_one(r))
> > > + return false;
> > > +
> > > + lock(lockptr);
> > > + if (!refcount_dec_and_test(r)) {
> > > + unlock(lockptr);
> > > + return false;
> > > + }
> > > +
> > > + return true;
> > > +}
> > > +EXPORT_SYMBOL(refcount_dec_and_lockptr);
> >
> > This is terrible, you're forcing indirect calls on everything.
> >
>
> Okay, I see. How about introducing a macro producing all the code at
> preprocessor time?
__always_inline should work, then you get constant propagation for the
function pointer.
But indeed, perhaps a macro is more convenient vs the irq flags
argument. You'll then end up with something like:
#define __refcount_dec_and_lock(_ref, _lock, _unlock)		\
({	bool _ret = false;					\
	if (!refcount_dec_not_one(_ref)) {			\
		_lock;						\
		if (!refcount_dec_and_test(_ref)) {		\
			_unlock;				\
		} else {					\
			_ret = true;				\
		}						\
	}							\
	_ret;							\
})
bool refcount_dec_and_spinlock_irqsave(refcount_t *r, spinlock_t *lock,
				       unsigned long *flags)
{
	return __refcount_dec_and_lock(r, spin_lock_irqsave(lock, *flags),
				       spin_unlock_irqrestore(lock, *flags));
}