Message-ID: <20250205093752.GA7145@noisy.programming.kicks-ass.net>
Date: Wed, 5 Feb 2025 10:37:52 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
André Almeida <andrealmeid@...lia.com>,
Darren Hart <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <vschneid@...hat.com>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v8 08/15] futex: Prepare for reference counting of the
process private hash end of operation.
On Wed, Feb 05, 2025 at 08:54:05AM +0100, Sebastian Andrzej Siewior wrote:
> On 2025-02-04 10:49:22 [+0100], Peter Zijlstra wrote:
> > On Mon, Feb 03, 2025 at 02:59:28PM +0100, Sebastian Andrzej Siewior wrote:
> >
> > > @@ -555,11 +558,12 @@ struct futex_hash_bucket *futex_q_lock(struct futex_q *q)
> > > return hb;
> > > }
> > >
> > > -void futex_q_unlock(struct futex_hash_bucket *hb)
> > > +void futex_q_unlock_put(struct futex_hash_bucket *hb)
> > > __releases(&hb->lock)
> > > {
> > > futex_hb_waiters_dec(hb);
> > > spin_unlock(&hb->lock);
> > > + futex_hash_put(hb);
> > > }
> >
> > Here you don't
>
> unlock + put.
>
> > > @@ -288,23 +289,29 @@ extern void __futex_unqueue(struct futex_q *q);
> > > extern void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb);
> > > extern int futex_unqueue(struct futex_q *q);
> > >
> > > +static inline void futex_hb_unlock_put(struct futex_hash_bucket *hb)
> > > +{
> > > + spin_unlock(&hb->lock);
> > > + futex_hash_put(hb);
> > > +}
> > > +
> > > /**
> > > - * futex_queue() - Enqueue the futex_q on the futex_hash_bucket
> > > + * futex_queue_put() - Enqueue the futex_q on the futex_hash_bucket
> > > * @q: The futex_q to enqueue
> > > * @hb: The destination hash bucket
> > > *
> > > - * The hb->lock must be held by the caller, and is released here. A call to
> > > - * futex_queue() is typically paired with exactly one call to futex_unqueue(). The
> > > - * exceptions involve the PI related operations, which may use futex_unqueue_pi()
> > > - * or nothing if the unqueue is done as part of the wake process and the unqueue
> > > - * state is implicit in the state of woken task (see futex_wait_requeue_pi() for
> > > - * an example).
> > > + * The hb->lock must be held by the caller, and is released here and the reference
> > > + * on the hb is dropped. A call to futex_queue_put() is typically paired with
> > > + * exactly one call to futex_unqueue(). The exceptions involve the PI related
> > > + * operations, which may use futex_unqueue_pi() or nothing if the unqueue is
> > > + * done as part of the wake process and the unqueue state is implicit in the
> > > + * state of woken task (see futex_wait_requeue_pi() for an example).
> > > */
> > > -static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb)
> > > +static inline void futex_queue_put(struct futex_q *q, struct futex_hash_bucket *hb)
> > > __releases(&hb->lock)
> > > {
> > > __futex_queue(q, hb);
> > > - spin_unlock(&hb->lock);
> > > + futex_hb_unlock_put(hb);
> > > }
> >
> > And here you do.
>
> unlock + put. What am I not doing?
Use this futex_hb_unlock_put() helper consistently :-)
> > > @@ -380,11 +387,13 @@ double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
> > > }
> > >
> > > static inline void
> > > -double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
> > > +double_unlock_hb_put(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
> > > {
> > > spin_unlock(&hb1->lock);
> > > if (hb1 != hb2)
> > > spin_unlock(&hb2->lock);
> > > + futex_hash_put(hb1);
> > > + futex_hash_put(hb2);
> > > }
> > >
> >
> > This seems horribly inconsistent and makes my head hurt. Where are the
> > matching gets for double_lock_hb() ?
>
> There are in futex_hash().
Yeah, that took me a very long while to find. And also, ARGH at the
asymmetry of things.