Message-ID: <Zd8sDKX8XtdrMuMb@gmail.com>
Date: Wed, 28 Feb 2024 13:50:20 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Waiman Long <longman@...hat.com>
Cc: Namhyung Kim <namhyung@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] locking/percpu-rwsem: Trigger contention tracepoints only if contended

* Waiman Long <longman@...hat.com> wrote:
>
> On 2/27/24 18:02, Namhyung Kim wrote:
> > Hello,
> >
> > On Mon, Nov 20, 2023 at 12:28 PM Namhyung Kim <namhyung@...nel.org> wrote:
> > > Ping!
> > >
> > > On Wed, Nov 8, 2023 at 1:53 PM Namhyung Kim <namhyung@...nel.org> wrote:
> > > > The writer path mistakenly fires the lock contention tracepoints unconditionally.
> > > > They should be conditional on the trylock result.
> > Can anybody take a look at this? It adds a lot of noise to
> > the lock contention results.
> >
> > Thanks,
> > Namhyung
> >
> > > > Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> > > > ---
> > > > kernel/locking/percpu-rwsem.c | 11 ++++++++---
> > > > 1 file changed, 8 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> > > > index 185bd1c906b0..6083883c4fe0 100644
> > > > --- a/kernel/locking/percpu-rwsem.c
> > > > +++ b/kernel/locking/percpu-rwsem.c
> > > > @@ -223,9 +223,10 @@ static bool readers_active_check(struct percpu_rw_semaphore *sem)
> > > >
> > > >  void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > > >  {
> > > > +        bool contended = false;
> > > > +
> > > >          might_sleep();
> > > >          rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
> > > > -        trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> > > >
> > > >          /* Notify readers to take the slow path. */
> > > >          rcu_sync_enter(&sem->rss);
> > > > @@ -234,8 +235,11 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > > >           * Try set sem->block; this provides writer-writer exclusion.
> > > >           * Having sem->block set makes new readers block.
> > > >           */
> > > > -        if (!__percpu_down_write_trylock(sem))
> > > > +        if (!__percpu_down_write_trylock(sem)) {
> > > > +                trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
> > > >                  percpu_rwsem_wait(sem, /* .reader = */ false);
> > > > +                contended = true;
> > > > +        }
> > > >
> > > >          /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
> > > >
> > > > @@ -247,7 +251,8 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
> > > >
> > > >          /* Wait for all active readers to complete. */
> > > >          rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
> > > > -        trace_contention_end(sem, 0);
> > > > +        if (contended)
> > > > +                trace_contention_end(sem, 0);
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(percpu_down_write);
> > > >
> > > > --
> > > > 2.42.0.869.gea05f2083d-goog
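
For reference, here is roughly how the writer path reads with the patch applied; this is only a sketch reconstructed from the hunks above, with the unchanged code between the hunks elided:

    void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
    {
            bool contended = false;

            might_sleep();
            rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);

            /* Notify readers to take the slow path. */
            rcu_sync_enter(&sem->rss);

            /* ... unchanged lines between the first and second hunk elided ... */

            /*
             * Try set sem->block; this provides writer-writer exclusion.
             * Having sem->block set makes new readers block.
             */
            if (!__percpu_down_write_trylock(sem)) {
                    /* Slow path: only now is there real contention to report. */
                    trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
                    percpu_rwsem_wait(sem, /* .reader = */ false);
                    contended = true;
            }

            /* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */

            /* ... unchanged lines between the second and third hunk elided ... */

            /* Wait for all active readers to complete. */
            rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
            if (contended)
                    trace_contention_end(sem, 0);
    }
    EXPORT_SYMBOL_GPL(percpu_down_write);

The net effect is that an uncontended percpu_down_write() emits no begin/end pair at all; the tracepoints fire only when __percpu_down_write_trylock() fails.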
>
> Yes, that makes sense. Sorry for missing this patch.
>
> Reviewed-by: Waiman Long <longman@...hat.com>
Applied to tip:locking/core, thanks guys!
Ingo