Message-ID: <CAM9d7ciPHi27JwcCbCWAkHnFBn-6PRbpRjBJ1U=cfDN-UcthjA@mail.gmail.com>
Date: Tue, 9 Aug 2022 14:13:14 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] locking: Add __lockfunc to slow path functions
Hello,
On Tue, Aug 9, 2022 at 8:05 AM Waiman Long <longman@...hat.com> wrote:
>
> On 8/8/22 13:59, Namhyung Kim wrote:
> > So that we can skip these functions in perf lock contention and in
> > other places like /proc/PID/wchan.
> >
> > Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> > ---
> > kernel/locking/qrwlock.c | 4 ++--
> > kernel/locking/qspinlock.c | 2 +-
> > 2 files changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> > index 2e1600906c9f..d2ef312a8611 100644
> > --- a/kernel/locking/qrwlock.c
> > +++ b/kernel/locking/qrwlock.c
> > @@ -18,7 +18,7 @@
> > * queued_read_lock_slowpath - acquire read lock of a queued rwlock
> > * @lock: Pointer to queued rwlock structure
> > */
> > -void queued_read_lock_slowpath(struct qrwlock *lock)
> > +void __lockfunc queued_read_lock_slowpath(struct qrwlock *lock)
> > {
> > /*
> > * Readers come here when they cannot get the lock without waiting
> > @@ -63,7 +63,7 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
> > * queued_write_lock_slowpath - acquire write lock of a queued rwlock
> > * @lock : Pointer to queued rwlock structure
> > */
> > -void queued_write_lock_slowpath(struct qrwlock *lock)
> > +void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
> > {
> > int cnts;
> >
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index 65a9a10caa6f..2b23378775fe 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
> > @@ -313,7 +313,7 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
> > * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
> > * queue : ^--' :
> > */
> > -void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > +void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > {
> > struct mcs_spinlock *prev, *next, *node;
> > u32 old, tail;
>
>
> For completeness, I think you should also add it to the
> __pv_queued_spin_unlock() and __pv_queued_spin_unlock_slowpath()
> functions in kernel/locking/qspinlock_paravirt.h. Perhaps even to the
> assembly code in arch/x86/include/asm/qspinlock_paravirt.h.
Thanks for your comment. I'm not sure about the asm part; will this be
enough?
--- a/arch/x86/include/asm/qspinlock_paravirt.h
+++ b/arch/x86/include/asm/qspinlock_paravirt.h
@@ -36,7 +36,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__pv_queued_spin_unlock_slowpath);
* rsi = lockval (second argument)
* rdx = internal variable (set to 0)
*/
-asm (".pushsection .text;"
+asm (".pushsection .spinlock.text;"
".globl " PV_UNLOCK ";"
".type " PV_UNLOCK ", @function;"
".align 4,0x90;"
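
For reference, the reason .spinlock.text works for this kind of
filtering: the linker script (LOCK_TEXT in
include/asm-generic/vmlinux.lds.h) bounds the section with
__lock_text_start/__lock_text_end, and in_lock_functions() in
kernel/extable.c just checks that range, roughly:

int in_lock_functions(unsigned long addr)
{
	/* the linker emits these around everything marked __lockfunc */
	extern char __lock_text_start[], __lock_text_end[];

	return addr >= (unsigned long)__lock_text_start &&
	       addr < (unsigned long)__lock_text_end;
}

As I understand it, perf lock contention and /proc/PID/wchan can then
use the same boundary symbols to skip over these functions.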