Message-ID: <20230904151058.GB25717@noisy.programming.kicks-ass.net>
Date: Mon, 4 Sep 2023 17:10:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Bongkyu Kim <bongkyu7.kim@...sung.com>
Cc: mingo@...hat.com, will@...nel.org, longman@...hat.com,
boqun.feng@...il.com, linux-kernel@...r.kernel.org,
gregkh@...uxfoundation.org, kernel test robot <lkp@...el.com>
Subject: Re: [PATCH v2 2/2] locking/rwsem: Make reader optimistic spinning optional
On Fri, Sep 01, 2023 at 10:07:04AM +0900, Bongkyu Kim wrote:
> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index 9c0462d515c1..47c467880af5 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -117,6 +117,17 @@
> # define DEBUG_RWSEMS_WARN_ON(c, sem)
> #endif
>
> +static bool __ro_after_init rwsem_opt_rspin;
> +
> +static int __init opt_rspin(char *str)
> +{
> + rwsem_opt_rspin = true;
> +
> + return 0;
> +}
> +
> +early_param("rwsem.opt_rspin", opt_rspin);
> +
> /*
> * On 64-bit architectures, the bit definitions of the count are:
> *
> @@ -1083,7 +1094,7 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
> return false;
> }
>
> -static inline bool rwsem_no_spinners(sem)
> +static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
> {
> return false;
> }
> @@ -1157,6 +1168,9 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
> return sem;
> }
>
> + if (!IS_ENABLED(CONFIG_RWSEM_SPIN_ON_OWNER) || !rwsem_opt_rspin)
> + goto queue;
> +
At the very least this should be a static_branch(), but I still very
much want an answer on how all this interacts with the handoff stuff.
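
For illustration, a hedged sketch of the static_branch() conversion being
suggested (not the posted patch, and the helper names below are made up for
the example): the plain bool becomes a static key, so the disabled case is a
patched-out branch rather than a memory load on every slowpath entry. One
caveat worth hedging on: early_param handlers run before jump_label_init(),
so flipping the key directly from the handler would trip
STATIC_KEY_CHECK_USE; a common pattern is to record the request in a bool
and enable the key from an initcall.

```c
/*
 * Sketch only, under the assumptions above: boot-time opt-in via a
 * static key instead of a plain __ro_after_init bool.
 */
static DEFINE_STATIC_KEY_FALSE(rwsem_opt_rspin_key);
static bool __initdata rwsem_opt_rspin_req;

static int __init opt_rspin(char *str)
{
	/* Only record the request; static keys can't be flipped this early. */
	rwsem_opt_rspin_req = true;
	return 0;
}
early_param("rwsem.opt_rspin", opt_rspin);

static int __init rwsem_opt_rspin_init(void)
{
	if (rwsem_opt_rspin_req)
		static_branch_enable(&rwsem_opt_rspin_key);
	return 0;
}
early_initcall(rwsem_opt_rspin_init);
```

The gate in rwsem_down_read_slowpath() would then read something like:

```c
	if (!IS_ENABLED(CONFIG_RWSEM_SPIN_ON_OWNER) ||
	    !static_branch_unlikely(&rwsem_opt_rspin_key))
		goto queue;
```

With the key default-off, static_branch_unlikely() compiles the spinning
path out of line and the common (disabled) case falls straight through to
queue, which is the cost argument behind the static_branch() suggestion.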