Message-ID: <22def35f-5b8d-4424-a03b-c90e9174a14d@redhat.com>
Date: Mon, 13 May 2024 16:09:22 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: linux-kernel@...r.kernel.org,
Valentin Schneider <valentin.schneider@....com>,
Marco Elver <elver@...gle.com>, Frederic Weisbecker <frederic@...nel.org>,
David Matlack <dmatlack@...gle.com>
Subject: Re: [PATCH] sched/core: Drop spinlocks on contention iff kernel is
preemptible
On 1/10/24 22:47, Sean Christopherson wrote:
> Use preempt_model_preemptible() to detect a preemptible kernel when
> deciding whether or not to reschedule in order to drop a contended
> spinlock or rwlock. Because PREEMPT_DYNAMIC selects PREEMPTION, kernels
> built with PREEMPT_DYNAMIC=y will yield contended locks even if the live
> preemption model is "none" or "voluntary". In short, make kernels with
> dynamically selected models behave the same as kernels with statically
> selected models.
Peter, looks like this patch fell through the cracks. Could this be
applied for 6.10?
There is a slightly confusing line in the commit message below, which makes it
read more like an RFC; but the patch fixes a CONFIG_PREEMPT_DYNAMIC
regression wrt static preemption models and has no functional change for
!CONFIG_PREEMPT_DYNAMIC.
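To make the effect concrete, here is a minimal sketch, loosely modeled on the
cond_resched_lock()-style lock breaking that spin_needbreak() feeds into
(zap_range_locked() and zap_one() are made-up names, not real kernel code).
With the patch, the lock is only dropped when the live preemption model is
preemptible, i.e. a PREEMPT_DYNAMIC=y kernel running "none"/"voluntary" keeps
holding the lock just like a statically built one:

	#include <linux/spinlock.h>
	#include <linux/sched.h>

	static void zap_one(unsigned long idx);	/* hypothetical per-unit work */

	/* Hypothetical lock-break loop over [start, end). */
	static void zap_range_locked(spinlock_t *lock, unsigned long start,
				     unsigned long end)
	{
		unsigned long cur;

		spin_lock(lock);
		for (cur = start; cur < end; cur++) {
			zap_one(cur);

			/*
			 * After the patch, spin_needbreak() returns 0 unless
			 * the live model is preemptible, so this loop runs to
			 * completion on "none"/"voluntary" even when the lock
			 * is contended.
			 */
			if (spin_needbreak(lock) || need_resched()) {
				spin_unlock(lock);
				cond_resched();
				spin_lock(lock);
			}
		}
		spin_unlock(lock);
	}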
> Somewhat counter-intuitively, NOT yielding a lock can provide better
> latency for the relevant tasks/processes. E.g. KVM x86's mmu_lock, a
> rwlock, is often contended between an invalidation event (takes mmu_lock
> for write) and a vCPU servicing a guest page fault (takes mmu_lock for
> read). For _some_ setups, letting the invalidation task complete even
> if there is mmu_lock contention provides lower latency for *all* tasks,
> i.e. the invalidation completes sooner *and* the vCPU services the guest
> page fault sooner.
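(Just to make that scenario concrete: a rough sketch with hypothetical names,
not KVM's actual code, of the read-side pattern in question. A vCPU fault
handler holding mmu_lock for read yields when a writer, i.e. the invalidation,
is waiting; whether taking this break helps or hurts end-to-end latency is
exactly the setup-dependent point being made above.)

	#include <linux/spinlock.h>
	#include <linux/sched.h>

	/*
	 * Hypothetical helper: drop a contended mmu_lock held for read so a
	 * pending writer can make progress.  Returns true if the lock was
	 * dropped and the caller must restart its walk.
	 */
	static bool fault_path_yield_if_contended(rwlock_t *mmu_lock)
	{
		if (rwlock_needbreak(mmu_lock) || need_resched()) {
			read_unlock(mmu_lock);
			cond_resched();
			read_lock(mmu_lock);
			return true;
		}
		return false;
	}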
>
> But even KVM's mmu_lock behavior isn't uniform, e.g. the "best" behavior
> can vary depending on the host VMM, the guest workload, the number of
> vCPUs, the number of pCPUs in the host, why there is lock contention, etc.
>
> In other words, simply deleting the CONFIG_PREEMPTION guard (or doing the
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This should be "deleting the preempt_model_preemptible() guard", given
that the patch already deletes the CONFIG_PREEMPTION check and leaves
only preempt_model_preemptible() in place.
Thanks,
Paolo
> opposite and removing contention yielding entirely) needs to come with a
> big pile of data proving that changing the status quo is a net positive.
>
> Cc: Valentin Schneider <valentin.schneider@....com>
> Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Marco Elver <elver@...gle.com>
> Cc: Frederic Weisbecker <frederic@...nel.org>
> Cc: David Matlack <dmatlack@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> include/linux/sched.h | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 03bfe9ab2951..51550e64ce14 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2229,11 +2229,10 @@ static inline bool preempt_model_preemptible(void)
> */
> static inline int spin_needbreak(spinlock_t *lock)
> {
> -#ifdef CONFIG_PREEMPTION
> + if (!preempt_model_preemptible())
> + return 0;
> +
> return spin_is_contended(lock);
> -#else
> - return 0;
> -#endif
> }
>
> /*
> @@ -2246,11 +2245,10 @@ static inline int spin_needbreak(spinlock_t *lock)
> */
> static inline int rwlock_needbreak(rwlock_t *lock)
> {
> -#ifdef CONFIG_PREEMPTION
> + if (!preempt_model_preemptible())
> + return 0;
> +
> return rwlock_is_contended(lock);
> -#else
> - return 0;
> -#endif
> }
>
> static __always_inline bool need_resched(void)
>
> base-commit: cdb3033e191fd03da2d7da23b9cd448dfa180a8e