Message-ID: <20150527090922.GW3644@twins.programming.kicks-ass.net>
Date: Wed, 27 May 2015 11:09:22 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, Linus Torvalds <torvalds@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH RESEND] sched/preempt: fix cond_resched_lock() and
cond_resched_softirq()
On Thu, May 14, 2015 at 07:23:27PM +0300, Konstantin Khlebnikov wrote:
> These functions check should_resched() before unlocking the spinlock /
> re-enabling bottom halves: at that point preempt_count is always non-zero,
> so should_resched() always returns false. As a result cond_resched_lock()
> reschedules only when spin_needbreak is set.
>
> This patch adds an argument "preempt_offset" to should_resched() and
> rearranges the preempt_count offset constants for that:
>
> PREEMPT_OFFSET - offset after preempt_disable() (0 if CONFIG_PREEMPT_COUNT=n)
> PREEMPT_LOCK_OFFSET - offset after spin_lock() (alias for PREEMPT_OFFSET)
> SOFTIRQ_DISABLE_OFFSET - offset after local_bh_disable()
> SOFTIRQ_LOCK_OFFSET - offset after spin_lock_bh()
>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Sorry, but it doesn't apply anymore because of that whole
pagefault_disable() muck we merged.
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 48d3c5d2ecc9..6e73b74c0c60 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2177,7 +2177,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> vc->runner = vcpu;
> if (n_ceded == vc->n_runnable) {
> kvmppc_vcore_blocked(vc);
> - } else if (should_resched()) {
> + } else if (should_resched(PREEMPT_LOCK_OFFSET)) {
I'm thinking this wants to be: need_resched() ?
> vc->vcore_state = VCORE_PREEMPT;
> /* Let something else run */
> cond_resched_lock(&vc->lock);
> diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
> index eb6f9e6c3075..e91fb799a6da 100644
> --- a/include/asm-generic/preempt.h
> +++ b/include/asm-generic/preempt.h
> @@ -71,9 +71,9 @@ static __always_inline bool __preempt_count_dec_and_test(void)
> /*
> * Returns true when we need to resched and can (barring IRQ state).
> */
> -static __always_inline bool should_resched(void)
> +static __always_inline bool should_resched(int offset)
> {
> - return unlikely(!preempt_count() && tif_need_resched());
> + return unlikely(preempt_count() == offset && tif_need_resched());
> }
>
> #ifdef CONFIG_PREEMPT
So the reason I held off on this patch for a wee bit is because I don't
like the should_resched() change you did; although I fully understand
why you did it.
That said, I could not come up with anything better either and I suppose
that once we fix that ppc-kvm user, there really isn't a user left
outside of core code and thus we can deal with a slightly dangerous
function.
I did not really look, but it would be good if we could also get rid of
the Xen usage.