Message-ID: <20180730092513.GD2494@hirez.programming.kicks-ass.net>
Date:   Mon, 30 Jul 2018 11:25:13 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC tip/core/rcu] Avoid resched_cpu() when rescheduling
 the current CPU

On Fri, Jul 27, 2018 at 08:49:31AM -0700, Paul E. McKenney wrote:
> Hello, Peter,
> 
> It occurred to me that it is wasteful to let resched_cpu() acquire
> ->pi_lock when doing something like resched_cpu(smp_processor_id()),

rq->lock

> and that it would be better to instead use set_tsk_need_resched(current)
> and set_preempt_need_resched().
> 
> But is doing so really worthwhile?  For that matter, are there some
> constraints on the use of those two functions that I am failing to
> allow for in the patch below?


>     The resched_cpu() interface is quite handy, but it does acquire the
>     specified CPU's runqueue lock, which does not come for free.  This
>     commit therefore substitutes the following when directing resched_cpu()
>     at the current CPU:
>     
>             set_tsk_need_resched(current);
>             set_preempt_need_resched();

That is only a valid substitute for resched_cpu(smp_processor_id()).

But also note that this can cause more context switches than
resched_curr(), because it does not check whether TIF_NEED_RESCHED was
already set.

Something closer to resched_curr() on the current CPU would be:

	preempt_disable();
	if (!test_tsk_need_resched(current)) {
		set_tsk_need_resched(current);
		set_preempt_need_resched();
	}
	preempt_enable();

Where the preempt_enable() could of course instantly trigger the
reschedule if it was the outermost one.

> @@ -2674,10 +2675,12 @@ static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused

> -		resched_cpu(rdp->cpu); /* Provoke future context switch. */

> +		set_tsk_need_resched(current);
> +		set_preempt_need_resched();

That's not obviously correct. rdp->cpu had better be smp_processor_id().

> @@ -672,7 +672,8 @@ static void sync_rcu_exp_handler(void *unused)
>  			rcu_report_exp_rdp(rdp);
>  		} else {
>  			rdp->deferred_qs = true;
> -			resched_cpu(rdp->cpu);
> +			set_tsk_need_resched(t);
> +			set_preempt_need_resched();

That only works if @t == current.

>  		}
>  		return;
>  	}

> -	else
> -		resched_cpu(rdp->cpu);
> +	} else {
> +		set_tsk_need_resched(t);
> +		set_preempt_need_resched();

Similar...

>  }

> @@ -791,8 +791,10 @@ static void rcu_flavor_check_callbacks(int user)
>  	if (t->rcu_read_lock_nesting > 0 ||
>  	    (preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK))) {
>  		/* No QS, force context switch if deferred. */
> -		if (rcu_preempt_need_deferred_qs(t))
> -			resched_cpu(smp_processor_id());
> +		if (rcu_preempt_need_deferred_qs(t)) {
> +			set_tsk_need_resched(t);
> +			set_preempt_need_resched();
> +		}

And another dodgy one..
