Message-ID: <20160614115820.GD30921@twins.programming.kicks-ass.net>
Date:	Tue, 14 Jun 2016 13:58:20 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Clark Williams <williams@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Nick Piggin <nickpiggin@...oo.com.au>
Subject: Re: [PATCH] sched: Do not release current rq lock on non contended
 double_lock_balance()

On Mon, Jun 13, 2016 at 12:37:32PM -0400, Steven Rostedt wrote:
> The solution was to simply release the current (this_rq) lock and then
> take both locks.
> 
> 	spin_unlock(&this_rq->lock);
> 	double_rq_lock(this_rq, busiest);

> What I could not understand about Gregory's patch is that regardless of
> contention, the currently held lock is always released, opening up a
> window for this ping ponging to occur. When I changed the code to only
> release on contention of the second lock, things improved tremendously.

It's simpler to reason about and there wasn't a problem with it at the time.

The above puts a strict limit on hold time and is fair because of the
queueing.
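
For reference, double_rq_lock() looks roughly like the below (a sketch of
that era's kernel/sched/sched.h, not verbatim); both locks are taken in a
fixed address order, and since they are ticket locks you simply queue FIFO
behind anyone already waiting, which is where the fairness comes from:

	/*
	 * Sketch of double_rq_lock(): lock both runqueues in a fixed
	 * (address) order so two CPUs doing this cannot deadlock; each
	 * raw_spin_lock() queues FIFO on the ticket.
	 */
	static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
		__acquires(rq1->lock)
		__acquires(rq2->lock)
	{
		BUG_ON(!irqs_disabled());
		if (rq1 == rq2) {
			raw_spin_lock(&rq1->lock);
			__acquire(rq2->lock);	/* fake it for sparse */
		} else if (rq1 < rq2) {
			raw_spin_lock(&rq1->lock);
			raw_spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
		} else {
			raw_spin_lock(&rq2->lock);
			raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
		}
	}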

> +++ b/kernel/sched/sched.h
> @@ -1548,10 +1548,15 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
>  	__acquires(busiest->lock)
>  	__acquires(this_rq->lock)
>  {
> +	int ret = 0;
> +
> +	if (unlikely(!raw_spin_trylock(&busiest->lock))) {
> +		raw_spin_unlock(&this_rq->lock);
> +		double_rq_lock(this_rq, busiest);
> +		ret = 1;
> +	}
>  
> +	return ret;
>  }

This relies on trylock not being allowed to steal the lock, which I think
is true for all fair spinlocks (for ticket locks this must be true, but
stealing is possible with qspinlock, for example).
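
To make "steal" concrete, here is a minimal userspace-style sketch of a
ticket trylock (illustrative only, not the kernel's implementation): it can
only succeed when no ticket is outstanding, so it can never overtake a
queued waiter:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	/* Illustrative ticket lock: 'next' is the next ticket to hand
	 * out, 'owner' is the ticket currently being served. */
	struct ticket_lock {
		_Atomic uint32_t next;
		_Atomic uint32_t owner;
	};

	/*
	 * Succeeds only when next == owner, i.e. the lock is free and
	 * nobody is queued; a waiter's ticket can therefore never be
	 * overtaken (no stealing), preserving FIFO order.
	 */
	static bool ticket_trylock(struct ticket_lock *l)
	{
		uint32_t owner = atomic_load(&l->owner);
		uint32_t expected = owner;

		return atomic_compare_exchange_strong(&l->next, &expected,
						      owner + 1);
	}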

And it does indeed make the hold time harder to analyze.

For instance; pull_rt_task() does:

	for_each_cpu() {
		double_lock_balance(this, that);
		...
		double_unlock_balance(this, that);
	}

Which, with the trylock, ends up with a max possible hold time of
O(nr_cpus).

Unlikely, sure, but RT is a game of upper bounds etc.

So should we maybe do something like:

	if (unlikely(raw_spin_is_contended(&this_rq->lock) ||
	             !raw_spin_trylock(&busiest->lock))) {
		raw_spin_unlock(&this_rq->lock);
		double_rq_lock(this_rq, busiest);
		ret = 1;
	}

?
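
That is, roughly (a sketch only, folding the check into your patched
helper, with the __releases/__acquires annotations as in the existing
code):

	static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
		__releases(this_rq->lock)
		__acquires(busiest->lock)
		__acquires(this_rq->lock)
	{
		int ret = 0;

		/*
		 * Give up this_rq->lock if someone is already spinning on
		 * it, or if we cannot get busiest->lock without spinning
		 * ourselves; either way, fall back to the ordered (and
		 * fair) double_rq_lock() slow path.
		 */
		if (unlikely(raw_spin_is_contended(&this_rq->lock) ||
			     !raw_spin_trylock(&busiest->lock))) {
			raw_spin_unlock(&this_rq->lock);
			double_rq_lock(this_rq, busiest);
			ret = 1;
		}

		return ret;
	}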

> 	CPU 0				CPU 1
> 	-----				-----
>     [ wake up ]
> 				     spin_lock(cpu1_rq->lock);
>     spin_lock(cpu1_rq->lock)
> 				    double_lock_balance()
> 				    [ release cpu1_rq->lock ]
> 				    spin_lock(cpu1_rq->lock)
>     [due to ticket, now acquires
>      cpu1_rq->lock ]
> 
>     [goes to push task]
>     double_lock_balance()
>     [ release cpu1_rq->lock ]
>                                    [ acquires lock ]
> 				   spin_lock(cpu2_rq->lock)
> 				   [ blocks as cpu2 is using it ]
> 

Also, it's not entirely clear this scenario helps illustrate how your
change is better, because here the lock _is_ contended, so we'll fail
the trylock, no?
