Date:	Tue, 04 Jan 2011 12:27:00 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	pmorreale@...ell.com
Cc:	Gregory Haskins <ghaskins@...ell.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org
Subject: Re: [RFC][RT][PATCH 3/4] rtmutex: Revert Optimize rt lock wakeup

On Tue, 2011-01-04 at 10:15 -0700, Peter W. Morreale wrote:

> My bad.  I thought preemption did change task state.
> 
> This still requires the owner to run through try_to_wake_up() and all
> its associated overhead only to find out that the waiter is running.  
> 
> The assumption I made when I suggested the original concept to Greg was
> that if the new owner is running, there is *nothing* to do wrt
> scheduling.  If that was a wrong assumption, then, yes, drop the patch
> and clean things up.  
> 
> If that was a good assumption, then we are leaving 'cycles on the table'
> as waking up a running process is a non-zero-overhead path and that is a
> bad thing considering how many times spin_unlock() is called on an rt
> system.
> 
> Bear in mind that this saving scales directly with the number of CPUs
> (assuming all are contending for the same lock).  We can only have
> nr_cpus-1 spinning waiters at any given time, regardless of the number
> of tasks in contention.  Perhaps this is too little to worry about on a
> 4-way system, but I suspect that it could be substantial on larger
> systems.
> 
> I'll be quiet now as I know little about the intricacies of
> preemption/scheduling (obviously) and like Greg, have been removed from
> RT kernel work for several years. <sigh>

No need to be quiet ;-)

I'm working on making it spin in the TASK_RUNNING state if possible, but
it is making the code a bit more complex: there seems to be an
assumption that the wakeup and the change of current->state in the
rt_spin_lock_slowlock() code all happen under the lock->wait_lock. I
think I'll scrap this idea.
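
Roughly, the ordering in question looks like this (a simplified sketch
of the slowlock path, not the exact -rt source; the real code also
saves and restores the task state):

	raw_spin_lock(&lock->wait_lock);
	for (;;) {
		if (try_to_take_rt_mutex(lock))	/* got the lock */
			break;
		/*
		 * The state change happens while still holding
		 * wait_lock, so a waker serialized on wait_lock
		 * never observes a half-finished transition.
		 */
		set_current_state(TASK_UNINTERRUPTIBLE);
		raw_spin_unlock(&lock->wait_lock);
		schedule();
		raw_spin_lock(&lock->wait_lock);
	}
	raw_spin_unlock(&lock->wait_lock);

Spinning in TASK_RUNNING instead means the wakeup side could no longer
rely on that serialization, which is where the extra complexity comes
from.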

That said, I think your wakeup patch may be worthwhile with Lai's new
code. His changes cause the owner to wake up the pending owner several
times, because the pending owner is never removed from the lock's
wait_list. If a high-prio task grabs and releases the same lock over and
over, and there is a waiter, it will try to wake up that waiter each
time.

Thus, having your patch may prevent that unnecessary wakeup.
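
Something along these lines in the unlock slowpath, that is (just a
sketch of the concept, not the exact patch; the real check would need
the proper locking, e.g. the task's pi_lock, to be race-free):

	static void wakeup_next_waiter(struct rt_mutex *lock)
	{
		struct rt_mutex_waiter *waiter = rt_mutex_top_waiter(lock);
		struct task_struct *pendowner = waiter->task;

		/*
		 * If the pending owner is still spinning on a CPU
		 * (i.e. in TASK_RUNNING), there is nothing for the
		 * scheduler to do; skip the try_to_wake_up() path.
		 */
		if (pendowner->state != TASK_RUNNING)
			wake_up_process(pendowner);
	}

That would avoid a full try_to_wake_up() on every release where the
next owner is already spinning.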

I'll look more into it. Thanks!

-- Steve


