Message-ID: <5317CDC4.8020908@oracle.com>
Date:	Wed, 05 Mar 2014 18:22:12 -0700
From:	Khalid Aziz <khalid.aziz@...cle.com>
To:	David Lang <david@...g.hm>
CC:	Oleg Nesterov <oleg@...hat.com>, Andi Kleen <andi@...stfloor.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
	peterz@...radead.org, akpm@...ux-foundation.org,
	viro@...iv.linux.org.uk, linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH] Pre-emption control for userspace

On 03/05/2014 05:36 PM, David Lang wrote:
> Yes, you pay for two context switches, but you don't pay for threads
> B..ZZZ all running (and potentially spinning) trying to acquire the lock
> before thread A is able to complete its work.
>

Ah, great. We are converging now.

> As soon as a second thread hits the contention, thread A gets time to
> finish.

Only as long as thread A can be scheduled immediately, which may or may 
not be the case depending on what else is running on the core thread A 
last ran on, and on whether thread A needs to be migrated to another core.

>
> It's not as 'good' [1] as thread A just working longer,

and that is the exact spot where I am trying to improve performance.

> but it's FAR
> better than thread A sleeping while every other thread runs and
> potentially tries to get the lock

Absolutely. I agree with that.

>
> [1] it wastes the context switches, but it avoids the overhead of
> figuring out if the thread needs to extend its time, and if its time
> was actually extended, and what penalty it should suffer the next time
> it runs....

All of it can be done by setting and checking a couple of flags in 
task_struct. That is not insignificant, but hardly expensive. The logic 
is quite simple:

resched()
{
	........
	if (immunity) {
		if (!penalty) {
			/* grant one extension, arm the penalty */
			immunity = 0;
			penalty = 1;
			-- skip context switch --
		}
		else {
			/* extension already used, no second one */
			immunity = penalty = 0;
			-- do the context switch --
		}
	}
	.........
}

sched_yield()
{
	......
	penalty = 0;
	......
}

This simple logic also works to defeat obnoxious threads that keep 
setting the immunity request flag repeatedly within the same critical 
section to grant themselves multiple extensions.

Thanks,
Khalid
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
