Message-ID: <alpine.DEB.2.02.1403060620350.32171@nftneq.ynat.uz>
Date: Thu, 6 Mar 2014 06:23:18 -0800 (PST)
From: David Lang <david@...g.hm>
To: Khalid Aziz <khalid.aziz@...cle.com>
cc: Oleg Nesterov <oleg@...hat.com>, Andi Kleen <andi@...stfloor.org>,
Thomas Gleixner <tglx@...utronix.de>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
peterz@...radead.org, akpm@...ux-foundation.org,
viro@...iv.linux.org.uk, linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH] Pre-emption control for userspace
On Wed, 5 Mar 2014, Khalid Aziz wrote:
> On 03/05/2014 05:36 PM, David Lang wrote:
>> Yes, you pay for two context switches, but you don't pay for threads
>> B..ZZZ all running (and potentially spinning) trying to acquire the lock
>> before thread A is able to complete its work.
>>
>
> Ah, great. We are converging now.
>
>> As soon as a second thread hits the contention, thread A gets time to
>> finish.
>
> Only as long as thread A can be scheduled immediately, which may or may
> not be the case depending on what else is running on the core thread A
> last ran on, and on whether thread A needs to be migrated to another core.
>
>>
>> It's not as 'good' [1] as thread A just working longer,
>
> and that is the exact spot where I am trying to improve performance.
Well, you have said that the "give me more time" flag results in 3-5% better
performance for databases under some workloads; how does this compare with the
results of yield_to()?
I think that the two approaches are going to be very close. I'd lay good odds
that the difference between the two is very hard to extract from the noise of
the variation between runs (I won't say statistically insignificant, but I
will say that I expect it to take a lot of statistical analysis to pull it out
of the clutter).
David Lang
>> but it's FAR
>> better than thread A sleeping while every other thread runs and
>> potentially tries to get the lock
>
> Absolutely. I agree with that.
>
>>
>> [1] it wastes the context switches, but it avoids the overhead of
>> figuring out if the thread needs to extend its time, and if its time
>> was actually extended, and what penalty it should suffer the next time
>> it runs....
>
> All of it can be done by setting and checking a couple of flags in
> task_struct. That is not insignificant, but hardly expensive. The logic is
> quite simple:
>
> resched()
> {
>         ........
>         if (immunity) {
>                 if (!penalty) {
>                         immunity = 0;
>                         penalty = 1;
>                         -- skip context switch --
>                 } else {
>                         immunity = penalty = 0;
>                         -- do the context switch --
>                 }
>         }
>         .........
> }
>
> sched_yield()
> {
> ......
> penalty = 0;
> ......
> }
>
> This simple logic will also work to defeat the obnoxious threads that keep
> setting the immunity request flag repeatedly within the same critical
> section to give themselves multiple extensions.
>
> Thanks,
> Khalid
>