Date:	Wed, 27 Aug 2008 08:02:25 -0400
From:	Gregory Haskins <ghaskins@...ell.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	mingo@...e.hu, srostedt@...hat.com, linux-kernel@...r.kernel.org,
	linux-rt-users@...r.kernel.org, npiggin@...e.de,
	gregory.haskins@...il.com
Subject: Re: [PATCH v2 3/6] sched: make double-lock-balance fair

Peter Zijlstra wrote:
> On Tue, 2008-08-26 at 13:35 -0400, Gregory Haskins wrote:
>   
>> double_lock_balance() currently favors logically lower cpus since they
>> often do not have to release their own lock to acquire a second lock.
>> The result is that logically higher cpus can get starved when there is
>> a lot of pressure on the RQs.  This can result in higher latencies on
>> higher cpu-ids.
>>
>> This patch makes the algorithm more fair by forcing all paths to
>> release both locks before acquiring them again.  Since callsites to
>> double_lock_balance already consider it a potential preemption/reschedule
>> point, they have the proper logic to recheck for atomicity violations.
>>
>> Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
>> ---
>>
>>  kernel/sched.c |   52 +++++++++++++++++++++++++++++++++++++++++++++-------
>>  1 files changed, 45 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index df6b447..850b454 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -2782,21 +2782,43 @@ static void double_rq_unlock(struct rq *rq1, struct rq *rq2)
>>  		__release(rq2->lock);
>>  }
>>  
>> +#ifdef CONFIG_PREEMPT
>> +
>>  /*
>> - * double_lock_balance - lock the busiest runqueue, this_rq is locked already.
>> + * fair double_lock_balance: Safely acquires both rq->locks in a fair
>> + * way at the expense of forcing extra atomic operations in all
>> + * invocations.  This assures that the double_lock is acquired using the
>> + * same underlying policy as the spinlock_t on this architecture, which
>> + * reduces latency compared to the unfair variant below.  However, it
>> + * also adds more overhead and therefore may reduce throughput.
>>   */
>> -static int double_lock_balance(struct rq *this_rq, struct rq *busiest)
>> +static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
>> +	__releases(this_rq->lock)
>> +	__acquires(busiest->lock)
>> +	__acquires(this_rq->lock)
>> +{
>> +	spin_unlock(&this_rq->lock);
>> +	double_rq_lock(this_rq, busiest);
>> +
>> +	return 1;
>> +}
>>     
>
> Right - so to belabour Nick's point:
>
>   if (!spin_trylock(&busiest->lock)) {
>     spin_unlock(&this_rq->lock);
>     double_rq_lock(this_rq, busiest);
>   }
>
> might unfairly treat someone who is waiting on this_rq if I understand
> it right?
>
> I suppose one could then write it like:
>
>   if (spin_is_contended(&this_rq->lock) || !spin_trylock(&busiest->lock)) {
>     spin_unlock(&this_rq->lock);
>     double_rq_lock(this_rq, busiest);
>   }
>   

Indeed.  This does get to the heart of the problem: contention against
this_rq->lock.
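
For reference, the fair path ends up in double_rq_lock(), which always
takes the two locks in a fixed (pointer) order, roughly like the version
in sched.c at the time:

  static void double_rq_lock(struct rq *rq1, struct rq *rq2)
  	__acquires(rq1->lock)
  	__acquires(rq2->lock)
  {
  	BUG_ON(rq1 == rq2);
  	if (rq1 < rq2) {
  		/* both CPUs agree on the order: lower address first */
  		spin_lock(&rq1->lock);
  		spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
  	} else {
  		spin_lock(&rq2->lock);
  		spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
  	}
  }

Since both CPUs queue with a plain spin_lock() here, anyone already
waiting on this_rq->lock gets it under the lock's normal acquisition
policy instead of losing out to a trylock.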

> But, I'm not sure that's worth the effort at that point..
>
> Anyway - I think all this is utterly defeated on CONFIG_PREEMPT by the
> spin with IRQs enabled logic in kernel/spinlock.c.
>   

I submitted some patches related to this a while back.  The gist of it
is that the presence of ticket locks for a given config *should* defeat
the preemptible version of the generic spinlocks; otherwise the ticket
fairness they were selected for is simply lost.  Let me see if I can
dig it up.
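
Concretely, the logic Peter is referring to is generated by
BUILD_LOCK_OPS() in kernel/spinlock.c and, expanded for spinlocks,
looks roughly like:

  unsigned long __lockfunc _spin_lock_irqsave(spinlock_t *lock)
  {
  	unsigned long flags;

  	for (;;) {
  		preempt_disable();
  		local_irq_save(flags);
  		if (likely(_raw_spin_trylock(lock)))
  			break;
  		local_irq_restore(flags);
  		preempt_enable();

  		if (!(lock)->break_lock)
  			(lock)->break_lock = 1;
  		/* spin with IRQs and preemption enabled */
  		while (!spin_can_lock(lock) && (lock)->break_lock)
  			_raw_spin_relax(&lock->raw_lock);
  	}
  	(lock)->break_lock = 0;
  	return flags;
  }

Every acquisition attempt here is a fresh trylock, so a ticket lock's
FIFO ordering never gets a chance to apply while this loop spins, which
is exactly why enabling both at once makes no sense.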

-Greg


