Date:	Thu, 10 Jan 2013 08:05:00 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Michel Lespinasse <walken@...gle.com>
CC:	linux-kernel@...r.kernel.org, aquini@...hat.com,
	eric.dumazet@...il.com, lwoodman@...hat.com, jeremy@...p.org,
	Jan Beulich <JBeulich@...ell.com>, knoel@...hat.com,
	chegu_vinod@...com, raghavendra.kt@...ux.vnet.ibm.com,
	mingo@...hat.com
Subject: Re: [PATCH 4/5] x86,smp: keep spinlock delay values per hashed spinlock address

On 01/10/2013 08:01 AM, Michel Lespinasse wrote:
> On Tue, Jan 8, 2013 at 2:31 PM, Rik van Riel <riel@...hat.com> wrote:
>> From: Eric Dumazet <eric.dumazet@...il.com>
>>
>> Eric Dumazet found a regression with the first version of the spinlock
>> backoff code, in a workload where multiple spinlocks were contended,
>> each with a different wait time.
>>
>> This patch keeps multiple delay values per CPU, indexed by a hash of
>> the lock address, to avoid that problem.
>>
>> Eric Dumazet wrote:
>>
>> I did some tests with your patches, with the following configuration:
>>
>> tc qdisc add dev eth0 root htb r2q 1000 default 3
>> (to force contention on the qdisc lock, even with a multi-queue net
>> device)
>>
>> and 24 concurrent "netperf -t UDP_STREAM -H other_machine -- -m 128"
>>
>> Machine: 2 Intel(R) Xeon(R) CPU X5660 @ 2.80GHz
>> (24 threads), and a fast NIC (10Gbps)
>>
>> Resulting in a 13% regression (676 Mbit/s -> 595 Mbit/s)
>>
>> In this workload we have at least two contended spinlocks, with
>> different delays (the spinlocks are not held for the same duration).
>>
>> It clearly defeats your assumption that a single per-CPU delay is OK:
>> some CPUs keep spinning long after the lock has been released.
>>
>> We might try using a hash of the lock address, and an array of 16
>> different delays, so that different spinlocks have a chance of not
>> sharing the same delay.
>>
>> With the following patch, I get 982 Mbit/s on the same benchmark: an
>> increase of 45% instead of a 13% regression.
>
> Note that these results were with your v1 proposal. With the v3
> proposal, on a slightly different machine (2-socket Sandy Bridge) with
> a similar NIC, I am not seeing the regression when not using the hash
> table. I think this is because v3 got more conservative about mixed
> spinlock hold times, and converges towards the shortest of the hold
> times in that case.
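
To make the hashed-delay scheme quoted above concrete, here is a
minimal userspace sketch (hypothetical names and hash function, not
the actual patch): each CPU keeps a small array of delay estimates,
and a lock selects its slot by hashing its own address, so two
contended locks with different hold times no longer overwrite each
other's estimate.

#include <stdint.h>

#define DELAY_HASH_SIZE 16	/* 16 slots, per Eric's suggestion */

/* Per CPU in the real code; a single table here for illustration. */
struct delay_table {
	uint32_t delay[DELAY_HASH_SIZE];
};

static struct delay_table this_cpu_delays;

/* Hypothetical hash: fold the lock address down to a table index. */
static inline unsigned int lock_hash(const void *lock)
{
	uintptr_t x = (uintptr_t)lock;
	return ((unsigned int)((x >> 4) ^ (x >> 10))) & (DELAY_HASH_SIZE - 1);
}

/* The spin loop reads and updates only the slot for its own lock. */
static inline uint32_t *delay_slot(const void *lock)
{
	return &this_cpu_delays.delay[lock_hash(lock)];
}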

Eric,

With just patches 1-3, can you still reproduce the
regression on your system?

In other words, could we get away with dropping the
complexity of patch 4, or do we still need it?
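
For context on what patches 1-3 alone do: the per-CPU delay is tuned
adaptively, and one plausible shape of that rule is sketched below
(hypothetical, not the actual v3 code). Growing the estimate when a
full delay was spent and the lock is still taken, and shrinking it
when the lock came free early, is what produces the convergence toward
the shortest hold time that Michel describes above.

/* Hypothetical tuning rule, not the actual v3 code: adjust the
 * per-CPU delay estimate after each acquisition attempt. */
static inline void tune_delay(uint32_t *delay, int still_held)
{
	if (still_held) {
		/* Waited the full delay and lost anyway: back off more. */
		*delay += (*delay >> 4) + 1;
	} else if (*delay > 1) {
		/* Lock came free before the delay expired: we overshot. */
		*delay -= (*delay >> 5) + 1;
	}
}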


-- 
All rights reversed
