Date:	Fri, 22 Aug 2008 09:01:22 -0500
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	Pekka Enberg <penberg@...helsinki.fi>
CC:	Ingo Molnar <mingo@...e.hu>, Jeremy Fitzhardinge <jeremy@...p.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Andi Kleen <andi@...stfloor.org>,
	"Pallipadi, Venkatesh" <venkatesh.pallipadi@...el.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"Paul E. McKenney" <paulmck@...ibm.com>
Subject: Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu

Pekka Enberg wrote:
> Hi Ingo,
> 
> On Fri, Aug 22, 2008 at 9:28 AM, Ingo Molnar <mingo@...e.hu> wrote:
>> * Jeremy Fitzhardinge <jeremy@...p.org> wrote:
>>
>>> RCU can only control the lifetime of allocated memory blocks, which
>>> forces all the call structures to be allocated.  This is expensive
>>> compared to allocating them on the stack, which is the common case for
>>> synchronous calls.
>>>
>>> This patch takes a different approach.  Rather than using RCU, the
>>> queues are managed under rwlocks.  Adding or removing from the queue
>>> requires holding the lock for writing, but multiple CPUs can walk the
>>> queues to process function calls under read locks.  In the common
>>> case, where the structures are stack allocated, the calling CPU need
>>> only wait for its call to be done, take the lock for writing and
>>> remove the call structure.
>>>
>>> Lock contention - particularly write vs read - is reduced by using
>>> multiple queues.
>> hm, is there any authoritative data on what is cheaper on a big box: a
>> full-blown MESI cache miss that occurs for every reader in this new
>> fastpath, or a local SLAB/SLUB allocation+free that occurs with the
>> current RCU approach?
> 
> Christoph might have an idea about it.

It's on the stack, which is presumably hot, so no cache miss? If it's async,
then presumably we do not need to wait, so it's okay to call an allocator.
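
For concreteness, here is a minimal userspace sketch of the pattern Jeremy
describes, not the patch itself: pthread rwlocks stand in for the kernel's
rwlock_t, a polled flag stands in for the real completion/IPI machinery, and
a single queue is used where the patch uses several to cut contention. All
names (call_entry, call_queue, call_sync, process_queue) are illustrative.

/* Sketch only: one rwlock-protected call queue; the real code also
 * tracks which CPUs still have to run each entry. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct call_entry {
	struct call_entry *next;
	void (*func)(void *);
	void *arg;
	bool done;
};

static struct call_queue {
	pthread_rwlock_t lock;
	struct call_entry *head;
} queue = { PTHREAD_RWLOCK_INITIALIZER, NULL };

/* The "other CPUs": walk the queue and run pending calls under the
 * read lock, so many walkers can process it concurrently. */
static void process_queue(void)
{
	pthread_rwlock_rdlock(&queue.lock);
	for (struct call_entry *e = queue.head; e; e = e->next) {
		if (!__atomic_load_n(&e->done, __ATOMIC_ACQUIRE)) {
			e->func(e->arg);
			__atomic_store_n(&e->done, true, __ATOMIC_RELEASE);
		}
	}
	pthread_rwlock_unlock(&queue.lock);
}

/* Synchronous caller: the entry lives on the stack, so there is no
 * allocation; add and remove take the lock for writing. */
static void call_sync(void (*func)(void *), void *arg)
{
	struct call_entry e = { .func = func, .arg = arg };

	pthread_rwlock_wrlock(&queue.lock);
	e.next = queue.head;
	queue.head = &e;
	pthread_rwlock_unlock(&queue.lock);

	while (!__atomic_load_n(&e.done, __ATOMIC_ACQUIRE))
		;	/* caller only waits for its own call */

	pthread_rwlock_wrlock(&queue.lock);
	for (struct call_entry **pp = &queue.head; *pp; pp = &(*pp)->next)
		if (*pp == &e) {
			*pp = e.next;
			break;
		}
	pthread_rwlock_unlock(&queue.lock);
}

The synchronous fast path touches only the caller's own (hot) stack and the
queue head; nothing goes through an allocator, which is what makes the stack
case cheap.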

Generally: the larger the box (longer cacheline acquisition latencies) and the
higher the contention (the cacheline cannot be acquired because other CPUs are
fighting over it), the better a slab allocation will look compared to a
cacheline miss.

RCU is problematic because it lets cachelines get cold. A hot cacheline that
is frequently read and written by the same CPU is a very good thing for
performance.

