Date:	Thu, 23 Oct 2008 19:14:23 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Pekka Enberg <penberg@...helsinki.fi>,
	Miklos Szeredi <miklos@...redi.hu>, nickpiggin@...oo.com.au,
	hugh@...itas.com, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org
Subject: Re: SLUB defrag pull request?

Christoph Lameter wrote:
> On Thu, 23 Oct 2008, Eric Dumazet wrote:
> 
>>> SLUB touches objects by default when allocating. And it does it
>>> immediately in slab_alloc() in order to retrieve the pointer to the
>>> next object. So there is no point in hinting there right now.
>>>
>>
>> Please note SLUB touches the object by reading it.
>>
>> prefetchw() gives the CPU a hint that this cache line is going to be
>> *modified*, even if the first access is a read. Some architectures can
>> save some bus transactions by acquiring the cache line in an exclusive
>> state instead of a shared one.
> 
> Most architectures actually can do that. It's probably worth running
> some tests with that. Converting a cache line from shared to exclusive
> can cost something.
> 
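
For anyone who wants to try the measurement suggested above: the
shared-to-exclusive cost only shows up when another core holds the line
in a shared state, so a userspace sketch needs a reader thread. The
program below is a rough, hypothetical micro-benchmark (not from this
thread); __builtin_prefetch(p, 1) is GCC's portable spelling of a
write-intent prefetch, and results vary a lot by microarchitecture.
Compile with gcc -O2 -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NLINES 32768
#define LINESZ 64

static char buf[NLINES * LINESZ] __attribute__((aligned(64)));
static atomic_int stop;

/* Reader thread: keeps reading every line so the writer's copies are
 * pulled back to a shared coherence state between passes. */
static void *reader(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&stop, memory_order_relaxed))
		for (size_t i = 0; i < NLINES; i++)
			(void)*(volatile char *)&buf[i * LINESZ];
	return NULL;
}

static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* One timed pass: read a byte from each line first (as slab_alloc()
 * reads the freelist pointer stored in the object), then write it.
 * With hint != 0, issue a write-intent prefetch before the read. */
static uint64_t timed_pass(int hint)
{
	uint64_t t0 = now_ns();
	for (size_t i = 0; i < NLINES; i++) {
		volatile char *p = (volatile char *)&buf[i * LINESZ];
		if (hint)
			__builtin_prefetch((const void *)p, 1);
		char v = *p;	/* first access: a read */
		*p = v + 1;	/* then the write the hint is for */
	}
	return now_ns() - t0;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, reader, NULL);
	timed_pass(0);	/* warm-up */
	printf("read-then-write, no hint:   %llu ns\n",
	       (unsigned long long)timed_pass(0));
	printf("read-then-write, prefetchw: %llu ns\n",
	       (unsigned long long)timed_pass(1));
	atomic_store(&stop, 1);
	pthread_join(t, NULL);
	return 0;
}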

Please check the following patch as a follow-up.

[PATCH] slub: slab_alloc() can use prefetchw()

Most kmalloc()ed areas are initialized/written right after allocation.

prefetchw() gives the CPU a hint that this cache line is going to be
*modified*, even if the first access is a read.

Some architectures can save bus transactions by acquiring the cache
line in an exclusive state instead of a shared one.
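
To make the semantics concrete: on architectures that support it,
prefetchw() compiles to a prefetch-with-write-intent instruction; in
plain GCC C the same hint is __builtin_prefetch(addr, 1). A minimal,
hypothetical free-list pop showing the pattern (the struct and function
are illustrative, not SLUB code):

struct object {
	struct object *next;	/* free-list link stored in the object itself */
	char payload[56];
};

/* Pop an object off a free list.  The first touch is a read of
 * obj->next, but the caller will write the object right away, so
 * request the cache line with write intent up front. */
static struct object *freelist_pop(struct object **head)
{
	struct object *obj = *head;

	if (obj) {
		__builtin_prefetch(obj, 1);	/* like prefetchw(obj) */
		*head = obj->next;	/* the read that first touches the line */
	}
	return obj;
}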

The same optimization was done for SLAB in 2005, in commit
34342e863c3143640c031760140d640a06c6a5f8
("[PATCH] mm/slab.c: prefetchw the start of new allocated objects").

Signed-off-by: Eric Dumazet <dada1@...mosbay.com>


View attachment "slub.patch" of type "text/plain" (586 bytes)
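
The attachment is not inlined in the archive. Judging from the
changelog, the hunk is presumably of this shape against the
slab_alloc() fast path of that era; this is a reconstruction for
readers without the attachment, not the literal 586-byte patch:

 	else {
 		object = c->freelist;
+		prefetchw(object);
 		c->freelist = object[c->offset];
 		stat(c, ALLOC_FASTPATH);
 	}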
