Message-ID: <47C95FBC.1010703@cosmosbay.com>
Date:	Sat, 01 Mar 2008 14:53:00 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Christoph Lameter <clameter@....com>, davem@...emloft.net,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	yanmin_zhang@...ux.intel.com, travis@....com
Subject: Re: [PATCH] alloc_percpu() fails to allocate percpu data

Andrew Morton wrote:
> On Wed, 27 Feb 2008 11:59:32 -0800 (PST)
> Christoph Lameter <clameter@....com> wrote:
> 
>> Any decision made on what to do about this one? Mike or I can 
>> repost the per cpu allocator against mm? The fix by Eric could be used 
>> in the interim for 2.6.24?
>>
> 
> I suppose I'll merge Eric's patch when I've tested it fully (well, as fully
> as I test stuff).
> 
> It'd be nice to get that cache_line_size()/L1_CACHE_BYTES/L1_CACHE_ALIGN()
> mess sorted out.  If it's a mess - I _think_ it is?

Just coming back from holidays, sorry for the delay.

I can provide a patch so that L1_CACHE_BYTES is no longer a compile-time
constant if you want, but I am not sure it is worth the trouble (and this
is certainly not 2.6.{24|25} material :) )

Current situation:

L1_CACHE_BYTES is known at compile time and can be quite large (128 bytes),
while cache_line_size() returns the real cache line size, detected at boot
time from the hardware capabilities.
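
For illustration, a minimal userspace sketch (not kernel code;
_SC_LEVEL1_DCACHE_LINESIZE is a Linux/glibc extension) contrasting the
two values:

#include <stdio.h>
#include <unistd.h>

/* compile-time worst case, like L1_CACHE_BYTES on a generic build */
#define COMPILE_TIME_LINE_SIZE 128

int main(void)
{
	/* runtime value, like cache_line_size() after boot-time detection */
	long runtime = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);

	printf("compile-time constant: %d bytes\n", COMPILE_TIME_LINE_SIZE);
	printf("detected at runtime  : %ld bytes\n", runtime);
	return 0;
}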

If L1_CACHE_BYTES is no longer a constant, the compiler will also have to
use plain divides to compute L1_CACHE_ALIGN().
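
For example, a round-up helper like this (hypothetical, not kernel code)
shows the difference:

#include <stdio.h>
#include <stddef.h>

/* With a constant power-of-two line_size gcc turns the divide below
 * into a shift; with a runtime line_size it must emit a real division. */
static size_t bytes_to_lines(size_t x, size_t line_size)
{
	return (x + line_size - 1) / line_size;
}

int main(void)
{
	printf("%zu\n", bytes_to_lines(100, 64));	/* prints 2 */
	return 0;
}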

Maybe uses of L1_CACHE_ALIGN() in fast paths would 'force' us to declare
not only cache_line_size() but also cache_line_{mask|shift}() helpers, so
that x86 could use:

#define L1_CACHE_ALIGN(x) (((x) + cache_line_mask()) & ~cache_line_mask())

#define L1_CACHE_BYTES (cache_line_size())
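
A quick userspace mock-up of these (hypothetical) helpers, assuming the
boot code stored a power-of-two line size:

#include <stdio.h>
#include <stddef.h>

/* stand-in for the value boot-time CPU detection would record */
static size_t boot_cache_line_size = 64;

#define cache_line_size() (boot_cache_line_size)
#define cache_line_mask() (boot_cache_line_size - 1)
#define L1_CACHE_ALIGN(x) (((x) + cache_line_mask()) & ~cache_line_mask())

int main(void)
{
	size_t sz = 100;

	/* rounds 100 up to 128, the next multiple of the 64-byte line */
	printf("L1_CACHE_ALIGN(%zu) = %zu\n", sz, (size_t)L1_CACHE_ALIGN(sz));
	return 0;
}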

But I am not sure we want to play these games (we would also have to make
sure nothing in the tree relies on L1_CACHE_BYTES being a compile-time
constant, and convert such users to SMP_CACHE_BYTES).
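
One such user: static alignment attributes need an integer constant
expression, so for instance ____cacheline_aligned (which expands to
__attribute__((__aligned__(SMP_CACHE_BYTES)))) could never take a
runtime value:

/* sketch: this only compiles because 128 is a constant; a runtime
 * L1_CACHE_BYTES could not be used here */
struct pad_demo {
	unsigned long counter;
} __attribute__((aligned(128)));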

