Date:	Tue, 10 Apr 2007 09:37:00 +0200
From:	Andi Kleen <ak@...e.de>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>, mingo@...e.hu,
	nickpiggin@...oo.com.au, linux-kernel@...r.kernel.org,
	Ravikiran G Thirumalai <kiran@...lex86.org>
Subject: Re: [patch] sched: align rq to cacheline boundary


> >  
> > -static DEFINE_PER_CPU(struct rq, runqueues);
> > +static DEFINE_PER_CPU(struct rq, runqueues) ____cacheline_aligned_in_smp;
> 
> Remember that this can consume up to (linesize-4) * NR_CPUS bytes, 

On x86 the per-cpu area is now sized by the real possible map -- that
tends to be much smaller than NR_CPUS.

There might be some other architectures that still allocate per cpu
data for all of NR_CPUS (or always set the possible map to that), but
those should just be fixed.
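
To make the arithmetic concrete, here is a little userspace sketch of
the worst-case padding cost (the sizes are made-up placeholders, not
the real struct rq or a real config):

#include <stdio.h>

int main(void)
{
	unsigned long linesize = 64;	/* assumed L1 line size; 4096 on VSMP */
	unsigned long objsize  = 2500;	/* hypothetical sizeof(struct rq) */
	unsigned long nr_cpus  = 512;	/* hypothetical NR_CPUS from the config */
	unsigned long possible = 8;	/* CPUs actually in the possible map */

	/* Aligning pads each copy up to the next cacheline boundary,
	 * so the per-copy waste is at most linesize - 4 bytes for a
	 * word-aligned object. */
	unsigned long waste = (linesize - objsize % linesize) % linesize;

	printf("waste per copy:           %lu bytes\n", waste);
	printf("if sized by NR_CPUS:      %lu bytes\n", waste * nr_cpus);
	printf("if sized by possible map: %lu bytes\n", waste * possible);
	return 0;
}

With the possible map the total overhead drops by the same factor as
the CPU count, which is the point above.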

> which is 
> rather a lot.

We should have solved the problem of limited per cpu space in .22 at
least, with some patches from Jeremy. I also plan a few other changes
that will use more per CPU memory again.

> Remember also that the linesize on VSMP is 4k.
> 
> And that putting a gap in the per-cpu memory like this will reduce its
> overall cache-friendliness.

When the alignment avoids false sharing on remote wakeups it should be more cache friendly, not less.
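
The effect is easy to reproduce in userspace. A minimal sketch
(generic pthreads code, not the scheduler path; drop the aligned
attributes to see the cacheline ping-pong in the timing):

#include <pthread.h>
#include <stdio.h>

/* Two per-thread counters. Without the alignment they share one
 * cacheline, so every increment on one CPU invalidates the line on
 * the other (false sharing). Giving each its own line is the
 * userspace analogue of ____cacheline_aligned_in_smp. */
struct counters {
	volatile long a __attribute__((aligned(64)));
	volatile long b __attribute__((aligned(64)));
};

static struct counters c;

static void *bump_a(void *arg)
{
	(void)arg;
	for (long i = 0; i < 100000000; i++)
		c.a++;
	return NULL;
}

static void *bump_b(void *arg)
{
	(void)arg;
	for (long i = 0; i < 100000000; i++)
		c.b++;
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, bump_a, NULL);
	pthread_create(&t2, NULL, bump_b, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("a=%ld b=%ld\n", c.a, c.b);
	return 0;
}

Build with gcc -O2 -pthread and time both variants; the padded layout
is typically several times faster on SMP.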

> Need more convincing, please.

Was this based on some benchmark where it showed up?

-Andi