Message-ID: <4BD570A8.90304@kernel.org>
Date: Mon, 26 Apr 2010 12:53:28 +0200
From: Tejun Heo <tj@...nel.org>
To: Pekka Enberg <penberg@...helsinki.fi>
CC: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Christoph Lameter <cl@...ux.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kernel Testers List <kernel-testers@...r.kernel.org>,
Maciej Rutecki <maciej.rutecki@...il.com>,
Alex Shi <alex.shi@...el.com>, tim.c.chen@...el.com
Subject: Re: [Bug #15713] hackbench regression due to commit 9dfc6e68bfe6e

On 04/26/2010 12:09 PM, Pekka Enberg wrote:
>> My wild speculation is that previously the cpu_slab structures of two
>> neighboring threads ended up on the same cacheline by accident, thanks
>> to the back-to-back allocation. With the percpu allocator, this would
>> no longer happen, as the allocator groups percpu data together
>> per-cpu.
>
> Yanmin, do we see a lot of remote frees for your hackbench run? IIRC,
> it's the "deactivate_remote_frees" stat when CONFIG_SLUB_STATS is
> enabled.
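
(If it helps, a quick userspace sketch like the one below should dump
that counter for a single cache.  It assumes a kernel built with
CONFIG_SLUB_STATS, and the "kmalloc-256" cache name is only an
example -- pick whichever caches hackbench actually hits.)

/* Userspace sketch only: dump one SLUB statistic from sysfs. */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/kernel/slab/kmalloc-256/deactivate_remote_frees";
	FILE *f = fopen(path, "r");
	char buf[512];

	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);	/* total, then per-cpu counts */
	fclose(f);
	return 0;
}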

I'm not familiar with the details or scales here, so please take
whatever I say with a grain of salt.  In a hyperthreading
configuration, I think operations don't have to be remote to be
affected.  If the data for cpu0 and cpu1 sat on the same cache line,
and cpu0 and cpu1 are siblings occupying the same physical core and
thus sharing all of its resources, the core would benefit from that
sharing whether any operation was remote or not, as it only has to
keep one cache line hot instead of two.
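
As a toy illustration of what I mean (the struct below is made up, it
is not the real kmem_cache_cpu; the program only shows when two small
per-cpu structures end up sharing a 64-byte line):

/* Toy userspace illustration, not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define CACHE_LINE 64

struct fake_cpu_slab {
	void *freelist;
	void *page;
	int node;
	unsigned int offset;
};	/* 24 bytes on 64-bit: two copies fit in one cache line */

/* "old" scheme: the structs for cpu0 and cpu1 sit back to back */
static struct fake_cpu_slab packed[2]
	__attribute__((aligned(CACHE_LINE)));

/* "new" scheme: each CPU's copy lives in that CPU's own percpu area,
 * modelled here as two separately cacheline-aligned objects */
static struct fake_cpu_slab area0 __attribute__((aligned(CACHE_LINE)));
static struct fake_cpu_slab area1 __attribute__((aligned(CACHE_LINE)));

static int same_line(const void *a, const void *b)
{
	return (uintptr_t)a / CACHE_LINE == (uintptr_t)b / CACHE_LINE;
}

int main(void)
{
	printf("back to back: one line for cpu0+cpu1? %s\n",
	       same_line(&packed[0], &packed[1]) ? "yes" : "no");
	printf("percpu-style: one line for cpu0+cpu1? %s\n",
	       same_line(&area0, &area1) ? "yes" : "no");
	return 0;
}

With the first layout the two siblings on one core touch a single hot
line; with the second they each pull in a line of their own.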
Thanks.
--
tejun