Message-ID: <4B15A5A6.2090200@kernel.org>
Date: Wed, 02 Dec 2009 08:24:22 +0900
From: Tejun Heo <tj@...nel.org>
To: Ingo Molnar <mingo@...e.hu>
CC: Christoph Lameter <cl@...ux-foundation.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
michal.simek@...alogix.com, linux-next@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: problems in linux-next (Was: Re: linux-next: Tree for December 1)
Hello,
On 12/02/2009 01:01 AM, Ingo Molnar wrote:
>>> The problem is that on UP configurations, the percpu memory allocator
>>> becomes a simple wrapper around kmalloc and there's no way to
>>> specify larger alignment when requesting memory from kmalloc.
>>
>> There is usually no point in aligning in UP. Alignment is typically
>> done for SMP configurations to limit cache line bouncing and control
>> cache line use.
>
> There is a natural minimum alignment for UP and it's smaller than the
> cache-line size: machine word size. All our allocators (except bootmem)
> align to machine word so there's no need to specify this explicitly.
>
> Larger alignment than that just wastes memory - a waste which UP
> systems can afford the least.
This isn't the usual alignment. struct work_struct has a single data
field which is overloaded for two purposes. The lower few bits are
used to carry flags while the upper bits are used to point to struct
cpu_workqueue_struct. So, the number of available bits for flags is
determined by the alignment of cpu_workqueue_struct. Memory usage for
cwqs isn't a big concern here. Many workqueues will go away; I think
we'll end up with less than half of what we have today, while we'll
continue to have a large number of works.
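
To illustrate (just a sketch, not the actual workqueue code - CWQ_ALIGN,
FLAG_MASK and the helper names are made up for the example):

#include <assert.h>
#include <stdint.h>

#define CWQ_ALIGN	8	/* assumed alignment of cpu_workqueue_struct */
#define FLAG_MASK	((uintptr_t)(CWQ_ALIGN - 1))	/* 3 low bits free for flags */

struct cpu_workqueue_struct;	/* opaque here */

struct work_struct {
	uintptr_t data;		/* flags in the low bits, cwq pointer above them */
};

static void set_work_cwq(struct work_struct *work,
			 struct cpu_workqueue_struct *cwq, uintptr_t flags)
{
	/* the low bits of cwq are zero only because cwq is CWQ_ALIGN-aligned */
	assert(((uintptr_t)cwq & FLAG_MASK) == 0);
	work->data = (uintptr_t)cwq | (flags & FLAG_MASK);
}

static struct cpu_workqueue_struct *get_work_cwq(const struct work_struct *work)
{
	return (struct cpu_workqueue_struct *)(work->data & ~FLAG_MASK);
}

With an alignment of 8 only three flag bits fit; forcing a larger
alignment on cwqs is what frees up more low bits for flags.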
I'll just create an alloc_cwq() function which forces the alignment on UP.
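
Something along these lines (user-space sketch only - the real thing
would use the kernel allocators and would have to remember the raw
pointer for freeing):

#include <stdint.h>
#include <stdlib.h>

#define CWQ_ALIGN	8	/* assumed required alignment */

struct cpu_workqueue_struct { int dummy; };	/* stand-in */

/* Over-allocate and round up so the returned cwq is CWQ_ALIGN-aligned
 * even when the underlying allocator (kmalloc on UP) guarantees nothing
 * beyond machine-word alignment.
 */
static struct cpu_workqueue_struct *alloc_cwq(void)
{
	void *raw = calloc(1, sizeof(struct cpu_workqueue_struct) + CWQ_ALIGN - 1);
	uintptr_t p;

	if (!raw)
		return NULL;
	p = ((uintptr_t)raw + CWQ_ALIGN - 1) & ~((uintptr_t)CWQ_ALIGN - 1);
	return (struct cpu_workqueue_struct *)p;
}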
Thanks.
--
tejun