Message-ID: <20070409220936.GD5275@localhost.localdomain>
Date: Mon, 9 Apr 2007 15:09:36 -0700
From: Ravikiran G Thirumalai <kiran@...lex86.org>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, shai@...lex86.org
Subject: Re: [patch] Pad irq_desc to internode cacheline size
On Mon, Apr 09, 2007 at 03:47:52PM -0600, Eric W. Biederman wrote:
> Andrew Morton <akpm@...ux-foundation.org> writes:
>
> > This will consume nearly 4k per irq won't it? What is the upper bound
> > here, across all configs and all hardware?
> >
> > Is VSMP the only arch which has ____cacheline_internodealigned_in_smp
> > larger than ____cacheline_aligned_in_smp?
>
> Ugh. We set internode aligned to 4k for all of x86_64.
!!! No. The internode alignment is 4k only if CONFIG_X86_VSMP is selected;
the internode line size defaults to SMP_CACHE_BYTES for all other machine
types. Please note that INTERNODE_CACHE_SHIFT of 12 is defined under
#ifdef CONFIG_X86_VSMP in include/asm-x86_64/cache.h.
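For reference, this is roughly how the definitions are layered (a sketch
from memory -- please check the actual headers, the exact details there may
differ slightly):

/* include/asm-x86_64/cache.h: only vSMP bumps the internode shift */
#ifdef CONFIG_X86_VSMP
/* vSMP internode cache lines are page sized */
#define INTERNODE_CACHE_SHIFT	(12)
#define ____cacheline_internodealigned_in_smp \
	__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
#endif

/* include/linux/cache.h: generic fallback when the arch does not
 * override it, i.e. plain SMP_CACHE_BYTES alignment */
#if !defined(____cacheline_internodealigned_in_smp)
#if defined(CONFIG_SMP)
#define ____cacheline_internodealigned_in_smp \
	__attribute__((__aligned__(SMP_CACHE_BYTES)))
#else
#define ____cacheline_internodealigned_in_smp
#endif
#endif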
>
> I believe this ups our worst case memory consumption for
> the array from 1M to 32M. Although the low end might be 2M.
> I can't recall if an irq_desc takes one cache line or two
> after we have put the cpu masks in it.
>
> My gut feel says that what we want to do is delay this until
> we are dynamically allocating the array members. Then we can at
> least have the chance of allocating the memory on the proper NUMA
> node, and won't need the extra NUMA alignment.
I was thinking along those lines as well. But until we get there, can we
have this in as a stopgap? The patch does not increase the memory footprint
on any architecture other than vSMPowered systems.
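To be concrete, the change amounts to padding each entry of the irq_desc
array to the internode line size, along the lines of the sketch below
(abridged -- the real struct in include/linux/irq.h has more members). On
non-vSMP kernels the attribute expands to ordinary SMP_CACHE_BYTES
alignment, so the array layout there is untouched. And if my worst-case
arithmetic is right, ~8k entries * 4k per entry is where the 32M figure
above comes from, versus ~1M/~2M at one or two cache lines per entry.

struct irq_desc {
	irq_flow_handler_t	handle_irq;	/* highlevel irq-events handler */
	struct irq_chip		*chip;		/* low-level hardware access */
	void			*handler_data;
	struct irqaction	*action;	/* IRQ action list */
	unsigned int		status;		/* IRQ status */
	/* ... */
	spinlock_t		lock;
} ____cacheline_internodealigned_in_smp;

extern struct irq_desc irq_desc[NR_IRQS];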