Message-ID: <48976638.6010800@sgi.com>
Date:	Mon, 04 Aug 2008 13:27:36 -0700
From:	Mike Travis <travis@....com>
To:	Yinghai Lu <yhlu.kernel@...il.com>
CC:	Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 02/04] x86: add get_irq_cfg in io_apic_64.c

Yinghai Lu wrote:
> On Mon, Aug 4, 2008 at 8:02 AM, Mike Travis <travis@....com> wrote:
>> Yinghai Lu wrote:
>...
>>>
>>> +struct irq_cfg;
>>> +
>>>  struct irq_cfg {
>>> +     unsigned int irq;
>>> +     struct irq_cfg *next;
>>>       cpumask_t domain;
>>>       cpumask_t old_domain;
>>        ^^^^^^^^^
>> One thought here... most interrupts cannot be serviced by any cpu in
>> the system, but instead need to be serviced by the cpu attached to
>> the ioapic or on the local node.  So defining some subset of cpumask_t
>> would save a lot of space.  For example:
>>
>>        struct nodecpumask_t {
>>                int     node;
>>                DECLARE_BITMAP(..., MAX_CPUS_PER_NODE);
>>        };
>>
>> And of course, providing some utilities to convert nodecpumask_t <==>
>> cpumask_t.
>>
>> ("node" might not be the proper abstraction... maybe "irqcpumask_t"?)
> union irq_cpumask_t {
>             int     cpu;
>             unsigned long mask;
> };
> 
> also wondering if we could have a dyn_cpumask_t etc. for the case where
> NR_CPUS=4096 but nr_cpus or nr_cpu_ids=32 at run time.
> With that, distributions could have NR_CPUS=4096 as the default config...
> 
> YH

Believe it or not, 64 might not be enough.  The Nehalem 8-core part
(16 threads with HT) has two QPI links.  In theory, you could put together
a node with 4 cpu sockets and 2 of the new I/O interfaces on a single
board.  That's 64 cpus (4 sockets x 8 cores x 2 threads) and 4 PCIe busses
(plus all the legacy stuff).  The Intel microarchitecture could very well
support 8 cores in the next-generation processors.

Btw, I meant the above to be a struct, not a union, so that the node and
the bitmap are both present.  That way the bitmap covers a contiguous
subset of cpu ids.  Of course, this relies on the cpus being "discovered"
in topology order, possibly with holes (it's not clear whether that's
really necessary.)

So in a system with 8 nodes of 32 processors each, node 2's cpus would be
64..95 and the nodecpumask would be { 2, 0x00000000ffffffff }, i.e. the
low 32 bits of the node-local mask set (assuming max cpus per node == 64.)

Another angle thrown around was a 128-bit cpu mask struct, with some
number of upper bits defining how the remainder is interpreted: it could
be an inline bit-mask field, a pointer to a bitmask, a bitmask subset (as
above), etc.  Then all the cpus_* ops would be modified to accept the
alternate types of cpu mask sets, compiling out (optimizing away) those
not present on a particular arch.
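
Very roughly, that could look like the sketch below (untested, field names
made up, and spelled out as a tagged struct rather than packed into
exactly 128 bits):

	/* cpumask variant: a few tag bits say how the payload is
	 * interpreted; the cpus_* ops would switch on the tag. */
	struct cpumask128 {
		unsigned int	tag;			/* INLINE, PTR or SUBSET */
		union {
			unsigned long	bits[2];	/* INLINE: up to 128 cpus stored
							 *   directly (on 64-bit) */
			unsigned long	*bitmap;	/* PTR: full NR_CPUS bitmap lives
							 *   elsewhere */
			struct {
				int		node;	/* SUBSET: node-relative mask, as */
				unsigned long	bits;	/*   in the nodecpumask sketch above */
			} subset;
		} u;
	};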

[One last point: we (SGI) are counting on _this_ release to have
NR_CPUS=4096 in the default distro config.  Suffice it to say, some of
our customers will not accept specially built kernels, but instead
require standard, certified, licensable kernels built by the distros.
(This is for the "Enterprise" editions; desktop distros of course
probably won't go as high.)]

Thanks,
Mike
