Message-ID: <m1ejmt1apj.fsf@ebiederm.dsl.xmission.com>
Date:	Mon, 09 Apr 2007 15:47:52 -0600
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Ravikiran G Thirumalai <kiran@...lex86.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] Pad irq_desc to internode cacheline size

Andrew Morton <akpm@...ux-foundation.org> writes:

> This will consume nearly 4k per irq, won't it?  What is the upper bound
> here, across all configs and all hardware?
>
> Is VSMP the only arch which has ____cacheline_internodealigned_in_smp
> larger than ____cacheline_aligned_in_smp?

Ugh. We set the internode cacheline alignment to 4k for all of x86_64.
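
To make the per-entry cost concrete, here is a userspace toy (the names
are illustrative stand-ins, not the real kernel definitions; the actual
macro lives in include/linux/cache.h IIRC):

	#include <stdio.h>

	/* stand-in for ____cacheline_internodealigned_in_smp, which
	 * pads a type out to the internode cacheline size -- 4096
	 * bytes on x86_64, per the above */
	#define INTERNODE_ALIGNED __attribute__((__aligned__(4096)))

	struct fake_irq_desc {
		void *handler;		/* a few fields, well under 4K */
		unsigned long status;
	} INTERNODE_ALIGNED;

	int main(void)
	{
		/* aligned(4096) on the type rounds sizeof up to 4096,
		 * so every array element burns a full page */
		printf("sizeof = %zu\n", sizeof(struct fake_irq_desc));
		return 0;
	}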

I believe this ups our worst-case memory consumption for the array
from 1M to 32M, although the low end might be 2M. I can't recall
whether an irq_desc takes one cache line or two after we have put the
cpu masks in it.
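
Back-of-the-envelope, with made-up NR_IRQS values just to bound it
(these are assumptions for illustration, not numbers from the patch):

	8192 irqs * ~128 bytes each =  1M  (today, about one cacheline per desc)
	8192 irqs * 4096 bytes each = 32M  (padded to internode size, worst case)
	 512 irqs * 4096 bytes each =  2M  (padded, small config)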

My gut feeling is that we want to delay this until we are dynamically
allocating the array members. Then we at least have a chance of
allocating the memory on the proper NUMA node, and we won't need the
extra NUMA alignment.
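
Something like this is what I mean, as a sketch only (irq_to_node() is
a hypothetical helper here, standing in for whatever ends up mapping an
irq to its home node):

	struct irq_desc *desc;

	/* allocate on the node that will service this irq, instead of
	 * padding a static array out to the internode cacheline size */
	desc = kmalloc_node(sizeof(*desc), GFP_KERNEL, irq_to_node(irq));
	if (!desc)
		return -ENOMEM;
	memset(desc, 0, sizeof(*desc));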

I'm not at all certain I'm impressed by an architecture that has
4K-aligned cache lines. That seems terribly piggy. We might as well do
distributed shared memory in software on a cluster...

Eric
