Message-ID: <51884263.4020608@zytor.com>
Date: Mon, 06 May 2013 16:53:07 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: linux-arch <linux-arch@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: The type of bitops
The type of bitops is currently "int" on most, if not all, architectures
except sparc64, where it is "unsigned long". This already has the
potential to cause failures on extremely large non-NUMA x86 boxes
(specifically if any one node contains more than 8 TiB of memory, e.g.
in an interleaved memory system).
x86 has hardware bitmask instructions whose bit-index operand is signed;
this limits the types to either "int" or "long".
It seems pretty clear to me at least that x86-64 really should use
"long". However, before blindly making that change I wanted to feel
people out for what this should look like across architectures.
Moving this forward, I see a couple of possibilities:
1. We simply change the type to "long" on x86, and let this be a fully
architecture-specific option. This is easy, obviously.
2. Same as above, except we also define a typedef for whatever type is
the bitops argument type (bitops_t? bitpos_t?)
3. Change the type to "long" Linux-wide, on the logic that it should be
the same as the general machine width across all platforms.
4. Do some macro hacks so the bitops are dependent on the size of the
argument.
5. Introduce _long versions of the bitops.
6. Do nothing at all.
Are there any 64-bit architectures where a 64-bit argument would be very
costly?
-hpa