Message-ID: <534A107B.2050203@zytor.com>
Date: Sat, 12 Apr 2014 21:20:11 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Andy Lutomirski <luto@...capital.net>,
Brian Gerst <brgerst@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>, stable@...r.kernel.org,
"H. Peter Anvin" <hpa@...ux.intel.com>
Subject: Re: [tip:x86/urgent] x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels

On 04/11/2014 02:53 PM, Andy Lutomirski wrote:
>
> If you want a fully correct solution, you can use a fancier allocation
> policy that can fit quite a few cpus per 4G :)
>

The more I think about this, the more I think this might actually be a
reasonable option, *IF* someone is willing to deal with actually
implementing it.

The difference versus my "a" alternative (mapping the existing kernel
stack into an alternate part of the address space) is that we would
instead have a series of ministacks, each just large enough to hold the
IRET data *and* big enough to handle any exceptions that IRET may
throw, until we can switch back to the real kernel stack.  Tests would
have to be added to the appropriate exception paths, as early as
possible.  We would then *copy* the IRET data to the ministack before
returning.

Each ministack would be mapped 65536 times.  If we can get away with 64
bytes per CPU, then 4 GiB of address space covers 1024 CPUs, so if
MAX_CPUS is 16384 we would need 64 GiB of address space... which is not
unreasonable on 64 bits.
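
Just to sanity-check the address-space arithmetic, a trivial
back-of-the-envelope calculation (plain C; the only inputs are the
64-byte slot and the 65536 aliases per ministack assumed above):

/*
 * Back-of-the-envelope check of the address-space figures above.
 * Assumes a 64-byte ministack slot per CPU, each aliased 65536 times.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long slot_bytes = 64;      /* ministack slot per CPU */
	unsigned long long aliases    = 65536;   /* mappings per ministack */
	unsigned long long va_per_cpu = slot_bytes * aliases;     /* 4 MiB */
	unsigned long long va_per_1k  = va_per_cpu * 1024;        /* 4 GiB */
	unsigned long long va_max     = va_per_cpu * 16384;      /* 64 GiB */

	printf("VA per CPU:        %llu MiB\n", va_per_cpu >> 20);
	printf("VA per 1024 CPUs:  %llu GiB\n", va_per_1k >> 30);
	printf("VA for 16384 CPUs: %llu GiB\n", va_max >> 30);
	return 0;
}
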
The total memory consumption would be about 81 bytes per CPU for the
ministacks plus page tables (just over 16K per 1K CPUs.) Again, fairly
reasonable, but a *lot* of complexity.
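
For the page-table side, here is one plausible accounting, purely as a
sketch: it assumes 4-level paging with 4 KiB pages, and that the 64 KiB
ministack pattern for 1024 CPUs repeats across its 4 GiB region, so a
single PTE page can be shared by every PMD entry.  The exact count
depends on how aggressively the tables are shared, so treat the output
as ballpark only:

/*
 * Rough page-table accounting for one 4 GiB region covering 1024 CPUs.
 * Assumptions (illustrative, not a design): 4-level paging, 4 KiB pages,
 * and the same 16 physical ministack pages (1024 CPUs * 64 bytes) repeat
 * every 64 KiB, so one PTE page can back every PMD entry in the region.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long region      = 4ULL << 30;        /* 4 GiB of VA */
	unsigned long long pmd_entries = region / (2ULL << 20);    /* 2048 */
	unsigned long long pmd_pages   = pmd_entries / 512;           /* 4 */
	unsigned long long pte_pages   = 1;   /* shared, thanks to aliasing */
	unsigned long long pt_bytes    = (pmd_pages + pte_pages) * 4096;

	printf("page tables per 1024 CPUs: ~%llu KiB\n", pt_bytes >> 10);
	printf("per-CPU total (ministack + tables): ~%llu bytes\n",
	       64 + pt_bytes / 1024);
	return 0;
}

Sharing more of the structure (the identical PMD pages, for instance)
shaves a few KiB off that, so the per-CPU figure above looks about
right.
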
-hpa