Message-ID: <13a45e59-5969-2fdb-25cd-adcd5298784b@redhat.com>
Date: Tue, 16 Jan 2018 12:34:36 -0500
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Joerg Roedel <joro@...tes.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
"H . Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...el.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Juergen Gross <jgross@...e.com>,
Borislav Petkov <bp@...en8.de>, Jiri Kosina <jkosina@...e.cz>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Brian Gerst <brgerst@...il.com>,
David Laight <David.Laight@...lab.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
Eduardo Valentin <eduval@...zon.com>,
Greg KH <gregkh@...uxfoundation.org>,
Will Deacon <will.deacon@....com>, aliguori@...zon.com,
daniel.gruss@...k.tugraz.at, hughd@...gle.com, keescook@...gle.com,
Andrea Arcangeli <aarcange@...hat.com>,
Waiman Long <llong@...hat.com>, jroedel@...e.de
Subject: Re: [PATCH 06/16] x86/mm/ldt: Reserve high address-space range for
the LDT
On 01/16/2018 12:31 PM, Peter Zijlstra wrote:
> On Tue, Jan 16, 2018 at 06:13:43PM +0100, Joerg Roedel wrote:
>> Hi Peter,
>>
>> On Tue, Jan 16, 2018 at 05:52:13PM +0100, Peter Zijlstra wrote:
>>> On Tue, Jan 16, 2018 at 05:36:49PM +0100, Joerg Roedel wrote:
>>>> From: Joerg Roedel <jroedel@...e.de>
>>>>
>>>> Reserve 2MB/4MB of address space for mapping the LDT to
>>>> user-space.
>>> The LDT is 64K, we need 2 per CPU, and NR_CPUS <= 64 on 32bit; that
>>> gives 64K*2*64 = 8M > 2M.
>> Thanks, I'll fix that in the next version.
> Just lower the max SMP setting until it fits or something. 32bit is too
> address-space starved for lots of CPUs in any case; 64 CPUs on 32bit is
> absolutely insane.
Maybe we can just scale the amount of reserved space according to the
current NR_CPUS setting. That way, we won't waste more address space
than necessary.
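
Something along these lines, perhaps (a completely untested sketch; the
macro names below are made up just to illustrate the idea):

/*
 * Untested sketch: derive the size of the reserved VA range from
 * NR_CPUS instead of hard-coding 2MB/4MB.  Each LDT is
 * LDT_ENTRIES * LDT_ENTRY_SIZE = 64K, and we need two mappings per
 * CPU as noted above.  Round up to PMD_SIZE so the range stays
 * large-page aligned.
 */
#define LDT_SLOT_SIZE		(LDT_ENTRIES * LDT_ENTRY_SIZE)	/* 64K */
#define LDT_SLOTS_PER_CPU	2
#define LDT_RESERVE_SIZE	\
	ALIGN(NR_CPUS * LDT_SLOTS_PER_CPU * LDT_SLOT_SIZE, PMD_SIZE)

With NR_CPUS=64 that works out to the 8M above, while the common
small-SMP 32-bit configs would stay within a single 2MB/4MB PMD.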