Date: Sun, 26 Oct 2014 18:12:05 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Richard Weinberger <richard.weinberger@...il.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: vmalloced stacks on x86_64?
On Sun, Oct 26, 2014 at 1:29 PM, Frederic Weisbecker <fweisbec@...il.com> wrote:
> On Sat, Oct 25, 2014 at 10:49:25PM -0700, Andy Lutomirski wrote:
>> On Oct 25, 2014 9:11 PM, "Frederic Weisbecker" <fweisbec@...il.com> wrote:
>> >
>> > 2014-10-25 2:22 GMT+02:00 Andy Lutomirski <luto@...capital.net>:
>> > > Is there any good reason not to use vmalloc for x86_64 stacks?
>> > >
>> > > The tricky bits I've thought of are:
>> > >
>> > > - On any context switch, we probably need to probe the new stack
>> > > before switching to it. That way, if it's going to fault due to an
>> > > out-of-sync pgd, we still have a stack available to handle the fault.
>> >
>> > Would that prevent any further faults on a vmalloc'ed kernel
>> > stack? We would need to ensure that pre-faulting, say, the first byte
>> > is enough to sync the whole new stack; otherwise we risk another
>> > fault later, and some places really can't fault safely.
>> >
>>
>> I think so. The vmalloc faults only happen when the entire top-level
>> page table entry is missing, and those cover giant swaths of address
>> space.
>>
>> I don't know whether the vmalloc code guarantees not to span a pmd
>> (pud? why couldn't these be called pte0, pte1, pte2, etc.?) boundary.
>
> So dereferencing stack[0] is probably enough for 8KB worth of stack. I think
> we have vmalloc_sync_all() but I heard this only works on x86-64.
>
I have no desire to do this for 32-bit. But we don't need
vmalloc_sync_all -- we just need to sync the only required entry.
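
Roughly what I have in mind, as a sketch only (probe_new_stack() and its
call site are made up; nothing like this exists in the tree): touch the
new task's stack once before we switch %rsp to it, so a missing pgd entry
gets synced by the vmalloc fault handler while we still have a usable
stack under us.

#include <linux/sched.h>

static inline void probe_new_stack(struct task_struct *next)
{
	/*
	 * One read should be enough: a vmalloc fault only happens when
	 * the whole top-level (pgd) entry is unsynced, and a single pgd
	 * entry covers 512 GB (far more than THREAD_SIZE), so touching
	 * any byte of the stack syncs the entry for all of it, assuming
	 * the allocation doesn't straddle a pgd boundary (the open
	 * question above).
	 */
	volatile unsigned long *stack = task_stack_page(next);

	(void)*stack;
}

This would have to run before the actual stack switch in switch_to(), so
that the fault, if any, is taken on the old, known-good stack.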
> Too bad we don't have a universal solution; I have that problem with per-cpu
> allocated memory faulting at random places. I hit at least two places where it
> was harmful: context tracking and perf callchains. We fixed the latter using
> open-coded per-cpu allocation. I still haven't found a solution for context tracking.
In principle, we could pre-populate all top-level pgd entries at boot,
but that would cost up to 256 pages of memory, I think.
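
For the record, the boot-time version would look something like the
sketch below (the function name is invented, and I've limited it to the
vmalloc range here; covering every kernel-half entry is what gets you to
the ~256 page figure). It populates empty pgd slots in init_mm with
zeroed pud pages, and since every later process pgd copies the kernel
half at pgd_alloc() time, those slots can never be out of sync afterward:

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>

/*
 * Sketch only: run once, after the page allocator is up but before
 * the first user pgd is allocated.
 */
static void __init prepopulate_vmalloc_pgds(void)
{
	unsigned long addr;

	for (addr = VMALLOC_START; addr < VMALLOC_END; addr += PGDIR_SIZE) {
		pgd_t *pgd = pgd_offset_k(addr);

		if (pgd_none(*pgd)) {
			pud_t *pud = (pud_t *)get_zeroed_page(GFP_KERNEL);

			if (!pud)
				panic("vmalloc pgd pre-population failed");
			pgd_populate(&init_mm, pgd, pud);
		}
	}
}

The cost is one page per populated slot, which is where the "up to 256
pages" comes from if you cover the whole kernel half rather than just
the vmalloc area.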
--Andy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/