Message-ID: <CALCETrXXyrVbPD6MZHYu3K=sdvA=XH+Hgxy-hO9ZEoxO-gbmYg@mail.gmail.com>
Date: Fri, 21 Nov 2014 16:41:22 -0800
From: Andy Lutomirski <luto@...capital.net>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Tejun Heo <tj@...nel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Dave Jones <davej@...hat.com>, Don Zickus <dzickus@...hat.com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: frequent lockups in 3.18rc4
On Fri, Nov 21, 2014 at 4:18 PM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Fri, Nov 21, 2014 at 4:11 PM, Tejun Heo <tj@...nel.org> wrote:
>>
>> I don't think there's much the percpu allocator itself can do.  The
>> ability to grow dynamically comes from being able to allocate a
>> relatively consistent layout among the areas for different CPUs, which
>> pretty much requires the vmalloc area, and it'd generally be a good
>> idea to take out the vmalloc fault anyway.
>
> Why do you guys worry so much about the vmalloc fault?
>
> This started because of a very different issue: putting the actual
> stack in vmalloc space. Then it can cause nasty triple faults etc.
>
> But the normal vmalloc fault? Who cares, really? If that causes
> problems, they are bugs. Fix them.
Because of this in system_call_after_swapgs:

	movq %rsp,PER_CPU_VAR(old_rsp)		# %gs-relative per-CPU store
	movq PER_CPU_VAR(kernel_stack),%rsp	# %gs-relative per-CPU load

Both accesses happen before %rsp has been switched to a kernel stack, so a
vmalloc fault taken on either of them would be handled on whatever stack
userspace left in %rsp.
It occurs to me that, if we really want to change that, we could have
an array of syscall trampolines, one per CPU, that have the CPU number
hardcoded. But I really don't think that's worth it.
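
Purely to illustrate the shape that could take (everything below is invented
for the sketch: trampoline_template, template_stack_imm_offset, syscall_stub
and the init helpers; nothing like it exists in the tree), the stubs would be
built once per CPU, each with that CPU's kernel stack baked in as an
immediate, and pointed at by that CPU's MSR_LSTAR:

	/*
	 * Hypothetical "one syscall trampoline per CPU" sketch: the stub
	 * for each CPU gets its kernel stack pointer patched in, so the
	 * first instructions after SYSCALL never need a %gs-relative
	 * per-CPU access that could take a vmalloc fault.
	 */
	#include <linux/init.h>
	#include <linux/kernel.h>
	#include <linux/percpu.h>
	#include <linux/smp.h>
	#include <linux/string.h>
	#include <asm/msr.h>
	#include <asm/processor.h>

	#define STUB_SIZE 64

	extern const char trampoline_template[];	/* hand-written asm template */
	extern const int template_stack_imm_offset;	/* where the immediate sits */

	static char syscall_stub[NR_CPUS][STUB_SIZE] __aligned(64);

	static void __init build_syscall_stub(int cpu)
	{
		char *stub = syscall_stub[cpu];

		/* Copy the template and patch in this CPU's kernel stack. */
		memcpy(stub, trampoline_template, STUB_SIZE);
		*(unsigned long *)(stub + template_stack_imm_offset) =
			per_cpu(kernel_stack, cpu);
	}

	static void install_syscall_stub(void)
	{
		/* Runs on each CPU: SYSCALL then enters that CPU's private stub. */
		wrmsrl(MSR_LSTAR, (unsigned long)syscall_stub[smp_processor_id()]);
	}

A real version would also need the stubs to live in executable memory and to
be installed before the first syscall; the sketch glosses over both.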
Other than that, with your fix, vmalloc faults are no big deal :)
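
(For anyone following along: an ordinary vmalloc fault is cheap because the
mapping already exists in init_mm's reference page tables, and the handler
only has to copy the top-level entry into the current page tables and retry.
Very roughly, and simplified well past what the real x86-64 vmalloc_fault()
does:)

	#include <linux/mm.h>
	#include <linux/sched.h>
	#include <asm/pgtable.h>

	static int sketch_vmalloc_fault(unsigned long address)
	{
		pgd_t *pgd, *pgd_ref;

		if (address < VMALLOC_START || address >= VMALLOC_END)
			return -1;		/* not a vmalloc address */

		pgd = pgd_offset(current->active_mm, address);
		pgd_ref = pgd_offset_k(address);	/* init_mm's reference entry */

		if (pgd_none(*pgd_ref))
			return -1;		/* genuinely unmapped */

		if (pgd_none(*pgd))
			set_pgd(pgd, *pgd_ref);	/* lazily sync the kernel mapping */

		return 0;			/* retry the faulting access */
	}

The syscall entry path above is the one place where even that is too much,
because there's no kernel stack to handle the fault on yet.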
--Andy