Message-ID: <CALCETrU90SiCttWhghVfwjp_kii=eB1UuKLQ0_M76JxDjpQNzA@mail.gmail.com>
Date: Tue, 28 Jun 2016 14:21:29 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Oleg Nesterov <oleg@...hat.com>, Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>, LKP <lkp@...org>,
LKML <linux-kernel@...r.kernel.org>,
kernel test robot <xiaolong.ye@...el.com>
Subject: Re: kthread_stop insanity (Re: [[DEBUG] force] 2642458962: BUG:
unable to handle kernel paging request at ffffc90000997f18)
On Tue, Jun 28, 2016 at 2:14 PM, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
> On Tue, Jun 28, 2016 at 1:54 PM, Andy Lutomirski <luto@...capital.net> wrote:
>>
>> But I might need to do that anyway for procfs to read the stack,
>> right? Do you see another way to handle that case?
>
> I think the other way to handle the kernel stack reading would be to
> simply make the stack freeing be RCU-delayed, and use the RCU list
> itself as the stack cache.
>
> That way reading the stack is ok in a RCU context, although you might
> end up reading a stack that has been re-used.
>
> Would that work for people?
I don't think I understand your proposal. We already delay freeing
the stack for RCU. If we continue doing it, then, under workloads
like my benchmark, the RCU list gets quite large. Or are you
suggesting that we actually make a list somewhere of stacks that are
nominally unused but are still around for RCU's benefit and then
scavenge from that list when we need a new stack? If so, that seems
considerably more complicated than just adding a reference count.
Also, my inner security nerd says that letting /proc potentially read
the wrong process's stack is bad news.