Message-ID: <20160623143126.GA16664@redhat.com>
Date: Thu, 23 Jun 2016 16:31:26 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...capital.net>,
Andy Lutomirski <luto@...nel.org>,
the arch/x86 maintainers <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
Borislav Petkov <bp@...en8.de>,
Nadav Amit <nadav.amit@...il.com>,
Kees Cook <keescook@...omium.org>,
Brian Gerst <brgerst@...il.com>,
"kernel-hardening@...ts.openwall.com"
<kernel-hardening@...ts.openwall.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jann Horn <jann@...jh.net>,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH v3 00/13] Virtually mapped stacks with guard pages (x86,
core)
On 06/22, Linus Torvalds wrote:
>
> Oleg, what do you think? Would it be reasonable to free the stack and
> thread_info synchronously at exit time, clear the pointer (to catch
> any odd use), and only RCU-delay the task_struct itself?
I didn't see the patches yet, quite possibly I misunderstood... But no,
I don't think we can do this (at least not unless we move ti->flags into
task_struct first).
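
(To illustrate, a hypothetical sketch of what that move would mean,
assuming thread_info gets embedded in task_struct so that the flag
helpers stop dereferencing ->stack; the embedding itself is the
assumption here, not anything in the series:)

	struct task_struct {
		struct thread_info	thread_info;	/* ti->flags would live here */
		...
	};

	/* task_thread_info() would then no longer touch ->stack: */
	#define task_thread_info(tsk)	(&(tsk)->thread_info)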
> (Obviously, we can't release it in do_exit() itself like we do some of
> the other state - it would need to be released after we've scheduled
> away to another process' stack, but we already have that TASK_DEAD
> handling in finish_task_switch for this exact reason).
Yes, but the problem is that a zombie thread can do its last schedule
before it is reaped.
Just for example, syscall_regfunc() does
	read_lock(&tasklist_lock);
	for_each_process_thread(p, t) {
		set_tsk_thread_flag(t, TIF_SYSCALL_TRACEPOINT);
	}
	read_unlock(&tasklist_lock);
and this can easily hit a TASK_DEAD thread with ->stack == NULL.
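
(For reference, set_tsk_thread_flag() reaches thread_info through the
stack pointer; a simplified sketch of the current helpers, with
thread_info sitting at the bottom of the stack:)

	#define task_thread_info(task)	((struct thread_info *)(task)->stack)

	static inline void set_tsk_thread_flag(struct task_struct *tsk, int flag)
	{
		/* dereferences tsk->stack, oopses once that is NULL */
		set_ti_thread_flag(task_thread_info(tsk), flag);
	}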
And we can't free/nullify it when the parent/debugger reaps a zombie:
say, mark_oom_victim() expects that get_task_struct() protects
thread_info as well.
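
(For instance, a simplified sketch of mark_oom_victim(), trimmed from
mm/oom_kill.c as I remember its current shape:)

	void mark_oom_victim(struct task_struct *tsk)
	{
		WARN_ON(oom_killer_disabled);
		/*
		 * The caller may only hold a task_struct reference, but the
		 * flag write below lands in thread_info, i.e. on the stack.
		 */
		if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
			return;
		atomic_inc(&tsk->signal->oom_victims);
	}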
Oleg.