Message-ID: <202009231209.F5F3999D@keescook>
Date: Wed, 23 Sep 2020 12:19:57 -0700
From: Kees Cook <keescook@...omium.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Sami Tolvanen <samitolvanen@...gle.com>,
Jann Horn <jannh@...gle.com>
Subject: Re: x86/irq: Make run_on_irqstack_cond() typesafe
On Tue, Sep 22, 2020 at 09:58:52AM +0200, Thomas Gleixner wrote:
> -void asm_call_on_stack(void *sp, void *func, void *arg);
> +void asm_call_on_stack(void *sp, void (*func)(void), void *arg);
> +void asm_call_sysvec_on_stack(void *sp, void (*func)(struct pt_regs *regs),
> + struct pt_regs *regs);
> +void asm_call_irq_on_stack(void *sp, void (*func)(struct irq_desc *desc),
> + struct irq_desc *desc);
Eeeh, err. So, this is nice for the CFI case, but can we just inline
asm_call_on_stack() instead? Having any of these as distinct functions
in the kernel is really not safe: each one provides a trivial global
stack-pivot[1] gadget usable in ROP attacks, and finding such a gadget
is one of the central requirements for mounting those attacks. These
functions accept a completely arbitrary sp argument, function pointer,
and first argument. :(
Much better would be to keep asm_call_on_stack() as an inline so the
stack pointer always comes from percpu variables, and to have the
irq_count actually checked (i.e. freak out if it falls below zero, to
catch jumps into the middle of the function that try to bypass the
load from the percpu area). I would expect this form to be much more
robust:
inc
load sp from per-cpu
pivot sp
make call
restore sp
WARN(dec_and_test)
-Kees
[1] https://security.stackexchange.com/questions/44418/return-oriented-programming-how-to-find-a-stack-pivot
--
Kees Cook