Message-ID: <CALCETrXam8JBtXE3S97Vyb_=+pmjgr23SoZ-Dm5pmYtwZYA+7Q@mail.gmail.com>
Date: Wed, 28 Nov 2018 22:05:54 -0800
From: Andy Lutomirski <luto@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>, X86 ML <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Andrew Lutomirski <luto@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Jason Baron <jbaron@...mai.com>, Jiri Kosina <jkosina@...e.cz>,
David Laight <David.Laight@...lab.com>,
Borislav Petkov <bp@...en8.de>, julia@...com, jeyu@...nel.org,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2 4/4] x86/static_call: Add inline static call
implementation for x86-64
> On Nov 27, 2018, at 12:43 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>
>> On Mon, Nov 26, 2018 at 03:26:28PM -0600, Josh Poimboeuf wrote:
>>
>> Yeah, that's probably better. I assume you also mean that we would have
>> all text_poke_bp() users create a handler callback? That way the
>> interface is clear and consistent for everybody. Like:
>
> Can do, it does indeed make the interface less like a hack. It is not
> like there are too many users.
>
>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>> index aac0c1f7e354..d4b0abe4912d 100644
>> --- a/arch/x86/kernel/jump_label.c
>> +++ b/arch/x86/kernel/jump_label.c
>> @@ -37,6 +37,11 @@ static void bug_at(unsigned char *ip, int line)
>> BUG();
>> }
>>
>> +static inline void jump_label_bp_handler(struct pt_regs *regs, void *data)
>> +{
>> + regs->ip += JUMP_LABEL_NOP_SIZE - 1;
>> +}
>> +
>> static void __ref __jump_label_transform(struct jump_entry *entry,
>> enum jump_label_type type,
>> void *(*poker)(void *, const void *, size_t),
>> @@ -91,7 +96,7 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
>> }
>>
>> text_poke_bp((void *)jump_entry_code(entry), code, JUMP_LABEL_NOP_SIZE,
>> - (void *)jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>> + jump_label_bp_handler, NULL);
>> }
>>
>> void arch_jump_label_transform(struct jump_entry *entry,
>
> Per that example..
>
>> diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
>> index d3869295b88d..e05ebc6d4db5 100644
>> --- a/arch/x86/kernel/static_call.c
>> +++ b/arch/x86/kernel/static_call.c
>> @@ -7,24 +7,30 @@
>>
>> #define CALL_INSN_SIZE 5
>>
>> +struct static_call_bp_data {
>> + unsigned long func, ret;
>> +};
>> +
>> +static void static_call_bp_handler(struct pt_regs *regs, void *_data)
>> +{
>> + struct static_call_bp_data *data = _data;
>> +
>> + /*
>> + * For inline static calls, push the return address on the stack so the
>> + * "called" function will return to the location immediately after the
>> + * call site.
>> + *
>> + * NOTE: This code will need to be revisited when kernel CET gets
>> + * implemented.
>> + */
>> + if (data->ret) {
>> + regs->sp -= sizeof(long);
>> + *(unsigned long *)regs->sp = data->ret;
>> + }

You can’t do this. Depending on the alignment of the old RSP, which
is not guaranteed, this overwrites regs->cs. IRET goes boom.
Maybe it could be fixed by pointing regs->ip at a real trampoline?

This code is subtle and executed rarely, which is a bad combination.
It would be great if we had a test case.
I think it would be great if the implementation could be, literally:

	regs->ip -= 1;
	return;

IOW, just retry and wait until we get the new patched instruction.

The problem is that, if we're in a context where IRQs are off, then
we're preventing on_each_cpu() from completing and, even if we somehow
just let the code know that we already serialized ourselves, we're
still potentially holding a spinlock that another CPU is waiting for
with IRQs off. Ugh. Anyone have a clever idea to fix that?