Message-ID: <1135959694.112.1573743977897.JavaMail.zimbra@efficios.com>
Date: Thu, 14 Nov 2019 10:06:17 -0500 (EST)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: x86 <x86@...nel.org>, linux-kernel <linux-kernel@...r.kernel.org>,
rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
bristot <bristot@...hat.com>, jbaron <jbaron@...mai.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Nadav Amit <namit@...are.com>,
"H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jessica Yu <jeyu@...nel.org>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
paulmck <paulmck@...nel.org>
Subject: Re: [PATCH -v5 12/17] x86/kprobes: Fix ordering
----- On Nov 14, 2019, at 8:53 AM, Peter Zijlstra peterz@...radead.org wrote:
> On Wed, Nov 13, 2019 at 10:42:32AM -0500, Mathieu Desnoyers wrote:
>> ----- On Nov 11, 2019, at 8:13 AM, Peter Zijlstra peterz@...radead.org wrote:
>>
>> > Kprobes does something like:
>> >
>> > register:
>> > arch_arm_kprobe()
>> > text_poke(INT3)
>> > /* guarantees nothing, INT3 will become visible at some point, maybe */
>> >
>> > kprobe_optimizer()
>> > /* guarantees the bytes after INT3 are unused */
>> > syncrhonize_rcu_tasks();
>>
>> syncrhonize -> synchronize
>
> Fixed.
>
>> > text_poke_bp(JMP32);
>> > /* implies IPI-sync, kprobe really is enabled */
>> >
>> >
>> > unregister:
>> > __disarm_kprobe()
>> > unoptimize_kprobe()
>> > text_poke_bp(INT3 + tail);
>> > /* implies IPI-sync, so tail is guaranteed visible */
>> > arch_disarm_kprobe()
>> > text_poke(old);
>> > /* guarantees nothing, old will maybe become visible */
>> >
>> > synchronize_rcu()
>> >
>> > free-stuff
>> >
>> > Now the problem is that on register, the synchronize_rcu_tasks() does
>> > not suffice to guarantee all CPUs have already observed INT3
>> > (although in practise this is exceedingly unlikely not to have
>>
>> practise -> practice
>
> And fixed.
>
>> > happened) (similar to how MEMBARRIER_CMD_PRIVATE_EXPEDITED does not
>> > imply MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE).
>> >
>> > Worse, even if it did, we'd have to do 2 synchronize calls to provide
>> > the guarantee we're looking for, the first to ensure INT3 is visible,
>> > the second to guarantee nobody is then still using the instruction
>> > bytes after INT3.
>>
>> I'm not entirely convinced about this last statement though. AFAIU:
>>
>> - Swapping between some instruction and INT3 is OK,
>> - Swapping back between that INT3 and _original_ instruction is OK,
>> - Anything else needs to have explicit core serialization.
>
> So far, agreed.
>
>> So I understand the part about requiring the synchronize call to guarantee
>> that nobody is then still using the instruction bytes following INT3, but not
>> the rationale for the first synchronization. What operation would theoretically
>> follow this first synchronize call ?
>
> I'm not completely sure what you're asking, so I'm going to explain too
> much and hope I answered your question along the way. If not, please be
> a little more specific.
>
> First:
>
> So what can happen with optimized kprobes is that the original
> instruction is <5 bytes and we want to write a JMP.d32 (5 bytes).
> Something like:
>
> 83e: 48 89 e5 mov %rsp,%rbp
> 841: 48 85 c0 test %rax,%rax
>
> And we put a kprobe on the MOV. Then we poke INT3 at 0x83e and IPI-sync.
> At that point the kprobe is active:
>
> 83e: cc 89 e5 int3 ; 2 byte garbage
> 841: 48 85 c0 test %rax,%rax
>
> Now we want to optimize that thing. But imagine a task being preempted
> at 0x841. If we poke in the JMP.d32 the above turns into gibberish
>
> 83e: e9 12 34 56 78 jmp +0x12345678
>
> Then our task resumes, loads the instruction at 0x841, which then reads:
>
> 841: 56 78 c0
>
> And goes *bang*.
Thanks for the reminder. I somehow forgot that optimized kprobes covered
instructions smaller than 5 bytes.
>
> So what we do, after enabling the regular kprobe, is call
> synchronize_rcu_tasks() to wait for each task to have passed through
> schedule(). That guarantees no task is preempted inside the kprobe
> shadow (when it triggers it ensures it resumes execution at an
> instruction boundary further than 5 bytes away).
Indeed, given that synchronize_rcu_tasks() waits for voluntary context
switches (or user-space execution), it guarantees that no task was preempted
within the kprobe shadow.
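
So if I read the optimizer correctly, the ordering that makes this safe
is roughly the following (a sketch of my understanding, not the exact
kernel code):

    arch_arm_kprobe(p);       /* text_poke(INT3) + IPI-sync (with this patch) */
    ...
    synchronize_rcu_tasks();  /* no task left preempted inside the 5-byte shadow */
    text_poke_bp(JMP32);      /* implies IPI-sync, the JMP.d32 is now live */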
Considering that synchronize_rcu_tasks() is meant only for code rewriting,
I wonder if it would make sense to include the core-serializing guarantees
within this RCU API?
>
>
> Second:
>
> The argument I was making was that if we didn't IPI-sync, and if
> synchronize_rcu_tasks() would imply the IPI-sync, we would still need
> two invocations of it to achieve the desired result. Because only after
> the implied IPI-sync must we guarantee no task is still inside the
> kprobe 'shadow'.
>
>
> Did that answer your question?
Yes.
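
To restate your argument as a sketch: if the IPI-sync were implied by
synchronize_rcu_tasks() rather than by the poke itself, we would need:

    text_poke(INT3);          /* visibility not guaranteed by itself */
    synchronize_rcu_tasks();  /* implied IPI-sync: INT3 observed everywhere */
    synchronize_rcu_tasks();  /* only now: no task left inside the shadow */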
Which makes me wonder if we should include the IPI-sync guarantees within
synchronize_rcu_tasks(), _and_ issue a standalone IPI-sync when a full
synchronize_rcu_tasks() is not required; see the sketch below. But this is
a minor nit; I'm fine with the approach you propose.
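
For illustration, what I have in mind is something like this (both the
helper name and its implementation are hypothetical, this is not an
existing API):

    /* Hypothetical core-serializing variant of synchronize_rcu_tasks(). */
    static void sync_core_ipi(void *info)
    {
        sync_core();    /* execute a serializing instruction */
    }

    static void synchronize_rcu_tasks_sync_core(void)
    {
        synchronize_rcu_tasks();
        /* Force every CPU to re-fetch the modified instruction stream. */
        on_each_cpu(sync_core_ipi, NULL, 1);
    }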
>
>> I understand that this patch modifies arch_{arm,disarm}_kprobe to add
>> core serialization after inserting/removing the INT3 even in the case
>> where no optimized kprobes are used, which is heavier than what is
>> strictly required without optimized kprobes.
>
> I disagree, it is required even for normal kprobes, namely:
>
> Without this additional sync it is uncertain when the kprobe would've
> taken effect (if ever).
>
> Suppose a CPU is stuck in a tight loop around the probed instruction,
> then it would never have to re-fetch the changed text and thus never
> trigger.
If the instruction cache state is not presumed to be "eventually
consistent", then this explicit core serialization is indeed required.
Please consider my comments above as food for thought; I'm fine with
your proposed patch:
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Thanks!
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com