Message-ID: <20191108150530.GA7721@leoy-ThinkPad-X240s>
Date: Fri, 8 Nov 2019 23:05:30 +0800
From: Leo Yan <leo.yan@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>,
Mike Leach <mike.leach@...aro.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 1/6] perf/x86: Add perf text poke event
On Mon, Nov 04, 2019 at 10:23:46AM +0800, Leo Yan wrote:
[...]
> > > But jump label doesn't use stop_machine(), so the perf event
> > > PERF_TEXT_POKE_UPDATE will introduce a race condition as below (let's
> > > look at the example of the transition from nop to branch):
> > >
> > >   CPU0                                          CPU1
> > >   NOP instruction
> > >   `-> static_key_enable()
> > >        `-> aarch64_insn_patch_text_nosync()
> > >             `-> perf event PERF_TEXT_POKE_UPDATE
> > >                                                  -> Execute nop
> > >                                                     instruction
> > >             `-> aarch64_insn_write()
> > >             `-> __flush_icache_range()
> > >
> > > Since the x86 platform has INT3 as an intermediate state, it can
> > > avoid the race condition between CPU0 (which performs the transition)
> > > and the other CPUs (which may execute the nop/branch).
> >
> > Ah, you found the _nosync thing in jump_label, here's the one in ftrace:
> >
> > arch/arm64/kernel/ftrace.c: if (aarch64_insn_patch_text_nosync((void *)pc, new))
> >
> > And yes, this is racy.
> >
> > > > The thing is, as I argued, the instruction state between PRE and POST is
> > > > ambiguous. This makes it impossible to decode the branch decision
> > > > stream.
> > > >
> > > > Suppose CPU0 emits the PRE event at T1 and the POST event at T5, but we
> > > > have CPU1 covering the instruction at T3.
> > > >
> > > > How do you decide where CPU1 goes and what the next conditional branch
> > > > is?
> > >
> > > Sorry, my earlier analysis wasn't well thought out.
> > >
> > > I agree that T3 is an uncertain state in the flow below:
> > >
> > >   CPU0                                        CPU1
> > >   perf event PERF_TEXT_POKE_UPDATE_PRE  -> T1
> > >
> > >                                               Int3 / NOP     -> T3
> > >
> > >                                               Int3 / branch  -> T3'
> > >
> > >   perf event PERF_TEXT_POKE_UPDATE_POST -> T5
> > >
> > > Unless the trace has extra info and can use the old/new instruction
> > > combination for analysis, the PRE/POST pair events aren't helpful for
> > > resolving this issue (if the trace decoder can do this, then the
> > > change in the kernel will be much simpler).
> > >
> > > Below are two potential options we can use on the Arm64 platform:
> > >
> > > - Change to use stop_machine() for jump label; this might introduce a
> > >   performance issue if jump labels are altered frequently.
> > >
> > >   To mitigate the impact, we can use stop_machine() only when we
> > >   detect that the perf events are enabled, and otherwise fall back to
> > >   the old code path.
> > >
> > > - We can use a breakpoint to emulate a flow similar to x86's int3;
> > >   thus we can eliminate the race condition between one CPU altering an
> > >   instruction and other CPUs running into the altered instruction.
> > >
> > > @Will, @Mark, could you help review this? I'd appreciate any comments
> > > and suggestions. Please let me know if you want to consolidate related
> > > work on your side (or if you know of any ongoing discussion or of
> > > anyone already working on this).
> >
> > Given people are building larger Aargh64 machines (I've heard about 100+
> > CPUs already), I'm thinking the 3rd option is the most performant.
> >
> > But yes, as you mentioned earlier, we can make this conditional on the
> > TEXT_POKE_UPDATE event being in use.
> >
> > I'm thinking something along the lines of:
> >
> > static uintptr_t nosync_addr;
> > static u32 nosync_insn;
> >
> > int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
> > {
> >         const u32 brk = ...; // some_breakpoint_insn
> >         uintptr_t tp = (uintptr_t)addr;
> >         int ret;
> >
> >         lockdep_assert_held(&text_mutex);
> >
> >         /* A64 instructions must be word aligned */
> >         if (tp & 0x3)
> >                 return -EINVAL;
> >
> >         if (perf_text_poke_update_enabled()) {
> >
> >                 nosync_insn = insn;
> >                 smp_store_release(&nosync_addr, tp);
> >
> >                 ret = aarch64_insn_write(addr, brk);
> >                 if (ret == 0)
> >                         __flush_icache_range(tp, tp + AARCH64_INSN_SIZE);
> >
> >                 perf_event_text_poke(....);
> >         }
> >
> >         ret = aarch64_insn_write(addr, insn);
> >         if (ret == 0)
> >                 __flush_icache_range(tp, tp + AARCH64_INSN_SIZE);
> >
> >         return ret;
> > }
> >
> > And have the 'break' handler do:
> >
> > static void aarch64_insn_break_handler(struct pt_regs *regs)
> > {
> >         unsigned long addr = smp_load_acquire(&nosync_addr);
> >         u32 insn = nosync_insn;
> >
> >         if (regs->pc != addr)
> >                 return;
> >
> >         // emulate @insn
> > }
> >
> > I understood from Will that the whole nosync scheme only works for a
> > limited set of instructions, but of course you only have to implement
> > emulation for the instructions actually used.
> >
> > (which is what we do on x86)
> >
> > Does this sound workable?

Let me give a status update on the prototype (the code has been uploaded
to a git server [1]) and the issues I found for the text poke perf event
on arm64. These issues are mainly related to the AArch64 architecture.
- The first issue is the nosync instruction emulation. On arm64, some
  instructions need to be single stepped and some can be emulated; I
  learned this mainly from reading the kprobes implementation on arm64.
  So the main idea of the prototype is to handle nosync instructions
  with almost the same method as kprobes (see the first sketch after
  this list).

  As a result, there is duplicated code between the kprobes and text
  poke handling. If this is the right way to move forward, we need to
  refactor the common code so that kprobes and text poke can share it.
- The second issue is a race condition between the CPU that alters
  instructions and the other CPUs that hit the altered instructions
  (and take the breakpoint).

  Peter's suggestion uses the global variables 'nosync_insn' and
  'nosync_addr' to record the altered instruction. But from testing I
  found that a single static key usually changes multiple addresses at
  once.

  So this can cause the following problem: CPU(a) loops to alter the
  instructions at different addresses (sometimes with a different
  opcode for each address); meanwhile, CPU(b) hits an altered
  instruction and handles the breakpoint exception. If CPU(a) has
  already moved on to the next address, CPU(b) may read stale values
  from 'nosync_insn' and 'nosync_addr'.

  Simply put, we cannot support only a single nosync instruction; we
  need to support multiple nosync instructions within the loop (see the
  second sketch after this list).
- The third issue is possible nesting. E.g. for the injected break
  instruction itself, we have no way to emit a perf event; and,
  connecting this with the first issue, if the instruction is single
  stepped from a slot (allocated with get_insn_slot()), we have no way
  to let perf user space know about the instructions executed in the
  slots.

  I am not sure whether I am overcomplicating this, but it seems to me
  that we inject extra instructions for the text poke event, and if
  these injected instructions are executed, the user space decoder has
  no way to receive the related info.
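
To make the first issue concrete, below is a minimal sketch of the
emulate-versus-single-step decision. It assumes the kprobes decoder
arm_probe_decode_insn() and struct arch_probe_insn (currently private
to arch/arm64/kernel/probes/) could be refactored out and shared; the
function name nosync_prepare_insn() and its return convention are made
up for illustration:

static int nosync_prepare_insn(u32 insn, struct arch_probe_insn *api)
{
        switch (arm_probe_decode_insn(insn, api)) {
        case INSN_REJECTED:
                /* Cannot be handled; fall back to stop_machine() */
                return -EINVAL;
        case INSN_GOOD_NO_SLOT:
                /* Emulated directly in the breakpoint handler, via the
                 * simulate handler the decoder filled in */
                return 0;
        case INSN_GOOD:
                /* Single stepped out of line, in a slot allocated with
                 * get_insn_slot(), as kprobes does */
                return 1;
        }

        return -EINVAL;
}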
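For the second issue, here is a purely illustrative sketch of replacing
the single nosync_addr/nosync_insn pair with a small table, so that a
loop patching several sites cannot overwrite a record while another CPU
is still inside the breakpoint handler. NOSYNC_MAX_SLOTS and the linear
scan are assumptions of this sketch, not a worked-out design:

#define NOSYNC_MAX_SLOTS        16

struct nosync_slot {
        unsigned long   addr;   /* patched address, 0 when slot is free */
        u32             insn;   /* new instruction to emulate */
};

static struct nosync_slot nosync_slots[NOSYNC_MAX_SLOTS];

/* Called under text_mutex, before the breakpoint is installed. */
static struct nosync_slot *nosync_record(unsigned long addr, u32 insn)
{
        int i;

        for (i = 0; i < NOSYNC_MAX_SLOTS; i++) {
                if (nosync_slots[i].addr)
                        continue;
                nosync_slots[i].insn = insn;
                /* Publish insn before addr; pairs with the
                 * smp_load_acquire() in the breakpoint handler. */
                smp_store_release(&nosync_slots[i].addr, addr);
                return &nosync_slots[i];
        }

        return NULL;    /* table full: fall back to stop_machine() */
}

/* Breakpoint handler side: find the slot covering regs->pc, if any. */
static bool nosync_find_and_emulate(struct pt_regs *regs)
{
        int i;

        for (i = 0; i < NOSYNC_MAX_SLOTS; i++) {
                unsigned long addr = smp_load_acquire(&nosync_slots[i].addr);

                if (addr != instruction_pointer(regs))
                        continue;
                /* emulate nosync_slots[i].insn here, as in the
                 * aarch64_insn_break_handler() sketch above */
                return true;
        }

        return false;
}

A real implementation would also need to delay clearing a slot's addr
until no CPU can still be handling a breakpoint for that address (e.g.
by waiting for a synchronize_rcu() style grace period before reuse).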
Just to clarify, I am fine with merging this patch set, but we might
want to think more about the best approach on Arm64. Any public
comments and suggestions are welcome; I will sync internally on how to
follow up on this functionality.
Thanks,
Leo Yan
[1] https://git.linaro.org/people/leo.yan/linux-coresight.git/log/?h=arm64_perf_text_pork_prototype