Message-ID: <CAK8P3a2gekEqUuvsC-_+ijhiqqff2bK-s2wQfkjn7z-HtNnMDQ@mail.gmail.com>
Date: Mon, 3 Dec 2018 22:51:52 +0100
From: Arnd Bergmann <arnd@...db.de>
To: Will Deacon <will.deacon@....com>
Cc: Anders Roxell <anders.roxell@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
Kees Cook <keescook@...omium.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 3/3] arm64: ftrace: add cond_resched() to func ftrace_make_(call|nop)
On Mon, Dec 3, 2018 at 8:22 PM Will Deacon <will.deacon@....com> wrote:
>
> Hi Anders,
>
> On Fri, Nov 30, 2018 at 04:09:56PM +0100, Anders Roxell wrote:
> > Both of those functions end up calling ftrace_modify_code(), which is
> > expensive because it changes the page tables and flush caches.
> > Microseconds add up because this is called in a loop for each dyn_ftrace
> > record, and this triggers the softlockup watchdog unless we let it sleep
> > occasionally.
> > Rework so that we call cond_resched() before going into the
> > ftrace_modify_code() function.
> >
> > Co-developed-by: Arnd Bergmann <arnd@...db.de>
> > Signed-off-by: Arnd Bergmann <arnd@...db.de>
> > Signed-off-by: Anders Roxell <anders.roxell@...aro.org>
> > ---
> > arch/arm64/kernel/ftrace.c | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
>
> It sounds like you're running into issues with the existing code, but I'd
> like to understand a bit more about exactly what you're seeing. Which part
> of the ftrace patching is proving to be expensive?
>
> The page table manipulation only happens once per module when using PLTs,
> and the cache maintenance is just a single line per patch site without an
> IPI.
>
> Is it the loop in ftrace_replace_code() that is causing the hassle?
Yes: with an allmodconfig kernel, the ftrace selftest calls ftrace_replace_code
to loop >40000 times through ftrace_make_call/ftrace_make_nop, and these
end up calling
static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
{
	void *waddr = addr;
	unsigned long flags = 0;
	int ret;

	raw_spin_lock_irqsave(&patch_lock, flags);
	waddr = patch_map(addr, FIX_TEXT_POKE0);

	ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE);

	patch_unmap(FIX_TEXT_POKE0);
	raw_spin_unlock_irqrestore(&patch_lock, flags);

	return ret;
}
int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
	u32 *tp = addr;
	int ret;

	/* A64 instructions must be word aligned */
	if ((uintptr_t)tp & 0x3)
		return -EINVAL;

	ret = aarch64_insn_write(tp, insn);
	if (ret == 0)
		__flush_icache_range((uintptr_t)tp,
				     (uintptr_t)tp + AARCH64_INSN_SIZE);

	return ret;
}
which seems to be where the main cost is. This is running inside of
qemu, with lots of debugging options (in particular kcov and ubsan)
enabled, which make each function call more expensive.
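
For reference, a rough sketch of where the cond_resched() from the patch
description could sit in the arm64 ftrace_make_call() path (the simplified
function body and the unconditional cond_resched() are only for
illustration; the posted patch may guard against atomic context
differently):

int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
	unsigned long pc = rec->ip;
	u32 old, new;

	/*
	 * Each record goes through patch_map()/probe_kernel_write() plus an
	 * icache flush, so give the scheduler a chance to run between
	 * records to avoid tripping the softlockup watchdog.
	 */
	cond_resched();

	old = aarch64_insn_gen_nop();
	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);

	return ftrace_modify_code(pc, old, new, true);
}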
Arnd