Open Source and information security mailing list archives
 
Date:	Thu, 3 Dec 2015 11:48:24 +0000
From:	Will Deacon <will.deacon@....com>
To:	libin <huawei.libin@...wei.com>
Cc:	rostedt@...dmis.org, mingo@...hat.com, catalin.marinas@....com,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	guohanjun@...wei.com, dingtianhong@...wei.com
Subject: Re: [PATCH] arm64: ftrace: stop using kstop_machine to
 enable/disable tracing

On Thu, Dec 03, 2015 at 05:39:56PM +0800, libin wrote:
> on 2015/12/2 21:16, Will Deacon wrote:
> > On Wed, Dec 02, 2015 at 12:36:54PM +0000, Will Deacon wrote:
> >> On Sat, Nov 28, 2015 at 03:50:09PM +0800, Li Bin wrote:
> >>> On arm64, kstop_machine, which is hugely disruptive to a running
> >>> system, is not needed to convert nops to ftrace calls or back,
> >>> because the modified code is a single 32-bit instruction, which
> >>> cannot cross a cache-line (or page) boundary, and the str
> >>> instruction used is single-copy atomic.
> >> This commit message is misleading, since the single-copy atomicity
> >> guarantees don't apply to the instruction-side. Instead, the architecture
> >> calls out a handful of safe instructions in "Concurrent modification and
> >> execution of instructions".
> >>
> >> Now, those safe instructions *do* include NOP, B and BL, so that should
> >> be sufficient for ftrace provided that we don't patch condition codes
> >> (and I don't think we do).
> > Thinking about this some more, you also need to fix the validate=1 case
> > in ftrace_modify_code so that it can run outside of stop_machine. We
> > currently rely on that to deal with concurrent modifications (e.g.
> > module unloading).
> 
> I'm not sure it is really a problem; on x86, which uses the breakpoint
> method, add_break(), which also runs outside of stop_machine, has
> similar code.

Yeah, having now read through that, I also can't see any locking issues.
We should remove the comment suggesting otherwise.

> static int add_break(unsigned long ip, const char *old)
> {
>         unsigned char replaced[MCOUNT_INSN_SIZE];
>         unsigned char brk = BREAKPOINT_INSTRUCTION;
> 
>         if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
>                 return -EFAULT;
> 
>         /* Make sure it is what we expect it to be */
>         if (memcmp(replaced, old, MCOUNT_INSN_SIZE) != 0)
>                 return -EINVAL;
> 
>         return ftrace_write(ip, &brk, 1);
> }
> 
> Or I misunderstand what you mean?

Hmm, so this should all be fine if we exclusively use the probe_kernel_*
functions and handle the -EFAULT gracefully. Now, that leaves an
interesting scenario with the flush_icache_range call in
aarch64_insn_patch_text_nosync, since that's not run with
KERNEL_DS/pagefault_disable() and so we'll panic if the text disappears
underneath us.

So we probably need to add that code and call __flush_cache_user_range
instead.
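Roughly, that might look like the following untested pseudocode, assuming __flush_cache_user_range reports a fault through its return value:

```
/* Pseudocode sketch of the suggestion above (untested): wrap the cache
 * maintenance in KERNEL_DS + pagefault_disable() and use the
 * fault-tolerant user-range flush, so vanished module text yields
 * -EFAULT instead of a panic. */
static int aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
	unsigned long tp = (unsigned long)addr;
	mm_segment_t old_fs;
	int ret;

	ret = aarch64_insn_write(addr, insn);	/* uses probe_kernel_write() */
	if (ret)
		return ret;

	old_fs = get_fs();
	set_fs(KERNEL_DS);
	pagefault_disable();
	ret = __flush_cache_user_range(tp, tp + AARCH64_INSN_SIZE);
	pagefault_enable();
	set_fs(old_fs);

	return ret;
}
```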

What do you think?

Will