Message-ID: <91d19848-f5f4-4e0e-b3c7-77ac2befae3e@huawei.com>
Date: Mon, 21 Oct 2024 18:45:20 +0800
From: "Liao, Chang" <liaochang1@...wei.com>
To: Catalin Marinas <catalin.marinas@....com>, <will@...nel.org>,
	<ast@...nel.org>, <puranjay@...nel.org>, <andrii@...nel.org>,
	<mark.rutland@....com>
CC: <linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
	<linux-trace-kernel@...r.kernel.org>, <bpf@...r.kernel.org>
Subject: Re: [PATCH v2] arm64: insn: Simulate nop instruction for better
 uprobe performance



On 2024/10/16 2:58, Catalin Marinas wrote:
> On Mon, 09 Sep 2024 07:11:14 +0000, Liao Chang wrote:
>> v1->v2:
>> 1. Remove the simulation of STP and the related bits.
>> 2. Use arm64_skip_faulting_instruction for the single-stepping or
>>    FEAT_BTI scenarios.
>>
>> As Andrii pointed out, the uprobe/uretprobe selftest bench ran into a
>> counterintuitive result: the nop and push variants are much slower
>> than the ret variant [0]. The root cause lies in
>> arch_probe_analyse_insn(), which excludes 'nop' and 'stp' from the
>> emulatable instructions list. This forces the kernel to return to
>> userspace and execute them out-of-line, then trap back into the
>> kernel to run the uprobe callback functions. This round trip leads to
>> a significant performance overhead compared to the 'ret' variant,
>> which is already emulated.
>>
>> [...]
> 
> Applied to arm64 (for-next/probes), thanks! I fixed it up according to
> Mark's comments.
> 
> [1/1] arm64: insn: Simulate nop instruction for better uprobe performance
>       https://git.kernel.org/arm64/c/ac4ad5c09b34
> 

Mark, Catalin and Andrii,

I am just back from a long vacation; thank you for the review and for
your involvement with this patch.
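
For completeness, simulating nop is the straightforward part: a nop has
no architectural side effects, so emulation reduces to skipping the
probed instruction. A minimal sketch in the style of the existing
simulate_* handlers (the applied commit linked above is the
authoritative version):

/*
 * Sketch only: a nop changes no architectural state, so emulating it
 * is just stepping the PC past the 4-byte instruction.
 */
static void simulate_nop(u32 opcode, long addr, struct pt_regs *regs)
{
	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
}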

I've sent a patch [1] that simulates STP at function entry. It maps the
user stack pages into the kernel address space, allowing the kernel to
use STP directly to push fp/lr onto the user stack. Unfortunately, the
profiling results below reveal that this approach increases uprobe-push
throughput by only about 30% (from 0.868M/s/cpu to 1.128M/s/cpu) and
uretprobe-push by 15.9% (from 0.616M/s/cpu to 0.714M/s/cpu), and as
Andrii pointed out, the approach is a bit complex and overkill for STP
simulation. So I look forward to more input on this patch: is it
possible to reach a better result, or should I pause this work for now
and wait for arm64 to add an instruction for storing a pair of
registers to unprivileged memory from a privileged exception level?
Thanks.
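
For reference, the core idea of [1] amounts to roughly the following
simplified sketch. The emulate_stp_push() name is mine, not from the
patch, and the real patch has to handle faults and other corner cases;
the sketch assumes sp is 16-byte aligned, so the stored pair cannot
cross a page boundary:

/*
 * Simplified sketch of the idea in [1] (not the actual patch): pin the
 * user stack page, map it into the kernel, and store fp/lr there to
 * emulate "stp x29, x30, [sp, #-16]!" without an out-of-line step.
 */
static int emulate_stp_push(struct pt_regs *regs)
{
	unsigned long new_sp = regs->sp - 2 * sizeof(u64);
	struct page *page;
	u64 *slot;

	if (pin_user_pages_fast(new_sp, 1, FOLL_WRITE, &page) != 1)
		return -EFAULT;

	slot = kmap_local_page(page) + offset_in_page(new_sp);
	slot[0] = regs->regs[29];	/* fp */
	slot[1] = regs->regs[30];	/* lr */
	kunmap_local(slot);
	unpin_user_pages_dirty_lock(&page, 1, true);

	regs->sp = new_sp;
	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
	return 0;
}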

xol-stp
-------
uprobe-push     ( 1 cpus):    0.868 ± 0.001M/s  (  0.868M/s/cpu)
uretprobe-push  ( 1 cpus):    0.616 ± 0.001M/s  (  0.616M/s/cpu)

simulated-stp
-------------
uprobe-push     ( 1 cpus):    1.128 ± 0.002M/s  (  1.128M/s/cpu)
uretprobe-push  ( 1 cpus):    0.714 ± 0.001M/s  (  0.714M/s/cpu)
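
FWIW, these numbers come from the uprobe/uretprobe benchmarks in the
BPF selftests; if the flags haven't changed, something along the lines
of

  ./bench -w2 -d5 trig-uprobe-push
  ./bench -w2 -d5 trig-uretprobe-push

run from tools/testing/selftests/bpf should reproduce them.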

[1] https://lore.kernel.org/all/20240910060407.1427716-1-liaochang1@huawei.com/

-- 
BR
Liao, Chang

