Date:   Thu, 26 May 2022 22:48:05 +0800
From:   Xu Kuohai <xukuohai@...wei.com>
To:     Mark Rutland <mark.rutland@....com>
CC:     <bpf@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
        <linux-kselftest@...r.kernel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Zi Shen Lim <zlim.lnx@...il.com>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        "David S . Miller" <davem@...emloft.net>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
        <hpa@...or.com>, Shuah Khan <shuah@...nel.org>,
        Jakub Kicinski <kuba@...nel.org>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Pasha Tatashin <pasha.tatashin@...een.com>,
        Ard Biesheuvel <ardb@...nel.org>,
        Daniel Kiss <daniel.kiss@....com>,
        Steven Price <steven.price@....com>,
        Sudeep Holla <sudeep.holla@....com>,
        Marc Zyngier <maz@...nel.org>,
        Peter Collingbourne <pcc@...gle.com>,
        Mark Brown <broonie@...nel.org>,
        Delyan Kratunov <delyank@...com>,
        Kumar Kartikeya Dwivedi <memxor@...il.com>,
        Wang ShaoBo <bobo.shaobowang@...wei.com>,
        <cj.chengjian@...wei.com>, <huawei.libin@...wei.com>,
        <xiexiuqi@...wei.com>, <liwei391@...wei.com>
Subject: Re: [PATCH bpf-next v5 1/6] arm64: ftrace: Add ftrace direct call
 support

On 5/26/2022 6:06 PM, Mark Rutland wrote:
> On Thu, May 26, 2022 at 05:45:03PM +0800, Xu Kuohai wrote:
>> On 5/25/2022 9:38 PM, Mark Rutland wrote:
>>> On Wed, May 18, 2022 at 09:16:33AM -0400, Xu Kuohai wrote:
>>>> Add ftrace direct call support for arm64.
>>>>
>>>> 1. When there is only a custom trampoline, replace the fentry nop with a
>>>>    jump instruction that jumps directly to the custom trampoline.
>>>>
>>>> 2. When the ftrace trampoline and a custom trampoline coexist, jump from
>>>>    fentry to the ftrace trampoline first, then jump to the custom
>>>>    trampoline when the ftrace trampoline exits. The currently unused
>>>>    pt_regs->orig_x0 is used as an intermediary for jumping from the
>>>>    ftrace trampoline to the custom trampoline.
>>>
>>> For those of us not all that familiar with BPF, can you explain *why* you want
>>> this? The above explains what the patch implements, but not why that's useful.
>>>
>>> e.g. is this just to avoid the overhead of the ops list processing in the
>>> regular ftrace code, or is the custom trampoline there to allow you to do
>>> something special?
>>
>> IIUC, ftrace direct call was designed to completely *remove* the
>> unnecessary overhead of saving regs [1][2].
> 
> Ok. My plan is to get rid of most of the register saving generally, so I think
> that aspect can be solved without direct calls.
Looking forward to your new solution.

> 
>> [1]
>> https://lore.kernel.org/all/20191022175052.frjzlnjjfwwfov64@ast-mbp.dhcp.thefacebook.com/
>> [2] https://lore.kernel.org/all/20191108212834.594904349@goodmis.org/
>>
>> This patch itself is just a variant of [3].
>>
>> [3] https://lore.kernel.org/all/20191108213450.891579507@goodmis.org/
>>
>>>
>>> There is another patch series on the list from some of your colleagues which
>>> uses dynamic trampolines to try to avoid that ops list overhead, and it's not
>>> clear to me whether these are trying to solve the largely same problem or
>>> something different. That other thread is at:
>>>
>>>   https://lore.kernel.org/linux-arm-kernel/20220316100132.244849-1-bobo.shaobowang@huawei.com/
>>>
>>> ... and I've added the relevant parties to CC here, since there doesn't seem to
>>> be any overlap in the CC lists of the two threads.
>>
>> We're not working to solve the same problem. The trampoline introduced
>> in this series helps us monitor kernel functions or other bpf progs
>> with bpf, and also lets us use a bpf prog like a normal kernel
>> function pointer.
> 
> Ok, but why is it necessary to have a special trampoline?
> 
> Is that *just* to avoid overhead, or do you need to do something special that
> the regular trampoline won't do?
> 

Sorry for not explaining the problem. A bpf prog accepts only a single
argument 'ctx', passed in r1, so to allow kernel code to call a bpf prog
transparently, we need a trampoline that converts the native calling
convention into the BPF calling convention [1].

[1] https://lore.kernel.org/bpf/20191114185720.1641606-5-ast@kernel.org/

For example,

SEC("struct_ops/dctcp_state")
void BPF_PROG(dctcp_state, struct sock *sk, __u8 new_state)
{
    // do something
}

The above bpf prog will be compiled to something like this:

dctcp_state:
    r2 = *(u64 *)(r1 + 8)  // new_state
    r1 = *(u64 *)(r1 + 0)  // sk
    ...

It accepts only one argument 'ctx' in r1, and loads the actual arguments
'sk' and 'new_state' from r1 + 0 and r1 + 8, respectively. So before
calling this prog, we need to construct 'ctx' and store its address in r1.
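
To make this concrete, here is a rough sketch (illustrative only, not the
actual JITed code from this series) of what such a trampoline has to do on
arm64, where the BPF JIT maps r1 to x0; '<JITed dctcp_state>' stands for
the prog's JITed entry point:

    trampoline:
        sub  sp, sp, #16          // reserve the 'ctx' array on the stack
        stp  x0, x1, [sp]         // ctx[0] = sk, ctx[1] = new_state
        mov  x0, sp               // BPF r1 == arm64 x0: pass &ctx
        bl   <JITed dctcp_state>  // call the prog with the BPF convention
        add  sp, sp, #16
        ret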

>>>
>>> In that other thread I've suggested a general approach we could follow at:
>>>   
>>>   https://lore.kernel.org/linux-arm-kernel/YmGF%2FOpIhAF8YeVq@lakrids/
>>>
>>
>> Is it possible for a kernel function to take a long jump to the common
>> trampoline when we get a huge kernel image?
> 
> It is possible, but only where the kernel Image itself is massive and the
> .text section exceeds 128MiB, at which point other things break anyway.
> Practically speaking, this doesn't happen for production kernels, or
> reasonable test kernels.
> 

So even for normal kernel functions, we need some way to construct and
tear down long jumps atomically and safely.
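
To illustrate why: a direct branch on arm64 reaches only +/-128MiB, so a
long jump needs a multi-instruction sequence, e.g. via a literal pool
(sketch only, not the code in this series):

    // short jump: one patchable instruction, +/-128MiB range
        b    target
    // long jump: several instructions plus data -- swapping this
    // sequence in or out is not a single atomic instruction write:
        ldr  x16, 1f              // load the 64-bit target address
        br   x16                  // branch to it
    1:  .quad target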

> I've been meaning to add some logic to detect this at boot time (or at
> build time) and disable ftrace, since live patching would also be broken
> in that case.
>>> As noted in that thread, I have a few concerns which equally apply here:
>>>
>>> * Due to the limited range of BL instructions, it's not always possible to
>>>   patch an ftrace call-site to branch to an arbitrary trampoline. The way this
>>>   works for ftrace today relies upon knowing the set of trampolines at
>>>   compile-time, and allocating module PLTs for those, and that approach cannot
>>>   work reliably for dynamically allocated trampolines.
>>
>> Currently patch 5 returns -ENOTSUPP when a long jump is detected, so no
>> bpf trampoline is constructed for an out-of-range patch site:
>>
>> if (is_long_jump(orig_call, image))
>> 	return -ENOTSUPP;
> 
> Sure, my point is that in practice that means that (from the user's PoV) this
> may randomly fail to work, and I'd like something that we can ensure works
> consistently.
> 

OK, should I suspend this work until you finish refactoring ftrace?

>>>   I'd strongly prefer to avoid custom trampolines unless they're strictly
>>>   necessary for functional reasons, so that we can have this work reliably and
>>>   consistently.
>>
>> bpf trampoline is needed by bpf itself, not to replace ftrace trampolines.
> 
> As above, can you please let me know *why* specifically it is needed? Why can't
> we invoke the BPF code through the usual ops mechanism?
> 
> Is that to avoid overhead, or are there other functional reasons you need a
> special trampoline?
> 
>>> * If this is mostly about avoiding the ops list processing overhead, I
>>>   believe
>>>   we can implement some custom ops support more generally in ftrace which would
>>>   still use a common trampoline but could directly call into those custom ops.
>>>   I would strongly prefer this over custom trampolines.
>>>
>>> * I'm looking to minimize the set of regs ftrace saves, and never save a full
>>>   pt_regs, since today we (incompletely) fill that with bogus values and cannot
>>>   acquire some state reliably (e.g. PSTATE). I'd like to avoid usage of pt_regs
>>>   unless necessary, and I don't want to add additional reliance upon that
>>>   structure.
>>
>> Even if such a common trampoline is used, a bpf trampoline is still
>> necessary, since we need to construct custom instructions to implement
>> bpf functionality, for example, to implement a kernel function pointer
>> with a bpf prog.
> 
> Sorry, but I'm struggling to understand this. What specifically do you need to
> do that means this can't use the same calling convention as the regular ops
> function pointers?
> 
> Thanks,
> Mark.

