Date:   Fri, 8 Nov 2019 14:09:56 +0000
From:   Alexei Starovoitov <ast@...com>
To:     Björn Töpel <bjorn.topel@...il.com>,
        "Alexei Starovoitov" <ast@...nel.org>
CC:     David Miller <davem@...emloft.net>,
        Daniel Borkmann <daniel@...earbox.net>,
        "x86@...nel.org" <x86@...nel.org>, Netdev <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, Kernel Team <Kernel-team@...com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper

On 11/8/19 12:23 AM, Björn Töpel wrote:
> On Fri, 8 Nov 2019 at 07:41, Alexei Starovoitov <ast@...nel.org> wrote:
>>
>> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
>> nops/calls in kernel text into calls into BPF trampoline and to patch
>> calls/nops inside BPF programs too.
>>
>> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
>> ---
>>   arch/x86/net/bpf_jit_comp.c | 51 +++++++++++++++++++++++++++++++++++++
>>   include/linux/bpf.h         |  8 ++++++
>>   kernel/bpf/core.c           |  6 +++++
>>   3 files changed, 65 insertions(+)
>>
>> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
>> index 0399b1f83c23..bb8467fd6715 100644
>> --- a/arch/x86/net/bpf_jit_comp.c
>> +++ b/arch/x86/net/bpf_jit_comp.c
>> @@ -9,9 +9,11 @@
>>   #include <linux/filter.h>
>>   #include <linux/if_vlan.h>
>>   #include <linux/bpf.h>
>> +#include <linux/memory.h>
>>   #include <asm/extable.h>
>>   #include <asm/set_memory.h>
>>   #include <asm/nospec-branch.h>
>> +#include <asm/text-patching.h>
>>
>>   static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
>>   {
>> @@ -487,6 +489,55 @@ static int emit_call(u8 **pprog, void *func, void *ip)
>>          return 0;
>>   }
>>
>> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
>> +                      void *old_addr, void *new_addr)
>> +{
>> +       u8 old_insn[X86_CALL_SIZE] = {};
>> +       u8 new_insn[X86_CALL_SIZE] = {};
>> +       u8 *prog;
>> +       int ret;
>> +
>> +       if (!is_kernel_text((long)ip))
>> +               /* BPF trampoline in modules is not supported */
>> +               return -EINVAL;
>> +
>> +       if (old_addr) {
>> +               prog = old_insn;
>> +               ret = emit_call(&prog, old_addr, (void *)ip);
>> +               if (ret)
>> +                       return ret;
>> +       }
>> +       if (new_addr) {
>> +               prog = new_insn;
>> +               ret = emit_call(&prog, new_addr, (void *)ip);
>> +               if (ret)
>> +                       return ret;
>> +       }
>> +       ret = -EBUSY;
>> +       mutex_lock(&text_mutex);
>> +       switch (t) {
>> +       case BPF_MOD_NOP_TO_CALL:
>> +               if (memcmp(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE))
>> +                       goto out;
>> +               text_poke(ip, new_insn, X86_CALL_SIZE);
> 
> I'm probably missing something, but why isn't text_poke_bp() needed here?

I should have documented the intent better.
text_poke_bp() is being changed by Peter to emulate instructions
properly in his ftrace->text_poke conversion set.
So I cannot use it just yet.
To your point that text_poke() is technically incorrect here: yep,
well aware. This is temporary. As I said in the cover letter, this
needs to change to register_ftrace_direct() for kernel text poking to
play nice with ftrace. Thinking about it more... I guess I can use
text_poke_bp(). I just need to set up the handler properly. I may need
to do that for bpf prog poking anyway. I wanted to avoid extra churn
that will just be removed during the merge window when the trees converge.
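
For illustration only (not part of the patch): assuming the pre-conversion
text_poke_bp(addr, opcode, len, handler) signature, the NOP -> CALL case
might look roughly like the sketch below. The handler argument can only
redirect execution past the instruction, i.e. emulate the old nop; emulating
the freshly written call (which has to push a return address) is exactly the
instruction-emulation support that the ftrace->text_poke series adds.

	/*
	 * Rough sketch only: NOP -> CALL via text_poke_bp() with the
	 * pre-conversion signature.  The last argument is where the int3
	 * handler sends CPUs that hit the breakpoint mid-patch; pointing it
	 * past the instruction emulates the old nop.
	 */
	case BPF_MOD_NOP_TO_CALL:
		if (memcmp(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE))
			goto out;
		text_poke_bp(ip, new_insn, X86_CALL_SIZE,
			     ip + X86_CALL_SIZE);
		break;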

Since we're on this subject:
Peter,
why don't you do an 8-byte atomic rewrite when the start address of the
insn is properly aligned? The trap dance would be unnecessary.
That would make everything so much simpler.
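
To illustrate the idea (a sketch only, with a hypothetical helper name; it
ignores that kernel text is mapped read-only and, more importantly, whether a
plain aligned store is architecturally sufficient for cross-modifying code on
x86):

/*
 * Sketch of the "8 byte atomic rewrite" idea; poke_call_atomic8() is a
 * hypothetical helper, not existing kernel code.  If the 5-byte call/nop
 * starts on an 8-byte-aligned address, the instruction plus three trailing
 * bytes fit in one naturally aligned u64, so a single aligned store would
 * make other CPUs see either the old or the new instruction.  Ignores the
 * read-only text mapping (text_poke() writes through a temporary alias).
 */
static void poke_call_atomic8(void *ip, const u8 insn[X86_CALL_SIZE])
{
	u64 word;

	BUILD_BUG_ON(X86_CALL_SIZE > sizeof(u64));

	if (!IS_ALIGNED((unsigned long)ip, sizeof(u64)))
		return;				/* fall back to the int3 dance */

	memcpy(&word, ip, sizeof(word));	/* preserve the trailing bytes */
	memcpy(&word, insn, X86_CALL_SIZE);	/* splice in the new instruction */
	WRITE_ONCE(*(u64 *)ip, word);		/* one naturally aligned store */
}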
