Date:   Thu, 3 Aug 2017 20:09:22 -0700
From:   Y Song <ys114321@...il.com>
To:     Alexei Starovoitov <ast@...com>
Cc:     Yonghong Song <yhs@...com>, peterz@...radead.org,
        rostedt@...dmis.org, Daniel Borkmann <daniel@...earbox.net>,
        netdev <netdev@...r.kernel.org>, kernel-team@...com
Subject: Re: [PATCH net-next v3 1/2] bpf: add support for sys_enter_* and
 sys_exit_* tracepoints

On Thu, Aug 3, 2017 at 7:08 PM, Alexei Starovoitov <ast@...com> wrote:
> On 8/3/17 6:29 AM, Yonghong Song wrote:
>>
>> @@ -578,8 +596,9 @@ static void perf_syscall_enter(void *ignore, struct
>> pt_regs *regs, long id)
>>         if (!sys_data)
>>                 return;
>>
>> +       prog = READ_ONCE(sys_data->enter_event->prog);
>>         head = this_cpu_ptr(sys_data->enter_event->perf_events);
>> -       if (hlist_empty(head))
>> +       if (!prog && hlist_empty(head))
>>                 return;
>>
>>         /* get the size after alignment with the u32 buffer size field */
>> @@ -594,6 +613,13 @@ static void perf_syscall_enter(void *ignore, struct
>> pt_regs *regs, long id)
>>         rec->nr = syscall_nr;
>>         syscall_get_arguments(current, regs, 0, sys_data->nb_args,
>>                                (unsigned long *)&rec->args);
>> +
>> +       if ((prog && !perf_call_bpf_enter(prog, regs, sys_data, rec)) ||
>> +           hlist_empty(head)) {
>> +               perf_swevent_put_recursion_context(rctx);
>> +               return;
>> +       }
>
>
> hmm. if I read the patch correctly that makes it different from
> kprobe/uprobe/tracepoints+bpf behavior. Why make it different and
> force user space to perf_event_open() on every cpu?
> In other cases it's the job of the bpf program to filter by cpu
> if necessary and that is well understood by bcc scripts.

The patch actually does allow the bpf program to track all cpus.
The test:
>> +       if (!prog && hlist_empty(head))
>>                 return;
ensures that if prog is non-NULL, the function does not return early
even if the perf event list on the current cpu is empty. Later on,
perf_call_bpf_enter() is called whenever prog is non-NULL, so the bpf
program executes regardless of which cpu the syscall happens on.

Maybe I missed something here?
