Message-ID: <931b683e-ccb5-f258-f5fb-549b2daf47b3@fb.com>
Date:   Fri, 8 May 2020 21:18:35 -0700
From:   Yonghong Song <yhs@...com>
To:     Andrii Nakryiko <andrii.nakryiko@...il.com>
CC:     Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
        Martin KaFai Lau <kafai@...com>,
        Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <kernel-team@...com>
Subject: Re: [PATCH bpf-next v3 13/21] bpf: add bpf_seq_printf and
 bpf_seq_write helpers



On 5/8/20 12:44 PM, Andrii Nakryiko wrote:
> On Wed, May 6, 2020 at 10:40 PM Yonghong Song <yhs@...com> wrote:
>>
>> Two helpers, bpf_seq_printf and bpf_seq_write, are added for
>> writing data to the seq_file buffer.
>>
>> bpf_seq_printf supports the common format string flag/width/type
>> fields, so that at a minimum I can get identical results for the
>> netlink and ipv6_route targets.
>>
>> For bpf_seq_printf and bpf_seq_write, the return value -EOVERFLOW
>> specifically indicates a write failure due to overflow, which means
>> the object will be repeated in the next bpf invocation if the
>> object collection stays the same. Note that if the object
>> collection changes, then depending on how collection traversal is
>> done, the object may not be visited even if it is still in the
>> collection.
>>
>> bpf_seq_printf may return -EBUSY, meaning that the internal percpu
>> buffer used to copy strings and other pointees is not available.
>> The bpf program can return 1 to indicate it wants the same object
>> to be repeated. Right now, this should not happen on non-RT kernels,
>> since migrate_disable(), which guards the bpf prog call, calls
>> preempt_disable().
>>
>> Signed-off-by: Yonghong Song <yhs@...com>
>> ---
>>   include/uapi/linux/bpf.h       |  32 +++++-
>>   kernel/trace/bpf_trace.c       | 200 +++++++++++++++++++++++++++++++++
>>   scripts/bpf_helpers_doc.py     |   2 +
>>   tools/include/uapi/linux/bpf.h |  32 +++++-
>>   4 files changed, 264 insertions(+), 2 deletions(-)
>>
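
For illustration (a sketch only, not part of this patch): a minimal
"iter/task" program using both helpers might look as follows, assuming
a vmlinux.h build setup and the task iterator target added elsewhere
in this series. The error handling follows the -EBUSY/-EOVERFLOW
semantics described in the commit message above.

  // SPDX-License-Identifier: GPL-2.0
  /* Sketch only, not from the patch: print pid/comm for every task. */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  #ifndef EBUSY
  #define EBUSY 16
  #endif

  char _license[] SEC("license") = "GPL";

  SEC("iter/task")
  int dump_task(struct bpf_iter__task *ctx)
  {
  	struct seq_file *seq = ctx->meta->seq;
  	struct task_struct *task = ctx->task;
  	static const char fmt[] = "%8d %s\n";
  	__u64 args[2];
  	int pid;
  	long ret;

  	if (!task)	/* a NULL task signals the end of iteration */
  		return 0;

  	/* bpf_seq_printf() takes its arguments as an array of u64s */
  	pid = task->pid;
  	args[0] = pid;
  	args[1] = (__u64)(long)task->comm;
  	ret = bpf_seq_printf(seq, fmt, sizeof(fmt), args, sizeof(args));

  	/* bpf_seq_write() emits raw bytes with no formatting (shown
  	 * here only to demonstrate the call)
  	 */
  	bpf_seq_write(seq, &pid, sizeof(pid));

  	/* On -EOVERFLOW the object is revisited with a bigger buffer
  	 * on the next invocation, so returning 0 is fine; on -EBUSY
  	 * return 1 to explicitly ask for the object to be repeated.
  	 */
  	if (ret == -EBUSY)
  		return 1;
  	return 0;
  }
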
> 
> I was a bit surprised by the behavior on a failed memory read; I
> think it's important to emphasize and document this. But otherwise:
> 
> Acked-by: Andrii Nakryiko <andriin@...com>
> 
> [...]
> 
>> +               if (fmt[i] == 's') {
>> +                       /* try our best to copy */
>> +                       if (memcpy_cnt >= MAX_SEQ_PRINTF_MAX_MEMCPY) {
>> +                               err = -E2BIG;
>> +                               goto out;
>> +                       }
>> +
>> +                       bufs->buf[memcpy_cnt][0] = 0;
>> +                       strncpy_from_unsafe(bufs->buf[memcpy_cnt],
>> +                                           (void *) (long) args[fmt_cnt],
>> +                                           MAX_SEQ_PRINTF_STR_LEN);
> 
> So the behavior is that we try to read the string, but if that
> fails, we treat it as an empty string? That needs to be documented,
> IMHO. My expectation was that the entire printf would fail.

Let me return a proper error. Currently, two possible errors may happen:
   - the user provides an invalid address; yes, an error should be
     returned and we should not do anything
   - the user provides a valid address, but a page fault is needed to
     read the content. With the current implementation,
     strncpy_from_unsafe will fail in this case too. Future sleepable
     bpf programs will help with this case, so eventually an error
     will mean a real address error.
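
A sketch of that direction, against the code quoted above (not
necessarily the exact final version): check the copy result and fail
the whole call instead of silently formatting an empty string.

  	err = strncpy_from_unsafe(bufs->buf[memcpy_cnt],
  				  (void *) (long) args[fmt_cnt],
  				  MAX_SEQ_PRINTF_STR_LEN);
  	/* fail the whole printf on an invalid address or a fault,
  	 * instead of treating the string as empty
  	 */
  	if (err < 0)
  		goto out;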

> 
> Same for pointers below, right?
> 
>> +                       params[fmt_cnt] = (u64)(long)bufs->buf[memcpy_cnt];
>> +
>> +                       fmt_cnt++;
>> +                       memcpy_cnt++;
>> +                       continue;
>> +               }
>> +
> 
> [...]
> 
