Date:   Fri, 21 Aug 2020 09:51:30 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Alan Maguire <alan.maguire@...cle.com>
Cc:     Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <kernel-team@...com>
Subject: Re: [PATCH bpf-next] libbpf: add perf_buffer APIs for better
 integration with outside epoll loop

On Fri, Aug 21, 2020 at 5:51 AM Alan Maguire <alan.maguire@...cle.com> wrote:
>
> On Thu, 20 Aug 2020, Andrii Nakryiko wrote:
>
> > Add a set of APIs to the perf_buffer manager to allow applications to
> > integrate perf buffer polling into existing epoll-based infrastructure. One
> > example is applications that already use libevent and want to plug in
> > perf_buffer polling, instead of relying on perf_buffer__poll() and wasting
> > an extra thread to do it. But perf_buffer is still extremely useful for
> > setting up and consuming perf buffer rings even in such use cases.
> >
> > So to accommodate such new use cases, add three new APIs:
> >   - perf_buffer__buffer_cnt() returns the number of per-CPU buffers
> >     maintained by a given instance of the perf_buffer manager;
> >   - perf_buffer__buffer_fd() returns the FD of the perf_event corresponding
> >     to a specified per-CPU buffer; this FD can then be polled independently;
> >   - perf_buffer__consume_buffer() consumes data from a single per-CPU
> >     buffer, identified by its slot index.
> >
> > These APIs allow for great flexibility, but do not sacrifice the general
> > usability of perf_buffer.
> >
>
> This is great! If I understand correctly, you're supporting retrieval of
> the individual per-CPU buffer FDs so that they can ultimately be inserted
> into another epoll()ed FD.  I've been exploring another possibility -

Yes, exactly.

> hierarchical epoll, where the top-level perf_buffer epoll_fd field is used
> rather than the individual per-CPU buffers.  In that context, would an
> interface to return the perf_buffer epoll_fd make sense too? i.e.
>
> int perf_buffer__fd(const struct perf_buffer *pb);
>
> ?
>
> When events occur on the perf_buffer's epoll FD, we can simply call
> perf_buffer__poll(pb, ...) to handle them, it seems.
> That approach _appears_ to work, though I do see occasional event loss.
> Is that method legit too, or am I missing something?

Yes, this would also work, but it's less efficient, because you either
need to do an unnecessary epoll_wait() syscall to know which buffers to
process, or you need to call perf_buffer__consume(), which will
iterate over *all* available buffers, even those that don't have new
information.
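
For illustration, the per-buffer approach would look roughly like this
(untested sketch; pb_attach_epoll() is just an example name, and stashing
the slot index in epoll_event.data is one way to route readiness back to
the right buffer):

#include <sys/epoll.h>
#include <bpf/libbpf.h>

static int pb_attach_epoll(struct perf_buffer *pb, int epoll_fd)
{
        size_t i, n = perf_buffer__buffer_cnt(pb);

        for (i = 0; i < n; i++) {
                int fd = perf_buffer__buffer_fd(pb, i);
                struct epoll_event ev = {
                        .events = EPOLLIN,
                        .data.u64 = i, /* remember the buffer's slot index */
                };

                if (fd < 0 || epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &ev))
                        return -1;
        }
        return 0;
}

Then, in the application's event loop, each ready event maps straight to
exactly one buffer:

        perf_buffer__consume_buffer(pb, ev.data.u64);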

But I can add a way to get the epoll FD as well, if that's more convenient
for some less performance-conscious cases. I'll add:

int perf_buffer__epoll_fd(const struct perf_buffer *pb)

since just __fd() would be too ambiguous.
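
For completeness, the hierarchical variant would then be something like
this (untested sketch; perf_buffer__epoll_fd() is the API proposed above,
and pb_attach_nested() is just an example name):

#include <sys/epoll.h>
#include <bpf/libbpf.h>

static int pb_attach_nested(struct perf_buffer *pb, int outer_epoll_fd)
{
        int pb_fd = perf_buffer__epoll_fd(pb);
        struct epoll_event ev = {
                .events = EPOLLIN,
                .data.ptr = pb, /* find pb back in the outer loop */
        };

        if (pb_fd < 0)
                return pb_fd;
        return epoll_ctl(outer_epoll_fd, EPOLL_CTL_ADD, pb_fd, &ev);
}

When the outer epoll reports pb_fd as ready, a single non-blocking

        perf_buffer__poll(pb, 0);

drains whatever is pending, at the cost of the extra internal
epoll_wait() mentioned above.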

>
> > Also exercise and check new APIs in perf_buffer selftest.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@...com>
>
> A few question around the test below, but
>
> Reviewed-by: Alan Maguire <alan.maguire@...cle.com>
>
>
> > ---
> >  tools/lib/bpf/libbpf.c                        | 51 ++++++++++++++-
> >  tools/lib/bpf/libbpf.h                        |  3 +
> >  tools/lib/bpf/libbpf.map                      |  7 +++
> >  .../selftests/bpf/prog_tests/perf_buffer.c    | 62 +++++++++++++++----
> >  4 files changed, 111 insertions(+), 12 deletions(-)
> >

[...]

please trim next time

> > +     for (i = 0; i < nr_cpus; i++) {
> > +             if (i >= on_len || !online[i])
> > +                     continue;
> > +
> > +             fd = perf_buffer__buffer_fd(pb, i);
> > +             CHECK(last_fd == fd, "fd_check", "last fd %d == fd %d\n", last_fd, fd);
> > +             last_fd = fd;
> > +
>
> I'm not sure why you're testing this way - shouldn't it just be a
> verification that we get a valid FD rather than an unexpected error
> code?

My goal was to test that I get a different buffer FD for each CPU,
but you are right, I should also check that I get a valid FD here. Will
fix in v2.


>
> > +             err = perf_buffer__consume_buffer(pb, i);
> > +             if (CHECK(err, "drain_buf", "cpu %d, err %d\n", i, err))
> > +                     goto out_close;
> > +
>
> I think I'm a bit lost as to what processes what here. The first
> perf_buffer__poll() should handle the processing of the records
> associated with the first round of per-CPU triggering, I think.
> Is the above perf_buffer__consume_buffer() checking the
> "no data, return success" case? If that's right, should we do
> something to explicitly check that it indeed was a no-op, like CHECK()ing
> CPU_ISSET(i, &cpu_seen) to ensure the on_sample() handler wasn't
> called? The "drain_buf" description makes me think I'm misreading
> this and we're draining additional events, so I wanted to check
> what's going on here to make sure.

We can't make sure that there are no extra samples, because samples
are produced for any syscall (we don't filter by thread or process).
The idea here is to drain any remaining samples before I trigger
another round, then check that at least one sample was emitted on the
desired CPU. It could be a spurious event, of course, but I didn't
think it was important enough to verify that exactly one sample was
emitted.
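
In other words, per online CPU i the test does roughly this (sketch;
trigger_on_cpu() stands in for however the selftest pins itself to CPU i
and triggers a traced syscall, and cpu_seen is the cpu_set_t filled in by
the on_sample() callback):

        /* drain samples left over from unrelated syscalls on this CPU */
        err = perf_buffer__consume_buffer(pb, i);
        if (CHECK(err, "drain_buf", "cpu %d, err %d\n", i, err))
                goto out_close;

        /* produce a fresh sample pinned to CPU i, then collect it */
        trigger_on_cpu(i);
        err = perf_buffer__poll(pb, 100);
        if (CHECK(err < 0, "poll", "err %d\n", err))
                goto out_close;

        /* at least one sample must have arrived from CPU i; extra
         * spurious samples from other syscalls are tolerated
         */
        CHECK(!CPU_ISSET(i, &cpu_seen), "cpu_seen", "cpu %d not seen\n", i);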

>
> Thanks!
>
> Alan
