Message-ID: <CAM9d7cjrXf5Ook+wBHrQv9tL2v=i+yasUzS-F3tJuDZDq88hhQ@mail.gmail.com>
Date: Tue, 23 Aug 2022 23:09:49 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: John Fastabend <john.fastabend@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, bpf <bpf@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next] bpf: Add bpf_read_raw_record() helper
Hello,
On Tue, Aug 23, 2022 at 10:31 PM John Fastabend
<john.fastabend@...il.com> wrote:
>
> Namhyung Kim wrote:
> > The helper is for BPF programs attached to a perf_event in order to read
> > event-specific raw data.  I followed the convention of the
> > bpf_read_branch_records() helper so that it can tell the size of the
> > record using the BPF_F_GET_RAW_RECORD_SIZE flag.
> >
> > The use case is to filter perf event samples based on the HW-provided
> > data, which has more detailed information about the sample.
> >
> > Note that it only reads the first fragment of the raw record.  But that
> > seems mostly ok since all the existing PMU raw data have only a single
> > fragment, and multi-fragment records are only used for BPF output attached
> > to sockets.  So unless it's used in such an extreme case, it'd work
> > for most tracing use cases.
> >
> > Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> > ---
>
> Acked-by: John Fastabend <john.fastabend@...il.com>
Thanks!
>
> > I don't know how to test this, as the raw data is available only on some
> > hardware PMUs (e.g. AMD IBS).  I tried a tracepoint event but it was
> > rejected by the verifier.  Actually it needs a bpf_perf_event_data
> > context, so that's not an option IIUC.
>
> Not a PMU expert, but also no good ideas on my side.
>
> ...
>
> >
> > +BPF_CALL_4(bpf_read_raw_record, struct bpf_perf_event_data_kern *, ctx,
> > + void *, buf, u32, size, u64, flags)
> > +{
> > + struct perf_raw_record *raw = ctx->data->raw;
> > + struct perf_raw_frag *frag;
> > + u32 to_copy;
> > +
> > + if (unlikely(flags & ~BPF_F_GET_RAW_RECORD_SIZE))
> > + return -EINVAL;
> > +
> > + if (unlikely(!raw))
> > + return -ENOENT;
> > +
> > + if (flags & BPF_F_GET_RAW_RECORD_SIZE)
> > + return raw->size;
> > +
> > + if (!buf || (size % sizeof(u32) != 0))
> > + return -EINVAL;
> > +
> > + frag = &raw->frag;
> > + WARN_ON_ONCE(!perf_raw_frag_last(frag));
> > +
> > + to_copy = min_t(u32, frag->size, size);
> > + memcpy(buf, frag->data, to_copy);
> > +
> > + return to_copy;
> > +}
> > +
> > +static const struct bpf_func_proto bpf_read_raw_record_proto = {
> > + .func = bpf_read_raw_record,
> > + .gpl_only = true,
> > + .ret_type = RET_INTEGER,
> > + .arg1_type = ARG_PTR_TO_CTX,
> > + .arg2_type = ARG_PTR_TO_MEM_OR_NULL,
> > + .arg3_type = ARG_CONST_SIZE_OR_ZERO,
> > + .arg4_type = ARG_ANYTHING,
> > +};
>
> Patch LGTM, but curious why allow ARG_PTR_TO_MEM_OR_NULL from the API
> side instead of just ARG_PTR_TO_MEM? Maybe just to match the
> existing perf_event_read()? I acked it as I think matching the existing
> API is likely a good enough reason.
The program can query the size of the raw record using BPF_F_GET_RAW_RECORD_SIZE.
In that case it can pass NULL for the buffer (and 0 for the size), which is
why the buffer argument is ARG_PTR_TO_MEM_OR_NULL rather than ARG_PTR_TO_MEM.
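
For reference, the intended usage from the BPF program side would look
something like below (untested sketch; it assumes the patched headers and
the generated bpf_helper_defs.h expose bpf_read_raw_record() and
BPF_F_GET_RAW_RECORD_SIZE, and the program name, buffer size and filtering
step are just illustrative):

/*
 * Untested sketch: query the raw record size first, then copy it.
 * bpf_read_raw_record() and BPF_F_GET_RAW_RECORD_SIZE come from this patch.
 */
#include <linux/bpf.h>
#include <linux/bpf_perf_event.h>
#include <bpf/bpf_helpers.h>

#define RAW_BUF_SZ	256	/* illustrative; must be a multiple of 4 */

SEC("perf_event")
int filter_raw_sample(struct bpf_perf_event_data *ctx)
{
	__u8 buf[RAW_BUF_SZ] = {};
	long sz, copied;

	/* Ask for the record size: NULL buffer and 0 size are allowed here. */
	sz = bpf_read_raw_record(ctx, NULL, 0, BPF_F_GET_RAW_RECORD_SIZE);
	if (sz <= 0 || sz > sizeof(buf))
		return 0;

	/* Copy the (first fragment of the) raw record into the buffer. */
	copied = bpf_read_raw_record(ctx, buf, sizeof(buf), 0);
	if (copied < 0)
		return 0;

	/* ... filter the sample based on the HW-provided raw data ... */
	return 1;
}

char _license[] SEC("license") = "GPL";
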
Thanks,
Namhyung
>
> > +
> > static const struct bpf_func_proto *
> > pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> > {
> > @@ -1548,6 +1587,8 @@ pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> > return &bpf_read_branch_records_proto;
> > case BPF_FUNC_get_attach_cookie:
> > return &bpf_get_attach_cookie_proto_pe;
> > + case BPF_FUNC_read_raw_record:
> > + return &bpf_read_raw_record_proto;
> > default:
> > return bpf_tracing_func_proto(func_id, prog);
> > }
> > --
> > 2.37.2.609.g9ff673ca1a-goog
> >
>
>