Message-ID: <CAM9d7cjJKFZUQkYW2U6eBmdQJdSOrVDe0FiojhNBbknsKoEyTQ@mail.gmail.com>
Date: Wed, 11 Aug 2021 13:57:13 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Stephane Eranian <eranian@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>,
Ian Rogers <irogers@...gle.com>, Gabriel Marin <gmx@...gle.com>
Subject: Re: [RFC] perf/core: Add an ioctl to get a number of lost samples
On Wed, Aug 11, 2021 at 12:57 PM Jiri Olsa <jolsa@...hat.com> wrote:
>
> On Wed, Aug 11, 2021 at 12:33:38PM -0700, Stephane Eranian wrote:
> > On Wed, Aug 11, 2021 at 8:04 AM Jiri Olsa <jolsa@...hat.com> wrote:
> > >
> > > On Tue, Aug 10, 2021 at 11:21:35PM -0700, Namhyung Kim wrote:
> > > > Sometimes we want to know an accurate number of samples even when
> > > > some are lost. Currently PERF_RECORD_LOST is generated for a
> > > > ring buffer which might be shared with other events, so it's hard
> > > > to know the per-event lost count.
> > > >
> > > > Add event->lost_samples field and PERF_EVENT_IOC_LOST_SAMPLES to
> > > > retrieve it from userspace.
> > > >
> > > > Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> > > > ---
> > > >  include/linux/perf_event.h      | 2 ++
> > > >  include/uapi/linux/perf_event.h | 1 +
> > > >  kernel/events/core.c            | 9 +++++++++
> > > >  kernel/events/ring_buffer.c     | 5 ++++-
> > > >  4 files changed, 16 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> > > > index f5a6a2f069ed..44d72079c77a 100644
> > > > --- a/include/linux/perf_event.h
> > > > +++ b/include/linux/perf_event.h
> > > > @@ -756,6 +756,8 @@ struct perf_event {
> > > >  	struct pid_namespace		*ns;
> > > >  	u64				id;
> > > >  
> > > > +	atomic_t			lost_samples;
> > > > +
> > > >  	u64				(*clock)(void);
> > > >  	perf_overflow_handler_t		overflow_handler;
> > > >  	void				*overflow_handler_context;
> > > > diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> > > > index bf8143505c49..24397799127d 100644
> > > > --- a/include/uapi/linux/perf_event.h
> > > > +++ b/include/uapi/linux/perf_event.h
> > > > @@ -505,6 +505,7 @@ struct perf_event_query_bpf {
> > > >  #define PERF_EVENT_IOC_PAUSE_OUTPUT		_IOW('$', 9, __u32)
> > > >  #define PERF_EVENT_IOC_QUERY_BPF		_IOWR('$', 10, struct perf_event_query_bpf *)
> > > >  #define PERF_EVENT_IOC_MODIFY_ATTRIBUTES	_IOW('$', 11, struct perf_event_attr *)
> > > > +#define PERF_EVENT_IOC_LOST_SAMPLES		_IOR('$', 12, __u64 *)
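
For illustration, a minimal userspace sketch of how the proposed ioctl
could be used, assuming this patch is applied (the event setup below is
only an example, not part of the patch):

	#include <linux/perf_event.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <string.h>
	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		struct perf_event_attr attr;
		uint64_t lost = 0;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_SOFTWARE;
		attr.config = PERF_COUNT_SW_CPU_CLOCK;
		attr.sample_period = 100000;

		/* profile the current task on any CPU */
		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd < 0)
			return 1;

		/* ... mmap the ring buffer, run the workload, consume samples ... */

		/* per-event lost sample count from the proposed ioctl */
		if (ioctl(fd, PERF_EVENT_IOC_LOST_SAMPLES, &lost) == 0)
			printf("lost samples: %llu\n", (unsigned long long)lost);

		close(fd);
		return 0;
	}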
> > >
> > > would it be better to use the read syscall for that?
> > > https://lore.kernel.org/lkml/20210622153918.688500-5-jolsa@kernel.org/
> > >
> > > that patchset stalled on me because I did not have a way to
> > > reproduce the issue you guys wanted the fix for ;-) the lost
> > > count is available there as well
> > >
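
For comparison, a rough sketch of the read-based approach from that
series (assuming the PERF_FORMAT_LOST bit and the read layout proposed
there; the struct name is made up for illustration, and fd is an event
opened with attr.read_format = PERF_FORMAT_LOST):

	struct read_lost {
		uint64_t value;		/* counter value */
		uint64_t lost;		/* lost sample count */
	} rf;

	if (read(fd, &rf, sizeof(rf)) == sizeof(rf))
		printf("value %llu, lost %llu\n",
		       (unsigned long long)rf.value,
		       (unsigned long long)rf.lost);
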
> > Does the read format approach succeed even when the event is in error state?
>
> nope..
>
> 	/*
> 	 * Return end-of-file for a read on an event that is in
> 	 * error state (i.e. because it was pinned but it couldn't be
> 	 * scheduled on to the CPU at some point).
> 	 */
> 	if (event->state == PERF_EVENT_STATE_ERROR)
> 		return 0;
>
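To spell that out from the user side: once the event is in the error
state, read() returns 0 bytes, so a lost count carried via read_format
cannot be retrieved at that point (a rough sketch; fd and buf are
illustrative):

	char buf[4096];
	ssize_t n = read(fd, buf, sizeof(buf));
	if (n == 0) {
		/*
		 * Event is in PERF_EVENT_STATE_ERROR (e.g. a pinned
		 * event that could not be scheduled); neither counts
		 * nor a read_format lost count can be read here.
		 */
	}
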
By the way, it'd be nice if the kernel provided a way to do better
error reporting. There are many cases that return -EINVAL, and it's
hard to know exactly what the problem is.

Thanks,
Namhyung