Message-ID: <37D7C6CF3E00A74B8858931C1DB2F077537F8857@SHSMSX103.ccr.corp.intel.com>
Date: Tue, 19 Dec 2017 15:23:10 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: "Wangnan (F)" <wangnan0@...wei.com>,
"acme@...nel.org" <acme@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "jolsa@...nel.org" <jolsa@...nel.org>,
"namhyung@...nel.org" <namhyung@...nel.org>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"yao.jin@...ux.intel.com" <yao.jin@...ux.intel.com>
Subject: RE: [PATCH V2 2/8] perf tools: rewrite perf mmap read for overwrite
> >>> +int perf_mmap__read_catchup(struct perf_mmap *map,
> >>> + bool overwrite,
> >>> + u64 *start, u64 *end,
> >>> + unsigned long *size)
> >>> {
> >>> - u64 head;
> >>> + u64 head = perf_mmap__read_head(map);
> >>> + u64 old = map->prev;
> >>> + unsigned char *data = map->base + page_size;
> >>>
> >>> - if (!refcount_read(&map->refcnt))
> >>> - return;
> >>> + *start = overwrite ? head : old;
> >>> + *end = overwrite ? old : head;
> >>>
> >>> - head = perf_mmap__read_head(map);
> >>> - map->prev = head;
> >>> + if (*start == *end)
> >>> + return 0;
> >>> +
> >>> + *size = *end - *start;
> >>> + if (*size > (unsigned long)(map->mask) + 1) {
> >>> + if (!overwrite) {
> >>> + WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
> >>> +
> >>> + map->prev = head;
> >>> + perf_mmap__consume(map, overwrite);
> >>> + return 0;
> >>> + }
> >>> +
> >>> + /*
> >>> + * Backward ring buffer is full. We still have a chance to read
> >>> + * most of the data from it.
> >>> + */
> >>> + if (overwrite_rb_find_range(data, map->mask, head, start, end))
> >>> + return -1;
> >>> + }
> >>> +
> >>> + return 1;
> >> Coding suggestion: this function returns 2 different values (1 and -1) on
> >> two fail paths. It should return 0 for success and -1 for failure.
> >>
> > For perf top, -1 is enough for failure.
> > But for perf_mmap__push, two fail paths look necessary, don't they?
> >
>
> I see. I think this is not a good practice. Please try to avoid returning
> 3 states. For example, compare start and end outside? If that can't be avoided, how
> about returning an enum to make the difference clear to the reader?
OK. Will avoid it in V3.
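If it turns out the three states can't be avoided, an enum along the lines
of the sketch below would at least make them explicit (illustration only;
the names are made up and not part of the patch):

enum rb_read_state {
	RB_READ_AVAILABLE,	/* data between 'start' and 'end' is valid */
	RB_READ_EMPTY,		/* start == end, nothing new to read */
	RB_READ_FAILED,		/* e.g. overwrite_rb_find_range() failed */
};

perf_mmap__read_catchup() could then return the enum instead of 1/0/-1, and
perf top can treat anything other than RB_READ_AVAILABLE as "nothing to do",
while perf_mmap__push still distinguishes the failure case.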
> >> 2. Don't update map->prev in perf_mmap__read. Let perf_mmap__read()
> >> update the start pointer, so it becomes stateless and suits both backward
> >> and forward reading.
> >>
> >> 3. Update md->prev for forward reading. Merge it into
> >> perf_mmap__consume?
> > It looks like we have to pass the updated 'start' to perf_mmap__consume.
> > More interfaces like perf_evlist__mmap_read would need to be changed.
> > That would impact other tools which only support non-overwrite mode.
> >
> > How about updating both 'md->prev' and 'start' in perf_mmap__read(),
> > like the patch below?
>
> What about making perf_mmap__read() totally stateless? Don't
> touch md->prev there, and make forward reading do an extra
> map->prev update before consuming?
We can move the update into the new interface, perf_mmap__read_event.
>
> > Introduce a new perf_mmap__read_event to get rid of
> > perf_mmap__read_backward/forward.
> >
> > perf_mmap__read_backward will be removed later.
> > Keep perf_mmap__read_forward since other tools still use it.
>
>
> OK. Any reader that doesn't care about backward or forward should use
> perf_mmap__read() and maintain the start position on its own, and be given
> a chance to adjust map->prev if the ring buffer is forward.
>
> perf_mmap__read_forward() borrows map->prev to maintain the start position
> like this:
>
> union perf_event *perf_mmap__read_forward(struct perf_mmap *map)
> {
> ....
> return perf_mmap__read(map, &map->prev, head);
> }
>
>
Yes, we can do that for the legacy interface.
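Roughly like this (just a sketch, assuming the *prev parameter is dropped
from perf_mmap__read() as you suggest below, so it takes map, start and end):

union perf_event *perf_mmap__read_forward(struct perf_mmap *map)
{
	u64 head;

	/* the ring buffer may have been unmapped on POLLHUP/POLLERR */
	if (!refcount_read(&map->refcnt))
		return NULL;

	head = perf_mmap__read_head(map);

	/* map->prev doubles as the reader's start position */
	return perf_mmap__read(map, &map->prev, head);
}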
> > It has to update the 'end' for non-overwrite mode in each read since the
> > ringbuffer is not paused.
> >
> > diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> > index 484a394..74f33fd 100644
> > --- a/tools/perf/util/mmap.c
> > +++ b/tools/perf/util/mmap.c
> > @@ -22,16 +22,16 @@ size_t perf_mmap__mmap_len(struct perf_mmap *map)
> >
> > /* When check_messup is true, 'end' must point to a good entry */
> > static union perf_event *perf_mmap__read(struct perf_mmap *map,
> > - u64 start, u64 end, u64 *prev)
> > + u64 *start, u64 end, u64 *prev)
>
>
> Don't need *prev because it can be replaced by *start.
>
> > {
> > unsigned char *data = map->base + page_size;
> > union perf_event *event = NULL;
> > - int diff = end - start;
> > + int diff = end - *start;
> >
> > if (diff >= (int)sizeof(event->header)) {
> > size_t size;
> >
> > - event = (union perf_event *)&data[start & map->mask];
> > + event = (union perf_event *)&data[*start & map->mask];
> > size = event->header.size;
> >
> > if (size < sizeof(event->header) || diff < (int)size) {
> > @@ -43,8 +43,8 @@ static union perf_event *perf_mmap__read(struct perf_mmap *map,
> > * Event straddles the mmap boundary -- header should always
> > * be inside due to u64 alignment of output.
> > */
> > - if ((start & map->mask) + size != ((start + size) & map->mask)) {
> > - unsigned int offset = start;
> > + if ((*start & map->mask) + size != ((*start + size) & map->mask)) {
> > + unsigned int offset = *start;
> > unsigned int len = min(sizeof(*event), size), cpy;
> > void *dst = map->event_copy;
> >
> > @@ -59,12 +59,12 @@ static union perf_event *perf_mmap__read(struct perf_mmap *map,
> > event = (union perf_event *)map->event_copy;
> > }
> >
> > - start += size;
> > + *start += size;
> > }
> >
> > broken_event:
> > if (prev)
> > - *prev = start;
> > + *prev = *start;
> >
> > return event;
> > }
> > @@ -132,6 +107,42 @@ void perf_mmap__read_catchup(struct perf_mmap *map)
> > map->prev = head;
> > }
> > +
> > +/*
> > + * Read event from ring buffer. Return one event for each call.
> > + * Support both overwrite and non-overwrite modes.
> > + * The start and end are only available for overwrite mode, which
> > + * pauses the ring buffer.
> > + *
> > + * Usage:
> > + * perf_mmap__read_init
> > + * while(event = perf_mmap__read_event) {
> > + * //process the event
> > + * perf_mmap__consume
>
> We need a transparent method before perf_mmap__consume to adjust
> md->prev if the ring buffer is forward. Or let's wrap perf_mmap__consume
> to do it if you don't want to break its API?
I think the best place is perf_mmap__read_event, if you don't want to
update it in perf_mmap__read.
Wrapping perf_mmap__consume would still need 'start' passed as a parameter,
and would need a new wrapper interface.
In perf_mmap__read_event, 'start' is already there.
We only need to do this:
+ if (!overwrite)
+ map->prev = *start;
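Putting it together, perf_mmap__read_event() could look roughly like the
sketch below (illustration only, again assuming perf_mmap__read() ends up
as perf_mmap__read(map, start, end) with *prev dropped; not the final patch):

union perf_event *perf_mmap__read_event(struct perf_mmap *map,
					bool overwrite,
					u64 *start, u64 end)
{
	union perf_event *event;

	if (!refcount_read(&map->refcnt))
		return NULL;

	/* non-overwrite mode doesn't pause the ring buffer, so refresh 'end' */
	if (!overwrite)
		end = perf_mmap__read_head(map);

	event = perf_mmap__read(map, start, end);

	/* keep the legacy forward-reading state in sync */
	if (!overwrite)
		map->prev = *start;

	return event;
}

The usage then stays as described in the comment above: perf_mmap__read_init(),
then perf_mmap__read_event() in a loop, with perf_mmap__consume() after each
event is processed.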
Thanks,
Kan