Message-Id: <1516047651-164336-3-git-send-email-kan.liang@intel.com>
Date: Mon, 15 Jan 2018 12:20:38 -0800
From: kan.liang@...el.com
To: acme@...nel.org, peterz@...radead.org, mingo@...hat.com,
linux-kernel@...r.kernel.org
Cc: wangnan0@...wei.com, jolsa@...nel.org, namhyung@...nel.org,
ak@...ux.intel.com, yao.jin@...ux.intel.com,
Kan Liang <kan.liang@...el.com>
Subject: [PATCH V4 02/15] perf mmap: introduce perf_mmap__read_init()
From: Kan Liang <kan.liang@...el.com>
perf record has specific code to calculate the ring buffer position
for both overwrite and non-overwrite modes. Currently, only perf record
supports both modes; perf top will support them later.
It is useful to make this mode-specific code generic.

Introduce a new interface, perf_mmap__read_init(), to find the ring
buffer position. perf_mmap__read_init() is factored out from
perf_mmap__push(), with slight differences:
- Add a check for map->refcnt.
- Add new return values, -EAGAIN and -EINVAL.
Signed-off-by: Kan Liang <kan.liang@...el.com>
---
tools/perf/util/mmap.c | 43 +++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/mmap.h | 2 ++
2 files changed, 45 insertions(+)
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 05076e6..414089f 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -267,6 +267,49 @@ static int overwrite_rb_find_range(void *buf, int mask, u64 head, u64 *start, u64 *end)
return -1;
}
+/*
+ * Report the start and end of the available data in ringbuffer
+ */
+int perf_mmap__read_init(struct perf_mmap *map, bool overwrite,
+ u64 *startp, u64 *endp)
+{
+ unsigned char *data = map->base + page_size;
+ u64 head = perf_mmap__read_head(map);
+ u64 old = map->prev;
+ unsigned long size;
+
+ /*
+ * Check if event was unmapped due to a POLLHUP/POLLERR.
+ */
+ if (!refcount_read(&map->refcnt))
+ return -EINVAL;
+
+ *startp = overwrite ? head : old;
+ *endp = overwrite ? old : head;
+
+ if (*startp == *endp)
+ return -EAGAIN;
+
+ size = *endp - *startp;
+ if (size > (unsigned long)(map->mask) + 1) {
+ if (!overwrite) {
+ WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
+
+ map->prev = head;
+ perf_mmap__consume(map, overwrite);
+ return -EAGAIN;
+ }
+
+ /*
+ * Backward ring buffer is full. We still have a chance to read
+ * most of data from it.
+ */
+ if (overwrite_rb_find_range(data, map->mask, head, startp, endp))
+ return -EINVAL;
+ }
+ return 0;
+}
+
int perf_mmap__push(struct perf_mmap *md, bool overwrite,
void *to, int push(void *to, void *buf, size_t size))
{
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index e43d7b5..0633308 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -94,4 +94,6 @@ int perf_mmap__push(struct perf_mmap *md, bool backward,
size_t perf_mmap__mmap_len(struct perf_mmap *map);
+int perf_mmap__read_init(struct perf_mmap *map, bool overwrite,
+ u64 *startp, u64 *endp);
#endif /*__PERF_MMAP_H */
--
2.5.5