Message-ID: <20190305122534.GB16615@krava>
Date: Tue, 5 Mar 2019 13:25:34 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Andi Kleen <ak@...ux.intel.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 07/10] perf record: implement -z,--compression_level=n
option and compression
On Fri, Mar 01, 2019 at 06:58:32PM +0300, Alexey Budankov wrote:
SNIP
>
> /*
> * Increment md->refcount to guard md->data[idx] buffer
> @@ -350,7 +357,7 @@ int perf_mmap__aio_push(struct perf_mmap *md, void *to, int idx,
> md->prev = head;
> perf_mmap__consume(md);
>
> - rc = push(to, &md->aio.cblocks[idx], md->aio.data[idx], size0 + size, *off);
> + rc = push(to, md->aio.data[idx], size0 + size, *off, &md->aio.cblocks[idx]);
> if (!rc) {
> *off += size0 + size;
> } else {
> @@ -556,13 +563,15 @@ int perf_mmap__read_init(struct perf_mmap *map)
> }
>
> int perf_mmap__push(struct perf_mmap *md, void *to,
> - int push(struct perf_mmap *map, void *to, void *buf, size_t size))
> + int push(struct perf_mmap *map, void *to, void *buf, size_t size),
> + perf_mmap__compress_fn_t compress, void *comp_data)
> {
> u64 head = perf_mmap__read_head(md);
> unsigned char *data = md->base + page_size;
> unsigned long size;
> void *buf;
> int rc = 0;
> + size_t mmap_len = perf_mmap__mmap_len(md);
>
> rc = perf_mmap__read_init(md);
> if (rc < 0)
> @@ -574,7 +583,10 @@ int perf_mmap__push(struct perf_mmap *md, void *to,
> buf = &data[md->start & md->mask];
> size = md->mask + 1 - (md->start & md->mask);
> md->start += size;
> -
> + if (compress) {
> + size = compress(comp_data, md->data, mmap_len, buf, size);
> + buf = md->data;
> + }
> if (push(md, to, buf, size) < 0) {
> rc = -1;
> goto out;
when we discussed that the compress callback should be another layer
in perf_mmap__push, I was thinking more of a layered/fifo design,
like:
normally we call:
perf_mmap__push(... push = record__pushfn ...)
-> reads mmap data and calls push(data), which translates as:
record__pushfn(data);
- which stores the data
for compressed it'd be:
perf_mmap__push(... push = compressed_push ...)
-> reads mmap data and calls push(data), which translates as:
compressed_push(data)
-> reads data, compresses it and calls the next push callback in line:
record__pushfn(data)
- which stores the data
there'd need to be some logic for compressed_push to
remember the 'next push' function, roughly like the sketch below
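
(a minimal sketch only -- the state struct, compress_block() and the
file-scope 'comp' variable are names made up for illustration; only the
push() signature matches the one perf_mmap__push already takes):

/*
 * sketch of the chained push: compressed_push() sits between
 * perf_mmap__push() and record__pushfn(), compresses the data and
 * hands it to the 'next push' callback it remembers
 */
#include <stddef.h>
#include <string.h>

struct perf_mmap;	/* opaque here, defined in util/mmap.h */

typedef int (*push_fn_t)(struct perf_mmap *map, void *to,
			 void *buf, size_t size);

/* the state compressed_push needs to remember the 'next push' */
static struct {
        push_fn_t  next_push;   /* e.g. record__pushfn */
        void      *zbuf;        /* compression scratch buffer */
        size_t     zbuf_len;
} comp;

/* placeholder for whatever zstd wrapper the series adds */
static size_t compress_block(void *dst, size_t dst_len,
                             void *src, size_t src_len)
{
        size_t n = src_len < dst_len ? src_len : dst_len;

        /* real code would compress src into dst here */
        memcpy(dst, src, n);
        return n;
}

static int compressed_push(struct perf_mmap *map, void *to,
                           void *buf, size_t size)
{
        size_t zsize = compress_block(comp.zbuf, comp.zbuf_len, buf, size);

        /* hand the compressed data to the next push callback in line */
        return comp.next_push(map, to, comp.zbuf, zsize);
}

and the record side would just pick which callback to pass,
something like:

        perf_mmap__push(map, rec, rec->opts.comp_level ?
                                  compressed_push : record__pushfn);

so perf_mmap__push itself would stay untouched
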
but I think this was the original idea behind perf_mmap__push
-> it gets the data and pushes it on for the next processing
step.. it should stay as simple as that
jirka