Message-ID: <e75ae90b-2ea6-0047-2045-31609c17db47@linux.intel.com>
Date: Tue, 28 Aug 2018 12:39:33 +0300
From: Alexey Budankov <alexey.budankov@...ux.intel.com>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Andi Kleen <ak@...ux.intel.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 2/2]: perf record: enable asynchronous trace writing
Hi,
On 28.08.2018 11:50, Jiri Olsa wrote:
> On Mon, Aug 27, 2018 at 09:16:55PM +0300, Alexey Budankov wrote:
>>
>> Trace file offsets are calculated linearly by the perf_mmap__push() code
>> for the next possible write operation, but the file position is updated by
>> the kernel only in the second lseek() syscall after the loop.
>> The first lseek() syscall reads that file position for
>> the next loop iterations.
>>
>> record__mmap_read_sync() implements a sort of barrier between spills of
>> ready profiling data to disk.
>>
>> Signed-off-by: Alexey Budankov <alexey.budankov@...ux.intel.com>
>> ---
>> Changes in v3:
>> - added comments about the nanosleep(0.5ms) call prior to aio_suspend()
>> to cope with the intrusiveness of its implementation in glibc;
>> - added comments about the rationale behind copying profiling data
>> into the mmap->data buffer;
>> ---
>> tools/perf/builtin-record.c | 125 +++++++++++++++++++++++++++++++++++++++++---
>> tools/perf/util/mmap.c | 36 ++++++++-----
>> tools/perf/util/mmap.h | 2 +-
>> 3 files changed, 143 insertions(+), 20 deletions(-)
>>
>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>> index 22ebeb92ac51..4ac61399a09a 100644
>> --- a/tools/perf/builtin-record.c
>> +++ b/tools/perf/builtin-record.c
>> @@ -53,6 +53,7 @@
>> #include <sys/mman.h>
>> #include <sys/wait.h>
>> #include <linux/time64.h>
>> +#include <aio.h>
>>
>> struct switch_output {
>> bool enabled;
>> @@ -121,6 +122,23 @@ static int record__write(struct record *rec, void *bf, size_t size)
>> return 0;
>> }
>>
>> +static int record__aio_write(int trace_fd, struct aiocb *cblock,
>> + void *buf, size_t size, off_t off)
>> +{
>> + cblock->aio_fildes = trace_fd;
>> + cblock->aio_buf = buf;
>> + cblock->aio_nbytes = size;
>> + cblock->aio_offset = off;
>> + cblock->aio_sigevent.sigev_notify = SIGEV_NONE;
>> +
>> + if (aio_write(cblock) == -1) {
>> + pr_err("failed to queue perf data, error: %m\n");
>> + return -1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> static int process_synthesized_event(struct perf_tool *tool,
>> union perf_event *event,
>> struct perf_sample *sample __maybe_unused,
>> @@ -130,12 +148,14 @@ static int process_synthesized_event(struct perf_tool *tool,
>> return record__write(rec, event, event->header.size);
>> }
>>
>> -static int record__pushfn(void *to, void *bf, size_t size)
>> +static int record__pushfn(void *to, void *bf, size_t size, off_t off)
>> {
>> struct record *rec = to;
>> + struct perf_mmap *map = bf;
>
> the argument needs to change for record__pushfn,
> now with your changes, it's no longer 'void *bf',
> but 'struct perf_mmap *map'
Ok. Included into [PATCH v4 2/2].
>
> also I'm a little confused why we have '*to' and cast
> it back to 'struct record', but so be it ;-)
Supported. :)
>
> thanks,
> jirka
>