Message-ID: <d03c0f6b-57d8-773b-7d82-316c2bef2fb3@linux.intel.com>
Date: Wed, 9 Jan 2019 12:12:37 +0300
From: Alexey Budankov <alexey.budankov@...ux.intel.com>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Namhyung Kim <namhyung@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/4] perf record: bind the AIO user space buffers to nodes

Hi,
On 02.01.2019 0:41, Jiri Olsa wrote:
> On Mon, Dec 24, 2018 at 03:24:36PM +0300, Alexey Budankov wrote:
>
> SNIP
>
>> +static void perf_mmap__aio_free(void **data, size_t len __maybe_unused)
>> +{
>> +	zfree(data);
>> +}
>> +
>> +static void perf_mmap__aio_bind(void *data __maybe_unused, size_t len __maybe_unused,
>> +		int cpu __maybe_unused, int affinity __maybe_unused)
>> +{
>> +}
>> +#endif
>> +
>>  static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
>>  {
>>  	int delta_max, i, prio;
>> @@ -177,11 +220,13 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
>>  	}
>>  	delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
>>  	for (i = 0; i < map->aio.nr_cblocks; ++i) {
>> -		map->aio.data[i] = malloc(perf_mmap__mmap_len(map));
>> +		size_t mmap_len = perf_mmap__mmap_len(map);
>> +		perf_mmap__aio_alloc(&(map->aio.data[i]), mmap_len);
>>  		if (!map->aio.data[i]) {
>>  			pr_debug2("failed to allocate data buffer area, error %m");
>>  			return -1;
>>  		}
>> +		perf_mmap__aio_bind(map->aio.data[i], mmap_len, map->cpu, mp->affinity);
>
> this all does not work if bind fails.. I think we need to
> propagate the error value here and fail
Proceeding past a failed bind still makes sense because the allocated
buffer remains usable for operations, and thread migration alone can
bring performance benefits. So the error is not fatal, and v3 emits an
explicit warning instead. If you still think it is better to propagate
the error from here, that can be implemented.
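
For illustration only (this is not the actual v3 change, and the helper
name below is made up), a non-fatal bind with a warning could look
roughly like this, using mbind() on an already allocated, page-aligned
buffer (e.g. one obtained with mmap()):

/*
 * Illustrative sketch, built with -lnuma; aio_buf_try_bind() is a
 * hypothetical helper. It tries to bind the buffer to the NUMA node
 * of the given cpu and treats a binding failure as non-fatal.
 */
#include <numa.h>	/* numa_node_of_cpu() */
#include <numaif.h>	/* mbind(), MPOL_BIND */
#include <stdio.h>

static void aio_buf_try_bind(void *data, size_t len, int cpu)
{
	unsigned long node_mask;
	int node = numa_node_of_cpu(cpu);

	/* A single unsigned long node mask is kept for brevity. */
	if (node < 0 || node >= (int)(sizeof(node_mask) * 8))
		return;

	node_mask = 1UL << node;
	/*
	 * The buffer stays usable even if the memory policy cannot be
	 * applied, so only warn instead of failing the whole mmap setup.
	 */
	if (mbind(data, len, MPOL_BIND, &node_mask,
		  sizeof(node_mask) * 8, 0))
		fprintf(stderr, "failed to bind buffer to node %d, error %m\n",
			node);
}
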
Thanks,
Alexey
>
> jirka
>