Date:   Wed, 9 Jan 2019 17:49:15 +0100
From:   Jiri Olsa <jolsa@...hat.com>
To:     Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/4] perf record: bind the AIO user space buffers to
 nodes

On Wed, Jan 09, 2019 at 12:12:37PM +0300, Alexey Budankov wrote:
> Hi,
> 
> On 02.01.2019 0:41, Jiri Olsa wrote:
> > On Mon, Dec 24, 2018 at 03:24:36PM +0300, Alexey Budankov wrote:
> > 
> > SNIP
> > 
> >> +static void perf_mmap__aio_free(void **data, size_t len __maybe_unused)
> >> +{
> >> +	zfree(data);
> >> +}
> >> +
> >> +static void perf_mmap__aio_bind(void *data __maybe_unused, size_t len __maybe_unused,
> >> +                                int cpu __maybe_unused, int affinity __maybe_unused)
> >> +{
> >> +}
> >> +#endif
> >> +
> >>  static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
> >>  {
> >>  	int delta_max, i, prio;
> >> @@ -177,11 +220,13 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
> >>  		}
> >>  		delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
> >>  		for (i = 0; i < map->aio.nr_cblocks; ++i) {
> >> -			map->aio.data[i] = malloc(perf_mmap__mmap_len(map));
> >> +			size_t mmap_len = perf_mmap__mmap_len(map);
> >> +			perf_mmap__aio_alloc(&(map->aio.data[i]), mmap_len);
> >>  			if (!map->aio.data[i]) {
> >>  				pr_debug2("failed to allocate data buffer area, error %m");
> >>  				return -1;
> >>  			}
> >> +			perf_mmap__aio_bind(map->aio.data[i], mmap_len, map->cpu, mp->affinity);
> > 
> > this all does not work if bind fails.. I think we need to
> > propagate the error value here and fail
> 
> Proceeding further from this point still makes sense because
> the buffer is available for operations, and thread migration
> alone can bring performance benefits. So the error is not fatal,
> and an explicit warning is implemented in v3. If you still think
> it is better to propagate the error from here, it can be
> implemented.
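
something like below is what I had in mind if the error was propagated
(just a sketch, assuming perf_mmap__aio_bind() is changed to return int;
not the actual v2/v3 code):

	for (i = 0; i < map->aio.nr_cblocks; ++i) {
		size_t mmap_len = perf_mmap__mmap_len(map);

		perf_mmap__aio_alloc(&(map->aio.data[i]), mmap_len);
		if (!map->aio.data[i]) {
			pr_debug2("failed to allocate data buffer area, error %m");
			return -1;
		}
		/* fail hard if the buffer can't be bound to the node */
		if (perf_mmap__aio_bind(map->aio.data[i], mmap_len,
					map->cpu, mp->affinity))
			return -1;
	}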

so if that fails, the aio buffers won't be bound to the node,
while the mmaps are, so I guess the speedup comes from there?

if I use:

# perf record --aio --affinity=node

and see:
  "failed to bind..."

I can still see the benefit..? I guess the warning is ok then;
any other option seems confusing
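
fwiw a non-fatal bind could look roughly like below (just a sketch,
assuming HAVE_LIBNUMA_SUPPORT, that the buffer is mmap()ed so mbind()
can set a policy on it, and that the affinity mode enum is called
PERF_AFFINITY_NODE; not the actual v2/v3 code):

#ifdef HAVE_LIBNUMA_SUPPORT
#include <numa.h>	/* numa_node_of_cpu() */
#include <numaif.h>	/* mbind(), MPOL_BIND */

static void perf_mmap__aio_bind(void *data, size_t len, int cpu, int affinity)
{
	unsigned long node_mask;
	int node;

	if (affinity != PERF_AFFINITY_NODE || cpu < 0)
		return;

	node = numa_node_of_cpu(cpu);
	if (node < 0)
		return;

	/* bind the AIO buffer pages to the node of the mmaped cpu */
	node_mask = 1UL << node;
	if (mbind(data, len, MPOL_BIND, &node_mask,
		  sizeof(node_mask) * 8, 0))
		pr_warning("failed to bind [%p-%p] AIO buffer to node %d: error %m\n",
			   data, data + len, node);
}
#endif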

jirka
