Message-ID: <20190221094906.GD10990@krava>
Date: Thu, 21 Feb 2019 10:49:06 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Namhyung Kim <namhyung@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/4] perf record: implement -z=<level> and --mmap-flush=<thres> options
On Wed, Feb 20, 2019 at 06:24:30PM +0300, Alexey Budankov wrote:
>
> On 12.02.2019 16:08, Jiri Olsa wrote:
> > On Mon, Feb 11, 2019 at 11:22:38PM +0300, Alexey Budankov wrote:
> >
> > SNIP
> >
> >> +static int perf_mmap__aio_mmap_blocks(struct perf_mmap *map);
> >> +
> >> static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
> >> {
> >> - int delta_max, i, prio, ret;
> >> + int i, ret = 0, init_blocks = 1;
> >>
> >> map->aio.nr_cblocks = mp->nr_cblocks;
> >> + if (map->aio.nr_cblocks == -1) {
> >> + map->aio.nr_cblocks = 1;
> >> + init_blocks = 0;
> >> + }
> >> +
> >> if (map->aio.nr_cblocks) {
> >> - map->aio.aiocb = calloc(map->aio.nr_cblocks, sizeof(struct aiocb *));
> >> - if (!map->aio.aiocb) {
> >> - pr_debug2("failed to allocate aiocb for data buffer, error %m\n");
> >> - return -1;
> >> - }
> >> - map->aio.cblocks = calloc(map->aio.nr_cblocks, sizeof(struct aiocb));
> >> - if (!map->aio.cblocks) {
> >> - pr_debug2("failed to allocate cblocks for data buffer, error %m\n");
> >> - return -1;
> >> - }
> >> map->aio.data = calloc(map->aio.nr_cblocks, sizeof(void *));
> >> if (!map->aio.data) {
> >> pr_debug2("failed to allocate data buffer, error %m\n");
> >> return -1;
> >> }
> >> - delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
> >> for (i = 0; i < map->aio.nr_cblocks; ++i) {
> >> ret = perf_mmap__aio_alloc(map, i);
> >> if (ret == -1) {
> >> @@ -251,29 +245,16 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
> >> ret = perf_mmap__aio_bind(map, i, map->cpu, mp->affinity);
> >> if (ret == -1)
> >> return -1;
> >> - /*
> >> - * Use cblock.aio_fildes value different from -1
> >> - * to denote started aio write operation on the
> >> - * cblock so it requires explicit record__aio_sync()
> >> - * call prior the cblock may be reused again.
> >> - */
> >> - map->aio.cblocks[i].aio_fildes = -1;
> >> - /*
> >> - * Allocate cblocks with priority delta to have
> >> - * faster aio write system calls because queued requests
> >> - * are kept in separate per-prio queues and adding
> >> - * a new request will iterate thru shorter per-prio
> >> - * list. Blocks with numbers higher than
> >> - * _SC_AIO_PRIO_DELTA_MAX go with priority 0.
> >> - */
> >> - prio = delta_max - i;
> >> - map->aio.cblocks[i].aio_reqprio = prio >= 0 ? prio : 0;
> >> }
> >> + if (init_blocks)
> >> + ret = perf_mmap__aio_mmap_blocks(map);
> >> }
> >>
> >> - return 0;
> >> + return ret;
> >> }
> >
> > SNIP
> >
> > it seems like a little refactoring happened in here (up and down) for
> > the aio code, which is not explained and I'm unable to follow it..
> > please separate this out into a simple change
>
> AIO buffer management has been taken out of the HAVE_AIO_SUPPORT define
> so it can be reused for compression in the serial streaming case. It will
> be revisited after the other issues are addressed.
as I said earlier, please separate this from aio
jirka
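
For reference, below is a minimal sketch of the perf_mmap__aio_mmap_blocks()
helper that the quoted hunk forward-declares, reconstructed from the lines the
diff removes from perf_mmap__aio_mmap(). The function name comes from the
forward declaration in the patch; its exact body and placement in the actual
series may differ, so treat this as an illustration of the refactoring being
discussed, not the submitted code:

/*
 * Sketch only: assumes the context of tools/perf/util/mmap.c, where
 * <aio.h>, <stdlib.h>, pr_debug2() and struct perf_mmap are available.
 * The body reuses the cblock setup removed from perf_mmap__aio_mmap().
 */
static int perf_mmap__aio_mmap_blocks(struct perf_mmap *map)
{
	int delta_max, i, prio;

	map->aio.aiocb = calloc(map->aio.nr_cblocks, sizeof(struct aiocb *));
	if (!map->aio.aiocb) {
		pr_debug2("failed to allocate aiocb for data buffer, error %m\n");
		return -1;
	}
	map->aio.cblocks = calloc(map->aio.nr_cblocks, sizeof(struct aiocb));
	if (!map->aio.cblocks) {
		pr_debug2("failed to allocate cblocks for data buffer, error %m\n");
		return -1;
	}

	delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
	for (i = 0; i < map->aio.nr_cblocks; ++i) {
		/*
		 * aio_fildes == -1 marks the cblock as idle; a started aio
		 * write stores a real fd, so record__aio_sync() must be
		 * called before the cblock can be reused.
		 */
		map->aio.cblocks[i].aio_fildes = -1;
		/*
		 * Assign decreasing priority deltas so queued requests sit
		 * in shorter per-prio lists; blocks with numbers beyond
		 * _SC_AIO_PRIO_DELTA_MAX fall back to priority 0.
		 */
		prio = delta_max - i;
		map->aio.cblocks[i].aio_reqprio = prio >= 0 ? prio : 0;
	}

	return 0;
}

With the cblock setup isolated in a helper like this, the map->aio.data
allocation can stay outside HAVE_AIO_SUPPORT for the serial-streaming
compression case, which is the motivation the quoted reply describes.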