Message-ID: <38f9158b-dca5-2b09-99cb-f12bb62ad5dc@linux.intel.com>
Date:   Tue, 28 Aug 2018 14:31:04 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 2/2]: perf record: enable asynchronous trace writing

Hi,

On 28.08.2018 11:57, Jiri Olsa wrote:
> On Mon, Aug 27, 2018 at 09:16:55PM +0300, Alexey Budankov wrote:
> 
> SNIP
> 
>> +	int trace_fd = rec->session->data->file.fd;
>> +	struct aiocb **mmap_aio = rec->evlist->mmap_aio;
>> +	int mmap_aio_size = 0;
>> +	off_t off;
>>  
>>  	if (!evlist)
>>  		return 0;
>> @@ -528,14 +632,17 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
>>  	if (overwrite && evlist->bkw_mmap_state != BKW_MMAP_DATA_PENDING)
>>  		return 0;
>>  
>> +	off = lseek(trace_fd, 0, SEEK_CUR);
>> +
>>  	for (i = 0; i < evlist->nr_mmaps; i++) {
>>  		struct auxtrace_mmap *mm = &maps[i].auxtrace_mmap;
>>  
>>  		if (maps[i].base) {
>> -			if (perf_mmap__push(&maps[i], rec, record__pushfn) != 0) {
>> -				rc = -1;
>> +			rc = perf_mmap__push(&maps[i], rec, record__pushfn, &off);
>> +			if (rc < 0)
>>  				goto out;
>> -			}
>> +			else if (rc > 0)
>> +				mmap_aio[mmap_aio_size++] = &maps[i].cblock;
> 
> I understand the purpose of the mmap_aio array, but I don't see a reason
> to fill it in every time we call record__mmap_read_evlist

The loop iterates over the same set of kernel buffers on every call
of record__mmap_read_evlist(). The called perf_mmap__push() checks
whether there is data ready to spill in the corresponding buffer and
returns 0 if there is none. So every time we execute the loop we get
a different set of buffers to spill, and under these circumstances
dynamic filling of mmap_aio looks preferable.
Lifetime management of the perf_mmap object and the memory it
references is a separate, unrelated matter.
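
For illustration only, a minimal sketch of that dynamic-fill pattern;
struct buf, push_buf() and read_all() below are hypothetical stand-ins
for perf_mmap, perf_mmap__push() and record__mmap_read_evlist(), not
the actual perf code:

#include <aio.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

struct buf {
	struct aiocb cblock;	/* control block for the queued aio write */
	void	    *data;	/* snapshot of the kernel buffer contents */
	size_t	     len;	/* bytes ready to spill, 0 if nothing     */
};

/* <0 on error, 0 when nothing to spill, >0 when an aio write was queued. */
static int push_buf(struct buf *b, int fd, off_t *off)
{
	if (!b->len)
		return 0;

	memset(&b->cblock, 0, sizeof(b->cblock));
	b->cblock.aio_fildes = fd;
	b->cblock.aio_buf    = b->data;
	b->cblock.aio_nbytes = b->len;
	b->cblock.aio_offset = *off;
	*off += b->len;

	return aio_write(&b->cblock) == 0 ? 1 : -1;
}

static int read_all(struct buf *bufs, int nr, struct aiocb **aio,
		    int fd, off_t off)
{
	int i, rc, nr_aio = 0;

	for (i = 0; i < nr; i++) {
		rc = push_buf(&bufs[i], fd, &off);
		if (rc < 0)
			return rc;	/* bail out completely on error */
		if (rc > 0)		/* a write was actually queued, */
			aio[nr_aio++] = &bufs[i].cblock;	/* so collect its cblock */
	}
	/* aio[0..nr_aio-1] is then handed to the completion-polling code */
	return nr_aio;
}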

> 
> the way I see it, when 'pushing the data' it's either all or nothing,
> 
> if there's an error in pushing one map, we bail out completely..
> so the mmap_aio array could be preallocated (it is now) and
> pre-filled with cblock pointers
> 
> that would probably ease up the reference counting I mentioned
> in the previous email
> 
> thanks,
> jirka
> 
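
For comparison, an equally hedged sketch of the pre-filled variant
suggested above, reusing the hypothetical stand-ins from the previous
sketch: the cblock pointers are collected once at setup time, so each
pass only submits the writes and bails out on the first error:

static void prefill_aio(struct buf *bufs, int nr, struct aiocb **aio)
{
	int i;

	for (i = 0; i < nr; i++)
		aio[i] = &bufs[i].cblock;	/* filled once, reused every pass */
}

static int read_all_prefilled(struct buf *bufs, int nr, int fd, off_t off)
{
	int i, rc;

	for (i = 0; i < nr; i++) {
		rc = push_buf(&bufs[i], fd, &off);	/* still skips empty buffers */
		if (rc < 0)
			return rc;	/* all or nothing: bail out completely */
	}
	return 0;
}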
