Date:   Mon, 25 Feb 2019 18:30:08 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/4] perf record: implement -z=<level> and
 --mmap-flush=<thres> options


On 20.02.2019 17:14, Alexey Budankov wrote:
> 
> On 12.02.2019 16:09, Jiri Olsa wrote:
>> On Mon, Feb 11, 2019 at 11:22:38PM +0300, Alexey Budankov wrote:
>>
>> SNIP
>>
>>> @@ -1147,6 +1193,10 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>>>  	fd = perf_data__fd(data);
>>>  	rec->session = session;
>>>  
>>> +	rec->opts.comp_level = 0;
>>> +	session->header.env.comp_level = rec->opts.comp_level;
>>> +	session->header.env.comp_type = PERF_COMP_NONE;
>>> +
>>>  	record__init_features(rec);
>>>  
>>>  	if (rec->opts.use_clockid && rec->opts.clockid_res_ns)
>>> @@ -1176,6 +1226,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>>>  		err = -1;
>>>  		goto out_child;
>>>  	}
>>> +	session->header.env.comp_mmap_len = session->evlist->mmap_len;
>>
>> so the comp_mmap_len is the max length of the compressed packet?
> 
> comp_mmap_len is the size of the buffer needed to hold one compressed 
> chunk of data after it has been decompressed.
> 
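
For illustration only (this is not the actual perf code; it uses plain
libzstd and hypothetical names like env, rec_payload and rec_payload_size),
a reader-side sketch of how comp_mmap_len could be used to size a single
reusable decompression buffer:

    #include <zstd.h>
    #include <stdlib.h>

    /* Assumption: every COMPRESSED record decompresses into at most
     * comp_mmap_len bytes (one mmap buffer worth of data), so one
     * buffer of that size can be reused for all records. */
    void *decomp_buf = malloc(env->comp_mmap_len);

    size_t decomp_size = ZSTD_decompress(decomp_buf, env->comp_mmap_len,
                                         rec_payload, rec_payload_size);
    if (ZSTD_isError(decomp_size)) {
            /* handle decompression failure */
    }
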
>>
>> any idea if this value might have some impact on the processing speed?
> 
> It increases memory consumption at the loading and processing stages.
> 
>>
>> I see you mentioned the size reduction, could you also measure
>> the record overhead?
> 
> Let's get back to this after the code review.

Overhead numbers are provided in v3.

> 
> Thanks,
> Alexey
> 
>>
>> thanks,
>> jirka
>>
> 
