Date:   Fri, 16 Feb 2018 10:25:31 +0800
From:   "Jin, Yao" <yao.jin@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
        mingo@...hat.com, alexander.shishkin@...ux.intel.com,
        Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
        kan.liang@...el.com, yao.jin@...el.com
Subject: Re: [PATCH] perf report: Fix a memory corruption issue when enabling
 --branch-history



On 2/13/2018 10:00 PM, Jin, Yao wrote:
> 
> 
> On 2/13/2018 5:45 PM, Jiri Olsa wrote:
>> On Tue, Feb 13, 2018 at 04:44:28PM +0800, Jin Yao wrote:
>>> The following command lines cause a perf crash:
>>>
>>> perf record -j call -g -a <application>
>>> perf report --branch-history
>>>
>>> *** Error in `perf': double free or corruption (!prev): 0x00000000104aa040 ***
>>> ======= Backtrace: =========
>>> /lib/x86_64-linux-gnu/libc.so.6(+0x77725)[0x7f6b37254725]
>>> /lib/x86_64-linux-gnu/libc.so.6(+0x7ff4a)[0x7f6b3725cf4a]
>>> /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f6b37260abc]
>>> perf[0x51b914]
>>> perf(hist_entry_iter__add+0x1e5)[0x51f305]
>>> perf[0x43cf01]
>>> perf[0x4fa3bf]
>>> perf[0x4fa923]
>>> perf[0x4fd396]
>>> perf[0x4f9614]
>>> perf(perf_session__process_events+0x89e)[0x4fc38e]
>>> perf(cmd_report+0x15d2)[0x43f202]
>>> perf[0x4a059f]
>>> perf(main+0x631)[0x427b71]
>>> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f6b371fd830]
>>> perf(_start+0x29)[0x427d89]
>>>
>>> The memory corruption happens at:
>>>
>>> iter_add_next_cumulative_entry()
>>> {
>>>          ...
>>>          for (i = 0; i < iter->curr; i++) {
>>>          ...
>>> }
>>>
>>> Neither iter_next_cumulative_entry() nor iter_add_next_cumulative_entry()
>>> checks whether iter->curr exceeds the bounds of the array 'he_cache[]'.
>>>
>>> If the callchain has too many nodes, iter->curr can exceed
>>> iter->max_stack, and memory corruption occurs.
>>>
>>> This patch reallocates the array 'he_cache[]' in
>>> iter_next_cumulative_entry() when necessary (i.e. when the callchain
>>> has too many nodes).
>>
>> right, max_stack does not say how many nodes end up in
>> callchain_cursor in the end... good catch, please also mention
>> that in the changelog
>>
> 
> max_stack appears to limit only the number of call entries, not the
> other branch entries.
> 
>> however we know the final count from callchain_cursor itself,
>> the attached patch might do the same job, right?
>>
> 
> I think the attached patch is ok.
> 
>> also could we now get rid of iter->max_stack?
>>
> 
> In my opinion, the '--max-stack' option in perf report doesn't look very
> necessary. But that's just my personal opinion; we need to hear from more
> people. :)
> 
> Thanks
> Jin Yao
> 
>> thanks,
>> jirka
>>
>>
>> ---
>> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
>> index b6140950301e..b50b7b70dcca 100644
>> --- a/tools/perf/util/hist.c
>> +++ b/tools/perf/util/hist.c
>> @@ -879,7 +879,7 @@ iter_prepare_cumulative_entry(struct hist_entry_iter *iter,
>>        * cumulated only one time to prevent entries more than 100%
>>        * overhead.
>>        */
>> -    he_cache = malloc(sizeof(*he_cache) * (iter->max_stack + 1));
>> +    he_cache = malloc(sizeof(*he_cache) * (callchain_cursor.nr + 1));
>>       if (he_cache == NULL)
>>           return -ENOMEM;
>>

Hi Jiri,

I guess you will post this patch, right?
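
As an aside for anyone reading along: the crash boils down to sizing
'he_cache[]' from the assumed limit (max_stack) while the copy loop runs over
the actual number of callchain nodes. Below is a minimal, self-contained
sketch of that bug class and of the fix of sizing by the actual count, which
is what the attached patch does with callchain_cursor.nr. The names and
values in it (cache, max_stack, nr_nodes, struct entry) are made up for
illustration only and are not perf internals.

/* illustration.c: standalone sketch, not perf source */
#include <stdio.h>
#include <stdlib.h>

struct entry { int id; };

int main(void)
{
	int max_stack = 4;	/* assumed limit, like iter->max_stack */
	int nr_nodes  = 8;	/* actual callchain nodes, like callchain_cursor.nr */
	struct entry **cache;
	int i;

	/*
	 * Buggy sizing: allocate for the assumed limit.  Writing one slot
	 * per callchain node would run past the end as soon as
	 * nr_nodes > max_stack, corrupting heap metadata exactly like the
	 * "double free or corruption" abort in the backtrace above.
	 */
	cache = malloc(sizeof(*cache) * (max_stack + 1));
	if (cache == NULL)
		return -1;
	/* for (i = 0; i < nr_nodes; i++) cache[i] = NULL;  <-- would overflow */
	free(cache);

	/* Safe sizing: allocate for the actual node count. */
	cache = malloc(sizeof(*cache) * (nr_nodes + 1));
	if (cache == NULL)
		return -1;
	for (i = 0; i < nr_nodes; i++)
		cache[i] = NULL;	/* every write stays in bounds */
	free(cache);

	printf("cache sized for %d nodes, no overflow\n", nr_nodes);
	return 0;
}

Either sizing the cache up front from the real node count or bounds-checking
inside the loop would close the hole; the attached patch takes the former,
simpler route.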

Thanks
Jin Yao
