Date:   Thu, 23 Mar 2023 19:41:25 +0530
From:   Ravi Bangoria <ravi.bangoria@....com>
To:     Namhyung Kim <namhyung@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Stephane Eranian <eranian@...gle.com>,
        Kan Liang <kan.liang@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Ravi Bangoria <ravi.bangoria@....com>
Subject: Re: [PATCH] perf/x86/ibs: Set data_src.mem_lvl_num as well

Hi Namhyung,

>>> @@ -748,12 +750,14 @@ static void perf_ibs_get_mem_lvl(union ibs_op_data2 *op_data2,
>>>       if (ibs_caps & IBS_CAPS_ZEN4) {
>>>               if (ibs_data_src == IBS_DATA_SRC_EXT_LOC_CACHE) {
>>>                       data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
>>> +                     data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
>>>                       return;
>>>               }
>>>       } else {
>>>               if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
>>>                       data_src->mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
>>>                                           PERF_MEM_LVL_HIT;
>>> +                     data_src->mem_lvl_num = PERF_MEM_LVLNUM_L3;
>>
>> mem_lvl_num does not have an option to set multiple sources, so
>> setting just PERF_MEM_LVLNUM_L3 is a bit misleading here. The
>> documentation (PPR 55898 Rev 0.70 - Oct 14, 2022) says:
>>
>>  "data returned from shared L3, other L2 on same CCX or other core's
>>   cache through same node."
>>
>> To my knowledge, "shared L3" and "other L2 on same CCX" have similar
>> latency, but the request needs to go through the DF (Data Fabric) for
>> "other core's cache through same node", which incurs higher latency.
>> Thus, setting both is important. This was one of the reasons not to
>> use mem_lvl_num in the IBS code.
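
To illustrate the encoding difference (a rough sketch against the
perf_mem_data_src union in include/uapi/linux/perf_event.h, not code
from this patch): mem_lvl is a bitmask, so several sources can be
OR'ed together, while mem_lvl_num is a 4-bit field that holds exactly
one level.

	union perf_mem_data_src ds = { .val = 0 };

	/* mem_lvl: a bitmask, multiple sources can be combined */
	ds.mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_REM_CCE1 |
		     PERF_MEM_LVL_HIT;

	/* mem_lvl_num: a 4-bit field, only one level fits, so the
	 * "remote cache" half of the information is lost */
	ds.mem_lvl_num = PERF_MEM_LVLNUM_L3;
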
> 
> I suspect it's a quality issue that CPUs prior to Zen4 cannot
> identify the data source precisely.  How about setting
> LVLNUM_ANY_CACHE then?

Ok. Although ANY_CACHE by itself is mostly clueless, adding HOPS_0 will
make it more consumable. There are many other places where this patch
needs to set mem_remote and mem_hops as well, and those changes would
result in too many individual assignments. So I think the IBS code
should switch to using the PERF_MEM_S() macro, as sketched below. Do
you mind if I send a v2 with all those changes?
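
For illustration (a sketch of the idea for the pre-Zen4 local-cache
case, not the actual v2; PERF_MEM_S() is the existing helper macro from
include/uapi/linux/perf_event.h that shifts a PERF_MEM_* constant into
its field):

	if (ibs_data_src == IBS_DATA_SRC_LOC_CACHE) {
		/* L3, another core's L2 on the same CCX, or a peer
		 * cache on the same node: encode level bits, level
		 * number and hop count in a single expression */
		data_src->val |= PERF_MEM_S(LVL, L3) |
				 PERF_MEM_S(LVL, REM_CCE1) |
				 PERF_MEM_S(LVL, HIT) |
				 PERF_MEM_S(LVLNUM, ANY_CACHE) |
				 PERF_MEM_S(HOPS, 0);
		return;
	}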

> 
>>
>> The 2nd reason was that perf c2c (c2c_decode_stats()) does not use
>> mem_lvl_num.
> 
> Maybe we can change that.  It'd be easy as long as they provide
> the same information.  IOW mem_lvl = mem_lvl_num + remote + snoop.
> 
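
That mapping seems doable on the tool side. A hypothetical helper (the
name lvl_from_lvlnum() and the exact bit mapping are illustrative, not
current perf code) could reconstruct the legacy mem_lvl bits from the
newer fields:

	static __u64 lvl_from_lvlnum(union perf_mem_data_src ds)
	{
		__u64 lvl = 0;

		if (ds.mem_lvl_num == PERF_MEM_LVLNUM_L3)
			lvl |= ds.mem_remote ? PERF_MEM_LVL_REM_CCE1
					     : PERF_MEM_LVL_L3;
		if (ds.mem_snoop & PERF_MEM_SNOOP_HIT)
			lvl |= PERF_MEM_LVL_HIT;
		if (ds.mem_snoop & PERF_MEM_SNOOP_MISS)
			lvl |= PERF_MEM_LVL_MISS;
		return lvl;
	}
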
>>
>> The 3rd reason was that the perf mem sorting logic (sort__lvl_cmp())
>> does not consider mem_lvl_num.
> 
> Likewise.
> 
>>
>> The 4th one was that if I set both mem_lvl and mem_lvl_num, as other
>> archs do, `perf mem report` prints both, which is kind of ugly:
>>
>>           464029  N/A
>>           340728  L1 or L1 hit
>>             8312  LFB/MAB or LFB/MAB hit
>>             7901  L2 or L2 hit
>>              123  L3 or Remote Cache (1 hop) or L3 hit
>>
>> Without mem_lvl_num it's much cleaner:
>>
>>           330057  N/A
>>           229646  L1 hit
>>             5842  L2 hit
>>             5726  LFB/MAB hit
>>               78  L3 or Remote Cache (1 hop) hit
> 
> Agreed.  It doesn't need to repeat the same information.
> 
>>
>> I think we should clean these up before applying this patch? The
>> other option is to add BPF filter support for mem_lvl. What do you
>> think?
> 
> I still prefer using mem_lvl_num as I think it's the way to go,
> but I'm open to change.

Sure. The 2nd, 3rd and 4th points are all tool-side improvements.
Although it would be good to fix those, let me post a v2 of this patch
for now?

Thanks,
Ravi
