Date:   Tue, 28 Jun 2022 17:23:09 +0200
From:   Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>
To:     Rajendra Nayak <quic_rjendra@...cinc.com>,
        Bjorn Andersson <bjorn.andersson@...aro.org>
Cc:     Andy Gross <agross@...nel.org>, Georgi Djakov <djakov@...nel.org>,
        Rob Herring <robh+dt@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>, linux-arm-msm@...r.kernel.org,
        linux-pm@...r.kernel.org, devicetree@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        Thara Gopinath <thara.gopinath@...aro.org>
Subject: Re: [PATCH v4 4/4] arm64: dts: qcom: sdm845: Add CPU BWMON

On 28/06/2022 17:20, Rajendra Nayak wrote:
> 
> 
> On 6/28/2022 7:32 PM, Krzysztof Kozlowski wrote:
>> On 28/06/2022 15:15, Rajendra Nayak wrote:
>>>
>>>
>>> On 6/28/2022 4:20 PM, Krzysztof Kozlowski wrote:
>>>> On 28/06/2022 12:36, Rajendra Nayak wrote:
>>>>>
>>>>> On 6/27/2022 6:09 PM, Krzysztof Kozlowski wrote:
>>>>>> On 26/06/2022 05:28, Bjorn Andersson wrote:
>>>>>>> On Thu 23 Jun 07:58 CDT 2022, Krzysztof Kozlowski wrote:
>>>>>>>
>>>>>>>> On 23/06/2022 08:48, Rajendra Nayak wrote:
>>>>>>>>>>>> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>>>>>>>> index 83e8b63f0910..adffb9c70566 100644
>>>>>>>>>>>> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>>>>>>>> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>>>>>>>> @@ -2026,6 +2026,60 @@ llcc: system-cache-controller@...0000 {
>>>>>>>>>>>>       			interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
>>>>>>>>>>>>       		};
>>>>>>>>>>>>       
>>>>>>>>>>>> +		pmu@...6400 {
>>>>>>>>>>>> +			compatible = "qcom,sdm845-cpu-bwmon";
>>>>>>>>>>>> +			reg = <0 0x01436400 0 0x600>;
>>>>>>>>>>>> +
>>>>>>>>>>>> +			interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;
>>>>>>>>>>>> +
>>>>>>>>>>>> +			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
>>>>>>>>>>>> +					<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
>>>>>>>>>>>> +			interconnect-names = "ddr", "l3c";
>>>>>>>>>>>
>>>>>>>>>>> Is this the pmu/bwmon instance between the cpu and caches or the one between the caches and DDR?
>>>>>>>>>>
>>>>>>>>>> To my understanding this is the one between CPU and caches.
>>>>>>>>>
>>>>>>>>> Ok, but then because the OPP table lists the DDR bw first and the cache bw second, doesn't the driver
>>>>>>>>> end up comparing the bw values reported by the pmu against the DDR bw instead of the cache bw?
>>>>>>>>
>>>>>>>> I double checked now and you're right.
>>>>>>>>
>>>>>>>>> At least in my testing on sc7280 I found this to mess things up and I always ended up at
>>>>>>>>> higher OPPs even while the system was completely idle. Comparing the values against the cache bw
>>>>>>>>> fixed it. (sc7280 also has a bwmon4 instance between the cpu and caches and a bwmon5 between the cache
>>>>>>>>> and DDR)
>>>>>>>>
>>>>>>>> In my case it exposes a different issue - underperformance. Somehow the
>>>>>>>> bwmon does not report bandwidth high enough to vote for high bandwidth.
>>>>>>>>
>>>>>>>> After removing the DDR interconnect and bandwidth OPP values I have for:
>>>>>>>> sysbench --threads=8 --time=60 --memory-total-size=20T --test=memory
>>>>>>>> --memory-block-size=4M run
>>>>>>>>
>>>>>>>> 1. Vanilla: 29768 MB/s
>>>>>>>> 2. Vanilla without CPU votes: 8728 MB/s
>>>>>>>> 3. Previous bwmon (voting too high): 32007 MB/s
>>>>>>>> 4. Fixed bwmon: 24911 MB/s
>>>>>>>> Bwmon does not vote for the maximum L3 speed:
>>>>>>>> bwmon report 9408 MB/s (thresholds set: <9216000 15052801>)
>>>>>>>> osm l3 aggregate 14355 MBps -> 897 MHz, level 7, bw 14355 MBps
>>>>>>>>
>>>>>>>> Maybe that's just a problem with the missing governor which would vote for
>>>>>>>> bandwidth, rounding up or anticipating higher needs.
>>>>>>>>
>>>>>>>>>>> Depending on which one it is, shouldn't we just be scaling one or the other and not both interconnect paths?
>>>>>>>>>>
>>>>>>>>>> The interconnects are the same as ones used for CPU nodes, therefore if
>>>>>>>>>> we want to scale both when scaling CPU, then we also want to scale both
>>>>>>>>>> when seeing traffic between CPU and cache.
>>>>>>>>>
>>>>>>>>> Well, they were both associated with the CPU node because, with no other input to decide on _when_
>>>>>>>>> to scale the caches and DDR, we just put in a mapping table which simply mapped a CPU freq to an L3 _and_
>>>>>>>>> DDR freq. So with just one input (CPU freq) we decided what both the L3 freq and the DDR freq should be.
>>>>>>>>>
>>>>>>>>> Now with 2 PMUs, we have 2 inputs, so we can individually scale the L3 based on the cache PMU
>>>>>>>>> counters and the DDR based on the DDR PMU counters, no?
>>>>>>>>>
>>>>>>>>> Since you said you have plans to add the other pmu support as well (bwmon5 between the cache and DDR),
>>>>>>>>> how else would you have the OPP table associated with that pmu instance? Would you again have both the
>>>>>>>>> L3 and DDR scale based on the inputs from that bwmon too?
>>>>>>>>
>>>>>>>> Good point, thanks for sharing. I think you're right. I'll keep only the
>>>>>>>> l3c interconnect path.
>>>>>>>>
>>>>>>>
>>>>>>> If I understand correctly, <&osm_l3 MASTER_OSM_L3_APPS &osm_l3
>>>>>>> SLAVE_OSM_L3> relates to the L3 cache speed, which sits inside the CPU
>>>>>>> subsystem. As such traffic hitting this cache will not show up in either
>>>>>>> bwmon instance.
>>>>>>>
>>>>>>> The path <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>
>>>>>>> affects the DDR frequency. So the traffic measured by the cpu-bwmon
>>>>>>> would be the CPU subsystem's traffic that misses the L1/L2/L3 caches and
>>>>>>> hits the memory bus towards DDR.
>>>>>
>>>>> That seems right. Looking some more into the downstream code and register definitions,
>>>>> I see the 2 bwmon instances actually lie on the path outside the CPU SS towards DDR:
>>>>> the first one (bwmon4) is between the CPUSS and LLCC (system cache) and the second one
>>>>> (bwmon5) between LLCC and DDR. So we should use the counters from bwmon4 to
>>>>> scale the CPU-LLCC path (and not L3); on sc7280 that would mean splitting
>>>>> <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3> into
>>>>> <&gem_noc MASTER_APPSS_PROC 3 &gem_noc SLAVE_LLCC 3> (voting based on the bwmon4 inputs)
>>
>> For sdm845 SLAVE_LLCC is in mem_noc, so I guess mc_virt on sc7280?
> 
> that's correct,
> 
>>
>>>>> and <&mc_virt MASTER_LLCC 3 &mc_virt SLAVE_EBI1 3> (voting based on the bwmon5 inputs)
>>>>> and similar for sdm845 too.
>>>>>
>>>>> L3 should perhaps still be voted based on the cpu freq as done today.
>>>>
>>>> This would mean that the original bandwidth values (800 - 7216 MB/s) were
>>>> correct. However, we still have your observation that bwmon kicks in very
>>>> fast and my measurements showing that sampled bwmon data reaches ~20000
>>>> MB/s.
>>>
>>> Right, that's because the bandwidth supported on the cpu<->llcc path is much higher
>>> than what the DDR supports. For instance on sc7280, I see (2288 - 15258 MB/s) for LLCC while
>>> the DDR max is 8532 MB/s.
>>
>> OK, that sounds right.
>>
>> Another point is that I did not find actual scaling of throughput via
>> that interconnect path:
>> <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_LLCC 3>
> 
> Shouldn't this be <&gladiator_noc MASTER_APPSS_PROC 3 &gladiator_noc SLAVE_LLCC 3> on sdm845?

When I tried this, I got an icc xlate error. If I read the code correctly,
it's in mem_noc:
https://elixir.bootlin.com/linux/v5.19-rc4/source/drivers/interconnect/qcom/sdm845.c#L349
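
For reference, a rough and untested sketch of how I would expect the sdm845
node to end up looking with only the CPU<->LLCC vote (the OPP table and other
properties are omitted, the unit address is taken from the reg value in the
patch, and the second bwmon instance towards DDR is left out):

	/* bwmon between the CPU subsystem and LLCC; it votes only on the
	 * CPU<->LLCC path. L3 keeps scaling with the CPU frequency and the
	 * LLCC<->DDR path would be handled by the second bwmon instance.
	 */
	pmu@1436400 {
		compatible = "qcom,sdm845-cpu-bwmon";
		reg = <0 0x01436400 0 0x600>;

		interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;

		interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_LLCC 3>;
	};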

Best regards,
Krzysztof
