Message-ID: <47e1fcb4-237b-b880-b1b2-3910cc19e727@linaro.org>
Date: Mon, 27 Jun 2022 14:39:41 +0200
From: Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>
To: Bjorn Andersson <bjorn.andersson@...aro.org>
Cc: Rajendra Nayak <quic_rjendra@...cinc.com>,
Andy Gross <agross@...nel.org>,
Georgi Djakov <djakov@...nel.org>,
Rob Herring <robh+dt@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, linux-arm-msm@...r.kernel.org,
linux-pm@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
Thara Gopinath <thara.gopinath@...aro.org>
Subject: Re: [PATCH v4 4/4] arm64: dts: qcom: sdm845: Add CPU BWMON
On 26/06/2022 05:28, Bjorn Andersson wrote:
> On Thu 23 Jun 07:58 CDT 2022, Krzysztof Kozlowski wrote:
>
>> On 23/06/2022 08:48, Rajendra Nayak wrote:
>>>>>> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>> index 83e8b63f0910..adffb9c70566 100644
>>>>>> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>>>> @@ -2026,6 +2026,60 @@ llcc: system-cache-controller@...0000 {
>>>>>> interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
>>>>>> };
>>>>>>
>>>>>> + pmu@...6400 {
>>>>>> + compatible = "qcom,sdm845-cpu-bwmon";
>>>>>> + reg = <0 0x01436400 0 0x600>;
>>>>>> +
>>>>>> + interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;
>>>>>> +
>>>>>> + interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
>>>>>> + <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
>>>>>> + interconnect-names = "ddr", "l3c";
>>>>>
>>>>> Is this the pmu/bwmon instance between the cpu and caches or the one between the caches and DDR?
>>>>
>>>> To my understanding this is the one between CPU and caches.
>>>
>>> Ok, but then because the OPP table lists the DDR bw first and Cache bw second, isn't the driver
>>> ending up comparing the bw values thrown by the pmu against the DDR bw instead of the Cache BW?
>>
>> I double checked now and you're right.
>>
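
To make the ordering issue concrete: the values in opp-peak-kBps follow the
order of the interconnect-names entries, so with "ddr" listed first the
bandwidth measured by bwmon ends up compared against the DDR column. A rough
sketch only - the numbers and the opp-table layout below are placeholders,
not taken from the actual table:

	interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
			<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
	interconnect-names = "ddr", "l3c";

	opp-table {
		compatible = "operating-points-v2";

		opp-0 {
			/* <DDR kBps  L3C kBps>, same order as interconnect-names */
			opp-peak-kBps = <800000 4800000>;
		};
	};
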
>>> At least with my testing on sc7280 I found this to mess things up and I always ended up at
>>> higher OPPs even while the system was completely idle. Comparing the values against the Cache bw
>>> fixed it. (sc7280 also has a bwmon4 instance between the cpu and caches and a bwmon5 between the cache
>>> and DDR)
>>
>> In my case it exposes a different issue - underperformance. Somehow the
>> bwmon does not report a bandwidth high enough to vote for higher bandwidth.
>>
>> After removing the DDR interconnect path and its bandwidth OPP values I get, for:
>> sysbench --threads=8 --time=60 --memory-total-size=20T --test=memory
>> --memory-block-size=4M run
>>
>> 1. Vanilla: 29768 MB/s
>> 2. Vanilla without CPU votes: 8728 MB/s
>> 3. Previous bwmon (voting too high): 32007 MB/s
>> 4. Fixed bwmon: 24911 MB/s
>>
>> Bwmon does not vote for maximum L3 speed:
>> bwmon reports 9408 MB/s (thresholds set: <9216000 15052801>)
>> osm l3 aggregate 14355 MBps -> 897 MHz, level 7, bw 14355 MBps
>>
>> Maybe that's just a problem of the missing governor, which would vote for
>> bandwidth by rounding up or anticipating higher needs.
>>
>>>>> Depending on which one it is, shouldn't we just be scaling one and not both of the interconnect paths?
>>>>
>>>> The interconnects are the same as the ones used for the CPU nodes, therefore if
>>>> we want to scale both when scaling the CPU, then we also want to scale both
>>>> when seeing traffic between the CPU and the cache.
>>>
>>> Well, they were both associated with the CPU node because with no other input to decide on _when_
>>> to scale the caches and DDR, we just put a mapping table which simply mapped a CPU freq to an L3 _and_
>>> DDR freq. So with just one input (CPU freq) we decided on what should be both the L3 freq and DDR freq.
>>>
>>> Now with 2 PMUs, we have 2 inputs, so we can individually scale the L3 based on the cache PMU
>>> counters and DDR based on the DDR PMU counters, no?
>>>
>>> Since you said you have plans to add support for the other PMU as well (bwmon5 between the cache and DDR),
>>> how would you have the OPP table associated with that PMU instance? Would you again have both the
>>> L3 and DDR scale based on the inputs from that bwmon too?
>>
>> Good point, thanks for sharing. I think you're right. I'll keep only the
>> l3c interconnect path.
>>
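
For reference, keeping only the l3c path would make the node look roughly
like this - an untested sketch based on the diff above:

	pmu@1436400 {
		compatible = "qcom,sdm845-cpu-bwmon";
		reg = <0 0x01436400 0 0x600>;

		interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;

		/* only the path this bwmon instance actually measures */
		interconnects = <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
		interconnect-names = "l3c";
	};
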
>
> If I understand correctly, <&osm_l3 MASTER_OSM_L3_APPS &osm_l3
> SLAVE_OSM_L3> relates to the L3 cache speed, which sits inside the CPU
> subsystem. As such, traffic hitting this cache will not show up in either
> bwmon instance.
>
> The path <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>
> affects the DDR frequency. So the traffic measured by the cpu-bwmon
> would be the CPU subsystem's traffic that misses the L1/L2/L3 caches and
> hits the memory bus towards DDR.
>
>
> If this is the case, it seems to make sense to keep the L3 scaling in the
> opp-tables for the CPU and make bwmon only scale the DDR path. What do
> you think?

The data throughput reported by this bwmon instance goes beyond the DDR
OPP table bandwidth, e.g. 16-22 GB/s, so it seems it still measures
traffic within the cache controller, not on the memory bus.

Best regards,
Krzysztof