Message-ID: <564e5b54-66a5-2cb8-5bfc-b13b08f5eb6e@linaro.org>
Date:   Tue, 21 Mar 2023 18:33:35 +0100
From:   Konrad Dybcio <konrad.dybcio@...aro.org>
To:     Georgi Djakov <djakov@...nel.org>, Andy Gross <agross@...nel.org>,
        Bjorn Andersson <andersson@...nel.org>,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
Cc:     Bryan O'Donoghue <bryan.odonoghue@...aro.org>,
        linux-arm-msm@...r.kernel.org, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 2/9] interconnect: qcom: rpm: Add support for
 specifying channel num



On 21.03.2023 15:49, Georgi Djakov wrote:
> On 21.03.23 16:23, Konrad Dybcio wrote:
>>
>>
>> On 21.03.2023 15:21, Georgi Djakov wrote:
>>> On 21.03.23 16:09, Konrad Dybcio wrote:
>>>>
>>>> On 21.03.2023 15:06, Georgi Djakov wrote:
>>>>> Hi Konrad,
>>>>>
>>>>> Thanks for the patch!
>>>>>
>>>>> On 8.03.23 23:40, Konrad Dybcio wrote:
>>>>>> Some nodes, like EBI0 (DDR) or L3/LLCC, may be connected over more than
>>>>>> one channel. This should be taken into account in bandwidth calculation,
>>>>>> as we're supposed to feed msmbus with the per-channel bandwidth. Add
>>>>>> support for specifying that and use it during bandwidth aggregation.
>>>>>>
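For a concrete sense of the math (numbers made up for illustration): a
7680000 kB/s aggregate request against a dual-channel DDR node should reach
the bus driver as 3840000 kB/s per channel. A minimal standalone C sketch of
that division, mirroring the div_u64() call in the patch below:

	/* Illustrative only: per-channel bandwidth as computed in
	 * qcom_icc_bus_aggregate(); the figures are invented. */
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t sum_avg = 7680000;	/* aggregate request, kB/s */
		uint16_t channels = 2;		/* e.g. dual-channel DDR */
		uint64_t per_ch;

		/* Guard against nodes that don't declare a channel count */
		per_ch = channels ? sum_avg / channels : sum_avg;

		printf("per-channel bandwidth: %llu kB/s\n",
		       (unsigned long long)per_ch);
		return 0;
	}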
>>>>>
>>>>> This looks good, but do you have any follow-up patch to use this and set
>>>>> the channels in some driver?
>>>> Yes, I have a couple of OOT drivers that are gonna make use of it.
>>>> TBF it should have been sent separately from the QoS mess, but I
>>>> don't think it's much of an issue to take it as-is.
>>>>
>>>> The aforementioned OOT drivers for MSM8998 and SM6375 will be
>>>> submitted after we reach a consensus on how we want to ensure
>>>> that each node is guaranteed to have its clocks enabled before
>>>> access, among some other minor things.
>>>
>>> Yes, these QoS clocks are confusing. Maybe you can even submit them
>>> without configuring any QoS stuff in the first place? Does enabling QoS
>>> actually show any benefits on these devices?
>> Haven't tested that thoroughly to be honest. But I'll try to get
>> some numbers.
> 
> I expect this to have an impact only on some latency-sensitive stuff like
> the modem, or when there are heavy traffic flows. Maybe we can start without
> QoS first and then add it on top as a next step?
I only now remembered why I didn't do that... Adding QoS at a later time
will break older DTs, as with QoS we need to pass some clocks to the driver.
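A rough sketch of that concern, with assumed clock names (the real set would
come from the QoS series): once the driver starts requiring QoS clocks, it
fails to probe against an older DT that doesn't carry them.

	/* Hypothetical probe fragment; clock names are assumptions. */
	#include <linux/clk.h>
	#include <linux/kernel.h>
	#include <linux/platform_device.h>

	static int example_icc_probe(struct platform_device *pdev)
	{
		static const char * const names[] = { "bus", "bus_a" };
		struct clk_bulk_data clks[ARRAY_SIZE(names)];
		int i, ret;

		for (i = 0; i < ARRAY_SIZE(names); i++)
			clks[i].id = names[i];

		/* Errors out (e.g. -ENOENT) on an old DT that lacks these
		 * clocks, so the interconnect provider never comes up. */
		ret = devm_clk_bulk_get(&pdev->dev, ARRAY_SIZE(clks), clks);
		if (ret)
			return ret;

		return 0;
	}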

Konrad
> 
> BR,
> Georgi
> 
>>
>> Konrad
>>>
>>> Thanks,
>>> Georgi
>>>
>>>> Konrad
>>>>>
>>>>> BR,
>>>>> Georgi
>>>>>
>>>>>> Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@...aro.org>
>>>>>> Signed-off-by: Konrad Dybcio <konrad.dybcio@...aro.org>
>>>>>> ---
>>>>>>     drivers/interconnect/qcom/icc-rpm.c | 7 ++++++-
>>>>>>     drivers/interconnect/qcom/icc-rpm.h | 2 ++
>>>>>>     2 files changed, 8 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
>>>>>> index 35fd75ae70e3..27c4c6497994 100644
>>>>>> --- a/drivers/interconnect/qcom/icc-rpm.c
>>>>>> +++ b/drivers/interconnect/qcom/icc-rpm.c
>>>>>> @@ -317,6 +317,7 @@ static void qcom_icc_bus_aggregate(struct icc_provider *provider,
>>>>>>     {
>>>>>>         struct icc_node *node;
>>>>>>         struct qcom_icc_node *qn;
>>>>>> +    u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
>>>>>>         int i;
>>>>>>
>>>>>>     /* Initialise aggregate values */
>>>>>> @@ -334,7 +335,11 @@ static void qcom_icc_bus_aggregate(struct icc_provider *provider,
>>>>>>         list_for_each_entry(node, &provider->nodes, node_list) {
>>>>>>             qn = node->data;
>>>>>>             for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
>>>>>> -            agg_avg[i] += qn->sum_avg[i];
>>>>>> +            if (qn->channels)
>>>>>> +                sum_avg[i] = div_u64(qn->sum_avg[i], qn->channels);
>>>>>> +            else
>>>>>> +                sum_avg[i] = qn->sum_avg[i];
>>>>>> +            agg_avg[i] += sum_avg[i];
>>>>>>                 agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
>>>>>>             }
>>>>>>         }
>>>>>> diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
>>>>>> index 8ba1918d7997..8aed5400afda 100644
>>>>>> --- a/drivers/interconnect/qcom/icc-rpm.h
>>>>>> +++ b/drivers/interconnect/qcom/icc-rpm.h
>>>>>> @@ -66,6 +66,7 @@ struct qcom_icc_qos {
>>>>>>      * @id: a unique node identifier
>>>>>>      * @links: an array of nodes where we can go next while traversing
>>>>>>      * @num_links: the total number of @links
>>>>>> + * @channels: number of channels at this node (e.g. DDR channels)
>>>>>>      * @buswidth: width of the interconnect between a node and the bus (bytes)
>>>>>>      * @sum_avg: current sum aggregate value of all avg bw requests
>>>>>>      * @max_peak: current max aggregate value of all peak bw requests
>>>>>> @@ -78,6 +79,7 @@ struct qcom_icc_node {
>>>>>>         u16 id;
>>>>>>         const u16 *links;
>>>>>>         u16 num_links;
>>>>>> +    u16 channels;
>>>>>>         u16 buswidth;
>>>>>>         u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
>>>>>>         u64 max_peak[QCOM_ICC_NUM_BUCKETS];
>>>>>>
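For illustration, a follow-up provider driver would presumably opt in with a
node definition along these lines (all values invented, not from this series):

	/* Hypothetical node in some provider driver, setting the new
	 * field; the id is made up for illustration. */
	static struct qcom_icc_node ebi = {
		.id = 512,		/* made-up node id */
		.channels = 2,		/* dual-channel DDR */
		.buswidth = 8,		/* bytes */
	};

Nodes that leave .channels at zero keep the old behaviour, since the
aggregation code falls back to the raw sum_avg in that case.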
>>>>>
>>>
> 
