Message-ID: <ZMFU-hFSOHLr3hFP@gerhold.net>
Date: Wed, 26 Jul 2023 19:16:42 +0200
From: Stephan Gerhold <stephan@...hold.net>
To: Konrad Dybcio <konrad.dybcio@...aro.org>
Cc: Andy Gross <agross@...nel.org>,
Bjorn Andersson <andersson@...nel.org>,
Georgi Djakov <djakov@...nel.org>,
Marijn Suijten <marijn.suijten@...ainline.org>,
linux-arm-msm@...r.kernel.org, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/4] interconnect: qcom: icc-rpm: Add AB/IB calculations coefficients
On Wed, Jul 26, 2023 at 06:25:43PM +0200, Konrad Dybcio wrote:
> Presumably due to the hardware being so complex, some nodes (or busses)
> have different (usually higher) requirements for bandwidth than what
> the usual calculations would suggest.
>
Weird. I just hope this was never abused to work around some other broken
configuration. A nice round ib_percent = 200 has mostly the same effect as:
- Doubling the requested peak bandwidth in the consumer driver (perhaps
  they were too lazy to fix the driver downstream at some point)
- Halving the node buswidth
It's probably hard to say for sure...
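For the record, assuming the usual rate = peak_bw / buswidth conversion
this driver does further down, all three really do end up at the same
clock rate (modulo the AB side in the buswidth case):

	rate = (200 * peak_bw / 100) / buswidth
	     = (2 * peak_bw) / buswidth		<- doubled consumer request
	     = peak_bw / (buswidth / 2)		<- halved node buswidth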
> Looking at the available downstream files, it seems like AB values are
> adjusted per-bus and IB values are adjusted per-node.
> With that in mind, introduce percentage-based coefficient struct members
> and use them in the calculations.
>
> One thing to note is that downstream does (X%)*AB and IB/(Y%) which
> feels a bit backwards, especially given that the divisors for IB turn
> out to always be 25, 50, 200 making this a convenient conversion to 4x,
> 2x, 0.5x.. This commit uses the more sane, non-inverse approach.
>
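For reference, if I read the downstream formula right, IB/(Y%) is just
IB * 100 / Y, so those divisors map to the direct percentages like this:

	Y = 25  ->  ib_percent = 400	(4x)
	Y = 50  ->  ib_percent = 200	(2x)
	Y = 200 ->  ib_percent = 50	(0.5x)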
> Signed-off-by: Konrad Dybcio <konrad.dybcio@...aro.org>
> ---
> drivers/interconnect/qcom/icc-rpm.c | 10 +++++++++-
> drivers/interconnect/qcom/icc-rpm.h | 5 +++++
> 2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
> index 2c16917ba1fd..2de0e1dfe225 100644
> --- a/drivers/interconnect/qcom/icc-rpm.c
> +++ b/drivers/interconnect/qcom/icc-rpm.c
> @@ -298,9 +298,11 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
> */
> static void qcom_icc_bus_aggregate(struct icc_provider *provider, u64 *agg_clk_rate)
> {
> - u64 agg_avg_rate, agg_rate;
> + struct qcom_icc_provider *qp = to_qcom_provider(provider);
> + u64 agg_avg_rate, agg_peak_rate, agg_rate;
> struct qcom_icc_node *qn;
> struct icc_node *node;
> + u16 percent;
> int i;
>
> /*
> @@ -315,6 +317,12 @@ static void qcom_icc_bus_aggregate(struct icc_provider *provider, u64 *agg_clk_r
> else
> agg_avg_rate = qn->sum_avg[i];
>
> + percent = qp->ab_percent ? qp->ab_percent : 100;
> + agg_avg_rate = mult_frac(percent, agg_avg_rate, 100);
	if (qp->ab_percent)
		agg_avg_rate = mult_frac(qp->ab_percent, agg_avg_rate, 100);

would likely be more efficient (no calculation if unspecified) and not
much harder to read.
> +
> + percent = qn->ib_percent ? qn->ib_percent : 100;
> + agg_peak_rate = mult_frac(percent, qn->max_peak[i], 100);
> +
agg_peak_rate doesn't seem to be used anywhere else? 🤔
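I assume the intent was to feed it into the max_t() further down instead
of the raw qn->max_peak[i], i.e. something like (guessing at the context
not visible in the diff):

	agg_rate = max_t(u64, agg_avg_rate, agg_peak_rate);
	do_div(agg_rate, qn->buswidth);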
Thanks,
Stephan