Message-ID: <YdMP5P7Lwwkf6uRn@abelvesa>
Date: Mon, 3 Jan 2022 17:01:56 +0200
From: Abel Vesa <abel.vesa@....com>
To: Georgi Djakov <djakov@...nel.org>
Cc: Shawn Guo <shawnguo@...nel.org>,
Sascha Hauer <s.hauer@...gutronix.de>,
Fabio Estevam <festevam@...il.com>,
Pengutronix Kernel Team <kernel@...gutronix.de>,
NXP Linux Team <linux-imx@....com>, linux-pm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/3] interconnect: imx: Add imx_icc_get_bw and
imx_icc_aggregate functions
On 22-01-01 20:26:05, Georgi Djakov wrote:
> Hi Abel,
>
> On 1.01.22 18:39, Abel Vesa wrote:
> > The aggregate function will return whatever is the highest
> > rate for that specific node. The imx_icc_get_bw sets the
>
> Adding some more details about why we switch from
> icc_std_aggregate to imx_icc_aggregate would be nice.
>
On a second look, I think I can drop imx_icc_aggregate and use
icc_std_aggregate instead, as long as I use the peak_bw and ignore the
agg_bw.
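
For reference, icc_std_aggregate() in drivers/interconnect/core.c does
(roughly, quoting from memory):

int icc_std_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
		      u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
{
	/* sum the average bandwidth, take the max of the peak */
	*agg_avg += avg_bw;
	*agg_peak = max(*agg_peak, peak_bw);

	return 0;
}

The only difference from the imx_icc_aggregate below is the += on the
average, which shouldn't matter as long as the imx set() path only
looks at the peak value.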
> > initial avg and peak to 0 in order to avoid setting them to
> > INT_MAX by the interconnect core.
>
> Do we need a Fixes tag for this?
>
Neah, the imx interconnect is not used by any platform yet.
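
Just for context on the INT_MAX part of the commit message: if a
provider has no get_bw callback, icc_node_add() in the core initializes
the node to the maximum, something along these lines (sketch from
memory, not verbatim):

	if (node->provider->get_bw) {
		node->provider->get_bw(node, &node->init_avg, &node->init_peak);
	} else {
		node->init_avg = INT_MAX;
		node->init_peak = INT_MAX;
	}

Returning 0 for both from imx_icc_get_bw keeps the initial bandwidth
requests at zero instead.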
> I would recommend splitting the imx_icc_get_bw and imx_icc_aggregate
> changes into separate patches. These also seem to be unrelated to
> the imx_icc_node_adj_desc patchset.
>
Since I can use icc_std_aggregate, the imx_icc_aggregate change will be
dropped.
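
So in v3 the registration below would end up keeping get_bw but staying
with the core aggregator, something like (untested sketch):

	provider->set = imx_icc_set;
	provider->aggregate = icc_std_aggregate;
	provider->get_bw = imx_icc_get_bw;
	provider->xlate = of_icc_xlate_onecell;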
> Thanks,
> Georgi
>
> > Signed-off-by: Abel Vesa <abel.vesa@....com>
> > ---
> >
> > No changes since v1.
> >
> > drivers/interconnect/imx/imx.c | 20 +++++++++++++++++++-
> > 1 file changed, 19 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/interconnect/imx/imx.c b/drivers/interconnect/imx/imx.c
> > index 34bfc7936387..4d8a2a2d2608 100644
> > --- a/drivers/interconnect/imx/imx.c
> > +++ b/drivers/interconnect/imx/imx.c
> > @@ -25,6 +25,23 @@ struct imx_icc_node {
> >  	struct dev_pm_qos_request qos_req;
> >  };
> > 
> > +static int imx_icc_get_bw(struct icc_node *node, u32 *avg, u32 *peak)
> > +{
> > +	*avg = 0;
> > +	*peak = 0;
> > +
> > +	return 0;
> > +}
> > +
> > +static int imx_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
> > +			     u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
> > +{
> > +	*agg_avg = max(*agg_avg, avg_bw);
> > +	*agg_peak = max(*agg_peak, peak_bw);
> > +
> > +	return 0;
> > +}
> > +
> >  static int imx_icc_node_set(struct icc_node *node)
> >  {
> >  	struct device *dev = node->provider->dev;
> > @@ -233,7 +250,8 @@ int imx_icc_register(struct platform_device *pdev,
> >  	if (!provider)
> >  		return -ENOMEM;
> >  	provider->set = imx_icc_set;
> > -	provider->aggregate = icc_std_aggregate;
> > +	provider->get_bw = imx_icc_get_bw;
> > +	provider->aggregate = imx_icc_aggregate;
> >  	provider->xlate = of_icc_xlate_onecell;
> >  	provider->data = data;
> >  	provider->dev = dev;
>