Message-ID: <CAD=FV=WjLB8TV80cMef9ZqYn7h3szJ6x67_6pL9FnM1Q+8i_2A@mail.gmail.com>
Date: Fri, 18 Jan 2019 10:06:27 -0800
From: Doug Anderson <dianders@...omium.org>
To: Jordan Crouse <jcrouse@...eaurora.org>,
Georgi Djakov <georgi.djakov@...aro.org>,
Rob Clark <robdclark@...il.com>
Cc: freedreno <freedreno@...ts.freedesktop.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Arnd Bergmann <arnd@...db.de>,
Stephen Boyd <swboyd@...omium.org>,
Kees Cook <keescook@...omium.org>,
Sharat Masetty <smasetty@...eaurora.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
Andy Gross <andy.gross@...aro.org>,
David Airlie <airlied@...ux.ie>,
Johan Hovold <johan@...nel.org>,
Colin Ian King <colin.king@...onical.com>,
Evan Green <evgreen@...omium.org>,
Sean Paul <seanpaul@...omium.org>
Subject: Re: [PATCH v3 1/3] drm/msm/a6xx: Add support for an interconnect path
Hi,
On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@...eaurora.org> wrote:
>
> Try to get the interconnect path for the GPU and vote for the maximum
> bandwidth to support all frequencies. This is needed for performance.
> Later we will want to scale the bandwidth based on the frequency to
> also optimize for power but that will require some device tree
> infrastructure that does not yet exist.
>
> v5: Remove hardcoded interconnect name and just use the default
nit: ${SUBJECT} says v3, but this is v5.
I'll put in my usual plug for considering "patman" to help post
patches. Even though it lives in the u-boot git repo, it's still a gem
for kernel work.
<http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README>
> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
> dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>
> gmu->freq = gmu->gpu_freqs[index];
> +
> + /*
> + * Eventually we will want to scale the path vote with the frequency but
> + * for now leave it at max so that the performance is nominal.
> + */
> + icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
- https://patchwork.kernel.org/patch/10766335/
- https://lkml.kernel.org/r/20190116161103.6937-2-georgi.djakov@linaro.org
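For reference, a minimal sketch of the rename against the v13 API
linked above (icc_set_bw() keeps the same path / average bandwidth /
peak bandwidth arguments, so only the function name changes here):

	/* v5 of this series: */
	icc_set(gpu->icc_path, 0, MBps_to_icc(7216));

	/* Against v13 of the interconnect series: */
	icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));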
> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
> if (ret)
> goto out;
>
> + /* Set the bus quota to a reasonable value for boot */
> + icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
This will also need to change to icc_set_bw()
> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
> /* Tell RPMh to power off the GPU */
> a6xx_rpmh_stop(gmu);
>
> + /* Remove the bus vote */
> + icc_set(gpu->icc_path, 0, 0);
This will also need to change to icc_set_bw()
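(Same mechanical rename for the two hunks above; again assuming the
v13 signature:)

	/* Boot-time bus quota in a6xx_gmu_resume(): */
	icc_set_bw(gpu->icc_path, 0, MBps_to_icc(3072));

	/* Drop the vote entirely in a6xx_gmu_stop(): */
	icc_set_bw(gpu->icc_path, 0, 0);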
I have the same questions for this series that I had in response to
the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS")
<https://lkml.kernel.org/r/CAD=FV=XUeMTGH+CDwGs3PfK4igdQrCbwucw7_2ViBc4i7grvxg@mail.gmail.com>
Copy / pasting here (with minor name changes) so folks don't have to
follow links / search email.
==
I'm curious what the plan is for landing this series. Rob / Georgi:
do you have any preference? Options I'd imagine:
A) Wait until interconnect lands (in 5.1?) and land this through
msm-next in the version after (5.2?)
B) Georgi provides an immutable branch for interconnect when his
series lands (assuming he's landing via pull request) and that gets
pulled into the relevant drm tree.
C) Rob Acks this series and indicates that it should go in through
Georgi's tree (probably only works if Georgi plans to send a pull
request). If we're going this route then (IIUC) we'd want to land
this in Georgi's tree sooner rather than later so it can get some bake
time? NOTE: as per my prior reply, I believe Rob has already Acked
this patch.
Does anyone have a preference? It'd be nice if whoever is planning to
land this could indicate whether they'd prefer Jordan send a new
version to handle the API change or if the relevant maintainer can
just do the fixup when the patch lands.
Thanks!
-Doug