Message-ID: <2463ddcc535279a8076588e14e691ade@codeaurora.org>
Date:   Thu, 28 May 2020 02:11:20 +0530
From:   Sibi Sankar <sibis@...eaurora.org>
To:     Saravana Kannan <saravanak@...gle.com>
Cc:     Rob Clark <robdclark@...il.com>,
        Sharat Masetty <smasetty@...eaurora.org>,
        freedreno <freedreno@...ts.freedesktop.org>,
        "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS" 
        <devicetree@...r.kernel.org>, dri-devel@...edesktop.org,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Georgi Djakov <georgi.djakov@...aro.org>,
        Matthias Kaehlcke <mka@...omium.org>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Rajendra Nayak <rnayak@...eaurora.org>,
        Jordan Crouse <jcrouse@...eaurora.org>
Subject: Re: [Freedreno] [PATCH 5/6] drm: msm: a6xx: use dev_pm_opp_set_bw to
 set DDR bandwidth

On 2020-05-27 23:01, Saravana Kannan wrote:
> On Wed, May 27, 2020 at 8:38 AM Rob Clark <robdclark@...il.com> wrote:
>> 
>> On Wed, May 27, 2020 at 1:47 AM Sharat Masetty 
>> <smasetty@...eaurora.org> wrote:
>> >
>> > + more folks
>> >
>> > On 5/18/2020 9:55 PM, Rob Clark wrote:
>> > > On Mon, May 18, 2020 at 7:23 AM Jordan Crouse <jcrouse@...eaurora.org> wrote:
>> > >> On Thu, May 14, 2020 at 04:24:18PM +0530, Sharat Masetty wrote:
>> > >>> This patch replaces the previously used static DDR vote and uses
>> > >>> dev_pm_opp_set_bw() to scale GPU->DDR bandwidth along with scaling
>> > >>> GPU frequency.
>> > >>>
>> > >>> Signed-off-by: Sharat Masetty <smasetty@...eaurora.org>
>> > >>> ---
>> > >>>   drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +-----
>> > >>>   1 file changed, 1 insertion(+), 5 deletions(-)
>> > >>>
>> > >>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>> > >>> index 2d8124b..79433d3 100644
>> > >>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>> > >>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
>> > >>> @@ -141,11 +141,7 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
>> > >>>
>> > >>>        gmu->freq = gmu->gpu_freqs[perf_index];
>> > >>>
>> > >>> -     /*
>> > >>> -      * Eventually we will want to scale the path vote with the frequency but
>> > >>> -      * for now leave it at max so that the performance is nominal.
>> > >>> -      */
>> > >>> -     icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
>> > >>> +     dev_pm_opp_set_bw(&gpu->pdev->dev, opp);
>> > >>>   }
>> > >> This adds an implicit requirement that all targets need bandwidth settings
>> > >> defined in the OPP or they won't get a bus vote at all. I would prefer that
>> > >> there be a default escape valve, but if not you'll need to add
>> > >> bandwidth values for the sdm845 OPP so that target doesn't regress.
>> > >>
>> > > it looks like we could maybe do something like:
>> > >
>> > >    ret = dev_pm_opp_set_bw(...);
>> > >    if (ret) {
>> > >        dev_warn_once(dev, "no bandwidth settings");
>> > >        icc_set_bw(...);
>> > >    }
>> > >
>> > > ?
>> > >
>> > > BR,
>> > > -R
>> >
>> > There is a bit of an issue here - looks like it's not possible to have two icc
>> > handles to the same path. It's causing double enumeration of the paths
>> > in the icc core and messing up path votes. With [1], since opp/core
>> > already gets a handle to the icc path as part of table add, drm/msm
>> > could do either
> 
> Are you sure this is the real issue? I'd be surprised if this is a
> real limitation. And if it is, it either needs to be fixed in the ICC
> framework or OPP shouldn't be getting path handles by default (and

Not really, this is already handled well in the icc framework. In this case
the maximum of the peak votes across the two handles is what gets applied
to the path.
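
To illustrate (a hypothetical sketch, not code from the patch; the path name
and the second vote value are made up): two independent handles to the same
path are aggregated by the interconnect core, with peak requests max'd across
handles and average requests summed, so the larger peak vote is what reaches
the provider.

  /* Hypothetical illustration: two consumers holding independent handles
   * to the same GPU->DDR path. With the interconnect core's standard
   * aggregation, peak requests are max'd (and average requests summed).
   */
  struct icc_path *msm_path = of_icc_get(&pdev->dev, "gfx-mem"); /* drm/msm handle */
  struct icc_path *opp_path = of_icc_get(&pdev->dev, "gfx-mem"); /* opp/core handle */

  icc_set_bw(msm_path, 0, MBps_to_icc(7216)); /* static max vote */
  icc_set_bw(opp_path, 0, MBps_to_icc(2188)); /* smaller, frequency-derived vote */
  /* effective peak on the path: max(7216, 2188) MB/s */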

> maybe let the driver set the handles before using OPP APIs to change
> BW). I'd lean towards the former.

https://patchwork.kernel.org/patch/11573827/
Yes, the OPP core shouldn't get paths by default unless the bandwidth values
are specified in the OPPs.
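
With that in place, a minimal sketch of the fallback Rob suggested above,
assuming the existing a6xx_gmu_set_freq() context (the warning text is made
up): try the OPP bandwidth vote first and keep the old static vote as the
escape valve.

  int ret;

  /* Sketch only: prefer the OPP-provided bandwidth and fall back to the
   * previous static vote when the table carries no bandwidth values.
   */
  ret = dev_pm_opp_set_bw(&gpu->pdev->dev, opp);
  if (ret) {
          dev_warn_once(&gpu->pdev->dev,
                        "no OPP bandwidth values, using static DDR vote\n");
          icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
  }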

> 
>> > a) Conditionally enumerate the gpu->icc_path handle only when the pm/opp core
>> > has not already got the icc path handle. I could use something like [2] to
>> > determine if I should initialize gpu->icc_path*
> 
> This seems like a bandaid. Let's fix it correctly in ICC framework or
> OPP framework.
> 
>> > b) Add peak-opp-configs in the 845 DT and mandate all future versions to use

I can't quite follow the ^^ proposal either. We would ideally want to add
scaling support for SDM845 as well while we are at it.
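
For reference, adding scaling support on SDM845 would mean giving its GPU OPP
entries bandwidth values through the standard opp-peak-kBps property; a rough
DT sketch with placeholder bandwidth numbers (only the two frequencies are
taken from the existing SDM845 GPU OPP table):

  gpu_opp_table: opp-table {
          compatible = "operating-points-v2";

          opp-710000000 {
                  opp-hz = /bits/ 64 <710000000>;
                  opp-peak-kBps = <7216000>;      /* placeholder value */
          };

          opp-257000000 {
                  opp-hz = /bits/ 64 <257000000>;
                  opp-peak-kBps = <1200000>;      /* placeholder value */
          };
  };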

>> > this binding. With this, I can remove gpu->icc_path from msm/drm
>> > completely and only rely on opp/core for bw voting.
> 
> I don't know what you mean by "peak-opp-configs" but I guess you are
> referring to some kind of DT flag to say if you should vote for BW
> directly or use the OPP framework? If so, I'm pretty sure that won't
> fly. That's an OS-implementation-specific flag.
> 
> -Saravana

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.
