Message-ID: <df86a758-e2a3-4848-8ffb-8fe079bdafbe@arm.com>
Date:   Wed, 1 Nov 2023 11:28:25 +0000
From:   Steven Price <steven.price@....com>
To:     AngeloGioacchino Del Regno 
        <angelogioacchino.delregno@...labora.com>,
        boris.brezillon@...labora.com
Cc:     robh@...nel.org, maarten.lankhorst@...ux.intel.com,
        mripard@...nel.org, tzimmermann@...e.de, airlied@...il.com,
        daniel@...ll.ch, dri-devel@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org, wenst@...omium.org,
        kernel@...labora.com
Subject: Re: [PATCH 1/4] drm/panfrost: Implement ability to turn on/off GPU
 clocks in suspend

On 31/10/2023 10:33, AngeloGioacchino Del Regno wrote:
<snip>
>> Anyway, as for the GPU_PM_CLK_DIS feature - I feel like being
>> extremely careful with this is still a good idea... thing is, even if
>> we're sure that the GPU itself is fine with us turning off/on clocks
>> (even aggressively), I'm not sure that *all* of the SoCs using Mali
>> GPUs are free of quirks, and for safety I don't want to place any
>> bets.
>>
>> My idea is to add this with feature opt-in - then, if after some time
>> we discover that all SoCs want it and can safely use it, we can
>> simplify the flow by removing the feature bit.

Yeah, I agree it's best to start with opt-in - that way we can avoid
regressions and focus the changes on the platforms where this matters.
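
Something like the below is what I'd imagine - just a sketch, and the
pm_features field and GPU_PM_CLK_DIS spelling here are assumptions
rather than the final API:

	/* Per-SoC opt-in, e.g. on the compatible data (names assumed) */
	#define GPU_PM_CLK_DIS		BIT(0)

	struct panfrost_compatible {
		/* ... existing fields ... */

		/* Bitmap of PM features the platform opts in to */
		u8 pm_features;
	};

	/* Only touch the clocks in suspend where the SoC opted in */
	if (pfdev->comp->pm_features & GPU_PM_CLK_DIS)
		clk_disable_unprepare(pfdev->clock);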

> 
> Sorry for the double email - after some analysis and some trials of
> your wait solution, I've just seen that... well,
> panfrost_gpu_power_off() is, and has always been, entirely broken, as
> in it has never done any poweroff!
> 
> What it does is:
> 
>     gpu_write(pfdev, TILER_PWROFF_LO, 0);
>     gpu_write(pfdev, SHADER_PWROFF_LO, 0);
>     gpu_write(pfdev, L2_PWROFF_LO, 0);
> 
> ...but the {TILER,SHADER,L2}_PWROFF_LO registers are bitmaps, and to
> request poweroff of the tiler/shader cores and the cache we have to
> set their bits to 1 - but this is doing the *exact opposite* of what
> it's supposed to do.
> 
> It's doing nothing, at all.

Doh! I'd looked at that function when replying to your email and still
hadn't spotted that it's broken in the way you point out!

I guess I always get a little distracted by the fact that it's
technically "broken" in two other ways: first, only the _LO registers
are used (but equally there are no implementations with more than 32
cores, so this doesn't matter); and secondly, we shouldn't really
trigger the L2 power-off while the tiler/shader are powering down -
although that doesn't matter here either, because the L2 power-down
will coordinate with the tiler and shader and do the right thing. In
reality a single write is sufficient, as the L2 power-down will trigger
the dependent cores to power down:

	gpu_write(pfdev, L2_PWROFF_LO, pfdev->features.l2_present);
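
And if we do want to explicitly wait for that transition to finish (as
in your "wait solution"), a sketch could be polling the PWRTRANS status
with readl_relaxed_poll_timeout() from <linux/iopoll.h> - the delay and
timeout values below are placeholders, not tuned:

	u32 val;
	int ret;

	/* Request power-off of all present L2 caches; the dependent
	 * tiler/shader cores are powered down first by the hardware. */
	gpu_write(pfdev, L2_PWROFF_LO, pfdev->features.l2_present);

	/* Poll until the power transition completes (the bits clear
	 * once the corresponding L2 has finished transitioning). */
	ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_PWRTRANS_LO,
					 val, !val, 100, 1000);
	if (ret)
		dev_err(pfdev->dev, "L2 power off transition timeout\n");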

> I've just fixed that locally (running some tests on MT8195 as we speak)
> like so:
> 
> gpu_write(pfdev, TILER_PWROFF_LO, pfdev->features.tiler_present);
> gpu_write(pfdev, SHADER_PWROFF_LO,
>           pfdev->features.shader_present & core_mask);
> gpu_write(pfdev, L2_PWROFF_LO, pfdev->features.l2_present & core_mask);

But this should be fine too - as above the L2 transition will just wait.

Please can you include a fix (as a separate patch) for that in your next
posting? I think that should be worthy of a backport.

> ...and now it appears that I can actually manage clocks aggressively
> during runtime power management without any side issues.
> 
> Apparently, v2 of this series will have "more juice" than initially
> intended...
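
FWIW, with the power-off fix in place I'd expect the aggressive clock
handling to boil down to something like this in the suspend path -
again just a sketch, reusing the assumed opt-in flag from above:

	static int panfrost_device_suspend(struct device *dev)
	{
		struct panfrost_device *pfdev = dev_get_drvdata(dev);

		panfrost_devfreq_suspend(pfdev);
		panfrost_gpu_power_off(pfdev);	/* now really powers off */

		/* Gate the clocks only on SoCs that opted in; the
		 * pm_features/GPU_PM_CLK_DIS names are assumptions. */
		if (pfdev->comp->pm_features & GPU_PM_CLK_DIS)
			clk_disable_unprepare(pfdev->clock);

		return 0;
	}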


Thanks for looking into this!

Thanks,

Steve
