Message-ID: <59974e61-13f8-4080-850a-55e599c41cb5@kernel.org>
Date: Wed, 18 Sep 2024 01:37:10 +0200
From: Konrad Dybcio <konradybcio@...nel.org>
To: Rob Clark <robdclark@...il.com>, Konrad Dybcio <konradybcio@...nel.org>
Cc: dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
freedreno@...ts.freedesktop.org, Akhil P Oommen <quic_akhilpo@...cinc.com>,
Connor Abbott <cwabbott0@...il.com>, Rob Clark <robdclark@...omium.org>,
Sean Paul <sean@...rly.run>, Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
Marijn Suijten <marijn.suijten@...ainline.org>,
David Airlie <airlied@...il.com>, Daniel Vetter <daniel@...ll.ch>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] drm/msm/a6xx+: Insert a fence wait before SMMU table update

On 17.09.2024 5:30 PM, Rob Clark wrote:
> On Tue, Sep 17, 2024 at 6:47 AM Konrad Dybcio <konradybcio@...nel.org> wrote:
>>
>> On 13.09.2024 9:51 PM, Rob Clark wrote:
>>> From: Rob Clark <robdclark@...omium.org>
>>>
>>> The CP_SMMU_TABLE_UPDATE _should_ be waiting for idle, but on some
>>> devices (x1-85, possibly others), it seems to pass that barrier while
>>> there are still things in the event completion FIFO waiting to be
>>> written back to memory.
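
For reference, my understanding is that the fix amounts to making the CP poll
the ring's fence writeback before the pagetable switch in a6xx_set_pagetable().
A rough sketch of the a7xx path (reconstructed from the PM4 helpers, not copied
from the diff, so treat the exact encoding as illustrative):

	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
		/* Poll the ring's fence memptr until the previous submit's
		 * fence value has been written back, so the event completion
		 * FIFO has drained before CP_SMMU_TABLE_UPDATE executes.
		 */
		OUT_PKT7(ring, CP_WAIT_TIMESTAMP, 4);
		OUT_RING(ring, 0);
		OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence)));
		OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence)));
		OUT_RING(ring, ring->fctx->last_fence);
	}
	/* a6xx would need the CP_WAIT_REG_MEM equivalent of the above. */
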
>>
>> Can we try to force-fault around here on other GPUs and perhaps
>> limit this workaround?
>
> not sure what you mean by "force-fault"...

I suppose 'reproduce' is what I meant

> we could probably limit
> this to certain GPUs, the only reason I didn't is (a) it should be
> harmless when it is not needed,

Do we have any realistic perf hits here?

> and (b) I have no real good way to get
> an exhaustive list of where it is needed. Maybe/hopefully it is only
> x1-85, but idk.
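
If we do end up gating it, a catalog quirk seems more natural than a family
check; roughly (ADRENO_QUIRK_SMMU_TABLE_UPDATE_WAIT is a made-up name here,
just to illustrate):

	if (adreno_gpu->info->quirks & ADRENO_QUIRK_SMMU_TABLE_UPDATE_WAIT) {
		/* emit the fence wait sketched above */
	}

	/* ...with the hypothetical flag added to .quirks of the x1-85 entry
	 * in a6xx_catalog.c (and any other parts that turn out to need it).
	 */
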
>
> It does bring up an interesting question about preemption, though

Yeah..

Do we know what Windows does here?

Konrad