Message-ID: <305a8e43-4d65-490c-9f83-afce6490bc83@ideasonboard.com>
Date: Fri, 22 Mar 2024 20:09:59 +0200
From: Tomi Valkeinen <tomi.valkeinen@...asonboard.com>
To: Sean Anderson <sean.anderson@...ux.dev>
Cc: Michal Simek <michal.simek@....com>, David Airlie <airlied@...il.com>,
linux-kernel@...r.kernel.org, Daniel Vetter <daniel@...ll.ch>,
linux-arm-kernel@...ts.infradead.org,
Laurent Pinchart <laurent.pinchart@...asonboard.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v2 5/8] drm: zynqmp_dp: Don't retrain the link in our IRQ
On 22/03/2024 18:18, Sean Anderson wrote:
> On 3/22/24 01:32, Tomi Valkeinen wrote:
>> On 21/03/2024 21:17, Sean Anderson wrote:
>>> On 3/21/24 15:08, Tomi Valkeinen wrote:
>>>> On 21/03/2024 20:01, Sean Anderson wrote:
>>>>> On 3/21/24 13:25, Tomi Valkeinen wrote:
>>>>>> On 21/03/2024 17:52, Sean Anderson wrote:
>>>>>>> On 3/20/24 02:53, Tomi Valkeinen wrote:
>>>>>>>> On 20/03/2024 00:51, Sean Anderson wrote:
>>>>>>>> Do we need to handle interrupts while either delayed work is being done?
>>>>>>>
>>>>>>> Probably not.
>>>>>>>
>>>>>>>> If we do need a delayed work, would just one work be enough which
>>>>>>>> handles both HPD_EVENT and HPD_IRQ, instead of two?
>>>>>>>
>>>>>>> Maybe, but then we need to determine which pending events we need to
>>>>>>> handle. I think since we have only two events it will be easier to just
>>>>>>> have separate workqueues.
>>>>>>
>>>>>> The less concurrency, the better... which is why it would be nice to do it all in the threaded irq.
>>>>>
>>>>> Yeah, but we can use a mutex for this which means there is not too much
>>>>> interesting going on.
>>>>
>>>> Ok. Yep, if we get (hopefully) a single mutex with clearly defined fields that it protects, I'm ok with workqueues.
>>>>
>>>> I'd still prefer just one workqueue, though...
>>>
>>> Yeah, but then we need a spinlock or something to tell the workqueue what it should do.
>>
>> Yep. We could also always look at the HPD (if we drop the big sleeps) in the wq, and have a flag for the HPD IRQ, which would reduce the state to a single bit.
>
> How about something like
>
> static irqreturn_t zynqmp_dp_irq_handler(int irq, void *data)
> {
> 	struct zynqmp_dp *dp = data;
> 	u32 status;
>
> 	/* Read status and handle underflow/overflow/vblank */
>
> 	status &= ZYNQMP_DP_INT_HPD_EVENT | ZYNQMP_DP_INT_HPD_IRQ;
> 	if (status) {
> 		atomic_or(status, &dp->status);
> 		return IRQ_WAKE_THREAD;
> 	}
>
> 	return IRQ_HANDLED;
> }
>
> static irqreturn_t zynqmp_dp_thread_handler(int irq, void *data)
> {
> 	struct zynqmp_dp *dp = data;
> 	u32 status;
>
> 	status = atomic_xchg(&dp->status, 0);
> 	/* process HPD stuff */
>
> 	return IRQ_HANDLED;
> }
>
> which gets rid of the workqueue too.
I like it. We can't use IRQF_ONESHOT, as that would keep the irq masked
while the threaded handler is running. I don't think that's a problem,
but it's something to keep in mind: both handlers can run concurrently.
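
E.g. the registration could look something like this (just a sketch; I'm
guessing at the dev/irq field names and reusing the handler names from
your example):

static int zynqmp_dp_setup_irq(struct zynqmp_dp *dp)
{
	/*
	 * No IRQF_ONESHOT: the hard irq handler stays unmasked while the
	 * thread runs, so dp->status must cope with concurrent updates.
	 * The atomic_or()/atomic_xchg() pair above already handles that.
	 */
	return devm_request_threaded_irq(dp->dev, dp->irq,
					 zynqmp_dp_irq_handler,
					 zynqmp_dp_thread_handler,
					 0, "zynqmp-dp", dp);
}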
Tomi