Message-ID: <35e2106f-d738-4018-50f2-17afcbc627f7@linaro.org>
Date: Thu, 17 Dec 2020 12:49:17 -0600
From: Alex Elder <elder@...aro.org>
To: rishabhb@...eaurora.org
Cc: Bjorn Andersson <bjorn.andersson@...aro.org>,
linux-remoteproc@...r.kernel.org, linux-kernel@...r.kernel.org,
tsoni@...eaurora.org, psodagud@...eaurora.org,
sidgup@...eaurora.org
Subject: Re: [PATCH] remoteproc: Create a separate workqueue for recovery tasks

On 12/17/20 12:21 PM, rishabhb@...eaurora.org wrote:
> On 2020-12-17 08:12, Alex Elder wrote:
>> On 12/15/20 4:55 PM, Bjorn Andersson wrote:
>>> On Sat 12 Dec 14:48 CST 2020, Rishabh Bhatnagar wrote:
>>>
>>>> Create an unbound high priority workqueue for recovery tasks.
>>
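
For readers following the thread, an unbound, high-priority workqueue of
the kind the patch describes would be created roughly as in the sketch
below.  This is an illustration only, not the patch itself; the queue
name and the use of rproc->crash_handler are assumptions.

    #include <linux/workqueue.h>

    static struct workqueue_struct *rproc_recovery_wq;

    static int rproc_init_recovery_wq(void)
    {
            /*
             * WQ_UNBOUND: recovery work is not pinned to the CPU that
             * queued it.  WQ_HIGHPRI: it is served by a high-priority
             * worker pool, so it is not starved by regular work items.
             */
            rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq",
                                                WQ_UNBOUND | WQ_HIGHPRI, 0);
            return rproc_recovery_wq ? 0 : -ENOMEM;
    }

    /*
     * The crash-reporting path would then queue the existing crash
     * handler on the dedicated queue rather than on the system
     * workqueue, e.g.:
     *
     *      queue_work(rproc_recovery_wq, &rproc->crash_handler);
     */
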
>> I have been looking at a different issue that is caused by
>> crash notification.
>>
>> What happened was that the modem crashed while the AP
>> was in system suspend (or possibly still resuming).
>> There is no guarantee that the system will have called
>> a driver's ->resume callback by the time the crash
>> notification is delivered.
>>
>> In my case (in the IPA driver), handling a modem crash
>> cannot be done while the driver is suspended; i.e. the
>> activities in its ->resume callback must be completed
>> before we can recover from the crash.
>>
>> For this reason I have considered changing the way the
>> crash notification is handled, but what I'd rather see
>> is for the work queue not to run until user space is
>> unfrozen.  That would guarantee that every driver
>> registered for a crash notification has been resumed
>> by the time the notification arrives.
>>
>> I'm not sure how that interacts with what you are
>> looking for here. I think the workqueue could still
>> be unbound, but its work would be delayed longer before
>> any notification (and recovery) started.
>>
>> -Alex
>>
>>
> In that case, maybe adding the WQ_FREEZABLE flag would help?
Yes, exactly. But how does that affect whatever you were
trying to do with your patch?
-Alex
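
For reference, combining the original patch with the WQ_FREEZABLE
suggestion would presumably look like the sketch below; again this is
illustrative only, not a reviewed patch.  A freezable workqueue stays
frozen across system suspend and is thawed only after devices have
resumed and tasks are unfrozen, so crash work queued while the AP is
suspended would not run until drivers' ->resume callbacks have
completed -- which appears to be the ordering being asked for above.

    /*
     * Sketch only: same queue as before, with WQ_FREEZABLE added so
     * the queue is frozen during suspend and thawed after resume.
     */
    rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq",
                                        WQ_UNBOUND | WQ_HIGHPRI |
                                        WQ_FREEZABLE, 0);
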
. . .