Message-ID: <d2b0544e-bd75-4ccd-9b7e-60bf58fcc0d5@google.com>
Date: Mon, 30 Oct 2023 23:11:22 -0700
From: Jayant Chowdhary <jchowdhary@...gle.com>
To: Michael Grzeschik <mgr@...gutronix.de>,
Alan Stern <stern@...land.harvard.edu>
Cc: Laurent Pinchart <laurent.pinchart@...asonboard.com>,
Thinh.Nguyen@...opsys.com, arakesh@...gle.com, etalvala@...gle.com,
dan.scally@...asonboard.com, gregkh@...uxfoundation.org,
linux-kernel@...r.kernel.org, linux-usb@...r.kernel.org
Subject: Re: [PATCH v2] usb:gadget:uvc Do not use worker thread to pump usb
requests
Hi,
On 10/28/23 07:09, Jayant Chowdhary wrote:
> Hi,
>
> On 10/28/23 04:10, Michael Grzeschik wrote:
>> On Fri, Oct 27, 2023 at 10:58:11AM -0400, Alan Stern wrote:
>>> On Fri, Oct 27, 2023 at 03:39:44PM +0200, Michael Grzeschik wrote:
>>>> On Fri, Oct 27, 2023 at 02:47:52PM +0300, Laurent Pinchart wrote:
>>>>> On Fri, Oct 27, 2023 at 01:10:21PM +0200, Michael Grzeschik wrote:
>>>>>> On Fri, Oct 27, 2023 at 10:51:17AM +0300, Laurent Pinchart wrote:
>>>>>>> On Thu, Oct 26, 2023 at 09:56:35PM +0000, Jayant Chowdhary wrote:
>>>>>>>> This patch is based on top of
>>>>>>>> https://lore.kernel.org/linux-usb/20230930184821.310143-1-arakesh@google.com/T/#t:
>>>>>>>>
>>>>>>>> When we use an async work queue to pump usb requests to the usb
>>>>>>>> controller, thread scheduling can affect the cadence at which we are
>>>>>>>> able to pump requests. This can cause usb requests to miss their
>>>>>>>> uframes, resulting in video stream flickers on the host device.
>>>>>>>>
>>>>>>>> In this patch, we move the pumping of usb requests to
>>>>>>>> 1) the uvcg_video_complete() completion handler for both isoc and bulk
>>>>>>>> endpoints. We still send 0-length requests when there is no uvc buffer
>>>>>>>> available to encode.
>>>>>>> This means you will end up copying large amounts of data in interrupt
>>>>>>> context. The work queue was there to avoid exactly that: copying in
>>>>>>> interrupt context introduces delays that can affect other parts of the
>>>>>>> system. I think this is a problem.
>>>>>> Regarding Thinh's argument about possible scheduling latency that is
>>>>>> already introducing real errors, this seemed like a good solution.
>>>>>>
>>>>>> But sure, this potential latency introduced in the interrupt context can
>>>>>> trigger other side effects.
>>>>>>
>>>>>> However, I think we need some compromise, since both arguments are valid.
>>>>> Agreed.
>>>>>
>>>>>> Any ideas, how to solve this?
>>>>> I'm afraid not.
>>>> We discussed this and came to the conclusion that we could use
>>>> kthread_create and sched_setattr with attr->sched_policy = SCHED_DEADLINE
>>>> here instead of the workqueue. This way we would ensure that the worker
>>>> is scheduled with hard timing guarantees.
>>>>
>>>> Since the SG case is not that heavy on the completion handler, we could
>>>> also make this kthread conditional, using it only for the memcpy case.
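
For concreteness, something like the sketch below is what I understand by
that. It is only a sketch under assumptions: the thread function, the wakeup
scheme, and the runtime/deadline/period numbers are illustrative, and
uvcg_video_pump_once() is a hypothetical stand-in for the body of today's
pump work. Whether sched_setattr() (as opposed to, e.g.,
sched_setattr_nocheck()) is usable from the gadget driver would also need
to be confirmed.

	#include <linux/kthread.h>
	#include <linux/sched.h>
	#include <linux/time64.h>
	#include <uapi/linux/sched/types.h>

	static int uvcg_pump_thread(void *data)
	{
		struct uvc_video *video = data;

		for (;;) {
			set_current_state(TASK_INTERRUPTIBLE);
			if (kthread_should_stop()) {
				__set_current_state(TASK_RUNNING);
				break;
			}
			/* Sleep until the completion handler wakes us. */
			schedule();
			/* Hypothetical helper: the existing pump body. */
			uvcg_video_pump_once(video);
		}
		return 0;
	}

	static int uvcg_start_pump_thread(struct uvc_video *video)
	{
		/* Illustrative numbers: 100us encode budget per 1ms period. */
		struct sched_attr attr = {
			.sched_policy   = SCHED_DEADLINE,
			.sched_runtime  =  100 * NSEC_PER_USEC,
			.sched_deadline = 1000 * NSEC_PER_USEC,
			.sched_period   = 1000 * NSEC_PER_USEC,
		};
		struct task_struct *task;
		int ret;

		task = kthread_create(uvcg_pump_thread, video, "uvcg-pump");
		if (IS_ERR(task))
			return PTR_ERR(task);

		/* Apply the deadline parameters before the first run. */
		ret = sched_setattr(task, &attr);
		if (ret) {
			kthread_stop(task);
			return ret;
		}

		wake_up_process(task);
		return 0;
	}

The completion handler would then only need a wake_up_process() on the
stored task to keep the worker fed.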
>>> If you don't mind a naive suggestion from someone who knows nothing
>>> about the driver...
>>>
>>> An attractive possibility is to have the work queue (or kthread) do the
>>> time-consuming copying, but leave the submission up to the completion
>>> handler. If the data isn't ready (or there's no data to send) when the
>>> handler runs, then queue a 0-length request.
>>>
>>> That will give you the best of both worlds: low latency while in
>>> interrupt context and a steady, constant flow of USB transfers at all
>>> times. The question of how to schedule the work queue or kthread is a
>>> separate matter, not directly relevant to this design decision.
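
If it helps make the discussion concrete, the completion-handler half of
that split might look roughly like the sketch below. This is not the actual
patch: the req_ready list is an assumption (req_free, req_lock, async_wq and
pump are modeled loosely on the current driver), and error handling is
elided beyond a cancel on queue failure.

	static void uvcg_video_complete(struct usb_ep *ep,
					struct usb_request *req)
	{
		struct uvc_request *ureq = req->context;
		struct uvc_video *video = ureq->video;
		struct usb_request *next;
		unsigned long flags;

		spin_lock_irqsave(&video->req_lock, flags);
		if (!list_empty(&video->req_ready)) {
			/* The worker already encoded a request: submit that
			 * one and recycle the completed request for refill. */
			next = list_first_entry(&video->req_ready,
						struct usb_request, list);
			list_del(&next->list);
			list_add_tail(&req->list, &video->req_free);
		} else {
			/* Nothing encoded yet: resubmit this request with
			 * zero length so no uframe is missed. */
			req->length = 0;
			next = req;
		}
		spin_unlock_irqrestore(&video->req_lock, flags);

		if (uvcg_video_ep_queue(video, next) < 0)
			uvcg_queue_cancel(&video->queue, 0);

		/* Defer the memcpy-heavy encoding to process context. */
		queue_work(video->async_wq, &video->pump);
	}

The zero-length fallback is what keeps the isoc schedule fed whenever the
worker falls behind.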
>> That's it. This is probably the best way to tackle the overall problem.
>>
>> So we leave the call of the encode callback to the worker, which can
>> probably still be a workqueue. The completion callback calls
>> uvcg_video_ep_queue() explicitly when prepared requests are available,
>> and if there is nothing pending it just enqueues zero-length requests.
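
The worker half might then shrink to something like the sketch below, with
the expensive memcpy confined to process context. Again the list names are
assumptions matching the sketch above, and uvcg_video_encode_one() is a
hypothetical stand-in for the driver's encode callback:

	static void uvcg_video_pump(struct work_struct *work)
	{
		struct uvc_video *video =
			container_of(work, struct uvc_video, pump);
		struct usb_request *req;
		unsigned long flags;

		for (;;) {
			/* Take a free request to fill, outside irq context. */
			spin_lock_irqsave(&video->req_lock, flags);
			req = list_first_entry_or_null(&video->req_free,
						       struct usb_request,
						       list);
			if (req)
				list_del(&req->list);
			spin_unlock_irqrestore(&video->req_lock, flags);
			if (!req)
				return;

			/* The expensive memcpy/encode happens here. */
			if (!uvcg_video_encode_one(video, req)) {
				/* No uvc buffer available: put it back. */
				spin_lock_irqsave(&video->req_lock, flags);
				list_add(&req->list, &video->req_free);
				spin_unlock_irqrestore(&video->req_lock,
						       flags);
				return;
			}

			/* Hand the filled request to the completion
			 * handler for submission. */
			spin_lock_irqsave(&video->req_lock, flags);
			list_add_tail(&req->list, &video->req_ready);
			spin_unlock_irqrestore(&video->req_lock, flags);
		}
	}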
>>
>> Thank you Alan, this makes so much sense!
>>
>> Jayant, Laurent: Do you agree?
>> If yes, Jayant will you change the patch accordingly?
>>
>>
> Thanks for all the discussion, Greg, Michael, Laurent, and Alan.
> Apologies for not responding earlier; I am OOO.
>
> While I haven't tried this out, it does seem like a very good idea.
> Thank you, Alan! I will aim to make the changes and post a patch on Monday night PST.
I got caught up with some work that is taking longer than expected. Apologies for the
delay :) I'm testing some things out right now and hope to post a patch in the
next couple of days. Thanks for your patience.
Jayant