Message-ID: <af18507c-8d22-f33d-a5b2-c59f6cf5058b@gmail.com>
Date: Wed, 30 Aug 2023 21:47:52 +0200
From: Heiner Kallweit <hkallweit1@...il.com>
To: imre.deak@...el.com, Tejun Heo <tj@...nel.org>
Cc: intel-gfx@...ts.freedesktop.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Rodrigo Vivi <rodrigo.vivi@...el.com>
Subject: Re: [Intel-gfx] WQ_UNBOUND warning since recent workqueue refactoring
On 30.08.2023 20:56, Imre Deak wrote:
> On Wed, Aug 30, 2023 at 07:51:13AM -1000, Tejun Heo wrote:
> Hi,
>
>> Hello,
>>
>> (cc'ing i915 folks)
>>
>> On Wed, Aug 30, 2023 at 04:57:42PM +0200, Heiner Kallweit wrote:
>>> Recently I started seeing the following warnings on linux-next, presumably
>>> related to the refactoring of the workqueue core code.
>>>
>>> [ 56.900223] workqueue: output_poll_execute [drm_kms_helper] hogged CPU for >10000us 4 times, consider switching to WQ_UNBOUND
>>> [ 56.923226] workqueue: i915_hpd_poll_init_work [i915] hogged CPU for >10000us 4 times, consider switching to WQ_UNBOUND
>>> [ 97.860430] workqueue: output_poll_execute [drm_kms_helper] hogged CPU for >10000us 8 times, consider switching to WQ_UNBOUND
>>> [ 97.884453] workqueue: i915_hpd_poll_init_work [i915] hogged CPU for >10000us 8 times, consider switching to WQ_UNBOUND
>>>
>>> Adding WQ_UNBOUND to these queues didn't change the behavior.
>>
>> That should have made them go away as the code path isn't active at all for
>> WQ_UNBOUND workqueues. Can you please double check?
>>
I tried the patch given below and double-checked. No change in behavior.
>>> Maybe relevant: I run the affected system headless.
>>
>> i915 folks, workqueue recently added debug warnings which trigger when a
>> per-cpu work item hogs the CPU for too long - 10ms in this case. This is
>> problematic because such a work item can stall other per-cpu work items.
>>
>> * Is it expected for the above two work functions to occupy the CPU for over
>> 10ms repeatedly?
>
> No, this shouldn't happen.
>
> I assume this happens on
> https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
>
> after cfd48ad8c4a9 ("drm/i915: Fix HPD polling, reenabling the output poll work as needed"),
>
> which could result in the above problem.
>
> Could you give the following a try:
> https://lore.kernel.org/all/20230809104307.1218058-1-imre.deak@intel.com/
>
Didn't help.
> and if that doesn't help, provide more information/logs by opening a
> ticket at:
> https://gitlab.freedesktop.org/drm/intel/-/issues/new
>
> Thanks,
> Imre
>
>> * If so, can we make them use an unbound workqueue instead?
>>
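That's what the patch at the end of this mail tries to do. In case it helps
to be explicit, the pattern I followed is roughly the sketch below; the names
are just placeholders, not the actual drm/i915 code:

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* Placeholder work item, only to illustrate the WQ_UNBOUND switch. */
static struct workqueue_struct *example_wq;
static struct delayed_work example_work;

static void example_work_fn(struct work_struct *work)
{
        /* Potentially long-running poll work; may sleep. */
}

static int example_init(void)
{
        /*
         * Workers of an unbound workqueue are not pinned to a CPU, so a
         * long-running item neither trips the new CPU-hog warning nor
         * stalls other per-cpu work items.
         */
        example_wq = alloc_workqueue("example-unbound", WQ_UNBOUND, 0);
        if (!example_wq)
                return -ENOMEM;

        INIT_DELAYED_WORK(&example_work, example_work_fn);
        queue_delayed_work(example_wq, &example_work, HZ);

        return 0;
}

Alternatively the existing system_unbound_wq can be used, which is what the
first hunk of the patch below does.
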
>> Thanks.
>>
>> --
>> tejun
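
This is the patch I tested:
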
diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 3f479483d..ac28b8d0f 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -279,7 +279,7 @@ static void reschedule_output_poll_work(struct drm_device *dev)
 		 */
 		delay = HZ;
 
-	schedule_delayed_work(&dev->mode_config.output_poll_work, delay);
+	queue_delayed_work(system_unbound_wq, &dev->mode_config.output_poll_work, delay);
 }
 
 /**
@@ -614,7 +614,7 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
 		 */
 		dev->mode_config.delayed_event = true;
 		if (dev->mode_config.poll_enabled)
-			mod_delayed_work(system_wq,
+			mod_delayed_work(system_unbound_wq,
 					 &dev->mode_config.output_poll_work,
 					 0);
 	}
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index ec4d26b3c..c0592b77b 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -138,7 +138,7 @@ static int i915_workqueues_init(struct drm_i915_private *dev_priv)
 	 * to be scheduled on the system_wq before moving to a driver
 	 * instance due deprecation of flush_scheduled_work().
 	 */
-	dev_priv->unordered_wq = alloc_workqueue("i915-unordered", 0, 0);
+	dev_priv->unordered_wq = alloc_workqueue("i915-unordered", WQ_UNBOUND, 0);
 	if (dev_priv->unordered_wq == NULL)
 		goto out_free_dp_wq;
 
--
2.42.0