Message-ID: <5649A7C2.90206@linux.intel.com>
Date: Mon, 16 Nov 2015 09:54:10 +0000
From: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To: Chris Wilson <chris@...is-wilson.co.uk>,
Jens Axboe <axboe@...nel.dk>, intel-gfx@...ts.freedesktop.org,
linux-kernel@...r.kernel.org
Cc: dri-devel@...ts.freedesktop.org,
Daniel Vetter <daniel.vetter@...ll.ch>,
Eero Tamminen <eero.t.tamminen@...el.com>,
"Rantala, Valtteri" <valtteri.rantala@...el.com>,
stable@...nel.vger.org
Subject: Re: [PATCH 1/2] drm/i915: Break busywaiting for requests on pending
signals

Hi,

On 15/11/15 13:32, Chris Wilson wrote:
> The busywait in __i915_spin_request() does not respect pending signals
> and so may consume the entire timeslice for the task instead of
> returning to userspace to handle the signal.

Obviously correct to break the spin, but if spending a jiffy reacting to
signals were the only problem it would not be too severe.

Could you add something to the commit message about how this was
found/reported and about the severity of the impact, etc.?

Otherwise,

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@...el.com>

> Fixes regression from
> commit 2def4ad99befa25775dd2f714fdd4d92faec6e34 [v4.2]
> Author: Chris Wilson <chris@...is-wilson.co.uk>
> Date: Tue Apr 7 16:20:41 2015 +0100
>
> drm/i915: Optimistically spin for the request completion
>
> Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
> Cc: Jens Axboe <axboe@...nel.dk>
> Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@...el.com>
> Cc: Daniel Vetter <daniel.vetter@...ll.ch>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
> Cc: Eero Tamminen <eero.t.tamminen@...el.com>
> Cc: "Rantala, Valtteri" <valtteri.rantala@...el.com>
> Cc: stable@...nel.vger.org
> ---
> drivers/gpu/drm/i915/i915_gem.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 5cf4a1998273..740530c571d1 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1146,7 +1146,7 @@ static bool missed_irq(struct drm_i915_private *dev_priv,
> return test_bit(ring->id, &dev_priv->gpu_error.missed_irq_rings);
> }
>
> -static int __i915_spin_request(struct drm_i915_gem_request *req)
> +static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
> {
> unsigned long timeout;
>
> @@ -1158,6 +1158,9 @@ static int __i915_spin_request(struct drm_i915_gem_request *req)
> if (i915_gem_request_completed(req, true))
> return 0;
>
> + if (signal_pending_state(state, current))
> + break;
> +
> if (time_after_eq(jiffies, timeout))
> break;
>
> @@ -1197,6 +1200,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
> struct drm_i915_private *dev_priv = dev->dev_private;
> const bool irq_test_in_progress =
> ACCESS_ONCE(dev_priv->gpu_error.test_irq_rings) & intel_ring_flag(ring);
> + int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
> DEFINE_WAIT(wait);
> unsigned long timeout_expire;
> s64 before, now;
> @@ -1221,7 +1225,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
> before = ktime_get_raw_ns();
>
> /* Optimistic spin for the next jiffie before touching IRQs */
> - ret = __i915_spin_request(req);
> + ret = __i915_spin_request(req, state);
> if (ret == 0)
> goto out;
>
> @@ -1233,8 +1237,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
> for (;;) {
> struct timer_list timer;
>
> - prepare_to_wait(&ring->irq_queue, &wait,
> - interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
> + prepare_to_wait(&ring->irq_queue, &wait, state);
>
> /* We need to check whether any gpu reset happened in between
> * the caller grabbing the seqno and now ... */
> @@ -1252,7 +1255,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
> break;
> }
>
> - if (interruptible && signal_pending(current)) {
> + if (signal_pending_state(state, current)) {
> ret = -ERESTARTSYS;
> break;
> }
>
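
As an aside, for anyone reading along without the i915 sources open: the
shape of the fixed loop is roughly the userspace analogue below. This is a
hypothetical sketch, not the kernel code; spin_wait(), handle_signal(),
sig_seen and the 2s budget are made up for illustration, while the real
loop uses i915_gem_request_completed(), signal_pending_state() and
time_after_eq() as in the hunks above.

/* Userspace sketch only, not the i915 code. Build with: cc -o spin spin.c */
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static volatile sig_atomic_t sig_seen;

static void handle_signal(int signum)
{
        (void)signum;
        sig_seen = 1;   /* analogue of the task's pending-signal state */
}

static long elapsed_ns(const struct timespec *start)
{
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - start->tv_sec) * 1000000000L +
               (now.tv_nsec - start->tv_nsec);
}

/* Poll *done for at most budget_ns, in the spirit of __i915_spin_request()
 * polling request completion for a jiffy, but give up as soon as a signal
 * arrives instead of burning the whole budget. */
static int spin_wait(const volatile bool *done, long budget_ns)
{
        struct timespec start;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
                if (*done)
                        return 0;           /* completed */

                if (sig_seen)
                        return -EINTR;      /* ~ signal_pending_state() */

                if (elapsed_ns(&start) >= budget_ns)
                        return -ETIMEDOUT;  /* ~ time_after_eq(jiffies, timeout) */

                /* a real busywait would cpu_relax() here */
        }
}

int main(void)
{
        volatile bool done = false;

        signal(SIGINT, handle_signal);

        /* Nothing ever sets 'done', so either the 2s budget expires or
         * Ctrl-C breaks the spin early. */
        printf("spin_wait() = %d\n", spin_wait(&done, 2000000000L));
        return 0;
}

The point is simply that the signal check sits inside the polling loop
itself, next to the completion and timeout checks, so a pending signal
breaks the spin within one iteration rather than only after the full
jiffy budget has been consumed.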