Message-ID: <564A08C9.8090508@kernel.dk>
Date:	Mon, 16 Nov 2015 09:48:09 -0700
From:	Jens Axboe <axboe@...nel.dk>
To:	Chris Wilson <chris@...is-wilson.co.uk>,
	intel-gfx@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Cc:	dri-devel@...ts.freedesktop.org,
	Daniel Vetter <daniel.vetter@...ll.ch>,
	Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
	Eero Tamminen <eero.t.tamminen@...el.com>,
	"Rantala, Valtteri" <valtteri.rantala@...el.com>,
	stable@...nel.vger.org
Subject: Re: [PATCH 2/2] drm/i915: Limit the busy wait on requests to 2us not
 10ms!

On 11/15/2015 06:32 AM, Chris Wilson wrote:
> When waiting for high frequency requests, the finite amount of time
> required to set up the irq and wait upon it limits the response rate. By
> busywaiting on the request completion for a short while we can service
> the high frequency waits as quickly as possible. However, if it is a slow
> request, we want to sleep as quickly as possible. The tradeoff between
> waiting and sleeping is roughly the time it takes to sleep on a request,
> on the order of a microsecond. Based on measurements from big core, I
> have set the limit for busywaiting as 2 microseconds.
>
> The code currently uses the jiffie clock, but that is far too coarse (on
> the order of 10 milliseconds) and results in poor interactivity as the
> CPU ends up being hogged by slow requests. To get microsecond resolution
> we need to use a high resolution timer. The cheapest of these is polling
> local_clock(), but that is only valid on the same CPU. If we switch CPUs
> because the task was preempted, we can also use that as an indicator that
> the system is too busy to waste cycles on spinning and we should sleep
> instead.
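
For reference, the shape of what's proposed boils down to a deadline
computed from a CPU-local clock plus a migration check. A rough
userspace sketch of that idea (not the actual i915 code; a hypothetical
'done' flag stands in for the request-completion test):

#define _GNU_SOURCE
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

static unsigned long clock_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000UL + ts.tv_nsec / 1000;
}

/*
 * Spin for at most 2us waiting for completion. Bail out early if we
 * migrate to another CPU: the local clock is no longer comparable,
 * and the migration itself hints the system is too busy to spin.
 */
static bool spin_for_completion(atomic_bool *done)
{
	unsigned long timeout = clock_us() + 2;
	int cpu = sched_getcpu();

	while (clock_us() < timeout) {
		if (atomic_load_explicit(done, memory_order_acquire))
			return true;	/* completed within the spin window */
		if (sched_getcpu() != cpu)
			break;		/* preempted; stop burning cycles */
	}
	return false;	/* caller falls back to the sleeping irq wait */
}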

I tried this (1+2), and it feels better. However, I added some counters 
just to track how well it's faring:

[  491.077612] i915: invoked=7168, success=50

So out of 7168 invocations, we only avoided going to sleep 50 of those 
times. As a percentage, that's 99.3% of the time we spun 2usec for no 
good reason other than to burn up more of my battery. The reason this 
feels like an improvement for me is just that we're no longer spinning 
for 10ms, but we're still wasting time for my use case.
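
The counters were nothing fancy; continuing the sketch above, something
like this (names hypothetical):

static atomic_ulong spin_invoked;	/* entries into the spin loop */
static atomic_ulong spin_success;	/* spins that saw completion */

static bool spin_for_completion_counted(atomic_bool *done)
{
	bool hit;

	atomic_fetch_add_explicit(&spin_invoked, 1, memory_order_relaxed);
	hit = spin_for_completion(done);
	if (hit)
		atomic_fetch_add_explicit(&spin_success, 1,
					  memory_order_relaxed);
	return hit;
}

That gives a hit rate of 50/7168, roughly 0.7%.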

I'd recommend putting this behind some option so that people can enable 
it and play with it if they want, but not making it the default until 
some more clever tracking has been added to dynamically decide when to 
poll and when not to. It should not be default-on until it's closer to 
doing the right thing for a normal workload, not just some synthetic 
benchmark.
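
The obvious knob for that would be a module parameter, something along
these lines (a fragment only, not from the patch; name hypothetical,
default off):

#include <linux/module.h>

/* Hypothetical opt-in gate for the busy-wait; default off. */
static bool i915_busywait;
module_param_named(busywait, i915_busywait, bool, 0600);
MODULE_PARM_DESC(busywait,
		 "Busy-wait up to 2us on requests before sleeping (default: false)");

The spin path would then bail immediately when i915_busywait is false.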

-- 
Jens Axboe
