Date:	Tue, 22 Jul 2014 13:15:16 +0200
From:	Daniel Vetter <>
To:	Oded Gabbay <>
Cc:	Daniel Vetter <>,
	Jerome Glisse <>,
	Christian König <>,
	David Airlie <>,
	Alex Deucher <>,
	Andrew Morton <>,
	John Bridgman <>,
	Joerg Roedel <>,
	Andrew Lewycky <>,
	Michel Dänzer <>,
	Ben Goz <>,
	Alexey Skidanov <>,
	"" <>,
	"" <>,
	linux-mm <>, "Sellek, Tom" <>
Subject: Re: [PATCH v2 00/25] AMDKFD kernel driver

On Tue, Jul 22, 2014 at 12:52:43PM +0300, Oded Gabbay wrote:
> On 22/07/14 12:21, Daniel Vetter wrote:
> >On Tue, Jul 22, 2014 at 10:19 AM, Oded Gabbay <> wrote:
> >>>Exactly, just prevent userspace from submitting more. And if you have
> >>>misbehaving userspace that submits too much, reset the GPU and tell it
> >>>that you're sorry, but you won't schedule any more of its work.
> >>
> >>I'm not sure how you intend to know whether userspace misbehaves or not.
> >>Can you elaborate?
> >
> >Well, that's mostly policy. Currently in i915 we only have a check for
> >hangs, and if userspace hangs a bit too often then we stop it. I guess
> >you can do that with the queue unmapping you've described in reply to
> >Jerome's mail.
> >-Daniel
> >
> What do you mean by hang? Like the TDR mechanism in Windows (which checks
> whether a GPU job takes more than 2 seconds, I think, and if so, terminates
> the job)?

Essentially, yes. But we also have some hw features to kill jobs more
quickly, e.g. for media workloads.
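
Rough sketch of the combined policy in made-up, standalone C (hypothetical
names, thresholds, and error handling throughout; this is not the actual
i915 hangcheck code): a job that blows its deadline counts as a hang
against the owning context, and a context that hangs too often gets
banned, so further submissions fail with an error instead of ever
touching the hardware.

/*
 * Illustrative userspace model of "deadline -> hang -> ban".
 * Everything here is an assumption for the sake of the example.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define HANG_DEADLINE_SEC   2   /* like the ~2s Windows TDR timeout */
#define BAN_HANG_THRESHOLD  3   /* ban a context after this many hangs */

struct ctx {
	int  hang_count;
	bool banned;
};

struct job {
	struct ctx *ctx;
	time_t      start;   /* when the job hit the hardware */
	bool        done;
};

/* Periodic watchdog: kill jobs that exceed the deadline and account
 * the hang against the owning context. */
static void hangcheck(struct job *job, time_t now)
{
	if (job->done || now - job->start <= HANG_DEADLINE_SEC)
		return;

	/* A real driver would preempt or reset here; we just mark it. */
	job->done = true;
	if (++job->ctx->hang_count >= BAN_HANG_THRESHOLD) {
		job->ctx->banned = true;
		fprintf(stderr, "context banned after %d hangs\n",
			job->ctx->hang_count);
	}
}

/* Submission path: refuse work from banned contexts so misbehaving
 * userspace can't keep flooding the queues. */
static int submit(struct ctx *ctx, struct job *job, time_t now)
{
	if (ctx->banned)
		return -1;      /* userspace gets an error, not a hang */

	job->ctx = ctx;
	job->start = now;
	job->done = false;
	return 0;
}

int main(void)
{
	struct ctx ctx = { 0 };
	struct job job;
	time_t now = time(NULL);

	for (int i = 0; i < 4; i++) {
		if (submit(&ctx, &job, now) < 0) {
			printf("submission %d rejected\n", i);
			continue;
		}
		/* Pretend the job never completes and 3 seconds pass. */
		now += 3;
		hangcheck(&job, now);
	}
	return 0;
}

The point of failing submit() rather than silently dropping work is that
well-behaved userspace can see the error and report a lost context to the
application, while a misbehaving client simply stops getting scheduled.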
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48