Message-ID: <20260211112802.2956132f@fedora>
Date: Wed, 11 Feb 2026 11:28:02 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Philipp Stanner <phasta@...lbox.org>
Cc: phasta@...nel.org, Danilo Krummrich <dakr@...nel.org>, Alice Ryhl
 <aliceryhl@...gle.com>, Christian König
 <christian.koenig@....com>, David Airlie <airlied@...il.com>, Simona Vetter
 <simona@...ll.ch>, Gary Guo <gary@...yguo.net>, Benno Lossin
 <lossin@...nel.org>, Daniel Almeida <daniel.almeida@...labora.com>, Joel
 Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org, rust-for-linux@...r.kernel.org,
 lucas.demarchi@...el.com, thomas.hellstrom@...ux.intel.com,
 rodrigo.vivi@...el.com
Subject: Re: [RFC PATCH 2/4] rust: sync: Add dma_fence abstractions

On Wed, 11 Feb 2026 11:08:55 +0100
Philipp Stanner <phasta@...lbox.org> wrote:

> On Wed, 2026-02-11 at 10:57 +0100, Danilo Krummrich wrote:
> > (Cc: Xe maintainers)
> > 
> > On Tue Feb 10, 2026 at 12:40 PM CET, Alice Ryhl wrote:  
> > > On Tue, Feb 10, 2026 at 11:46:44AM +0100, Christian König wrote:  
> > > > On 2/10/26 11:36, Danilo Krummrich wrote:  
> > > > > On Tue Feb 10, 2026 at 11:15 AM CET, Alice Ryhl wrote:  
> > > > > >   
> 
> […]
> 
> > > > > 
> > > > > Or in other words, there must be no more than wq->max_active - 1 works that
> > > > > execute code violating the DMA fence signalling rules.  
> > > 
> > > Ouch, is that really the best way to do that? Why not two workqueues?  
> > 
> > Most drivers making use of this re-use the same workqueue for multiple GPU
> > scheduler instances in firmware scheduling mode (i.e. 1:1 relationship between
> > scheduler and entity). This is equivalent to the JobQ use-case.
> > 
> > Note that we will have one JobQ instance per userspace queue, so sharing the
> > workqueue between JobQ instances can make sense.  
> 
> Why, what for?

Because, even if it's not necessarily a 1:N relationship between queues
and threads these days (with the concept of shared worker pools), each
new workqueue usually implies the creation of new threads/resources, and
we usually don't need that level of parallelization (especially
if the communication channel with the FW can't be accessed
concurrently).
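[Editor's note: to illustrate the sharing pattern discussed above, here is a
hedged, kernel-style sketch (all names hypothetical, not the actual JobQ code;
it will not compile outside a kernel tree): several per-userspace-queue
instances submit to one workqueue allocated once per device, so no additional
kworker resources are created per queue, and max_active bounds concurrency
across all of them.]

```c
/* Hypothetical sketch: N JobQ-like instances sharing one workqueue.
 * alloc_workqueue() and queue_work() are the real kernel APIs; the
 * surrounding structs and the max_active value of 4 are made up for
 * illustration.
 */
struct my_device {
	struct workqueue_struct *shared_wq;	/* created once per device */
};

struct my_jobq {
	struct my_device *dev;
	struct work_struct run_work;		/* per-queue work item */
};

static int my_device_init(struct my_device *dev)
{
	/* One unbound workqueue for all queues on this device;
	 * max_active caps how many run concurrently. */
	dev->shared_wq = alloc_workqueue("my-jobq", WQ_UNBOUND, 4);
	if (!dev->shared_wq)
		return -ENOMEM;
	return 0;
}

static void my_jobq_submit(struct my_jobq *q)
{
	/* Every JobQ instance reuses dev->shared_wq; submitting work
	 * here spawns no new threads per queue. */
	queue_work(q->dev->shared_wq, &q->run_work);
}
```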
