Message-ID: <c3spwcoq7j4fx5yg4l7njeiofhkaasbknze3byh4dl45yeacvr@rb6u6j5kz7oe>
Date: Sat, 22 Feb 2025 16:22:08 -0500
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ventura Jack <venturajack85@...il.com>, Gary Guo <gary@...yguo.net>, airlied@...il.com,
boqun.feng@...il.com, david.laight.linux@...il.com, ej@...i.de,
gregkh@...uxfoundation.org, hch@...radead.org, ksummit@...ts.linux.dev,
linux-kernel@...r.kernel.org, miguel.ojeda.sandonis@...il.com, rust-for-linux@...r.kernel.org
Subject: Re: C aggregate passing (Rust kernel policy)
On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
> VLIW and OoO might seem orthogonal, but they aren't – because they are
> trying to solve the same problem, combining them either means the OoO
> engine can't do a very good job because of false dependencies (if you
> are scheduling molecules) or you have to break the instructions down
> into atoms, at which point it is just an (often quite inefficient) RISC
> encoding. In short, VLIW *might* make sense when you are statically
> scheduling a known pipeline, but it is basically a dead end for
> evolution – so unless you can JIT your code for each new chip
> generation...
JITing for each chip generation would be a part of any serious new VLIW
effort. It's plenty doable in the open source world and the gains are
too big to ignore.
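
As an aside, a minimal user-space sketch of the dispatch half of that idea,
assuming GCC/Clang on x86 (the function names and the "skylake" target here
are just illustrative, nothing from this thread): pick a per-microarchitecture
variant at run time, the way a per-generation JIT would specialize code for
the exact pipeline it lands on - except a real VLIW JIT would regenerate the
scheduling itself rather than merely select a prebuilt variant.

	#include <stdio.h>

	/* Baseline variant, scheduled for a generic pipeline. */
	static long sum_generic(const int *v, long n)
	{
		long s = 0;
		for (long i = 0; i < n; i++)
			s += v[i];
		return s;
	}

	/* Stand-in for a variant tuned for one specific core generation. */
	static long sum_tuned(const int *v, long n)
	{
		long s = 0;
		for (long i = 0; i < n; i++)
			s += v[i];
		return s;
	}

	static long (*sum)(const int *, long) = sum_generic;

	int main(void)
	{
		__builtin_cpu_init();
		/* GCC/Clang x86 builtin; "skylake" is only an example target. */
		if (__builtin_cpu_is("skylake"))
			sum = sum_tuned;

		int v[] = { 1, 2, 3, 4 };
		printf("%ld\n", sum(v, 4));
		return 0;
	}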
> But OoO still is more powerful, because it can do *dynamic*
> scheduling. A cache miss doesn't necessarily mean that you have to
> stop the entire machine, for example.
Power hungry and prone to information leaks, though.