Message-ID: <YzFfzWJYsuhpUiPG@infradead.org>
Date: Mon, 26 Sep 2022 01:16:13 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Oded Gabbay <oded.gabbay@...il.com>
Cc: "Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>,
Yuji Ishikawa <yuji2.ishikawa@...hiba.co.jp>,
Jiho Chu <jiho.chu@...sung.com>,
Alexandre Bailon <abailon@...libre.com>,
Kevin Hilman <khilman@...libre.com>,
Dave Airlie <airlied@...il.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jason Gunthorpe <jgg@...dia.com>,
Arnd Bergmann <arnd@...db.de>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: New subsystem for acceleration devices
Btw, there is another interesting thing on the block:
NVMe Computational Storage devices. Don't be fooled by the name, much
of it is about neither computation nor storage, but it allows using
the existing NVMe queuing model to give access to arbitrary
accelerators, including a way to expose access to on-device memory.
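
To make that a bit more concrete, below is a rough userspace sketch of
what driving such an accelerator through the existing NVMe queuing model
could look like, using the stock Linux NVMe passthrough ioctl
(struct nvme_passthru_cmd / NVME_IOCTL_IO_CMD). The opcode and command
dwords are purely hypothetical placeholders - the computational storage
command set is not final - only the ioctl interface itself is existing
Linux plumbing.

/*
 * Sketch: submit an arbitrary (hypothetical) compute command to an NVMe
 * namespace through the generic passthrough interface.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(int argc, char **argv)
{
	unsigned char buf[4096];		/* host buffer handed to the device */
	struct nvme_passthru_cmd cmd;
	int fd, ret;

	if (argc < 2) {
		fprintf(stderr, "usage: %s /dev/nvme0n1\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(buf, 0, sizeof(buf));
	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode     = 0x99;			/* hypothetical compute/vendor opcode */
	cmd.nsid       = 1;			/* namespace the command targets */
	cmd.addr       = (unsigned long)buf;	/* data transfer buffer */
	cmd.data_len   = sizeof(buf);
	cmd.cdw10      = 0;			/* hypothetical "program slot" selector */
	cmd.timeout_ms = 1000;

	ret = ioctl(fd, NVME_IOCTL_IO_CMD, &cmd);
	if (ret < 0)
		perror("NVME_IOCTL_IO_CMD");
	else
		printf("completed, status %d, result 0x%x\n", ret, cmd.result);

	close(fd);
	return 0;
}

The point being that the queuing, DMA and completion plumbing already
exists end to end; what is missing is the actual command set on top of it.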
The most current version is probably here:
https://www.snia.org/educational-library/nvme-computational-storage-update-standard-2022
The first version will be rather limited and will miss some important
functionality like direct access to host DRAM or CXL integration,
but much of that is planned. The initial version also probably can't
be supported by Linux at all, but we need to think hard about
how to support it.
It would also be really helpful to get more people with accelerator
experience into the NVMe working group.