Message-ID: <503d101d-7273-757a-2809-e272db93c45d@suse.de>
Date: Mon, 17 May 2021 21:12:03 +0200
From: Thomas Zimmermann <tzimmermann@...e.de>
To: Arnd Bergmann <arnd@...db.de>, Dave Airlie <airlied@...il.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Maciej Kwapulinski <maciej.kwapulinski@...ux.intel.com>,
Jonathan Corbet <corbet@....net>,
Derek Kiernan <derek.kiernan@...inx.com>,
Dragan Cvetic <dragan.cvetic@...inx.com>,
Andy Shevchenko <andy.shevchenko@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
DRI Development <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH v3 00/14] Driver of Intel(R) Gaussian & Neural Accelerator
Hi
On 17.05.21 at 09:40, Daniel Vetter wrote:
> On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
>> On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman
>> <gregkh@...uxfoundation.org> wrote:
>>> On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
>>>> Dear kernel maintainers,
>>>>
>>>> This submission is a kernel driver to support Intel(R) Gaussian & Neural
>>>> Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor
>>>> available on multiple Intel platforms. AI developers and users can offload
>>>> continuous inference workloads to an Intel(R) GNA device in order to
>>>> free
>>>> processor resources and save power. Noise reduction and speech recognition
>>>> are examples of the workloads Intel(R) GNA handles, though its usage
>>>> is not limited to those two.
>>>
>>> How does this compare with the "nnpi" driver being proposed here:
>>> https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
>>>
>>> Please work with those developers to share code and userspace api and
>>> tools. Having the community review two totally different apis and
>>> drivers for the same type of functionality from the same company is
>>> totally wasteful of our time and energy.
>>
>> Agreed, but I think we should go further than this and work towards a
>> subsystem across companies for machine learning and neural networks
>> accelerators for both inferencing and training.
>
> We have, it's called drivers/gpu. Feel free to rename to drivers/xpu or
> think G as in General, not Graphics.
I hope this was a joke.
Just some thoughts:
AFAICT, AI first emerged as an application of GPUs, but has now
evolved/specialized into something of its own. I can imagine sharing
some code among the various subsystems, say GEM/TTM internals for memory
management. Besides that, there's probably little that can be shared in
the userspace interfaces. A GPU is a device that puts an image onto the
screen and an AI accelerator isn't. Treating both as the same, even if
they share similar chip architectures, seems like a stretch. They might
evolve in different directions and fit less and less under the same
umbrella.
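To illustrate the kind of sharing I have in mind, here is a rough,
purely hypothetical sketch (the xna_* names and the UAPI struct are
invented, not taken from any real driver): an accelerator driver could
reuse the DRM GEM shmem helpers for buffer allocation and handle
creation while still defining its own job-submission interface.

#include <linux/err.h>
#include <linux/types.h>

#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

/* Hypothetical UAPI argument struct for a "create buffer" ioctl. */
struct xna_create_bo {
	__u64 size;
	__u32 handle;
	__u32 pad;
};

static int xna_create_bo_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file)
{
	struct xna_create_bo *args = data;
	struct drm_gem_shmem_object *shmem;
	int ret;

	/* Shared GEM helper: shmem-backed allocation, no driver-specific code. */
	shmem = drm_gem_shmem_create(dev, args->size);
	if (IS_ERR(shmem))
		return PTR_ERR(shmem);

	/* Shared GEM helper: userspace handle; mmap of the object comes for free. */
	ret = drm_gem_handle_create(file, &shmem->base, &args->handle);
	drm_gem_object_put(&shmem->base);

	return ret;
}

That kind of internal reuse doesn't require the devices to share a
userspace API, which is rather my point.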
And as Dave mentioned, these devices are hard to obtain. We don't really
know what we're signing up for.
Just my 2 cents.
Best regards
Thomas
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer