Date:   Wed, 3 Aug 2022 23:28:48 +0300
From:   Oded Gabbay <ogabbay@...nel.org>
To:     Yuji Ishikawa <yuji2.ishikawa@...hiba.co.jp>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Jiho Chu <jiho.chu@...sung.com>, Arnd Bergmann <arnd@...db.de>,
        "Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>
Subject: Re: New subsystem for acceleration devices

On Wed, Aug 3, 2022 at 7:44 AM <yuji2.ishikawa@...hiba.co.jp> wrote:
>
> > -----Original Message-----
> > From: Oded Gabbay <ogabbay@...nel.org>
> > Sent: Monday, August 1, 2022 5:21 PM
> > To: ishikawa yuji(石川 悠司 ○RDC□AITC○EA開)
> > <yuji2.ishikawa@...hiba.co.jp>
> > Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>; Jiho Chu
> > <jiho.chu@...sung.com>; Arnd Bergmann <arnd@...db.de>;
> > Linux-Kernel@...r. Kernel. Org <linux-kernel@...r.kernel.org>
> > Subject: Re: New subsystem for acceleration devices
> >
> > On Mon, Aug 1, 2022 at 5:35 AM <yuji2.ishikawa@...hiba.co.jp> wrote:
> > >
> > > > -----Original Message-----
> > > > From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> > > > Sent: Monday, August 1, 2022 12:38 AM
> > > > To: Oded Gabbay <oded.gabbay@...il.com>
> > > > Cc: ishikawa yuji(石川 悠司 ○RDC□AITC○EA開)
> > > > <yuji2.ishikawa@...hiba.co.jp>; Jiho Chu <jiho.chu@...sung.com>;
> > > > Arnd Bergmann <arnd@...db.de>; Linux-Kernel@...r. Kernel. Org
> > > > <linux-kernel@...r.kernel.org>
> > > > Subject: Re: New subsystem for acceleration devices
> > > >
> > > > On Sun, Jul 31, 2022 at 02:45:34PM +0300, Oded Gabbay wrote:
> > > > > Hi,
> > > > > Greg and I talked a couple of months ago about preparing a new
> > > > > accel subsystem for compute/acceleration devices that are not GPUs
> > > > > and I think the drivers you are now trying to upstream would fit it as well.
> > > > >
> > > > > Would you be open/interested in migrating your drivers to this new
> > > > subsystem ?
> > > > >
> > > > > Because there were no outstanding candidates, so far I have done
> > > > > only a basic and partial implementation of the infrastructure for
> > > > > this subsystem, but if you are willing to join I believe I can
> > > > > finish it rather quickly.
> > > > >
> > > > > To start with, the new subsystem will provide only a common character
> > > > > device (e.g. /dev/acX), so everyone will do open/close/ioctl on
> > > > > the same character device. Also, sysfs/debugfs entries will be
> > > > > under that device, and maybe an IOCTL to retrieve information.
> > > > >
> > > > > In the future I plan to move some of the habanalabs driver's code into
> > > > > the subsystem itself, for common tasks such as memory management,
> > > > > DMA memory allocation, etc.
> > > > >
> > > > > Of course, you will be able to add your own IOCTLs as you see fit.
> > > > > There will be a range of IOCTLs which are device-specific (similar
> > > > > to drm).
> > > > >
> > > > > wdyt ?
> > > >
> > > > That sounds reasonable to me as a sane plan forward, thanks for
> > > > working on this.
> > > >
> > > > greg k-h
> > >
> > > Hi,
> > > Thank you for your suggestion.
> > > I'm really interested in the idea of having a dedicated subsystem for
> > > accelerators.
> > > Let me try to see whether the Visconti DNN driver can be rewritten to use
> > > the framework.
> > > As the Visconti SoC has several accelerators as well as DNN, having a
> > > general/common interface will be a big benefit for us in maintaining
> > > drivers over the long term.
> > >
> > > I've heard that the framework has some basic implementation.
> > > Do you have some sample code or an enumeration of ideas that describes the
> > > framework?
> > > Would you share them with me, if that does not bother you?
> > >
> > Great to hear that!
> >
> > I will need a couple of days to complete the code and clean it up because I
> > stopped working on it a couple of months ago as there were no other candidates
> > at the time.
> > Once I have it ready I will put it in my linux repo and send you a branch name.
> >
> > In the meantime, I'd like to describe at a high level how this framework is going
> > to work.
> >
> > I'll start with the main theme of this framework, which is allowing maximum
> > flexibility to the different device drivers. This is because, in my opinion, there is
> > and always will be a large variance between different compute accelerators,
> > which stems from the fact that their h/w is designed for different purposes. I believe
> > it would be nearly impossible to create a standard acceleration API that would
> > be applicable to all compute accelerators.
> >
> > Having said that, there are many things that can be common. I'll just name here
> > a few things:
> >
> > - Exposing devices in a uniform way (same device character files) and
> > managing the major/minor/open/close (with callbacks to the open/close of the
> > device driver)
> >
> > - Defining a standard user API for monitoring applications that usually run in a
> > data-center. There is a big difference between the acceleration uAPI and a
> > monitoring uAPI: while the former is highly specialized, the latter can be
> > standardized.
> >
> > - Common implementation of a memory manager that will provide services for
> >   allocating kernel memory that can be mmap'ed by the user (see the sketch
> >   after this list).
> >
> > - For devices with an on-chip MMU, a common memory manager for allocating
> > and managing device memory.
> >
> > - Integration with dma-buf for interoperability with other drivers.
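> >
> > To make the memory-manager item above more concrete, here is a rough sketch
> > of the kind of per-driver boilerplate that a common implementation could
> > absorb (illustrative only, the mydrv_* names are made up and this is not
> > accel framework code): a driver mmap callback exposing a DMA-coherent kernel
> > buffer, allocated earlier with dma_alloc_coherent(), to userspace.
> >
> > static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
> > {
> >         struct mydrv_dev *mdev = filp->private_data;
> >         size_t size = vma->vm_end - vma->vm_start;
> >
> >         /* refuse mappings larger than the buffer we allocated */
> >         if (size > mdev->buf_size)
> >                 return -EINVAL;
> >
> >         return dma_mmap_coherent(mdev->dev, vma, mdev->cpu_addr,
> >                                  mdev->dma_addr, size);
> > }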
> >
> > The initial implementation of the framework will expose two device character
> > files per device. One will be used for the main/compute operations and the
> > other one will be used for monitoring applications.
> > This is done in case there are some limitations on the main device file (e.g.
> > allowing only a single compute application to be connected to the device).
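> > For example (just a sketch, mydrv_* is a placeholder), a driver that wants to
> > allow only a single compute client could do this in the open callback of the
> > main device file, while leaving the monitoring file unrestricted:
> >
> > static int mydrv_compute_open(struct mydrv_dev *mdev)
> > {
> >         /* allow only one compute client at a time */
> >         if (atomic_cmpxchg(&mdev->compute_in_use, 0, 1) != 0)
> >                 return -EBUSY;
> >         return 0;
> > }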
> >
> > Each driver will call a register function during its probe function (very similar to
> > what is done in the drm subsystem). This will register the device in the accel
> > framework and expose the device character files. Every call on those files (e.g.
> > open, ioctl) will then go through the accel subsystem first and then will go to
> > the callbacks of the specific device drivers. The framework will take care of
> > allocating major and minor numbers and handle these issues in a uniform way.
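> >
> > Roughly, I imagine a driver's probe would look something like the sketch
> > below. None of the accel_* names exist yet -- they are placeholders meant
> > only to show the drm-like flow:
> >
> > static int mydrv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> > {
> >         struct mydrv_dev *mdev;
> >         int rc;
> >
> >         mdev = devm_kzalloc(&pdev->dev, sizeof(*mdev), GFP_KERNEL);
> >         if (!mdev)
> >                 return -ENOMEM;
> >
> >         /* device-specific init (BARs, IRQs, firmware load) goes here */
> >
> >         /*
> >          * Register with the accel framework: it allocates major/minor
> >          * numbers, creates the compute and monitoring device files and
> >          * routes open/ioctl on them to the driver's callbacks.
> >          */
> >         rc = accel_device_register(&mdev->adev, &mydrv_accel_ops, &pdev->dev);
> >         if (rc)
> >                 return rc;
> >
> >         pci_set_drvdata(pdev, mdev);
> >         return 0;
> > }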
> >
> > For now, I plan to define a single ioctl in the common part, which is an
> > information ioctl, allowing the user to retrieve various information on the device
> > (fixed or dynamic). I don't want to get ahead of myself and define other
> > common ioctls before gaining some traction. As I wrote above, each driver will
> > be allowed to define its own custom ioctls.
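> >
> > As a strawman, the common information ioctl's uAPI could be shaped something
> > like this (purely illustrative, none of the names or numbers are final):
> >
> > struct accel_info_args {
> >         __u32 op;        /* which attribute to query (some common, some driver-specific) */
> >         __u32 flags;
> >         __u64 user_ptr;  /* userspace buffer to be filled with the result */
> >         __u32 user_size; /* size of that buffer */
> >         __u32 pad;
> > };
> >
> > #define ACCEL_IOCTL_INFO  _IOWR('A', 0x00, struct accel_info_args)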
> >
> > There will be an infrastructure for adding custom sysfs and debugfs entries. Once
> > we start working, I'm sure we will find some properties that we can move into the
> > infrastructure itself (e.g. requesting a device reset, operational status).
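> >
> > For instance, an "operational status" attribute that today each driver exposes
> > on its own could eventually be handled by the common code; a typical
> > per-driver version looks roughly like this (mydrv_* is again a placeholder):
> >
> > static ssize_t status_show(struct device *dev,
> >                            struct device_attribute *attr, char *buf)
> > {
> >         struct mydrv_dev *mdev = dev_get_drvdata(dev);
> >
> >         return sysfs_emit(buf, "%s\n",
> >                           mdev->operational ? "operational" : "in-reset");
> > }
> > static DEVICE_ATTR_RO(status);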
> >
> > I don't know if I want to add the memory manager we prepared in the habanalabs
> > driver on day one, because I would like to minimize the effort needed to convert
> > our drivers to use this framework. From the above, I believe you can see the
> > current effort is not large.
> >
> > That's it from a high-level point of view. As I wrote at the beginning, I'll get back to you in a
> > few days with a link to a git repo containing the initial implementation.
> >
>
> Hi Oded,
>
> Thank you for sharing the high-level idea with me.
> I'm glad to hear that the DNN driver can be ported to the accelerator framework without too much effort. I think we can start with a minimum implementation and find a better way as we go.
> Currently the driver does not have an API for monitoring. I'll prepare some logic to match the framework.
>
> Some of my interests are handling the IOMMU and DMA-BUF.
> Currently the DNN driver has its own minimal implementation of those two.
> Will the DMA-BUF feature of the accelerator framework be provided soon?
I didn't plan for it to be in the initial release, but I'm sure we can
make it a priority to add soon after we do the initial
integration and everything is accepted into the kernel.
Oded
>
> > > If you need further information on the current DNN driver, please let me know.
> > > Also, I can provide some information on other accelerator drivers which are
> > > currently being cleaned up for contribution.
> > I looked at the patches you sent and I believe it will be a good match for this
> > framework.
> > If you have additional drivers that you think could be a good match, by all means,
> > please send links to me.
> >
> > Thanks,
> > Oded
>
> The Visconti DSP driver might be interesting for you, as it provides multiple functions and requires external firmware.
> The problem is that I don't have an official public repository.
> I wonder if it's possible to post some pieces of code to this ML.
>
> Regards,
>         Yuji
