Date:   Wed, 26 Oct 2022 09:10:55 +0300
From:   Oded Gabbay <ogabbay@...nel.org>
To:     Alex Deucher <alexdeucher@...il.com>
Cc:     David Airlie <airlied@...il.com>, Daniel Vetter <daniel@...ll.ch>,
        Arnd Bergmann <arnd@...db.de>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        Jason Gunthorpe <jgg@...dia.com>,
        John Hubbard <jhubbard@...dia.com>,
        Alex Deucher <alexander.deucher@....com>,
        Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
        Jacek Lawrynowicz <jacek.lawrynowicz@...ux.intel.com>,
        Jeffrey Hugo <quic_jhugo@...cinc.com>,
        Jiho Chu <jiho.chu@...sung.com>,
        Christoph Hellwig <hch@...radead.org>,
        Thomas Zimmermann <tzimmermann@...e.de>,
        Kevin Hilman <khilman@...libre.com>,
        Yuji Ishikawa <yuji2.ishikawa@...hiba.co.jp>,
        Maciej Kwapulinski <maciej.kwapulinski@...ux.intel.com>,
        Jagan Teki <jagan@...rulasolutions.com>
Subject: Re: [RFC PATCH 0/3] new subsystem for compute accelerator devices

On Mon, Oct 24, 2022 at 6:10 PM Alex Deucher <alexdeucher@...il.com> wrote:
>
> On Mon, Oct 24, 2022 at 10:41 AM Oded Gabbay <ogabbay@...nel.org> wrote:
> >
> > On Mon, Oct 24, 2022 at 4:55 PM Alex Deucher <alexdeucher@...il.com> wrote:
> > >
> > > On Sat, Oct 22, 2022 at 5:46 PM Oded Gabbay <ogabbay@...nel.org> wrote:
> > > >
> > > > In the last couple of months we had a discussion [1] about creating a new
> > > > subsystem for compute accelerator devices in the kernel.
> > > >
> > > > After an analysis that was done by DRM maintainers and myself, and following
> > > > a BOF session at the Linux Plumbers conference a few weeks ago [2], we
> > > > decided to create a new subsystem that will use the DRM subsystem's code and
> > > > functionality, i.e. the accel core code will be part of the DRM subsystem.
> > > >
> > > > This will allow us to leverage the extensive DRM code-base and
> > > > collaborate with DRM developers who have experience with this type of
> > > > device. In addition, new features that will be added for the accelerator
> > > > drivers can be of use to GPU drivers as well (e.g. RAS).
> > > >
> > > > As agreed in the BOF session, the accelerator devices will be exposed to
> > > > user-space with new, dedicated device char files and a dedicated major
> > > > number (261), to clearly separate them from graphics cards and the graphics
> > > > user-space s/w stack. Furthermore, the drivers will be located in a separate
> > > > place in the kernel tree (drivers/accel/).
> > > >
> > > > This series of patches is the first step in this direction as it adds the
> > > > necessary infrastructure for accelerator devices to DRM. The new devices will
> > > > be exposed with the following convention:
> > > >
> > > > device char files - /dev/accel/accel*
> > > > sysfs             - /sys/class/accel/accel*/
> > > > debugfs           - /sys/kernel/debug/accel/accel*/
> > > >
> > > > I tried to reuse the existing DRM code as much as possible, while keeping it
> > > > readable and maintainable.
> > >
> > > Wouldn't something like this:
> > > https://patchwork.freedesktop.org/series/109575/
> > > be simpler and provide better backwards compatibility for existing
> > > non-gfx devices in the drm subsystem as well as newer devices?
> >
> > As Greg said, see the summary. The consensus in the LPC session was
> > that we need to clearly separate accel devices from existing gpu
> > devices (whether they use primary and/or render nodes). That is the
> > main guideline according to which I wrote the patches. I don't think I
> > want to change this decision.
> >
> > Also, there was never any intention to provide backward compatibility
> > for existing non-gfx devices. Why would we want that? We are mainly
> > talking about drivers that are currently trying to get upstreamed, and
> > the habana driver.
>
> If someone already has a non-gfx device which uses the drm subsystem,
> should they be converted to the new accel stuff?  What about new
> devices that utilize the same driver?  Should they use accel or
> continue to use drm?
My baseline assumption was that this subsystem is mainly (but not
solely) for new drivers that are now trying to get upstreamed and for
the habana driver.
IMO we should not force existing drivers to convert their entire
driver just because we created a new subsystem. If they want to do it,
they are more than welcome.
But that's only my opinion, and other maintainers might think otherwise.

> For the sake of the rest of the stack, drm would
> make more sense, but if accel grows a bunch of stuff that all accel
> drivers should be using, what do we do?
First of all, as I wrote in another email, I don't think the accel
core code will be very large. Otherwise, I probably would have tried to
convince people that the accel stuff should be totally independent of
drm.
You can see I tried to make the code tightly coupled with drm (too
much, according to the reviews), and I did that because I believe most
core code will be common to drm and accel. So I'm not worried about
this aspect.

Second, yes, if for some reason there are accel-only features that
devices want to use, they will need to create an accel device that
will have this functionality and be connected via the auxiliary bus to
their main driver (which can be drm or another subsystem, e.g. nvme).
For example, to utilize Ethernet and RDMA features, habana is now
writing Ethernet and RDMA drivers that will be upstreamed, and they
will be connected to the main/compute driver via the auxiliary bus.
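
Just to give a feel for the plumbing, here is a minimal, purely
illustrative sketch of how a main driver could register such an
auxiliary device. The foo_* names and the "accel" device name are
hypothetical; this is not code from the habana driver or from this
patch-set:

#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

/* Illustrative only: foo_* and the "accel" name are made up. */
static void foo_accel_adev_release(struct device *dev)
{
	kfree(container_of(dev, struct auxiliary_device, dev));
}

static int foo_register_accel_auxdev(struct device *parent)
{
	struct auxiliary_device *adev;
	int ret;

	adev = kzalloc(sizeof(*adev), GFP_KERNEL);
	if (!adev)
		return -ENOMEM;

	adev->name = "accel";		/* matched against the aux driver's id_table */
	adev->id = 0;
	adev->dev.parent = parent;	/* the main (e.g. drm or nvme) device */
	adev->dev.release = foo_accel_adev_release;

	ret = auxiliary_device_init(adev);
	if (ret) {
		kfree(adev);
		return ret;
	}

	ret = auxiliary_device_add(adev);
	if (ret)
		auxiliary_device_uninit(adev);	/* drops the reference taken in init */

	return ret;
}

The accel-side driver would then be a regular auxiliary_driver that
matches on that name through its id_table and creates the accel device
in its probe callback.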

> Also, using render nodes makes the devices compatible with all of
> the existing user space tools
> that use the existing drm device nodes like libdrm, etc.  I'm failing
> to see what advantage accel brings other than requiring userspace to
> support two very similar device nodes.
This is exactly what we are trying to avoid here :) We want to make
sure that all existing user space tools that use drm devices will NOT
work with the accel devices.
Accel devices are not GPUs. The h/w IP might be a part of a GPU ASIC,
but the specific functionality is not related to the
drm/mesa/x-server/wayland/opengl/vulkan stack.
I don't want them to expose render nodes that Chrome or some other
application tries to open because it thinks it is a GPU...
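
To make the separation concrete at the userspace level, an accel
consumer would open a node under /dev/accel/ with the dedicated major
(261) and never touch /dev/dri/. A minimal sketch (the path and major
number are the ones from the cover letter above; everything else is
just for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd = open("/dev/accel/accel0", O_RDWR | O_CLOEXEC);

	if (fd < 0) {
		perror("open /dev/accel/accel0");
		return 1;
	}

	/* The char device should sit on the dedicated accel major (261). */
	if (fstat(fd, &st) == 0)
		printf("accel0: major %u, minor %u\n",
		       (unsigned int)major(st.st_rdev),
		       (unsigned int)minor(st.st_rdev));

	close(fd);
	return 0;
}

Existing tools that enumerate /dev/dri/card* or /dev/dri/renderD*
simply never see these nodes.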

So it was the majority opinion of the people at LPC that we should
make a clear separation. If there is no separation, then I don't see
the point in doing an accel subsystem; let's just continue to do drm.

Thanks,
Oded

>
> Alex
>
> >
> > Oded
> > >
> > > Alex
> > >
> > > >
> > > > One thing that is missing from this series is defining a namespace for the
> > > > new accel subsystem, which I'll add in the next iteration of this patch-set,
> > > > after I receive feedback from the community.
> > > >
> > > > As for drivers, once this series is accepted (after adding the namespace),
> > > > I will start working on migrating the habanalabs driver to the new accel
> > > > subsystem. I have talked about it with Dave, and we agreed that it will be
> > > > a good start to simply move the driver as-is with minimal changes, and then
> > > > start working on the driver's individual features that will be either added
> > > > to the accel core code (with or without changes), or will be removed and
> > > > instead the driver will use existing DRM code.
> > > >
> > > > In addition, I know of at least 3 or 4 drivers that were submitted for review
> > > > and are good candidates to be included in this new subsystem, instead of being
> > > > drm render node drivers or misc drivers.
> > > >
> > > > [1] https://lkml.org/lkml/2022/7/31/83
> > > > [2] https://airlied.blogspot.com/2022/09/accelerators-bof-outcomes-summary.html
> > > >
> > > > Thanks,
> > > > Oded
> > > >
> > > > Oded Gabbay (3):
> > > >   drivers/accel: add new kconfig and update MAINTAINERS
> > > >   drm: define new accel major and register it
> > > >   drm: add dedicated minor for accelerator devices
> > > >
> > > >  Documentation/admin-guide/devices.txt |   5 +
> > > >  MAINTAINERS                           |   8 +
> > > >  drivers/Kconfig                       |   2 +
> > > >  drivers/accel/Kconfig                 |  24 +++
> > > >  drivers/gpu/drm/drm_drv.c             | 214 +++++++++++++++++++++-----
> > > >  drivers/gpu/drm/drm_file.c            |  69 ++++++---
> > > >  drivers/gpu/drm/drm_internal.h        |   5 +-
> > > >  drivers/gpu/drm/drm_sysfs.c           |  81 +++++++++-
> > > >  include/drm/drm_device.h              |   3 +
> > > >  include/drm/drm_drv.h                 |   8 +
> > > >  include/drm/drm_file.h                |  21 ++-
> > > >  include/drm/drm_ioctl.h               |   1 +
> > > >  12 files changed, 374 insertions(+), 67 deletions(-)
> > > >  create mode 100644 drivers/accel/Kconfig
> > > >
> > > > --
> > > > 2.34.1
> > > >
