Message-ID: <20191120164525.GH22515@ziepe.ca>
Date:   Wed, 20 Nov 2019 12:45:25 -0400
From:   Jason Gunthorpe <jgg@...pe.ca>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Jason Wang <jasowang@...hat.com>,
        Parav Pandit <parav@...lanox.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        Dave Ertman <david.m.ertman@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "nhorman@...hat.com" <nhorman@...hat.com>,
        "sassmann@...hat.com" <sassmann@...hat.com>,
        Kiran Patil <kiran.patil@...el.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        "Bie, Tiwei" <tiwei.bie@...el.com>
Subject: Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus

On Wed, Nov 20, 2019 at 09:57:17AM -0500, Michael S. Tsirkin wrote:
> On Wed, Nov 20, 2019 at 10:30:54AM -0400, Jason Gunthorpe wrote:
> > On Wed, Nov 20, 2019 at 08:43:20AM -0500, Michael S. Tsirkin wrote:
> > > On Wed, Nov 20, 2019 at 09:03:19AM -0400, Jason Gunthorpe wrote:
> > > > On Wed, Nov 20, 2019 at 02:38:08AM -0500, Michael S. Tsirkin wrote:
> > > > > > > I don't think that extends as far as actively encouraging userspace
> > > > > > > drivers poking at hardware in a vendor specific way.  
> > > > > > 
> > > > > > Yes, it does: if you can implement your user space requirements using
> > > > > > vfio, then why do you need a kernel driver?
> > > > > 
> > > > > People's requirements differ. If you are happy with just passing through
> > > > > a VF, you can already use it; case closed. There are enough people with
> > > > > a fixed userspace that people have built virtio accelerators, so there
> > > > > is value in supporting that, and a vendor specific userspace blob does
> > > > > not meet that requirement.
> > > > 
> > > > I have no idea what you are trying to explain here. I'm not advocating
> > > > for vfio pass through.
> > > 
> > > You seem to come from an RDMA background, used to userspace linking to
> > > vendor libraries to do basic things like push bits out on the network,
> > > because users live on the performance edge and rebuild their
> > > userspace often anyway.
> > > 
> > > Lots of people are not like that, they would rather have the
> > > vendor-specific driver live in the kernel, with userspace being
> > > portable, thank you very much.
> > 
> > You are actually proposing a very RDMA like approach with a split
> > kernel/user driver design. Maybe the virtio user driver will turn out
> > to be 'portable'.
> > 
> > Based on the last 20 years of experience, the kernel component has
> > proven to be a larger burden and drag than the userspace part. I
> > think the high interest in DPDK, SPDK and others shows this is a
> > common principle.
> 
> And I guess the interest in BPF shows the opposite?

There is room for both; I wouldn't discount either approach entirely
out of hand.

> > At the very least, for new approaches like this it makes a lot of sense
> > to have a user space driver until enough HW is available that a
> > proper, well thought out kernel side can be built.
> 
> But hardware is available, driver has been posted by Intel.
> Have you looked at that?

I'm not sure pointing at that driver is so helpful; it is very small
and mostly just reflects virtio ops into some undocumented register
pokes.

There is no explanation at all for the large scale architecture
choices:
 - Why vfio
 - Why mdev without providing a device IOMMU
 - Why use GUID lifecycle management for singleton function PF/VF
   drivers
 - Why not use devlink
 - Why not use vfio-pci with a userspace driver (roughly sketched below)

These are legitimate questions, and answers like "because we like it
this way" or "this is how the drivers are written today" aren't very
satisfying at all.
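
To make the last question on that list concrete: the vfio-pci path is
just the stock VFIO ioctl sequence, roughly as below. This is only a
sketch, not code from the Intel submission; the group number and PCI
address are placeholders and most error handling is omitted.

/* Rough sketch only: open a vfio-pci bound VF from userspace and mmap BAR0.
 * The group number (26) and PCI address (0000:03:00.1) are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);	/* placeholder group */
	struct vfio_group_status gstatus = { .argsz = sizeof(gstatus) };
	struct vfio_region_info bar0 = {
		.argsz = sizeof(bar0),
		.index = VFIO_PCI_BAR0_REGION_INDEX,
	};
	int device;
	void *regs;

	ioctl(group, VFIO_GROUP_GET_STATUS, &gstatus);
	if (!(gstatus.flags & VFIO_GROUP_FLAGS_VIABLE))
		return 1;			/* group not viable */

	/* Attach the group to a container and pick the type1 IOMMU backend */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Placeholder BDF of the VF bound to vfio-pci */
	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:03:00.1");

	/* Find where BAR0 sits in the device fd and map it */
	ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &bar0);
	regs = mmap(NULL, bar0.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    device, (off_t)bar0.offset);

	printf("BAR0 mapped at %p, size 0x%llx\n", regs,
	       (unsigned long long)bar0.size);
	return 0;
}

From that point the vendor specific register pokes live entirely in
userspace, which is exactly the trade off being argued about here.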

> > For instance, this VFIO based approach might be very suitable for the
> > Intel VF based ICF driver, but we don't yet have an example of non-VF
> > HW that might not be well suited to VFIO.
>
> I don't think we should keep moving the goalposts like this.

It is ABI; it should be done as well as we can, since we have to live
with it for a long time. Right now HW is just starting to come to
market with VDPA, and it feels rushed to design a whole subsystem-style
ABI around one quite simplistic driver example.

> If people write drivers and find some infrastructure useful,
> and it looks more or less generic at the outset, then I don't
> see why it's a bad idea to merge it.

Because it is userspace ABI, and caution is always justified when
defining new ABI.

Jason
