Message-ID: <20151001210358-mutt-send-email-mst@redhat.com>
Date:	Thu, 1 Oct 2015 21:25:40 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Stephen Hemminger <stephen@...workplumber.org>
Cc:	hjk@...sjkoch.de, gregkh@...ux-foundation.org,
	linux-kernel@...r.kernel.org, dev@...k.org
Subject: Re: [PATCH 2/2] uio: new driver to support PCI MSI-X

On Thu, Oct 01, 2015 at 10:26:19AM -0700, Stephen Hemminger wrote:
> On Thu, 1 Oct 2015 19:31:08 +0300
> "Michael S. Tsirkin" <mst@...hat.com> wrote:
> 
> > On Thu, Oct 01, 2015 at 11:33:06AM +0300, Michael S. Tsirkin wrote:
> > > On Wed, Sep 30, 2015 at 03:28:58PM -0700, Stephen Hemminger wrote:
> > > > This driver allows using a PCI device with Message Signalled Interrupts
> > > > from userspace. The API is similar to the igb_uio driver used by DPDK.
> > > > Via an ioctl it provides a mechanism to map MSI-X interrupts into event
> > > > file descriptors, similar to VFIO.
> > > >
> > > > VFIO is a better choice if an IOMMU is available, but often userspace drivers
> > > > have to work in environments where IOMMU support (real or emulated) is
> > > > not available.  No UIO driver that supports DMA is secure against
> > > > rogue userspace applications programming DMA hardware to access
> > > > private memory; this driver is no less secure than the existing code.
> > > > 
> > > > Signed-off-by: Stephen Hemminger <stephen@...workplumber.org>
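
For concreteness, the eventfd flow described above would look roughly like
this from userspace. This is only a sketch: the ioctl name and request
layout below are placeholders, not the interface actually added by the
patch.

/*
 * Minimal userspace sketch: bind an eventfd to MSI-X vector 0 via a
 * hypothetical ioctl, then block on read() until the interrupt fires.
 */
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

struct msix_irq_req {			/* hypothetical ioctl argument */
	uint32_t vector;		/* MSI-X vector to bind */
	int32_t  fd;			/* eventfd to signal on interrupt */
};

#define UIO_MSIX_IRQ_BIND _IOW('u', 0x01, struct msix_irq_req)	/* placeholder */

int main(void)
{
	int uio = open("/dev/uio0", O_RDWR);
	int efd = eventfd(0, 0);
	struct msix_irq_req req = { .vector = 0, .fd = efd };

	if (uio < 0 || efd < 0)
		return 1;
	if (ioctl(uio, UIO_MSIX_IRQ_BIND, &req) < 0)
		return 1;

	for (;;) {
		uint64_t count;
		/* Blocks until the kernel signals the eventfd from its MSI-X handler. */
		if (read(efd, &count, sizeof(count)) == (ssize_t)sizeof(count))
			printf("vector 0 fired %llu time(s)\n",
			       (unsigned long long)count);
	}
}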
> > > 
> > > I don't think copying the igb_uio interface is a good idea.
> > > What DPDK is doing with igb_uio (and indeed uio_pci_generic)
> > > is abusing the sysfs BAR access to provide unlimited
> > > access to hardware.
> > > 
> > > MSI messages are memory writes, so any generic device capable
> > > of MSI is capable of corrupting kernel memory.
> > > This means that a bug in userspace will lead to kernel memory corruption
> > > and crashes.  This is something distributions can't support.
> > > 
> > > uio_pci_generic is already abused like that, mostly
> > > because when I wrote it, I didn't add enough protections
> > > against using it with DMA-capable devices,
> > > and we can't go back and break working userspace.
> > > But at least it does not bind to VFs, all of which
> > > are capable of DMA.
> > > 
> > > The result of merging this driver will be userspace abusing the
> > > sysfs BAR access with VFs as well, and we do not want that.
> > > 
> > > 
> > > Just forwarding events is not enough to make a valid driver.
> > > What is missing is a way to access the device in a safe way.
> > > 
> > > On a more positive note:
> > > 
> > > What would be a reasonable interface? One that does the following
> > > in the kernel:
> > > 
> > > 1. initializes device rings (they can be in pinned userspace memory,
> > >    but must not be writable by userspace), and brings up the interface link
> > > 2. pins userspace memory (unless using e.g. hugetlbfs)
> > > 3. gets a request, makes sure it's valid and belongs to
> > >    the correct task, and puts it in the ring
> > > 4. in the reverse direction, notifies userspace when buffers
> > >    are available in the ring
> > > 5. notifies userspace about MSI (what this driver does)
> > > 
> > > What userspace can be allowed to do:
> > > 
> > > 	format requests (e.g. transmit, receive) in userspace
> > > 	read ring contents
> > > 
> > > What userspace can't be allowed to do:
> > > 
> > > 	access BAR
> > > 	write rings
> > > 
> > > 
> > > This means that the driver cannot be a generic one,
> > > and there will be a system call overhead when you
> > > write the ring, but that's the price you have to
> > > pay for the ability to run on systems without an IOMMU.
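
To make that concrete: a restricted interface along these lines might be
exposed to userspace roughly as follows. This is purely an illustrative
sketch; every structure, name and ioctl number is invented and does not
correspond to an existing kernel API.

/* Hypothetical UAPI for the kernel-mediated model sketched above. */
#include <linux/ioctl.h>
#include <linux/types.h>

struct ringdev_xmit_req {
	__u64 addr;	/* userspace buffer address; the kernel validates and pins it */
	__u32 len;	/* buffer length in bytes */
	__u32 flags;	/* e.g. offload hints */
};

/* Validate a transmit request and write it into the device ring (step 3). */
#define RINGDEV_XMIT		_IOW('r', 0x00, struct ringdev_xmit_req)
/* Register an eventfd signalled when buffers/interrupts arrive (steps 4-5). */
#define RINGDEV_SET_NOTIFY	_IOW('r', 0x01, int)
/* mmap() offset of the read-only ring view that userspace may parse. */
#define RINGDEV_RING_RO_OFFSET	0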
> > > 
> > 
> > 
> > The device specific parts can be taken from John Fastabend's patches
> > BTW:
> > 
> > https://patchwork.ozlabs.org/patch/396713/
> > 
> > IIUC what was missing there was exactly the memory protection
> > we are looking for here.
> 
> The bifurcated drivers are interesting from an architecture
> point of view, but do nothing to solve the immediate use case.
> The problem is not in bare metal environments; most of those already have an IOMMU.
> The issues are in environments like VMware with SR-IOV or vmxnet3,
> neither of which is really helped by a bifurcated driver or VFIO.

Two points I tried to make (and apparently failed, so I'm trying again,
more verbosely):
- bifurcated drivers do DMA into both kernel and userspace
  memory from the same PCI address (bus/dev/fn). As the IOMMU uses this
  source address to validate accesses, there is no way to
  have the IOMMU prevent userspace from accessing kernel memory.
  If you are prepared to use dynamic mappings for kernel
  memory, it might be possible to limit the harm that userspace
  can do, but this will slow down kernel networking
  (changing IOMMU mappings is expensive) and userspace will
  likely be able to at least disrupt kernel networking.
  So what I am discussing might still have value there.

- bifurcated drivers have code to bring up the link and map rings into
  userspace (they also map other rings into the kernel, and tweak the RX filter
  in hardware, which might not be necessary for this use case).
  What I proposed above can use that code, with
  the twist that the RX ring is made read-only for userspace, and a system call
  to safely copy from userspace into the ring is supported.
  In other words, this is the device-specific in-kernel part
  that you wanted to use; it will only need some tweaks.
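
To illustrate that split, with the same invented names as in the sketch
earlier in this mail: the rings are mapped read-only, and every ring write
goes through the kernel.

#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <fcntl.h>
#include <stdint.h>

struct ringdev_xmit_req {		/* same hypothetical layout as before */
	uint64_t addr;
	uint32_t len;
	uint32_t flags;
};
#define RINGDEV_XMIT		_IOW('r', 0x00, struct ringdev_xmit_req)
#define RINGDEV_RING_RO_OFFSET	0
#define RING_MAP_SIZE		4096	/* assumed size of the ring mapping */

int main(void)
{
	int fd = open("/dev/ringdev0", O_RDWR);	/* placeholder device node */
	if (fd < 0)
		return 1;

	/* PROT_READ only: userspace can parse descriptors but never corrupt them. */
	const void *rx_ring = mmap(NULL, RING_MAP_SIZE, PROT_READ, MAP_SHARED,
				   fd, RINGDEV_RING_RO_OFFSET);
	if (rx_ring == MAP_FAILED)
		return 1;
	(void)rx_ring;			/* a real consumer would poll descriptors here */

	static char pkt[2048];
	struct ringdev_xmit_req req = {
		.addr  = (uint64_t)(uintptr_t)pkt,
		.len   = sizeof(pkt),
		.flags = 0,
	};
	/* One system call per transmit: the cost of running without an IOMMU. */
	return ioctl(fd, RINGDEV_XMIT, &req) < 0 ? 1 : 0;
}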

-- 
MST
