Message-ID: <20190913110544.htmslqt4yzejugs4@sirius.home.kraxel.org>
Date:   Fri, 13 Sep 2019 13:05:44 +0200
From:   Gerd Hoffmann <kraxel@...hat.com>
To:     Tomasz Figa <tfiga@...omium.org>
Cc:     David Airlie <airlied@...ux.ie>, Daniel Vetter <daniel@...ll.ch>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        virtualization@...ts.linux-foundation.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        stevensd@...omium.org,
        Stéphane Marchesin <marcheu@...omium.org>,
        Zach Reizner <zachr@...omium.org>,
        Keiichi Watanabe <keiichiw@...omium.org>,
        Pawel Osciak <posciak@...omium.org>
Subject: Re: [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API

  Hi,

> > No.  DMA master address space in virtual machines is pretty much the
> > same as it is on physical machines.  So, on x86 without iommu, it is
> > identical to (guest) physical address space.  You can't re-define it
> > like that.
> 
> That's not true. Even on x86 without iommu the DMA address space can
> be different from the physical address space.

On a standard pc (like the ones emulated by qemu) that is the case:
dma addresses and (guest) physical addresses are identical.  It's
different on !x86, and it also changes when an iommu is present.

But that is not the main point here.  The point is the dma master
address space already has a definition, and you can't simply change that.
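
To make that concrete: a driver never computes dma addresses itself, it
gets them from the dma api, and the result need not match the physical
address (sketch, error handling omitted):

    /* buf was kmalloc'ed, dev is the dma master device */
    dma_addr_t daddr = dma_map_single(dev, buf, size, DMA_TO_DEVICE);

    /* daddr equals virt_to_phys(buf) only on machines without
     * iommu/swiotlb; drivers must not assume that */

    dma_unmap_single(dev, daddr, size, DMA_TO_DEVICE);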

> That could still be just a simple addition/subtraction of a constant,
> but still, the two are explicitly defined without any guarantee that a
> simple mapping between one and the other exists.

Sure.

> "A CPU cannot reference a dma_addr_t directly because there may be
> translation between its physical
> address space and the DMA address space."

Also note that dma address space is device-specific.  In case an iommu
is present in the system you can have *multiple* dma address spaces,
separating (groups of) devices from each other.  So passing a dma
address from one device to another isn't going to work.
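
Quick illustration (sketch only, dev_a/dev_b are made-up placeholders):

    /* same page, mapped for two devices sitting in different
     * iommu domains */
    dma_addr_t a = dma_map_page(dev_a, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
    dma_addr_t b = dma_map_page(dev_b, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);

    /* a and b are unrelated values; handing 'a' to dev_b is a bug */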

> > > However, we could as well introduce a separate DMA address
> > > space if resource handles are not the right way to refer to the memory
> > > from other virtio devices.
> >
> > s/other virtio devices/other devices/
> >
> > dma-bufs are for buffer sharing between devices, not limited to virtio.
> > You can't re-define that in some virtio-specific way.
> >
> 
> We don't need to limit this to virtio devices.  In fact, I foresee
> this having a use case with the emulated USB host controller, which
> isn't a virtio device.

What exactly?

> That said, I deliberately referred to virtio to keep the scope of the
> problem under control.  If there is a solution that could work without
> such an assumption, I'm more than open to discussing it, of course.

But it might lead to taking virtio-specific (or virtualization-specific)
shortcuts which will hurt in the long run ...

> As per my understanding of the DMA address, anything that lets the DMA
> master access the target memory would qualify and there would be no
> need for an IOMMU in between.

Yes.  But that DMA address is already defined by the platform, you can't
freely re-define it.  Well, you sort-of can when you design your own
virtual iommu for qemu.  But even then you can't just pass around magic
cookies masquerading as dma addresses.  You have to make sure they
actually form an address space, without buffers overlapping, ...
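
In other words whoever hands out those addresses has to manage them
like an allocator would.  Toy sketch (all names made up):

    static u64 viommu_next_iova = VIOMMU_IOVA_BASE;  /* made-up base */

    /* plain bump allocator: returned ranges never overlap */
    static u64 viommu_alloc_iova(size_t size)
    {
            u64 iova = viommu_next_iova;

            viommu_next_iova += PAGE_ALIGN(size);
            return iova;
    }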

> Exactly. The very specific first scenario that we want to start with
> is allocating host memory through virtio-gpu and using that memory
> both as output of a video decoder and as input (texture) to Virgil3D.
> The memory needs to be specifically allocated by the host as only the
> host can know the requirements for memory allocation of the video
> decode accelerator hardware.

So you probably have some virtio-video-decoder.  You allocate a gpu
buffer, export it as dma-buf, import it into the decoder, then let the
video decoder render to it.  Right?
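
In kernel terms that would be the usual dma-buf dance, something like
(sketch, decoder_dev stands in for whatever the decoder driver has):

    struct dma_buf_attachment *att;
    struct sg_table *sgt;

    att = dma_buf_attach(dmabuf, decoder_dev);          /* importer side */
    sgt = dma_buf_map_attachment(att, DMA_FROM_DEVICE); /* get dma addrs */

    /* ... decoder writes the frames ... */

    dma_buf_unmap_attachment(att, sgt, DMA_FROM_DEVICE);
    dma_buf_detach(dmabuf, att);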

Using dmabufs makes sense for sure.  But we need a separate field in
struct dma_buf for an (optional) host dmabuf identifier; we can't just
hijack the dma address.
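
Something along these lines (sketch only, field name invented here):

    struct dma_buf {
            ...
            /* optional, filled in by exporters which have one,
             * 0 means "no host identifier" */
            u64 host_handle;
    };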

cheers,
  Gerd
