Message-ID: <YCFxiTB//Iz6aIhk@Konrads-MacBook-Pro.local>
Date: Mon, 8 Feb 2021 12:14:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Christoph Hellwig <hch@....de>
Cc: Martin Radev <martin.b.radev@...il.com>, m.szyprowski@...sung.com,
robin.murphy@....com, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, joro@...tes.org,
kirill.shutemov@...ux.intel.com, thomas.lendacky@....com,
robert.buhren@...t.tu-berlin.de, file@...t.tu-berlin.de,
mathias.morbitzer@...ec.fraunhofer.de,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org
Subject: Re: [PATCH] swiotlb: Validate bounce size in the sync/unmap path
On Fri, Feb 05, 2021 at 06:58:52PM +0100, Christoph Hellwig wrote:
> On Wed, Feb 03, 2021 at 02:36:38PM -0500, Konrad Rzeszutek Wilk wrote:
> > > So what? If you guys want to provide a new capability you'll have to do
> > > work. And designing a new protocol based around the fact that the
> > > hardware/hypervisor is not trusted and a copy is always required makes
> > > a lot of more sense than throwing in band aids all over the place.
> >
> > If you don't trust the hypervisor, what would this capability be in?
>
> Well, they don't trust the hypervisor to not attack the guest somehow,
> except through the data read. I never really understood the concept,
> as it leaves too many holes.
>
> But the point is that these schemes want to force bounce buffering
> because they think it is more secure. And if that is what you want
> you'd better have a protocol built around the fact that each I/O needs
> to use bounce buffers, so you make those buffers the actual shared
> memory used for communication, and build the protocol around it.
Right. That is what the SWIOTLB pool ends up being: it is allocated at
boot, and that is when the guest tells the hypervisor that these pages
are shared and clear-text.
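
(Roughly like the sketch below - not the exact kernel code, just to
illustrate the boot-time handshake. swiotlb_pool_share_with_host() is a
made-up name; set_memory_decrypted() is the real x86 helper that SEV
guests use to flip pages to shared.)

	#include <linux/mm.h>
	#include <linux/set_memory.h>
	#include <linux/string.h>

	/*
	 * Mark the bounce pool as shared/clear-text with the hypervisor
	 * once, at boot, so every later bounce copy lands in memory the
	 * host is already allowed to read.
	 */
	static void __init swiotlb_pool_share_with_host(void *vaddr, size_t bytes)
	{
		unsigned long nr_pages = PAGE_ALIGN(bytes) >> PAGE_SHIFT;

		/* Clear the encryption attribute so the host can see it. */
		set_memory_decrypted((unsigned long)vaddr, nr_pages);

		/* Never leak stale (previously encrypted) contents. */
		memset(vaddr, 0, bytes);
	}
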
> E.g. you don't force the ridiculous NVMe PRP offset rules on the block
> layer just to make a complicated swiotlb allocation that needs to
> preserve the alignment just to do I/O. But instead you have a trivial
I agree that NVMe is being silly. It could have allocated a coherent
pool, used that, and done its own offsets within it. That would in
essence carve out a static pool within the SWIOTLB one.
TTM does that - it has its own DMA machinery on top of the DMA API to
deal with passing buffers from one application to another and the fun
of keeping track of all that.
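
Something like this is what I mean by carving out a static pool
(illustrative sketch only - the nvme_carve_* names are made up): grab
one coherent region up front and hand out fixed, suitably aligned slots
by offset, instead of forcing the alignment rules on every swiotlb
mapping.

	#include <linux/dma-mapping.h>
	#include <linux/bitops.h>

	#define CARVE_SLOT_SIZE	4096
	#define CARVE_NR_SLOTS	256

	struct nvme_carve_pool {
		void		*cpu;
		dma_addr_t	dma;
		unsigned long	used[BITS_TO_LONGS(CARVE_NR_SLOTS)];
	};

	/* One big coherent allocation at probe time. */
	static int nvme_carve_pool_init(struct device *dev,
					struct nvme_carve_pool *p)
	{
		p->cpu = dma_alloc_coherent(dev,
					    CARVE_SLOT_SIZE * CARVE_NR_SLOTS,
					    &p->dma, GFP_KERNEL);
		return p->cpu ? 0 : -ENOMEM;
	}

	/* Hand out an aligned slot by offset within that region. */
	static void *nvme_carve_get(struct nvme_carve_pool *p, dma_addr_t *dma)
	{
		unsigned long slot = find_first_zero_bit(p->used, CARVE_NR_SLOTS);

		if (slot >= CARVE_NR_SLOTS)
			return NULL;
		set_bit(slot, p->used);
		*dma = p->dma + slot * CARVE_SLOT_SIZE;
		return p->cpu + slot * CARVE_SLOT_SIZE;
	}
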
> ring buffer or whatever because you know I/O will be copied anyway
> and none of all the hard work higher layers do to make the I/O suitable
> for a normal device apply.
I lost you here. Sorry, are you saying we should have a simple ring
protocol (like NVMe has), where the ring entries (SG or DMA phys
addresses) are statically allocated, and whenever the NVMe driver gets
data from user-space it would copy it in there?
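
If so, the mental model I have is something like this (made-up names,
just to check I follow): a ring whose slots live entirely in the shared
bounce region, so submitting an I/O is always "copy the payload into the
next free slot".

	#include <linux/types.h>
	#include <linux/string.h>
	#include <linux/errno.h>

	#define RING_SLOTS	64

	struct shared_ring_slot {
		u32	len;
		u8	data[4096 - sizeof(u32)];
	};

	struct shared_ring {
		u32			head;	/* next slot the guest fills   */
		u32			tail;	/* next slot the device drains */
		struct shared_ring_slot	slot[RING_SLOTS];
	};

	static int shared_ring_submit(struct shared_ring *r,
				      const void *buf, u32 len)
	{
		u32 idx = r->head % RING_SLOTS;

		if (len > sizeof(r->slot[idx].data))
			return -EINVAL;
		if (r->head - r->tail == RING_SLOTS)	/* ring full */
			return -EBUSY;

		/* The unavoidable bounce copy into shared memory. */
		memcpy(r->slot[idx].data, buf, len);
		r->slot[idx].len = len;
		r->head++;				/* then kick the device */
		return 0;
	}
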