Message-ID: <20240108153818.GK50406@nvidia.com>
Date: Mon, 8 Jan 2024 11:38:18 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Yan Zhao <yan.y.zhao@...el.com>, wanpengli@...cent.com,
kvm@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, kraxel@...hat.com, maz@...nel.org,
joro@...tes.org, zzyiwei@...gle.com, yuzenghui@...wei.com,
olvaffe@...il.com, kevin.tian@...el.com, suzuki.poulose@....com,
alex.williamson@...hat.com, yongwei.ma@...el.com,
zhiyuan.lv@...el.com, gurchetansingh@...omium.org,
jmattson@...gle.com, zhenyu.z.wang@...el.com, seanjc@...gle.com,
ankita@...dia.com, oliver.upton@...ux.dev, james.morse@....com,
pbonzini@...hat.com, vkuznets@...hat.com
Subject: Re: [PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices

On Mon, Jan 08, 2024 at 04:25:02PM +0100, Daniel Vetter wrote:
> On Mon, Jan 08, 2024 at 10:02:50AM -0400, Jason Gunthorpe wrote:
> > On Mon, Jan 08, 2024 at 02:02:57PM +0800, Yan Zhao wrote:
> > > On Fri, Jan 05, 2024 at 03:55:51PM -0400, Jason Gunthorpe wrote:
> > > > On Fri, Jan 05, 2024 at 05:12:37PM +0800, Yan Zhao wrote:
> > > > > This series allows user space to notify KVM of noncoherent DMA status so as
> > > > > to let KVM honor guest memory types in specified memory slot ranges.
> > > > >
> > > > > Motivation
> > > > > ===
> > > > > A virtio GPU device may want to configure GPU hardware to work in
> > > > > noncoherent mode, i.e. some of its DMAs do not snoop CPU caches.
> > > >
> > > > Does this mean some DMA reads do not snoop the caches, or does it
> > > > include DMA writes not synchronizing the caches too?
> > > Neither DMA reads nor DMA writes are snooped.
> >
> > Oh that sounds really dangerous.
>
> So if this is an issue, then we might already have a problem, because with
> many devices it's entirely up to the device programming whether the I/O
> snoops or not. So the moment you pass such a device to a guest, whether
> there's explicit support for non-coherent DMA or not, you have a
> problem.
No, the IOMMUs (except Intel's, and only for Intel integrated GPUs, IIRC)
prohibit the use of non-coherent DMA from a VM entirely.

Eg AMD systems 100% block non-coherent DMA in VMs at the IOMMU level.
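
For context, the existing wiring looks roughly like this (a simplified
sketch, not the real call chain; device_iommu_capable() and
kvm_arch_register_noncoherent_dma() are the actual kernel interfaces,
the helper around them is made up for illustration):

  #include <linux/iommu.h>
  #include <linux/kvm_host.h>

  /*
   * Sketch only: if the IOMMU can enforce snooping for the device
   * (eg AMD's force-coherent PTE bit), non-snooped DMA never reaches
   * guest memory and guest memory types don't matter. Otherwise KVM
   * must be told, so it can start honoring guest memory types.
   */
  static void example_update_coherency(struct device *dev, struct kvm *kvm)
  {
          if (device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY))
                  return; /* IOMMU blocks non-snooped DMA */

          /* x86 KVM stops ignoring guest PAT/MTRRs for this VM */
          kvm_arch_register_noncoherent_dma(kvm);
  }
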
> _If_ there is a fundamental problem. I'm not sure of that, because my
> assumption was that at most the guest shoots itself, and the data
> corruption doesn't go any further the moment the hypervisor does the
> DMA/IOMMU unmapping.
Who fixes the cache on the unmapping? I didn't see anything...
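
To make that concrete: on x86 I'd expect something along these lines in
the unmap path (a hypothetical sketch, not existing code;
clflush_cache_range() is the real x86 primitive):

  #include <asm/cacheflush.h>

  /*
   * Hypothetical fixup: before a page a non-snooping device may have
   * written is unmapped and recycled, write back/invalidate the CPU
   * cache lines covering it, otherwise a stale dirty line can later
   * overwrite the device-written data (or a reader can see stale data).
   */
  static void flush_before_reuse(void *vaddr, unsigned int size)
  {
          clflush_cache_range(vaddr, size);
  }
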
Jason