Message-ID: <ZZwTzsZqx-XSTKma@phenom.ffwll.local>
Date: Mon, 8 Jan 2024 16:25:02 +0100
From: Daniel Vetter <daniel@...ll.ch>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Yan Zhao <yan.y.zhao@...el.com>, wanpengli@...cent.com,
	kvm@...r.kernel.org, dri-devel@...ts.freedesktop.org,
	linux-kernel@...r.kernel.org, kraxel@...hat.com, maz@...nel.org,
	joro@...tes.org, zzyiwei@...gle.com, yuzenghui@...wei.com,
	olvaffe@...il.com, kevin.tian@...el.com, suzuki.poulose@....com,
	alex.williamson@...hat.com, yongwei.ma@...el.com,
	zhiyuan.lv@...el.com, gurchetansingh@...omium.org,
	jmattson@...gle.com, zhenyu.z.wang@...el.com, seanjc@...gle.com,
	ankita@...dia.com, oliver.upton@...ux.dev, james.morse@....com,
	pbonzini@...hat.com, vkuznets@...hat.com
Subject: Re: [PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices

On Mon, Jan 08, 2024 at 10:02:50AM -0400, Jason Gunthorpe wrote:
> On Mon, Jan 08, 2024 at 02:02:57PM +0800, Yan Zhao wrote:
> > On Fri, Jan 05, 2024 at 03:55:51PM -0400, Jason Gunthorpe wrote:
> > > On Fri, Jan 05, 2024 at 05:12:37PM +0800, Yan Zhao wrote:
> > > > This series allows user space to notify KVM of noncoherent DMA status so
> > > > as to let KVM honor guest memory types in specified memory slot ranges.
> > > > 
> > > > Motivation
> > > > ===
> > > > A virtio GPU device may want to configure GPU hardware to work in
> > > > noncoherent mode, i.e. some of its DMAs do not snoop CPU caches.
> > > 
> > > Does this mean some DMA reads do not snoop the caches or does it
> > > include DMA writes not synchronizing the caches too?
> > Both DMA reads and writes are not snooped.
> 
> Oh that sounds really dangerous.

If this is an issue, then we might already have a problem, because with
many devices it's entirely up to the device programming whether the I/O
snoops or not. So the moment you pass such a device to a guest, you have a
problem, whether or not there's explicit support for non-coherent DMA.

_If_ there is a fundamental problem, that is. I'm not sure there is,
because my assumption was that at most the guest shoots itself, and the
data corruption doesn't propagate any further once the hypervisor does the
DMA/IOMMU unmapping.

Also, there's a pile of x86 devices where this very much applies; x86
being DMA-coherent is not really the whole story.

Cheers, Sima

> > > > This is generally for performance considerations.
> > > > On certain platforms, GFX performance can improve by 20+% with DMAs
> > > > going through the noncoherent path.
> > > > 
> > > > This noncoherent DMA mode works in the following sequence:
> > > > 1. Host backend driver programs hardware not to snoop memory of target
> > > >    DMA buffer.
> > > > 2. Host backend driver instructs the guest frontend driver to program
> > > >    the guest PAT to WC for the target DMA buffer.
> > > > 3. Guest frontend driver writes to the DMA buffer without clflush
> > > >    operations.
> > > > 4. Hardware does noncoherent DMA to the target buffer.
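
For illustration, a minimal guest-side sketch of steps 2-4 (the helper name
is made up, but ioremap_wc() is the standard kernel API for write-combining
mappings):

    /* Hypothetical guest frontend helper: map the target DMA buffer as
     * write-combining, so the guest PAT marks it WC and plain stores reach
     * memory directly; no clflush is needed before the device's
     * non-snooping DMA reads the buffer.
     */
    #include <linux/io.h>

    static void __iomem *map_noncoherent_dma_buffer(phys_addr_t base,
                                                    size_t size)
    {
            return ioremap_wc(base, size);
    }
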
> > > > 
> > > > In this noncoherent DMA mode, both guest and hardware regard a DMA buffer
> > > > as not cached. So, if KVM forces the effective memory type of this DMA
> > > > buffer to be WB, hardware DMA may read incorrect data and cause
> > > > miscellaneous failures.
> > > 
> > > I don't know all the details, but a big concern would be that the
> > > caches remain fully coherent with the underlying memory at any point
> > > where kvm decides to revoke the page from the VM.
> > Ah, you mean, for page migration, the content of the page may not be copied
> > correctly, right?
> 
> Not just migration. Any point where KVM revokes the page from the
> VM. I.e., just tearing down the VM still has to make the cache coherent
> with physical memory, or there may be problems.
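
To make that concrete, a userspace-flavoured sketch of what "make the cache
coherent with physical" means on x86 before a page is handed back (64-byte
cache lines assumed; in-kernel code would use clflush_cache_range() instead):

    /* Write every cache line of a buffer back to physical memory before
     * the page gets reused; assumes 64-byte cache lines (x86).
     */
    #include <stddef.h>

    static void flush_to_physical(void *buf, size_t len)
    {
            char *p = buf;
            size_t i;

            __asm__ volatile("mfence" ::: "memory");
            for (i = 0; i < len; i += 64)
                    __asm__ volatile("clflush %0" : "+m" (p[i]));
            __asm__ volatile("mfence" ::: "memory");
    }
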
>  
> > Currently in x86, we have 2 ways to let KVM honor guest memory types:
> > 1. through KVM memslot flag introduced in this series, for virtio GPUs, in
> >    memslot granularity.
> > 2. through increasing the noncoherent DMA count, as is done in VFIO, for
> >    Intel GPU passthrough, for all guest memory.
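
For option 1, userspace might end up doing something like this
(KVM_MEM_NON_COHERENT_DMA is a placeholder for whatever flag name this
series settles on; the rest is the existing KVM_SET_USER_MEMORY_REGION API):

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Placeholder for the memslot flag proposed in this series. */
    #define KVM_MEM_NON_COHERENT_DMA (1U << 2)

    static int mark_slot_noncoherent(int vm_fd, uint32_t slot, uint64_t gpa,
                                     uint64_t size, void *hva)
    {
            struct kvm_userspace_memory_region region = {
                    .slot            = slot,
                    .flags           = KVM_MEM_NON_COHERENT_DMA,
                    .guest_phys_addr = gpa,
                    .memory_size     = size,
                    .userspace_addr  = (uint64_t)(uintptr_t)hva,
            };

            /* KVM would honor guest memory types only within this slot. */
            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }
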
> 
> And where does all this fix up the coherency problem?
> 
> > This page migration issue should not arise for virtio GPU, as both host
> > and guest are synced to use the same memory type, and the pages are not
> > anonymous pages anyway.
> 
> The guest isn't required to do this, so it can force the cache to
> become incoherent.
> 
> > > If you allow an incoherence of cache != physical then it opens a
> > > security attack where the observed content of memory can change when
> > > it should not.
> > 
> > In this case, will this security attack impact other guests?
> 
> It impacts the hypervisor, potentially. It depends...
> 
> Jason

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
