Date:   Thu, 13 Apr 2023 10:19:12 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Marc Zyngier <maz@...nel.org>
Cc:     ankita@...dia.com, alex.williamson@...hat.com,
        naoya.horiguchi@....com, oliver.upton@...ux.dev,
        aniketa@...dia.com, cjia@...dia.com, kwankhede@...dia.com,
        targupta@...dia.com, vsethi@...dia.com, acurrid@...dia.com,
        apopple@...dia.com, jhubbard@...dia.com, danw@...dia.com,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 0/6] Expose GPU memory as coherently CPU accessible

On Thu, Apr 13, 2023 at 10:52:10AM +0100, Marc Zyngier wrote:

> > IMHO, from the mm perspective, the bug is using pfn_is_map_memory() to
> > determine the cacheability or device memory status of a PFN in a
> > VMA. That is not what that API is for.
> 
> It is the right API for what KVM/arm64 has been designed for. RAM gets
> a normal memory mapping, and everything else gets device. 

The MM has a pretty flexible definition of "RAM" these days. For
instance, I don't think pfn_is_map_memory() works correctly for all
the cases we can do now with devm_memremap_pages().
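
Roughly, the arm64 stage-2 fault path keys everything off that one test
(paraphrased and simplified from arch/arm64/kvm/mmu.c; not the exact
code):

  static bool kvm_is_device_pfn(kvm_pfn_t pfn)
  {
          return !pfn_is_map_memory(pfn);
  }

  /* In user_mem_abort(): anything that fails the test above is mapped
   * Device-nGnRE at stage-2, regardless of the attributes the VMA
   * carries on the hypervisor side. */
  if (kvm_is_device_pfn(pfn))
          prot |= KVM_PGTABLE_PROT_DEVICE;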

> That may not suit your *new* use case, but that doesn't make it
> broken.

I've now spent a lot of time working on improving VFIO and the related
ecosystem. I would like to get to a point where we have a consistent
VFIO experience on all the platforms.

Currently, real NIC and GPU HW with wide VFIO deployments on x86 do
not work fully correctly on KVM/arm64. Write-combining in the VM is
the big problem for existing HW, and this new CXL-like stuff has
problems with cacheability.
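
To make the write-combining case concrete: the Arm architecture
combines stage-1 and stage-2 attributes by taking the more restrictive
of the two, so a guest driver that asks for a WC mapping never
actually gets one. A hypothetical guest-side sketch (bar_phys and
bar_len are placeholders):

  /* Guest driver: request a write-combined mapping (stage-1 Normal-NC). */
  void __iomem *bar = ioremap_wc(bar_phys, bar_len);

  /* The guest intends these stores to merge into one bus transaction,
   * but combined with a Device-nGnRE stage-2 the effective attribute
   * is Device-nGnRE, so each write goes out on its own: WC silently
   * never happens. */
  writel_relaxed(0x1, bar + 0x0);
  writel_relaxed(0x2, bar + 0x4);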

I don't really care what we call it, as long as we can agree that VFIO
devices not working fully in VMs is a problem that should be fixed.

> Only if you insist on not losing coherency between the two aliases
> used at the same time (something that would seem pretty improbable).

This is VFIO, so there is DMA involved. My understanding has been that
the SMMU is allowed to pull data out of the cache. So if the
hypervisor's cacheable side has pulled a line into the cache and the
VM's uncached side has dirtied the physical memory, is the SMMU
allowed to read the stale cache data? If so, the VM will experience
data corruption on its DMAs.
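
Spelled out as a sequence (schematic only; hyp_va and guest_uc_va are
hypothetical aliases of the same PFN, which is exactly the setup no
portable code can create):

  u32 v = READ_ONCE(*hyp_va);       /* 1. cacheable read: the line is
                                     *    now in the CPU cache        */
  WRITE_ONCE(*guest_uc_va, 0xdead); /* 2. uncached guest store goes
                                     *    straight to DRAM            */
  /* 3. a device DMA read through the SMMU may snoop the CPU cache and
   *    return the stale line from step 1 instead of the value written
   *    in step 2. */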

With VFIO live migration I expect the hypervisor qemu side to be
actively reading from the cacheable memory while the VM is running in
order to migrate it, so it does not seem improbable.

Jason
