Message-ID: <20210908171241.63b0b89c.alex.williamson@redhat.com>
Date:   Wed, 8 Sep 2021 17:12:41 -0600
From:   Alex Williamson <alex.williamson@...hat.com>
To:     Kishon Vijay Abraham I <kishon@...com>
Cc:     Cornelia Huck <cohuck@...hat.com>, <kvm@...r.kernel.org>,
        Ohad Ben-Cohen <ohad@...ery.com>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Mathieu Poirier <mathieu.poirier@...aro.org>,
        linux-remoteproc <linux-remoteproc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Vutla, Lokesh" <lokeshvutla@...com>,
        Vignesh Raghavendra <vigneshr@...com>,
        "Strashko, Grygorii" <grygorii.strashko@...com>
Subject: Re: [QUERY] Flushing cache from userspace using VFIO

Hi Kishon,

On Mon, 6 Sep 2021 21:22:15 +0530
Kishon Vijay Abraham I <kishon@...com> wrote:

> Hi Alex, Cornelia,
> 
> I'm trying to see if I can use VFIO (Virtual Function I/O [1]) for
> communication between two cores within the same SoC. I've tried to put
> down a picture like the one below, which shows communication between
> ARM64 (running Linux) and CORTEX R5 (running firmware). It uses
> rpmsg/remoteproc for the control messages, and the actual data buffers
> are directly accessed from userspace. The location of the data buffers
> can be communicated to userspace via rpmsg_vfio (which would have to
> be built as a rpmsg endpoint).
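A hypothetical sketch of the userspace side of that proposal, purely to
pin it down; the device node name and the message layout below are
assumptions, since rpmsg_vfio doesn't exist yet:

/* Hypothetical receiver for the proposed rpmsg_vfio endpoint: learn
 * the location of the shared data buffer over rpmsg.  The node name
 * /dev/rpmsg0 and the struct layout are made up for illustration. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

struct buf_location {			/* made-up control message */
	uint64_t phys_addr;
	uint64_t size;
};

int main(void)
{
	struct buf_location loc;
	int fd = open("/dev/rpmsg0", O_RDWR);	/* rpmsg char endpoint */

	if (fd < 0 || read(fd, &loc, sizeof(loc)) != sizeof(loc))
		return 1;
	printf("data buffer at %#llx, %llu bytes\n",
	       (unsigned long long)loc.phys_addr,
	       (unsigned long long)loc.size);
	return 0;
}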

In the vfio model, the user gets access to a device that's a member of
an IOMMU isolation group whose IOMMU context is managed by a vfio
container.  What "device" is the user getting access to here, and is an
IOMMU involved?
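For reference, the usual vfio flow looks roughly like this (a minimal
sketch; the group number and device name are placeholders):

/* Minimal sketch of the vfio model: the user opens a container, adds
 * an IOMMU group to it, and only then gets a device fd.  Group 26 and
 * device "0000:06:0d.0" are placeholders, not real values. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
	int container, group, device;
	struct vfio_group_status status = { .argsz = sizeof(status) };

	container = open("/dev/vfio/vfio", O_RDWR);
	if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
		return 1;

	/* The group is only viable once all of its devices are bound
	 * to vfio drivers. */
	group = open("/dev/vfio/26", O_RDWR);
	ioctl(group, VFIO_GROUP_GET_STATUS, &status);
	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
		return 1;

	/* Attach the group to the container, then pick an IOMMU model. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Only now can the user get at the device itself. */
	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
	printf("device fd: %d\n", device);
	return 0;
}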

> My question is: after the userspace application on the ARM64 writes to
> a buffer in the SYSTEM MEMORY, can it flush it (through a VFIO ioctl)
> before handing the buffer to the CORTEX R5?

No such vfio ioctl currently exists.  You're now starting to get into
KVM territory if userspace requires elevated privileges to flush memory.
See, for example, the handling of wbinvd (write-back and invalidate) in
x86 KVM, which depends on whether a device is assigned and on the
coherency model supported by the IOMMU.  vfio only facilitates isolated
access to the device.
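The userspace-visible piece of that coherency model is the extension
check on the container; a short sketch, assuming a container fd set up
as in the sketch above:

/* How KVM-style code can discover whether the IOMMU enforces cache
 * coherency for DMA (and hence whether wbinvd must be honored).
 * Assumes "container" is an open vfio container fd with an IOMMU
 * model already set. */
#include <sys/ioctl.h>
#include <linux/vfio.h>

int iommu_is_cache_coherent(int container)
{
	/* VFIO_CHECK_EXTENSION returns 1 if the extension is present. */
	return ioctl(container, VFIO_CHECK_EXTENSION, VFIO_DMA_CC_IOMMU) == 1;
}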
 
> If it's implemented within the kernel, we either use
> dma_alloc_coherent() to allocate coherent memory, or the streaming DMA
> APIs like dma_map_single()/dma_unmap_single() to flush/invalidate the
> cache.
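For comparison, a sketch of that in-kernel streaming-DMA approach; the
function names below are illustrative, and "dev"/"buf" stand in for a
real struct device and kernel buffer:

/* Hand a buffer to a remote processor using the streaming DMA API,
 * which performs any cache flush needed for a non-coherent device as
 * part of the mapping. */
#include <linux/dma-mapping.h>

dma_addr_t share_buffer_with_r5(struct device *dev, void *buf, size_t len)
{
	/* Flushes CPU caches before the device reads the buffer. */
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return 0;
	return dma;
}

void reclaim_buffer(struct device *dev, dma_addr_t dma, size_t len)
{
	/* For DMA_FROM_DEVICE this would instead invalidate stale lines. */
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
}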

In vfio, DMA is mapped to userspace buffers.  The user allocates a
buffer and maps it for device access, and the IOMMU restricts the device
to accessing only those buffers.  Device memory in vfio is accessed via
regions on the device file descriptor; a device-specific region could
allow a user to mmap that buffer, but the fact that this buffer actually
lives in host memory in your model, and requires DMA programming for the
Cortex core, makes that really troubling.
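To make the contrast concrete, this is roughly how a vfio user maps one
of its own buffers for device DMA (again a sketch; the container fd,
size, and IOVA are placeholders):

/* Allocate a userspace buffer and make it visible to the device
 * through the IOMMU.  "container" is a configured type1 vfio
 * container fd; the 0 IOVA is an arbitrary placeholder. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

void *map_user_buffer(int container, size_t size)
{
	struct vfio_iommu_type1_dma_map map;
	void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return NULL;

	memset(&map, 0, sizeof(map));
	map.argsz = sizeof(map);
	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	map.vaddr = (uintptr_t)buf;	/* process virtual address */
	map.iova  = 0;			/* address the device will use */
	map.size  = size;

	if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map))
		return NULL;		/* leaks buf; fine for a sketch */
	return buf;
}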

For a vfio model to work, I think userspace would need to allocate the
buffers, and the Cortex core would need to be represented as a device
that supports isolation via an IOMMU.  Otherwise I'm not sure what
benefit you're getting from vfio.
 
> I'm trying to see whether that is already supported in VFIO, and if
> not, whether it would be acceptable to implement it.

vfio provides a user with the privilege to access an isolated device.
What you're proposing could possibly be mangled to fit that model, but
it seems pretty awkward, and there are existing solutions such as KVM
for processor virtualization if userspace needs elevated privileges to
handle CPU/RAM coherency.  Thanks,

Alex
