Message-ID: <yq5afre68j8p.fsf@kernel.org>
Date: Tue, 05 Aug 2025 10:20:30 +0530
From: Aneesh Kumar K.V <aneesh.kumar@...nel.org>
To: dan.j.williams@...el.com, linux-coco@...ts.linux.dev,
kvmarm@...ts.linux.dev
Cc: linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org, aik@....com,
lukas@...ner.de, Samuel Ortiz <sameo@...osinc.com>,
Xu Yilun <yilun.xu@...ux.intel.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Suzuki K Poulose <Suzuki.Poulose@....com>,
Steven Price <steven.price@....com>,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>
Subject: Re: [RFC PATCH v1 00/38] ARM CCA Device Assignment support
<dan.j.williams@...el.com> writes:
> Aneesh Kumar K.V (Arm) wrote:
>> This patch series implements support for Device Assignment in the ARM CCA
>> architecture. The code changes are based on the Alp12 specification
>> published at [1].
>>
>> The code builds on the TSM framework patches posted at [2]. We extend
>> that framework so that TSM is now used in both the host and the guest.
>>
>> A DA workflow can be summarized as follows:
>>
>> Host:
>> step 1.
>> echo ${DEVICE} > /sys/bus/pci/devices/${DEVICE}/driver/unbind
>> echo vfio-pci > /sys/bus/pci/devices/${DEVICE}/driver_override
>> echo ${DEVICE} > /sys/bus/pci/drivers_probe
>>
>> step 2.
>> echo 1 > /sys/bus/pci/devices/$DEVICE/tsm/connect
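
As an aside, steps 1 and 2 above combine naturally into a small helper
script. A sketch, assuming the device's full BDF is passed as the first
argument (0000:01:00.0 below is an example value, not one from the
series):

    #!/bin/sh
    # Sketch: rebind a PCI device to vfio-pci, then connect it to the
    # platform TSM. $1 must be the full BDF, e.g. 0000:01:00.0.
    DEVICE=$1

    # step 1: move the device over to the vfio-pci driver
    echo "$DEVICE" > "/sys/bus/pci/devices/$DEVICE/driver/unbind"
    echo vfio-pci > "/sys/bus/pci/devices/$DEVICE/driver_override"
    echo "$DEVICE" > /sys/bus/pci/drivers_probe

    # step 2: connect the physical device to the platform TSM
    echo 1 > "/sys/bus/pci/devices/$DEVICE/tsm/connect"
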
>
> Just for my own understanding... presumably there is no ordering
> constraint for ARM CCA between step 1 and step 2, right? I.e., the
> connect state is independent of the bind state.
>
> In the v4 PCI/TSM scheme the connect command is now:
>
> echo $tsm_dev > /sys/bus/pci/devices/$DEVICE/tsm/connect
>
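
For reference, a sketch of how $tsm_dev might be resolved; the
/sys/class/tsm location is an assumption on my part - the exact name and
path depend on the low-level TSM driver:

    # hypothetical: pick the first registered TSM class device (the
    # /sys/class/tsm path is assumed, not confirmed by the series)
    tsm_dev=$(ls /sys/class/tsm | head -n1)
    echo "$tsm_dev" > "/sys/bus/pci/devices/$DEVICE/tsm/connect"
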
>> Now in the guest we follow the steps below:
>
> I assume a significant amount of kvmtool magic happens here to get the
> TDI into a "bind capable" state; can you share that command?
>
# launch a CCA realm guest, assigning two devices through the iommufd
# vDEVICE interface
lkvm run --realm -c 2 -m 256 \
    -k /kselftest/Image -p "$KERNEL_PARAMS" \
    -d ./rootfs-guest.ext2 \
    --iommufd-vdevice \
    --vfio-pci $DEVICE1 --vfio-pci $DEVICE2
> I had been assuming that everyone was prototyping with QEMU. Not a
> problem per se, but the memory management for shared device assignment /
> bounce buffering has had quite a bit of work on the QEMU side, so I am
> just curious about the difference in approach here. Like, does kvmtool
> support operating the device in shared mode with bounce buffering and
> page conversion (shared <=> private) support? In any event, happy to see
> multiple simultaneous consumers of this new kernel infrastructure.
>
-aneesh