Message-ID: <1e61b0f21794e67fb4e87dc41fab90829d3c7cd6.camel@sipsolutions.net>
Date: Fri, 18 Mar 2022 21:09:02 +0100
From: Johannes Berg <johannes@...solutions.net>
To: Vincent Whitchurch <vincent.whitchurch@...s.com>,
Brendan Higgins <brendanhiggins@...gle.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kernel <kernel@...s.com>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"linux-um@...ts.infradead.org" <linux-um@...ts.infradead.org>,
"shuah@...nel.org" <shuah@...nel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"jic23@...nel.org" <jic23@...nel.org>,
"linux-iio@...r.kernel.org" <linux-iio@...r.kernel.org>,
"lgirdwood@...il.com" <lgirdwood@...il.com>,
"broonie@...nel.org" <broonie@...nel.org>,
"a.zummo@...ertech.it" <a.zummo@...ertech.it>,
"alexandre.belloni@...tlin.com" <alexandre.belloni@...tlin.com>,
"linux-rtc@...r.kernel.org" <linux-rtc@...r.kernel.org>,
"corbet@....net" <corbet@....net>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: Re: [RFC v1 07/10] iio: light: opt3001: add roadtest
On Fri, 2022-03-18 at 16:49 +0100, Vincent Whitchurch wrote:
>
> It should be possible, but upstream QEMU doesn't have everything that we
> need so some work is needed there. Also, of course work is needed to
> provide user space for running the tests and communicating between the
> virtual machine and the backend:
>
> - We need user space, so build scripts would need to be provided to
> cross-compile busybox and Python (and whatever libraries it needs) for
> the target architecture.
You could possibly use some nix recipes for all of this, but that's a
fairly arcane thing (we use it, but ...)
> - We also use UML's hostfs feature to make things transparent to the
> user and to avoid having to set up things like networking for
> communication between the host and the backend. I think QEMU's 9pfs
> support can be used as a rootfs too but it's not something I've
> personally tested.
That works just fine, yes. We used to do exactly this in the wireless
test suite before we switched to UML, but the switch to UML was due to
the "time-travel" feature.
https://w1.fi/cgit/hostap/tree/tests/hwsim/vm
has support for both UML and qemu/kvm.
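For reference, booting qemu with a host directory as a 9p rootfs is roughly the following (a sketch based on the documented virtio-9p options; the rootfs path, IDs and mount tag are placeholders, not anything from roadtest or hwsim):

```shell
# Hypothetical sketch: use a host directory as the guest's rootfs via 9p.
# /path/to/rootfs, fsdev0 and the mount tag are placeholders.
qemu-system-x86_64 \
    -kernel bzImage \
    -fsdev local,id=fsdev0,path=/path/to/rootfs,security_model=passthrough \
    -device virtio-9p-pci,fsdev=fsdev0,mount_tag=/dev/root \
    -append "root=/dev/root rw rootfstype=9p rootflags=trans=virtio,version=9p2000.L console=ttyS0" \
    -nographic
```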
> - We use virtio-i2c and virtio-gpio and use virtio-uml which uses the
> vhost-user API to communicate from UML to the backend. The latest
> version of QEMU has support for vhost-user-i2c, but vhost-user-gpio
> doesn't seem to have been merged yet, so work is needed on the QEMU
> side. This will also be true for other buses in the future, if they
> are implemented with new virtio devices.
>
> - For MMIO, UML has virtio-mmio which allows implementing any PCIe
> device (and by extension any platform device) outside of UML, but last
> I checked, upstream QEMU did not have something similar.
I think you have this a bit fuzzy.
The virtio_uml[.c] you speak of is the "bus" driver for virtio in UML.
Obviously, qemu has support for virtio, so you don't need those bits.
Now, virtio_uml is actually the virtio (bus) driver inside the kernel,
like you'd have virtio-mmio/virtio-pci in qemu. However, virtio_uml
doesn't implement the devices in the hypervisor, where most qemu devices
are implemented, but uses vhost-user to run the device implementation in
a separate userspace. [1]
So now we're talking about vhost-user to talk to the device, and qemu
supports this as well; in fact, the vhost-user spec is part of qemu:
https://git.qemu.org/?p=qemu.git;a=blob;f=docs/system/devices/vhost-user.rst;h=86128114fa3788a73679f0af38e141021087c828;hb=1d60bb4b14601e38ed17384277aa4c30c57925d3
https://www.qemu.org/docs/master/interop/vhost-user.html
The docs on how to use it are here:
https://www.qemu.org/docs/master/system/devices/vhost-user.html
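For illustration, hooking an external vhost-user-i2c backend up to qemu looks roughly like this (a sketch; the socket path and IDs are placeholders, and note that vhost-user requires the guest memory to be shareable with the backend, hence the memfd memory backend):

```shell
# Hypothetical sketch: connect qemu to a vhost-user-i2c backend that is
# already listening on a unix socket. Paths and IDs are placeholders.
qemu-system-x86_64 \
    -object memory-backend-memfd,id=mem,size=512M,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=vi2c,path=/tmp/vi2c.sock \
    -device vhost-user-i2c-pci,chardev=vi2c \
    ...
```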
So once you have a device implementation (regardless of whether it's for
use with any of the virtio-i2c, arch/um/drivers/virt-pci.c, virtio-gpio,
virtio-net, ... drivers) you can actually connect it to virtual machines
running as UML or in qemu.
(Actually, that's not strictly true today: it's
arch/um/drivers/virt-pci.c, and I didn't get a proper device ID assigned
etc. since it was for experimentation. If we make this more commonly
used then we should move it to drivers/pci/controller/virtio-pci.c and
actually specify it in the OASIS virtio spec. At the very least it'd
have to be possible to compile this and lib/logic_iomem.c on x86, but
that's possible. Anyway, I think PCI(e) is probably low on your list of
things ...)
> - Also, some paths in this driver need a modification to be tested
> under roadtest. It uses wait_event_timeout() with a fixed value, but
> we cannot guarantee that this constraint is met in the test
> environment since it depends on things like CPU load on the host.
>
> (Also, we use UML's "time travel" feature which essentially
> fast-forwards through idle time, so the constraint can never be met
> in practice.)
Wohoo! This makes me very happy, finally somebody else who uses it :-)
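To illustrate the point about fixed timeouts: under time-travel the
simulated clock jumps straight to the next scheduled event, so a
wait_event_timeout() deadline is reached deterministically rather than
being stretched by host load. A toy model of that scheduling behaviour
(hypothetical names, not actual UML code):

```python
# Toy model of "time-travel" scheduling: the clock fast-forwards through
# idle time to the earliest pending event. All names are made up for
# illustration; this is not the UML implementation.
import heapq

class TimeTravelClock:
    def __init__(self):
        self.now = 0
        self.events = []  # heap of (deadline, name)

    def schedule(self, delay, name):
        heapq.heappush(self.events, (self.now + delay, name))

    def run_until_idle(self):
        fired = []
        while self.events:
            # Jump directly to the next deadline -- no real time passes.
            self.now, name = heapq.heappop(self.events)
            fired.append((self.now, name))
        return fired

clock = TimeTravelClock()
clock.schedule(100, "wait_event_timeout expires")  # driver's fixed timeout
clock.schedule(250, "device interrupt arrives")    # backend responds later
print(clock.run_until_idle())
# → [(100, 'wait_event_timeout expires'), (250, 'device interrupt arrives')]
```

If the backend's response is scheduled later in simulated time than the
timeout deadline, the timeout always fires first; there is no real-time
slack for the race to come out the other way.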
[1] As an aside, you might be interested in usfstl (which you can find
at https://github.com/linux-test-project/usfstl) which is one way you
could implement the device side - though the focus here is on making a
device implementation easy while under "time-travel" mode.
If you ever want to use time-travel with multiple machines or actually
with virtio devices, it also contains the necessary controller program
to glue the entire simulation together. We use this very successfully to
test the (real but compiled for x86) wifi firmware for iwlwifi together
with the real driver actually seeing a PCIe device in UML, under time-
travel :)
johannes