Message-ID: <alpine.DEB.2.22.394.2307061449160.761183@ubuntu-linux-20-04-desktop>
Date: Thu, 6 Jul 2023 14:49:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@...nel.org>
To: Roger Pau Monné <roger.pau@...rix.com>
cc: Stefano Stabellini <sstabellini@...nel.org>,
Oleksandr Tyshchenko <olekstysh@...il.com>,
Juergen Gross <jgross@...e.com>,
Marek Marczykowski-Górecki
<marmarek@...isiblethingslab.com>,
Oleksandr Tyshchenko <Oleksandr_Tyshchenko@...m.com>,
Petr Pavlu <petr.pavlu@...e.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
vikram.garhwal@....com
Subject: Re: [PATCH 2/2] xen/virtio: Avoid use of the dom0 backend in dom0
On Thu, 6 Jul 2023, Roger Pau Monné wrote:
> On Wed, Jul 05, 2023 at 03:41:10PM -0700, Stefano Stabellini wrote:
> > On Wed, 5 Jul 2023, Roger Pau Monné wrote:
> > > On Tue, Jul 04, 2023 at 08:14:59PM +0300, Oleksandr Tyshchenko wrote:
> > > > **********
> > > >
> > > > Part 1 (intro):
> > > >
> > > > We could reuse a PCI config space register to expose the backend id.
> > > > However this solution requires a backend change (QEMU) to expose the
> > > > backend id via an emulated register for each emulated device.
> > > >
> > > > To avoid having to introduce a special config space register in all
> > > > emulated PCI devices (virtio-net, virtio-block, etc) I wonder if we
> > > > could add a special PCI config space register at the emulated PCI Root
> > > > Complex level.
> > > >
> > > > Basically the workflow would be as follows:
> > > >
> > > > - Linux recognizes the PCI Root Complex as a Xen PCI Root Complex
> > > > - Linux writes to special PCI config space register of the Xen PCI Root
> > > > Complex the PCI device id (basically the BDF)
> > > > - The Xen PCI Root Complex emulated by Xen answers by writing back to
> > > > the same location the backend id (domid of the backend)
> > > > - Linux reads back the same PCI config space register of the Xen PCI
> > > > Root Complex and learns the relevant domid
> > >
> > > IMO this seems awfully complex. I'm not familiar with the VirtIO
> > > spec, but I see there's a Vendor data capability, could we possibly
> > > expose Xen-specific information on that capability?
> >
> > That is also a possibility. Alternatively, we could use a PCI conf
> > register which is known to be unused in the Virtio spec to expose the
> > grant capability and backend domid.
>
> Capabilities don't have a fixed config space register, they are a
> linked list, and so capabilities end up at different positions
> depending on the specific device layout. The only fixed part is the
> range from [0, 0x40), and that's fully defined in the specification.
>
> Trying to define a fixed address for Xen use after the 0x3F boundary
> seems like a bad idea, as it's going to be hard to make sure that such
> address is not used on all possible devices. IMO the only way is to
> place such information in a capability, whether that's an existing
> capability or a new one I don't really know.
That seems like a good idea.
> > > > Part 2 (clarification):
> > > >
> > > > I think using a special config space register in the root complex would
> > > > not be terrible in terms of guest changes because it is easy to
> > > > introduce a new root complex driver in Linux and other OSes. The root
> > > > complex would still be ECAM compatible so the regular ECAM driver would
> > > > still work. A new driver would only be necessary if you want to be able
> > > > to access the special config space register.
> > >
> > > I'm slightly worried about this approach: we end up modifying a root
> > > complex emulation in order to avoid modifying a PCI device emulation
> > > in QEMU, and I'm not sure that's a good trade-off.
> > >
> > > Note also that different architectures will likely have different root
> > > complexes, and so you might need to modify several of them, plus then
> > > arrange the PCI layout correctly in order to have the proper hierarchy
> > > so that devices belonging to different driver domains are assigned to
> > > different bridges.
> >
> > I do think that adding something to the PCI conf register somewhere is
> > the best option because it is not dependent on ACPI and it is not
> > dependent on xenstore both of which are very undesirable.
> >
> > I am not sure where specifically is the best place. These are 4 ideas
> > we came up with:
> > 1. PCI root complex
> > 2. a register on the device itself
> > 3. a new capability of the device
> > 4. add one extra dummy PCI device for the sole purpose of exposing the
> > grants capability
> >
> >
> > Looking at the spec, there is a way to add a vendor-specific capability
> > (cap_vndr = 0x9). Could we use that? It doesn't look like it is used
> > today, Linux doesn't parse it.
>
> I did wonder the same from a quick look at the spec. However, there's
> text in the specification that says:
>
> "The driver SHOULD NOT use the Vendor data capability except for
> debugging and reporting purposes."
>
> So we would at least need to change that, because the capability would
> then be used for purposes other than debugging and reporting.
>
> Seems like a minor adjustment, so it might be worth asking upstream for
> their opinion and getting a conversation started.
Wait, wouldn't this use-case fall under "reporting"? It is exactly what
we are doing, right?
> > > >
> > > >
> > > > **********
> > > > What do you think about it? Are there any pitfalls, etc? This also requires
> > > > system changes, but at least without virtio spec changes.
> > >
> > > Why are we so reluctant to add spec changes? I understand this might
> > > take time and effort, but it's the only way IMO to build a sustainable
> > > VirtIO Xen implementation. Did we already attempt to negotiate
> > > Xen-related spec changes with Oasis, and were those refused?
> >
> > That's because spec changes can be very slow. This is a bug that we need
> > a relatively quick solution for and waiting 12-24 months for a spec
> > update is not realistic.
> >
> > I think a spec change would be best as a long term solution. We also
> > need a short term solution. The short term solution doesn't have to be
> > ideal but it has to work now.
>
> My fear with such an approach is that once a bodge is in place people
> move on to other stuff and this never gets properly fixed.
>
> I know this might not be a well-received opinion, but it would be
> better if such a bodge were kept in each interested party's patch queue
> for the time being, until a proper solution is implemented. That way
> there's an incentive for the parties to properly fix it upstream.
Unfortunately we are in the situation where we have an outstanding
upstream bug, so we have to take action one way or the other.