Message-ID: <20200627220852.13b3fa7f@x1.home>
Date: Sat, 27 Jun 2020 22:08:52 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: "Wang, Haiyue" <haiyue.wang@...el.com>
Cc: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"maxime.coquelin@...hat.com" <maxime.coquelin@...hat.com>,
David Marchand <david.marchand@...hat.com>,
Kevin Traynor <ktraynor@...hat.com>
Subject: Re: [PATCH] vfio/pci: Fix SR-IOV VF handling with MMIO blocking
On Sun, 28 Jun 2020 03:12:12 +0000
"Wang, Haiyue" <haiyue.wang@...el.com> wrote:
> > -----Original Message-----
> > From: kvm-owner@...r.kernel.org <kvm-owner@...r.kernel.org> On Behalf Of Alex Williamson
> > Sent: Friday, June 26, 2020 00:57
> > To: alex.williamson@...hat.com
> > Cc: kvm@...r.kernel.org; linux-kernel@...r.kernel.org; maxime.coquelin@...hat.com
> > Subject: [PATCH] vfio/pci: Fix SR-IOV VF handling with MMIO blocking
> >
> > SR-IOV VFs do not implement the memory enable bit of the command
> > register, so this bit is not set in config space after
> > pci_enable_device(). This leads to an unintended difference
> > between PF and VF in the hand-off state to the user. We can
> > correct this by setting the initial value of the memory enable
> > bit in our virtualized config space. There's really no need,
> > however, to ever fault a user on a VF, as this would only indicate
> > an error in the user's management of the enable bit, whereas on a
> > PF the same access could trigger hardware faults.
> >
> > Fixes: abafbc551fdd ("vfio-pci: Invalidate mmaps and block MMIO access on disabled memory")
> > Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
> > ---
> > drivers/vfio/pci/vfio_pci_config.c | 17 ++++++++++++++++-
> > 1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
> > index 8746c943247a..d98843feddce 100644
> > --- a/drivers/vfio/pci/vfio_pci_config.c
> > +++ b/drivers/vfio/pci/vfio_pci_config.c
> > @@ -398,9 +398,15 @@ static inline void p_setd(struct perm_bits *p, int off, u32 virt, u32 write)
> > /* Caller should hold memory_lock semaphore */
> > bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
> > {
> > + struct pci_dev *pdev = vdev->pdev;
> > u16 cmd = le16_to_cpu(*(__le16 *)&vdev->vconfig[PCI_COMMAND]);
> >
> > - return cmd & PCI_COMMAND_MEMORY;
> > + /*
> > + * SR-IOV VF memory enable is handled by the MSE bit in the
> > + * PF SR-IOV capability; there's therefore no need to trigger
> > + * faults based on the virtual value.
> > + */
> > + return pdev->is_virtfn || (cmd & PCI_COMMAND_MEMORY);
>
> Hi Alex,
>
> After setting up the initial copy of config space with the memory enable bit for the
> VF, is it still worth delivering SIGBUS to a bad user space process that intentionally
> disables the memory access command (even though it is a VF) and then accesses the
> memory, as in CVE-2020-12888?
We're essentially only trying to catch the user mismanaging the
enable bit if we trigger a fault based on the virtualized enable bit,
right? There's no risk that the VF would trigger a UR based on the
state of our virtual enable bit. So is it worth triggering a user
fault when, for instance, the user might be aware that the device is a
VF and know that the memory enable bit has no effect on the physical
device? Thanks,

Alex
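
For context, a simplified sketch of the fault path that this check
gates, modeled on the vfio_pci_mmap_fault() handler introduced by
abafbc551fdd (locking and error paths abbreviated here):

	static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
	{
		struct vm_area_struct *vma = vmf->vma;
		struct vfio_pci_device *vdev = vma->vm_private_data;
		vm_fault_t ret = VM_FAULT_NOPAGE;

		down_read(&vdev->memory_lock);

		/* With the change above, a VF always passes this test,
		 * so a user is never faulted for a virtual enable bit
		 * that has no effect on the physical device. */
		if (!__vfio_pci_memory_enabled(vdev)) {
			up_read(&vdev->memory_lock);
			return VM_FAULT_SIGBUS;
		}

		/* Back the user mapping with the BAR pages */
		if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
				    vma->vm_end - vma->vm_start,
				    vma->vm_page_prot))
			ret = VM_FAULT_SIGBUS;

		up_read(&vdev->memory_lock);
		return ret;
	}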
> > }
> >
> > /*
> > @@ -1728,6 +1734,15 @@ int vfio_config_init(struct vfio_pci_device *vdev)
> > vconfig[PCI_INTERRUPT_PIN]);
> >
> > vconfig[PCI_INTERRUPT_PIN] = 0; /* Gratuitous for good VFs */
> > +
> > + /*
> > + * VFs do not implement the memory enable bit of the COMMAND
> > + * register, therefore we'll not have it set in our initial
> > + * copy of config space after pci_enable_device(). For
> > + * consistency with PFs, set the virtual enable bit here.
> > + */
> > + *(__le16 *)&vconfig[PCI_COMMAND] |=
> > + cpu_to_le16(PCI_COMMAND_MEMORY);
> > }
> >
> > if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx)
>
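
As a usage note, here is a minimal userspace sketch showing how the
corrected hand-off state is visible through vfio-pci; the helper name
check_mem_enable() is hypothetical, and 'device' is assumed to be an
already-open vfio-pci device file descriptor:

	#include <linux/vfio.h>
	#include <linux/pci_regs.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <stdint.h>
	#include <endian.h>

	/* Returns 1 if the (virtualized) memory enable bit reads as
	 * set, 0 if clear, -1 on error. */
	static int check_mem_enable(int device)
	{
		struct vfio_region_info reg = {
			.argsz = sizeof(reg),
			.index = VFIO_PCI_CONFIG_REGION_INDEX,
		};
		uint16_t cmd;

		if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg))
			return -1;

		/* Config reads go through vfio's virtualized config
		 * space; with this patch a VF reports memory enable
		 * set after open, matching the PF hand-off state. */
		if (pread(device, &cmd, sizeof(cmd),
			  reg.offset + PCI_COMMAND) != sizeof(cmd))
			return -1;

		/* PCI config space is little-endian */
		return !!(le16toh(cmd) & PCI_COMMAND_MEMORY);
	}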