Message-ID: <87zgf3pfd1.fsf@redhat.com>
Date: Tue, 13 Sep 2022 15:34:50 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Ajay Kaher <akaher@...are.com>
Cc: "x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
Srivatsa Bhat <srivatsab@...are.com>,
"srivatsa@...il.mit.edu" <srivatsa@...il.mit.edu>,
Alexey Makhalov <amakhalov@...are.com>,
Vasavi Sirnapalli <vsirnapalli@...are.com>,
"er.ajay.kaher@...il.com" <er.ajay.kaher@...il.com>,
"willy@...radead.org" <willy@...radead.org>,
Nadav Amit <namit@...are.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"jailhouse-dev@...glegroups.com" <jailhouse-dev@...glegroups.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"acrn-dev@...ts.projectacrn.org" <acrn-dev@...ts.projectacrn.org>,
"helgaas@...nel.org" <helgaas@...nel.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
Alexander Graf <graf@...zon.com>
Subject: Re: [PATCH v2] x86/PCI: Prefer MMIO over PIO on all hypervisor
Ajay Kaher <akaher@...are.com> writes:
> Note: Corrected the Subject.
>
>> On 07/09/22, 8:50 PM, "Vitaly Kuznetsov" <vkuznets@...hat.com> wrote:
>>
>>> diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
>>> index ddb7986..1e5a8f7 100644
>>> --- a/arch/x86/pci/common.c
>>> +++ b/arch/x86/pci/common.c
>>> @@ -20,6 +20,7 @@
>>> #include <asm/pci_x86.h>
>>> #include <asm/setup.h>
>>> #include <asm/irqdomain.h>
>>> +#include <asm/hypervisor.h>
>>>
>>> unsigned int pci_probe = PCI_PROBE_BIOS | PCI_PROBE_CONF1 | PCI_PROBE_CONF2 |
>>> PCI_PROBE_MMCONF;
>>> @@ -57,14 +58,58 @@ int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
>>> return -EINVAL;
>>> }
>>>
>>> +#ifdef CONFIG_HYPERVISOR_GUEST
>>> +static int vm_raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
>>> + int reg, int len, u32 *val)
>>> +{
>>> + if (raw_pci_ext_ops)
>>> + return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
>>> + if (domain == 0 && reg < 256 && raw_pci_ops)
>>> + return raw_pci_ops->read(domain, bus, devfn, reg, len, val);
>>> + return -EINVAL;
>>> +}
>>> +
>>> +static int vm_raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
>>> + int reg, int len, u32 val)
>>> +{
>>> + if (raw_pci_ext_ops)
>>> + return raw_pci_ext_ops->write(domain, bus, devfn, reg, len, val);
>>> + if (domain == 0 && reg < 256 && raw_pci_ops)
>>> + return raw_pci_ops->write(domain, bus, devfn, reg, len, val);
>>> + return -EINVAL;
>>> +}
>>
>> These look exactly like raw_pci_read()/raw_pci_write() but with inverted
>> priority. We could've added a parameter but to be more flexible, I'd
>> suggest we add a 'priority' field to 'struct pci_raw_ops' and make
>> raw_pci_read()/raw_pci_write() check it before deciding what to use
>> first. To be on the safe side, you can leave raw_pci_ops's priority
>> higher than raw_pci_ext_ops's by default and only tweak it in
>> arch/x86/kernel/cpu/vmware.c
>
> Thanks Vitaly for your response.
>
> 1. We have multiple objects of struct pci_raw_ops. 2. Adding a 'priority' field to
> struct pci_raw_ops doesn't seem appropriate, since the decision is about which object
> of struct pci_raw_ops to use, not something within struct pci_raw_ops itself.
I'm not sure I follow: you have two instances of 'struct pci_raw_ops',
which are called 'raw_pci_ops' and 'raw_pci_ext_ops'. What if you do
something like the following (completely untested)?
diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index 70533fdcbf02..fb8270fa6c78 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -116,6 +116,7 @@ extern void (*pcibios_disable_irq)(struct pci_dev *dev);
extern bool mp_should_keep_irq(struct device *dev);
struct pci_raw_ops {
+ int rating;
int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 *val);
int (*write)(unsigned int domain, unsigned int bus, unsigned int devfn,
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index ddb798603201..e9965fd11576 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -40,7 +40,8 @@ const struct pci_raw_ops *__read_mostly raw_pci_ext_ops;
int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 *val)
{
- if (domain == 0 && reg < 256 && raw_pci_ops)
+ if (domain == 0 && reg < 256 && raw_pci_ops &&
+ (!raw_pci_ext_ops || raw_pci_ext_ops->rating <= raw_pci_ops->rating))
return raw_pci_ops->read(domain, bus, devfn, reg, len, val);
if (raw_pci_ext_ops)
return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
@@ -50,7 +51,8 @@ int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 val)
{
- if (domain == 0 && reg < 256 && raw_pci_ops)
+ if (domain == 0 && reg < 256 && raw_pci_ops &&
+ (!raw_pci_ext_ops || raw_pci_ext_ops->rating <= raw_pci_ops->rating))
return raw_pci_ops->write(domain, bus, devfn, reg, len, val);
if (raw_pci_ext_ops)
return raw_pci_ext_ops->write(domain, bus, devfn, reg, len, val);
and then somewhere in the VMware hypervisor initialization code
(arch/x86/kernel/cpu/vmware.c) you do
raw_pci_ext_ops->rating = 100;
Why wouldn't it work?
(disclaimer: completely untested; raw_pci_ops/raw_pci_ext_ops
initialization has to be checked so that 'rating' is not garbage).
>
> It's a generic solution for all hypervisors (sorry for the earlier wrong
> Subject), not specific to VMware. We are also looking for feedback on
> whether it impacts any hypervisor.
That's the tricky part. We can check modern hypervisor versions, but
what about all the other versions in existence? How can we know that there's
no QEMU/Hyper-V/... version out there where the MMIO path is broken? I'd
suggest we limit the change to the VMware hypervisor; other hypervisors may
adopt the same mechanism (like the one above) later (but the person
proposing the patch is always responsible for researching why it is
safe to do so).
--
Vitaly