Message-ID: <4E0E503E-64E1-4B0A-B96A-0CD554A67107@vmware.com>
Date: Mon, 11 Jul 2022 06:31:58 +0000
From: Ajay Kaher <akaher@...are.com>
To: Nadav Amit <namit@...are.com>, Matthew Wilcox <willy@...radead.org>
CC: Bjorn Helgaas <helgaas@...nel.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
Srivatsa Bhat <srivatsab@...are.com>,
"srivatsa@...il.mit.edu" <srivatsa@...il.mit.edu>,
Alexey Makhalov <amakhalov@...are.com>,
Anish Swaminathan <anishs@...are.com>,
Vasavi Sirnapalli <vsirnapalli@...are.com>,
"er.ajay.kaher@...il.com" <er.ajay.kaher@...il.com>
Subject: Re: [PATCH] MMIO should have more priority then IO
On 09/07/22, 1:19 AM, "Nadav Amit" <namit@...are.com> wrote:
> On Jul 8, 2022, at 11:43 AM, Matthew Wilcox <willy@...radead.org> wrote:
>> I have no misconceptions about whatever you want to call the mechanism
>> for communicating with the hypervisor at a higher level than "prod this
>> byte". For example, one of the more intensive things we use config
>> space for is sizing BARs. If we had a hypercall to size a BAR, that
>> would eliminate:
>>
>> - Read current value from BAR
>> - Write all-ones to BAR
>> - Read new value from BAR
>> - Write original value back to BAR
>>
>> Bingo, one hypercall instead of 4 MMIO or 8 PIO accesses.
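For reference, that four-step dance corresponds roughly to the sketch
below; a minimal illustration for a 32-bit memory BAR using the standard
config accessors (the kernel's real version, __pci_read_base() in
drivers/pci/probe.c, also handles 64-bit and I/O BARs):

#include <linux/pci.h>

/* Size one 32-bit memory BAR via the classic read / write-ones / read /
 * restore sequence: four config accesses per BAR. */
static resource_size_t sketch_size_bar(struct pci_dev *dev, int bar)
{
	int pos = PCI_BASE_ADDRESS_0 + bar * 4;
	u32 orig, mask;

	pci_read_config_dword(dev, pos, &orig);   /* 1. read current value */
	pci_write_config_dword(dev, pos, ~0);     /* 2. write all-ones     */
	pci_read_config_dword(dev, pos, &mask);   /* 3. read back the mask */
	pci_write_config_dword(dev, pos, orig);   /* 4. restore original   */

	mask &= PCI_BASE_ADDRESS_MEM_MASK;        /* drop the flag bits    */
	if (!mask)
		return 0;                         /* BAR not implemented   */

	return (resource_size_t)~mask + 1;        /* size = lowest set bit */
}

Each of those four config accesses is one MMIO access on the ECAM path,
or two port accesses (address + data) on the 0xCF8/0xCFC path, hence the
4-vs-8 count above.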
To improve things further, we could use the following mechanism:
map the virtual device config space (i.e. the 4KB ECAM region) read-only
into the VM's MMIO space. The VM then has direct read access through
MMIO, which is not possible through PIO.
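A config read through such a mapping is then a single MMIO load from the
ECAM window, instead of the address/data port pair at 0xCF8/0xCFC. A rough
sketch, assuming an already-ioremap()'d ECAM base (similar in spirit to
pci_mmcfg_read() in arch/x86/pci/mmconfig_64.c):

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical base of the guest's read-only ECAM mapping, set up
 * elsewhere via ioremap() of the window exposed by the hypervisor. */
static void __iomem *ecam_base;

/* One dword config read through ECAM: a single 32-bit MMIO load. */
static u32 ecam_read_dword(unsigned int bus, unsigned int devfn,
			   unsigned int reg)
{
	/* ECAM layout: bus << 20 | device << 15 | function << 12 | register,
	 * i.e. (bus << 20) | (devfn << 12) | reg. */
	return readl(ecam_base + ((bus << 20) | (devfn << 12) | reg));
}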
Virtual machine test results with the above mechanism:
100,000 reads using raw_pci_read() took:
PIO: 12.809 sec
MMIO: 0.010 sec
And during VM boot, PCI scan and initialization time was reduced by
~65%; in our case it dropped from ~55 ms to ~18 ms.
Thanks Matthew, for sharing the history and your views on this patch.
As you mentioned, the ordering change may impact some hardware, so it
would be better to make this change specific to the VMware hypervisor,
or generic to all hypervisors.
- Ajay
> Back to the issue at hand: I think that a new paravirtual interface is a
> possible solution, with some serious drawbacks. Xen did something similar,
> IIRC, to a certain extent.
>
> More reasonable, I think, based on what you said before, is to check if we
> run on a hypervisor, and update raw_pci_ops accordingly. There is an issue
> of whether hypervisor detection might take place too late, but I think this
> can be relatively easily resolved. The question is whether assigned devices
> might still be broken. Based on the information that you provided - I do not
> know.
>
> If you can answer this question, that would be helpful. Let’s also wait for
> Ajay to give some numbers about boot time with this change.
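For concreteness, the hypervisor-gated ordering change being discussed
could look roughly like the hypothetical sketch below, modeled on
raw_pci_read() in arch/x86/pci/common.c and the hypervisor_is_type()
helper; a sketch of the idea, not a tested implementation:

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/hypervisor.h>
#include <asm/pci_x86.h>

/* Hypothetical reordering: prefer the ECAM/MMIO ops when running as a
 * VMware guest, otherwise keep the existing PIO-first behaviour. */
int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
		 int reg, int len, u32 *val)
{
	if (raw_pci_ext_ops && hypervisor_is_type(X86_HYPER_VMWARE))
		return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
	if (domain == 0 && reg < 256 && raw_pci_ops)
		return raw_pci_ops->read(domain, bus, devfn, reg, len, val);
	if (raw_pci_ext_ops)
		return raw_pci_ext_ops->read(domain, bus, devfn, reg, len, val);
	return -EINVAL;
}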