Message-ID: <20220503105944.nezlg26jfxv4fqha@liuwe-devbox-debian-v2>
Date: Tue, 3 May 2022 10:59:44 +0000
From: Wei Liu <wei.liu@...nel.org>
To: Dexuan Cui <decui@...rosoft.com>
Cc: wei.liu@...nel.org, kys@...rosoft.com, haiyangz@...rosoft.com,
sthemmin@...rosoft.com, lorenzo.pieralisi@....com,
bhelgaas@...gle.com, linux-hyperv@...r.kernel.org,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
mikelley@...rosoft.com, robh@...nel.org, kw@...ux.com,
helgaas@...nel.org, alex.williamson@...hat.com,
boqun.feng@...il.com, Boqun.Feng@...rosoft.com, jakeo@...rosoft.com
Subject: Re: [PATCH v2] PCI: hv: Do not set PCI_COMMAND_MEMORY to reduce VM
boot time
On Mon, May 02, 2022 at 12:42:55AM -0700, Dexuan Cui wrote:
> Currently, when the pci-hyperv driver finishes probing and initializing
> the PCI device, it sets the PCI_COMMAND_MEMORY bit. Later, when the PCI
> device is registered with the core PCI subsystem, the core PCI driver's
> BAR detection and initialization code toggles the bit multiple times.
> Each toggle causes the hypervisor to unmap/map the virtual BARs from/to
> the physical BARs, which can be slow if the BAR sizes are huge: e.g., a
> Linux VM with 14 GPU devices spends more than 3 minutes on BAR detection
> and initialization, causing a long boot time.
>
> Reduce the boot time by not setting the PCI_COMMAND_MEMORY bit when we
> register the PCI device (there is no need to have it set in the first
> place). The bit stays off until the PCI device driver calls
> pci_enable_device(). With this change, the boot time of such a 14-GPU
> VM is reduced by almost 3 minutes.
>
> Link: https://lore.kernel.org/lkml/20220419220007.26550-1-decui@microsoft.com/
> Tested-by: Boqun Feng (Microsoft) <boqun.feng@...il.com>
> Signed-off-by: Dexuan Cui <decui@...rosoft.com>
> Reviewed-by: Michael Kelley <mikelley@...rosoft.com>
> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@....com>
> Cc: Jake Oshins <jakeo@...rosoft.com>
Applied to hyperv-next. Thanks.
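
For context, here is a rough sketch of the two mechanisms the commit
message describes. During enumeration, the core PCI code sizes each BAR
by writing all-ones and reading back a size mask, toggling memory decode
off and on around the access; on Hyper-V, each toggle forces the
hypervisor to unmap/remap the virtual BARs. PCI_COMMAND_MEMORY is then
set on demand when a device driver calls pci_enable_device(). This is a
simplified illustration, not the actual kernel or pci-hyperv code; the
demo_* helpers and the "demo" region name are hypothetical.

#include <linux/pci.h>

/*
 * Sketch 1: why enumeration toggles PCI_COMMAND. BAR sizing writes
 * all-ones to the BAR register and reads back the size mask, with
 * memory decode disabled around the access (simplified from the logic
 * in drivers/pci/probe.c). Every disable/enable of PCI_COMMAND_MEMORY
 * is what made Hyper-V unmap/remap the BARs when the bit was pre-set.
 */
static u32 demo_size_bar0(struct pci_dev *pdev)	/* hypothetical helper */
{
	u16 orig_cmd;
	u32 orig_bar, size_mask;

	pci_read_config_word(pdev, PCI_COMMAND, &orig_cmd);
	if (orig_cmd & PCI_COMMAND_MEMORY)	/* decode on: toggle it off */
		pci_write_config_word(pdev, PCI_COMMAND,
				      orig_cmd & ~PCI_COMMAND_MEMORY);

	pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &orig_bar);
	pci_write_config_dword(pdev, PCI_BASE_ADDRESS_0, ~0U);
	pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &size_mask);
	pci_write_config_dword(pdev, PCI_BASE_ADDRESS_0, orig_bar);

	if (orig_cmd & PCI_COMMAND_MEMORY)	/* ...and back on */
		pci_write_config_word(pdev, PCI_COMMAND, orig_cmd);

	return size_mask;
}

/*
 * Sketch 2: why pre-setting the bit is unnecessary. A device driver's
 * probe calls pci_enable_device(), which enables I/O and memory decode
 * (i.e., sets PCI_COMMAND_MEMORY) at the point the device is actually
 * used, so the bit can stay off throughout enumeration.
 */
static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret = pci_enable_device(pdev);	/* sets PCI_COMMAND_MEMORY */

	if (ret)
		return ret;
	return pci_request_regions(pdev, "demo");	/* claim the BARs */
}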