Message-ID: <CAPJCdB=3hjZiC4P3G9T0G5XFnkxRvfpx_+3Qj5AQESAG-kpbEw@mail.gmail.com>
Date: Thu, 10 Sep 2020 10:18:22 +0800
From: Jiang Biao <benbjiang@...il.com>
To: Bjorn Helgaas <helgaas@...nel.org>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>, linux-pci@...r.kernel.org,
linux-kernel <linux-kernel@...r.kernel.org>,
Jiang Biao <benbjiang@...cent.com>,
Bin Lai <robinlai@...cent.com>
Subject: Re: [PATCH] driver/pci: reduce the single block time in pci_read_config
Hi,
On Thu, 10 Sep 2020 at 09:59, Bjorn Helgaas <helgaas@...nel.org> wrote:
>
> On Thu, Sep 10, 2020 at 09:54:02AM +0800, Jiang Biao wrote:
> > Hi,
> >
> > On Thu, 10 Sep 2020 at 09:25, Bjorn Helgaas <helgaas@...nel.org> wrote:
> > >
> > > On Mon, Aug 24, 2020 at 01:20:25PM +0800, Jiang Biao wrote:
> > > > From: Jiang Biao <benbjiang@...cent.com>
> > > >
> > > > pci_read_config() can block for several ms in kernel space, mainly
> > > > in the while loop that calls pci_user_read_config_dword().
> > > > A single pci_user_read_config_dword() iteration can consume 130us+,
> > > > | pci_user_read_config_dword() {
> > > > | _raw_spin_lock_irq() {
> > > > ! 136.698 us | native_queued_spin_lock_slowpath();
> > > > ! 137.582 us | }
> > > > | pci_read() {
> > > > | raw_pci_read() {
> > > > | pci_conf1_read() {
> > > > 0.230 us | _raw_spin_lock_irqsave();
> > > > 0.035 us | _raw_spin_unlock_irqrestore();
> > > > 8.476 us | }
> > > > 8.790 us | }
> > > > 9.091 us | }
> > > > ! 147.263 us | }
> > > > and dozens of iterations can add up to several ms.
> > > >
> > > > If we execute some lspci commands concurrently, ms+ scheduling
> > > > latency could be detected.
> > > >
> > > > Add a scheduling point in the loop to improve the latency.
> > >
> > > Thanks for the patch, this makes a lot of sense.
> > >
> > > Shouldn't we do the same in pci_write_config()?
> > Yes, IMHO, that could be helpful too.
>
> If it's feasible, it would be nice to actually verify that it makes a
> difference. I know config writes should be faster than reads, but
> they're certainly not as fast as a CPU can pump out data, so there
> must be *some* mechanism that slows the CPU down.
We did observe 5ms+ latency caused by pci_read_config() when running
lspci commands concurrently, and the latency disappeared after this patch.
We have not actually hit the case on the pci_write_config() path. :)
I'll try to verify that, and will send another patch if it's confirmed.
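Fwiw, the write-side change would presumably look something like this
(just a sketch of the idea, mirroring the read-side patch; the loop
shape and variable names here are assumed by symmetry with the read
path, not copied from the actual pci_write_config() code):

```c
/* Sketch only: add a scheduling point to the dword write loop,
 * the same way the read-side patch does for pci_read_config(). */
while (size >= 4) {
	u32 val = data[off - init_off] |
		  (data[off - init_off + 1] << 8) |
		  (data[off - init_off + 2] << 16) |
		  (data[off - init_off + 3] << 24);

	pci_user_write_config_dword(dev, off, val);
	off += 4;
	size -= 4;
	/* Give other tasks a chance to run between dword accesses,
	 * since each one can spin on pci_lock for 100us+ under
	 * contention (see the trace above). */
	cond_resched();
}
```

Whether the write path actually shows the same latency in practice is
what still needs measuring, since config writes don't necessarily
contend on the lock for as long as the reads in the trace.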
Thx.
Regards,
Jiang