Message-ID: <20190423102253.4fd9a019@x1.home>
Date: Tue, 23 Apr 2019 10:22:53 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Alex G <mr.nuke.me@...il.com>
Cc: bhelgaas@...gle.com, helgaas@...nel.org, linux-pci@...r.kernel.org,
austin_bolen@...l.com, alex_gagniuc@...lteam.com,
keith.busch@...el.com, Shyam_Iyer@...l.com, lukas@...ner.de,
okaya@...nel.org, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] PCI/LINK: Account for BW notification in vector calculation

On Tue, 23 Apr 2019 11:03:04 -0500
Alex G <mr.nuke.me@...il.com> wrote:
> On 4/23/19 10:34 AM, Alex Williamson wrote:
> > On Tue, 23 Apr 2019 09:33:53 -0500
> > Alex G <mr.nuke.me@...il.com> wrote:
> >
> >> On 4/22/19 7:33 PM, Alex Williamson wrote:
> >>> On Mon, 22 Apr 2019 19:05:57 -0500
> >>> Alex G <mr.nuke.me@...il.com> wrote:
> >>>> echo 0000:07:00.0:pcie010 |
> >>>> sudo tee /sys/bus/pci_express/drivers/pcie_bw_notification/unbind
> >>>
> >>> That's a bad solution for users; this is meaningless tracking of a
> >>> device whose driver is actively managing the link bandwidth for power
> >>> purposes.
> >>
> >> 0.5W savings on a 100+W GPU? I agree it's meaningless.
> >
> > Evidence? Regardless, I don't have control of the driver that's making
> > these changes, but the claim seems unfounded and irrelevant.
>
> The figure of 5 mW/Gb/lane doesn't ring a bell? [1] [2]. Your GPU
> supports 5 Gb/s, so it's likely using an older, more power-hungry
> process. I suspect it's still within the same order of magnitude.
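(Back-of-the-envelope, and assuming a x16 link with all lanes active:
5 mW/Gb/lane * 5 Gb/s * 16 lanes works out to roughly 0.4 W, which lines
up with the ~0.5 W figure above.)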
That figure doesn't necessarily translate into overall power savings for
the endpoint as a whole, though, and it's still irrelevant to the
discussion here: the driver is doing something reasonable, and that is
what's generating the host dmesg spam.
> > I'm assigning a device to a VM [snip]
> > I can see why we might want to be notified of degraded links due to signal issues,
> > but what I'm reporting is that there are also entirely normal reasons
> > [snip] we can't seem to tell the difference
>
> Unfortunately, there is no way in PCI-Express to distinguish between an
> expected link bandwidth change and one due to error.
Then assuming every link speed change is an error seems like the wrong
approach. Should we instead have a callback that drivers can optionally
register to receive link change notifications? If a driver doesn't
register such a callback, then a generic message can be posted; but if
it does, the driver can decide whether the change is an error.
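To make that concrete, something along these lines is what I have in
mind. This is only a sketch -- the ->link_change() hook and the
pcie_bw_notify_driver() helper are made-up names, not existing code --
but it shows the shape of an optional per-driver opt-in, with today's
generic report kept as the fallback:

	/*
	 * Hypothetical sketch only: assume struct pci_driver grew one
	 * optional member,
	 *
	 *	bool (*link_change)(struct pci_dev *dev);
	 *
	 * which a driver sets if it wants to be consulted about link
	 * speed/width changes, returning true when the change was
	 * expected.
	 */
	static void pcie_bw_notify_driver(struct pci_dev *dev)
	{
		struct pci_driver *pdrv = dev->driver;

		if (pdrv && pdrv->link_change && pdrv->link_change(dev))
			return;	/* driver says the change is intentional */

		/* No opinion from a driver: keep the current generic report. */
		pcie_report_downtraining(dev);
	}

A GPU driver that retrains the link for power management could then
return true from its hook and the host stays quiet, while devices
without a driver (or with an uninterested one) keep the message they get
today.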
> If you're using virt-manager to configure the VM, then virt-manager
> could have a checkbox to disable link bandwidth management messages.
What makes us think that this is the only case where such link speed
changes will occur? Hand-waving that a userspace management utility
should go unbind drivers that over-zealously report errors is a poor
solution.
> I'd rather we avoid kernel-side heuristics (like Lukas suggested). If you're
> confident that your link will operate as intended, and don't want
> messages about it, that's your call as a user -- we shouldn't decide
> this in the kernel.
Nor should pci-core decide which link speed changes are intended and
which are errors. At a minimum, we should be enabling drivers to receive
this feedback. Thanks,
Alex