Message-ID: <ae5b191d-ffc6-4d40-a44b-d08e04cac6be@linux.ibm.com>
Date: Wed, 1 Oct 2025 10:12:03 -0700
From: Farhan Ali <alifm@...ux.ibm.com>
To: Benjamin Block <bblock@...ux.ibm.com>
Cc: linux-s390@...r.kernel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
alex.williamson@...hat.com, helgaas@...nel.org, clg@...hat.com,
schnelle@...ux.ibm.com, mjrosato@...ux.ibm.com
Subject: Re: [PATCH v4 01/10] PCI: Avoid saving error values for config space
On 10/1/2025 8:15 AM, Benjamin Block wrote:
> On Wed, Sep 24, 2025 at 10:16:19AM -0700, Farhan Ali wrote:
>> @@ -1792,6 +1798,14 @@ static void pci_restore_pcix_state(struct pci_dev *dev)
>> int pci_save_state(struct pci_dev *dev)
>> {
>> int i;
>> + u32 val;
>> +
>> + pci_read_config_dword(dev, PCI_COMMAND, &val);
>> + if (PCI_POSSIBLE_ERROR(val)) {
>> + pci_warn(dev, "Device config space inaccessible, will only be partially restored\n");
>> + return -EIO;
> Should it set `dev->state_saved` to `false`, to be on the safe side?
> Not sure whether we run a risk of restoring an old, outdated state otherwise.
AFAIU, if the state_saved flag is true then any state we have previously
saved is valid and safe to restore from, so leaving the flag set carries no
risk. We just want to avoid overwriting that valid snapshot with invalid
data, which is why the check bails out before the save loop runs.
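
For reference, a rough paraphrase of the save path with this patch applied
(not a verbatim quote of drivers/pci/pci.c): mainline only sets the flag
after the snapshot loop has completed, so returning early here never marks
a bad snapshot as saved and never clears a previously valid one.

    int pci_save_state(struct pci_dev *dev)
    {
            ...
            pci_read_config_dword(dev, PCI_COMMAND, &val);
            if (PCI_POSSIBLE_ERROR(val))
                    return -EIO;    /* state_saved keeps its previous value */

            for (i = 0; i < 16; i++)
                    pci_read_config_dword(dev, i * 4,
                                          &dev->saved_config_space[i]);

            /* Only now do we claim a valid snapshot exists. */
            dev->state_saved = true;
            ...
    }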
>
>> + }
>> +
>> /* XXX: 100% dword access ok here? */
>> for (i = 0; i < 16; i++) {
>> pci_read_config_dword(dev, i * 4, &dev->saved_config_space[i]);
>> @@ -1854,6 +1868,14 @@ static void pci_restore_config_space_range(struct pci_dev *pdev,
>>
>> static void pci_restore_config_space(struct pci_dev *pdev)
>> {
>> + if (!pdev->state_saved) {
>> + pci_warn(pdev, "No saved config space, restoring BARs\n");
>> + pci_restore_bars(pdev);
>> + pci_write_config_word(pdev, PCI_COMMAND,
>> + PCI_COMMAND_MEMORY | PCI_COMMAND_IO);
> Is this really something that ought to be universally enabled? I thought this
> depends on whether attached resources are IO and/or MEM?
>
> int pci_enable_resources(struct pci_dev *dev, int mask)
> {
> ...
> pci_dev_for_each_resource(dev, r, i) {
> ...
> if (r->flags & IORESOURCE_IO)
> cmd |= PCI_COMMAND_IO;
> if (r->flags & IORESOURCE_MEM)
> cmd |= PCI_COMMAND_MEMORY;
> }
> ...
> }
>
> Also IIRC, especially on s390, we never have IO resources?
>
> int zpci_setup_bus_resources(struct zpci_dev *zdev)
> {
> ...
> for (i = 0; i < PCI_STD_NUM_BARS; i++) {
> ...
> /* only MMIO is supported */
> flags = IORESOURCE_MEM;
> if (zdev->bars[i].val & 8)
> flags |= IORESOURCE_PREFETCH;
> if (zdev->bars[i].val & 4)
> flags |= IORESOURCE_MEM_64;
> ...
> }
> ...
> }
>
> So I guess this would have to have some form of the same logic as in
> `pci_enable_resources()`, after restoring the BARs.
>
> Or am I missing something?
As per my understanding of the spec, setting both the I/O Space and Memory
Space enable bits should be safe: a function that doesn't support I/O or
memory space accesses is allowed to hardwire the corresponding bit to zero.
We could add logic to iterate over the device's resources and set only the
applicable bits, as in the sketch below, but since this path is only a
best-effort restoration, setting both should be fine?
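
An untested sketch of what that iteration could look like, reusing the loop
from pci_enable_resources(); the helper name pci_restore_command_bits() is
made up for illustration:

    static void pci_restore_command_bits(struct pci_dev *pdev)
    {
            struct resource *r;
            u16 cmd = 0;
            int i;

            /*
             * Enable only the decode types actually backed by a
             * resource, mirroring pci_enable_resources().
             */
            pci_dev_for_each_resource(pdev, r, i) {
                    if (r->flags & IORESOURCE_IO)
                            cmd |= PCI_COMMAND_IO;
                    if (r->flags & IORESOURCE_MEM)
                            cmd |= PCI_COMMAND_MEMORY;
            }

            pci_write_config_word(pdev, PCI_COMMAND, cmd);
    }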
Also, I didn't see any issues when testing on s390x with NVMe, RoCE and
NETD devices, but I could have missed something.
Thanks
Farhan
>
>> + return;
>> + }