Message-ID: <4212dca6-233b-4d6c-0182-1e7912a05b3f@gmail.com>
Date: Tue, 8 Dec 2020 18:11:09 +0100
From: Heiner Kallweit <hkallweit1@...il.com>
To: Kai-Heng Feng <kai.heng.feng@...onical.com>, bhelgaas@...gle.com
Cc: "Saheed O. Bolarinwa" <refactormyself@...il.com>,
Mika Westerberg <mika.westerberg@...ux.intel.com>,
Chris Packham <chris.packham@...iedtelesis.co.nz>,
Xiongfeng Wang <wangxiongfeng2@...wei.com>,
Yicong Yang <yangyicong@...ilicon.com>,
"open list:PCI SUBSYSTEM" <linux-pci@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] PCI/ASPM: Store disabled ASPM states
Am 08.12.2020 um 09:25 schrieb Kai-Heng Feng:
> If we use sysfs to disable L1 ASPM and then enable one L1 ASPM substate
> again, all other substates will also be enabled:
>
> link# grep . *
> clkpm:1
> l0s_aspm:1
> l1_1_aspm:1
> l1_1_pcipm:1
> l1_2_aspm:1
> l1_2_pcipm:1
> l1_aspm:1
>
> link# echo 0 > l1_aspm
>
> link# grep . *
> clkpm:1
> l0s_aspm:1
> l1_1_aspm:0
> l1_1_pcipm:0
> l1_2_aspm:0
> l1_2_pcipm:0
> l1_aspm:0
>
> link# echo 1 > l1_2_aspm
>
> link# grep . *
> clkpm:1
> l0s_aspm:1
> l1_1_aspm:1
> l1_1_pcipm:1
> l1_2_aspm:1
> l1_2_pcipm:1
> l1_aspm:1
>
> This is because the disabled ASPM states weren't saved, so enabling any
> of the substates will also enable the others.
>
> So store the disabled ASPM states for consistency.
>
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@...onical.com>
> ---
> drivers/pci/pcie/aspm.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
> index ac0557a305af..2ea9fddadfad 100644
> --- a/drivers/pci/pcie/aspm.c
> +++ b/drivers/pci/pcie/aspm.c
> @@ -658,6 +658,8 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
> /* Setup initial capable state. Will be updated later */
> link->aspm_capable = link->aspm_support;
>
> + link->aspm_disable = link->aspm_capable & ~link->aspm_default;
> +
This makes sense only in combination with patch 2. However, I think patch 1
should be independent of patch 2, especially if we consider patch 1 a fix
that is applied to stable whilst patch 2 is an improvement for next.
> /* Get and check endpoint acceptable latencies */
> list_for_each_entry(child, &linkbus->devices, bus_list) {
> u32 reg32, encoding;
> @@ -1226,11 +1228,15 @@ static ssize_t aspm_attr_store_common(struct device *dev,
> mutex_lock(&aspm_lock);
>
> if (state_enable) {
> - link->aspm_disable &= ~state;
> /* need to enable L1 for substates */
> if (state & ASPM_STATE_L1SS)
> - link->aspm_disable &= ~ASPM_STATE_L1;
> + state |= ASPM_STATE_L1;
> +
> + link->aspm_disable &= ~state;
I don't see what this part of the patch changes. Can you elaborate on why
this is needed?
> } else {
> + if (state == ASPM_STATE_L1)
> + state |= ASPM_STATE_L1SS;
> +
I think this part should be sufficient to fix the behavior, because here is
what I think currently happens:
1. original state: policy powersupersave, nothing disabled -> L1 + L1SS active
2. disable L1: L1 disabled, pcie_config_aspm_link() disables L1 and L1SS
without adding L1SS to link->aspm_disable
3. enable one L1SS state: aspm_attr_store_common() removes L1 from
link->aspm_disable -> link->aspm_disable is empty, resulting in
L1 + L1SS being active
> link->aspm_disable |= state;
> }
>
>