Message-ID: <47977dc7-d53a-427c-bbaa-9c665287cb47@molgen.mpg.de>
Date:   Wed, 7 Aug 2019 16:55:47 +0200
From:   Paul Menzel <pmenzel@...gen.mpg.de>
To:     Sasha Neftin <sasha.neftin@...el.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>
Cc:     Mario Limonciello <mario.limonciello@...l.com>,
        intel-wired-lan@...ts.osuosl.org, linux-kernel@...r.kernel.org
Subject: Re: [Intel-wired-lan] MDI errors during resume from ACPI S3 (suspend
 to ram)


Dear Sasha,


On 07.08.19 09:23, Neftin, Sasha wrote:
> On 8/6/2019 18:53, Mario.Limonciello@...l.com wrote:
>>> -----Original Message-----
>>> From: Paul Menzel <pmenzel@...gen.mpg.de>
>>> Sent: Tuesday, August 6, 2019 10:36 AM
>>> To: Jeff Kirsher
>>> Cc: intel-wired-lan@...ts.osuosl.org; Linux Kernel Mailing List; Limonciello, Mario
>>> Subject: MDI errors during resume from ACPI S3 (suspend to ram)
>>>
>>> Dear Linux folks,
>>>
>>>
>>> While trying to decrease the resume time of Linux 5.3-rc3 on the Dell OptiPlex
>>> 5040 with the device below
>>>
>>>      $ lspci -nn -s 00:1f.6
>>>      00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8] (rev 31)
>>>
>>> pm-graph’s script `sleepgraph.py` shows that the driver *e1000e* takes
>>> around 400 ms, which is quite a lot. The call graph trace shows that
>>> `e1000e_read_phy_reg_mdic()` accounts for much of that time. From
>>> `drivers/net/ethernet/intel/e1000e/phy.c` [1]:
>>>
>>>          for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
>>>                  udelay(50);
>>>                  mdic = er32(MDIC);
>>>                  if (mdic & E1000_MDIC_READY)
>>>                          break;
>>>          }
>>>          if (!(mdic & E1000_MDIC_READY)) {
>>>                  e_dbg("MDI Read did not complete\n");
>>>                  return -E1000_ERR_PHY;
>>>          }
>>>          if (mdic & E1000_MDIC_ERROR) {
>>>                  e_dbg("MDI Error\n");
>>>                  return -E1000_ERR_PHY;
>>>          }
>>>
>>> Unfortunately, these errors are not logged if dynamic debug is disabled,
>>> so after rebuilding the Linux kernel with `CONFIG_DYNAMIC_DEBUG` and
>>> running
>>>
>>>      echo "file drivers/net/ethernet/* +p" | sudo tee /sys/kernel/debug/dynamic_debug/control
>>>
>>> I got the messages below.
>>>
>>>      [ 4159.204192] e1000e 0000:00:1f.6 net00: MDI Error
>>>      [ 4160.267950] e1000e 0000:00:1f.6 net00: MDI Write did not complete
>>>      [ 4160.359855] e1000e 0000:00:1f.6 net00: MDI Error
>>>
>>> Can you please shed a little more light on these errors? Please
>>> find the full log attached.

>>> [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/e1000e/phy.c#n206
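
(As an aside: if I read `drivers/net/ethernet/intel/e1000e/defines.h`
correctly, `E1000_GEN_POLL_TIMEOUT` is 640, so the loop quoted above can
busy-wait for up to 3 × 640 × 50 µs ≈ 96 ms per MDIC access before giving
up; a handful of timed-out accesses would already account for the ~400 ms.)
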
>>
>> Strictly as a reference point you may consider trying the out-of-tree driver to see if these
>> behaviors persist.
>>
>> https://sourceforge.net/projects/e1000/

I can try that in the next few days.

> We are using an external PHY. It requires ~200 ms to complete an MDIC
> transaction (depending on the project).

Are you referring to the out-of-tree driver?

> You need to take this time into consideration before accessing the PHY.
> I do not recommend decreasing the timer in the
> 'e1000e_read_phy_reg_mdic()' method. We could hit a wrong MDI access.

My point was more: if it is known that more time is needed before an MDI
access will succeed, why attempt it anyway and run into the error paths?
Isn’t it possible to poll for when the MDI interface is ready, as in the
sketch below?
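
Just to illustrate, here is a rough, untested sketch (the helper name
`e1000e_wait_mdic_ready()` is made up) in the style of the existing loop;
it waits for the READY bit *before* starting a transaction, instead of
starting one and failing into the error path afterwards:

    /*
     * Hypothetical sketch, not tested: wait until the MDIC READY bit is
     * set before issuing the next PHY access, instead of issuing it and
     * then hitting "MDI Write did not complete"/"MDI Error".
     */
    static s32 e1000e_wait_mdic_ready(struct e1000_hw *hw)
    {
            u32 mdic;
            int i;

            for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
                    mdic = er32(MDIC);
                    if (mdic & E1000_MDIC_READY)
                            return 0;
                    /* Assumes a context that may sleep (e.g. resume);
                     * otherwise keep udelay() as in the existing loop. */
                    usleep_range(50, 100);
            }

            return -E1000_ERR_PHY;
    }

Callers like `e1000e_read_phy_reg_mdic()` could then wait for this to
return 0 before touching the PHY, instead of logging an error.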


Kind regards,

Paul

