Date:   Wed, 29 Aug 2018 17:05:00 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Saeed Mahameed <saeedm@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Cc:     brouer@...hat.com, Tariq Toukan <tariqt@...lanox.com>,
        Eran Ben Elisha <eranbe@...lanox.com>
Subject: mlx5 driver loading failing on v4.19 / net-next / bpf-next

Hi Saeed,

I'm having issues loading the mlx5 driver on v4.19 kernels (tested both
net-next and bpf-next), while kernel v4.18 works fine.  It happens
with a Mellanox ConnectX-5 NIC (and also a CX4-Lx, which I have since
removed from the system).

One pain point is a very long boot time, caused by some timeout code in
the driver. The kernel console log (dmesg) says:

[    5.763330] mlx5_core 0000:03:00.0: firmware version: 16.22.1002
[    5.769367] mlx5_core 0000:03:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8 GT/s x16 link at 0000:00:02.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)

(...) other drivers loading

[   66.816635] mlx5_core 0000:03:00.0: wait_func:964:(pid 112): ENABLE_HCA(0x104) timeout. Will cause a leak of a command resource
[   66.828123] mlx5_core 0000:03:00.0: enable hca failed
[   66.845516] mlx5_core 0000:03:00.0: mlx5_load_one failed with error code -110
[   66.852802] mlx5_core: probe of 0000:03:00.0 failed with error -110

[   66.859347] mlx5_core 0000:03:00.1: firmware version: 16.22.1002
[   66.865388] mlx5_core 0000:03:00.1: 126.016 Gb/s available PCIe bandwidth, limited by 8 GT/s x16 link at 0000:00:02.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)

[  125.787395] XFS (sda3): Mounting V5 Filesystem
[  125.848509] XFS (sda3): Ending clean mount
[  127.984784] mlx5_core 0000:03:00.1: wait_func:964:(pid 5): ENABLE_HCA(0x104) timeout. Will cause a leak of a command resource
[  127.996090] mlx5_core 0000:03:00.1: enable hca failed
[  128.013819] mlx5_core 0000:03:00.1: mlx5_load_one failed with error code -110
[  128.021076] mlx5_core: probe of 0000:03:00.1 failed with error -110
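
For reference, error code -110 is -ETIMEDOUT, i.e. the ENABLE_HCA firmware
command never completed before the driver's command timeout expired (which
also matches the roughly 60 second gaps between the messages above). Just to
illustrate the pattern, not the actual mlx5_core code (my_fw_cmd,
my_wait_func and MY_CMD_TIMEOUT_MSEC are made-up names), the failing path
looks like the usual "wait for firmware command completion with a timeout"
scheme:

/*
 * Illustrative sketch only -- NOT the real mlx5_core implementation.
 * Shows the generic "wait for a firmware command with timeout" pattern
 * that yields -ETIMEDOUT (-110) and makes the probe fail.
 */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

#define MY_CMD_TIMEOUT_MSEC	(60 * 1000)	/* assumed ~60s, matching the dmesg gaps */

struct my_fw_cmd {
	struct completion done;		/* completed by the command IRQ/poll path */
	int status;			/* status reported by firmware on success */
};

static int my_wait_func(struct my_fw_cmd *cmd)
{
	unsigned long timeout = msecs_to_jiffies(MY_CMD_TIMEOUT_MSEC);

	if (!wait_for_completion_timeout(&cmd->done, timeout)) {
		/* Firmware never answered: the command resource stays
		 * pending ("leak of a command resource") and the caller
		 * gets -110, which then fails mlx5_load_one()/probe. */
		return -ETIMEDOUT;
	}
	return cmd->status;
}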


Do you have any idea what could be causing this?

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
