Message-ID: <799a7d7a-0d27-a27a-4222-caa4998438c8@mellanox.com>
Date: Thu, 30 Aug 2018 11:35:50 +0300
From: Tariq Toukan <tariqt@...lanox.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>, Saeed Mahameed <saeedm@...lanox.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Cc: Eran Ben Elisha <eranbe@...lanox.com>
Subject: Re: mlx5 driver loading failing on v4.19 / net-next / bpf-next

On 29/08/2018 6:05 PM, Jesper Dangaard Brouer wrote:
> Hi Saeed,
>
> I'm having issues loading the mlx5 driver on v4.19 kernels (tested both
> net-next and bpf-next), while kernel v4.18 works fine. It happens
> with a Mellanox ConnectX-5 NIC (and also a CX4-Lx, but I have removed
> that from the system for now).
>

Hi Jesper,

Thanks for your report!
We are working to analyze and debug the issue.

> One pain point is a very long boot time, caused by timeout code in
> the driver. The kernel console log (dmesg) says:
>
> [    5.763330] mlx5_core 0000:03:00.0: firmware version: 16.22.1002
> [    5.769367] mlx5_core 0000:03:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8 GT/s x16 link at 0000:00:02.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
>
>  (...) other drivers loading
>
> [   66.816635] mlx5_core 0000:03:00.0: wait_func:964:(pid 112): ENABLE_HCA(0x104) timeout. Will cause a leak of a command resource
> [   66.828123] mlx5_core 0000:03:00.0: enable hca failed
> [   66.845516] mlx5_core 0000:03:00.0: mlx5_load_one failed with error code -110
> [   66.852802] mlx5_core: probe of 0000:03:00.0 failed with error -110
>
> [   66.859347] mlx5_core 0000:03:00.1: firmware version: 16.22.1002
> [   66.865388] mlx5_core 0000:03:00.1: 126.016 Gb/s available PCIe bandwidth, limited by 8 GT/s x16 link at 0000:00:02.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
>
> [  125.787395] XFS (sda3): Mounting V5 Filesystem
> [  125.848509] XFS (sda3): Ending clean mount
> [  127.984784] mlx5_core 0000:03:00.1: wait_func:964:(pid 5): ENABLE_HCA(0x104) timeout. Will cause a leak of a command resource
> [  127.996090] mlx5_core 0000:03:00.1: enable hca failed
> [  128.013819] mlx5_core 0000:03:00.1: mlx5_load_one failed with error code -110
> [  128.021076] mlx5_core: probe of 0000:03:00.1 failed with error -110
>
>
> Do you have any idea what could be causing this?
>

We'll update regarding any progress.

Thanks!
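[Editor's note, not part of the thread: the "error code -110" in the log is the kernel's -ETIMEDOUT, reported when the ENABLE_HCA firmware command does not complete in time and the failure propagates up through mlx5_load_one to the PCI probe. A minimal user-space C sketch to confirm the errno mapping on Linux:]

	/* Illustration only: show that errno 110 is ETIMEDOUT on Linux,
	 * which is why the probe log reports "failed with error -110". */
	#include <errno.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		/* Expected output: errno 110 (ETIMEDOUT): Connection timed out */
		printf("errno %d (ETIMEDOUT): %s\n", ETIMEDOUT, strerror(ETIMEDOUT));
		return 0;
	}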