Message-ID: <20170216165730.00004b1b@dev.mellanox.co.il>
Date:   Thu, 16 Feb 2017 16:57:30 +0200
From:   jackm <jackm@....mellanox.co.il>
To:     Dexuan Cui <decui@...rosoft.com>
Cc:     Yishai Hadas <yishaih@...lanox.com>,
        Tariq Toukan <tariqt@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: Mellanox ConnectX-3 VF driver can't work with 16 CPUs?

On Thu, 9 Feb 2017 10:03:20 +0000
Dexuan Cui <decui@...rosoft.com> wrote:

> Hi,
> While trying SR-IOV with a Linux guest running on Hyper-V, I found
> this issue: the VF driver can't work if the guest has 16 virtual CPUs
> (fewer vCPUs, e.g. 8, work fine):
> 
> [    9.927820] mlx4_core: Mellanox ConnectX core driver v2.2-1 (Feb, 2014)
> [    9.927882] mlx4_core: Initializing b961:00:02.0
> [    9.970994] mlx4_core b961:00:02.0: Detected virtual function - running in slave mode
> [    9.976783] mlx4_core b961:00:02.0: Sending reset
> [    9.985858] mlx4_core b961:00:02.0: Sending vhcr0
> [   10.004855] mlx4_core b961:00:02.0: HCA minimum page size:512
> [   10.010465] mlx4_core b961:00:02.0: Timestamping is not supported in slave mode
> [   10.203065] mlx4_core b961:00:02.0: Failed to initialize event queue table, aborting
> [   10.226728] mlx4_core: probe of b961:00:02.0 failed with error -12
> 
> I'm using the mainline kernel (4.10.0-rc4).
> 
> Any idea?
> 
Thanks for reporting this, Dexuan.  We indeed found a bug in the Linux
driver: in SR-IOV mode it requested too many EQEs (event queue entries)
per EQ, which caused the VF to exhaust its quota of MTT (memory
translation table) entries and fail the probe with -12 (-ENOMEM).
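
A rough back-of-the-envelope sketch of the arithmetic (ordinary
userspace C, not driver code): each EQ buffer is built from 32-byte
EQEs packed into 4 KiB HCA pages, and every page consumes one MTT
entry, so an oversized per-EQ request multiplies across the per-vCPU
EQs.  All concrete numbers below are illustrative assumptions, not
values read from the hardware:

#include <stdio.h>

#define EQE_SIZE 32     /* bytes per event queue entry on ConnectX-3 */
#define HCA_PAGE 4096   /* one MTT entry maps one 4 KiB page */

int main(void)
{
	int eqes_per_eq = 65536;  /* assumed oversized, PF-scale request */
	int mtt_quota   = 6144;   /* assumed per-VF MTT entry budget */
	int num_eqs;              /* mlx4 creates roughly one EQ per vCPU */

	for (num_eqs = 8; num_eqs <= 16; num_eqs += 8) {
		long pages = (long)num_eqs * eqes_per_eq * EQE_SIZE / HCA_PAGE;
		printf("%2d vCPUs -> %ld MTT entries (%s quota of %d)\n",
		       num_eqs, pages,
		       pages > mtt_quota ? "exceeds" : "within", mtt_quota);
	}
	return 0;
}

With these assumed numbers, 8 EQs stay inside the quota while 16 blow
past it, matching the symptom you saw.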

We will submit an upstream fix for this soon (after thorough testing).
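
The general shape of such a fix would be to size each EQ from the
function's own quota when running as a VF, rather than from the
device-wide maximum.  A minimal sketch, with stand-in types and field
names modeled loosely on the mlx4 driver (assumptions, not the actual
patch):

/* Stand-in types so the sketch compiles outside the kernel tree;
 * the field names are assumptions modeled on the mlx4 driver. */
struct mlx4_caps   { int num_cqs, reserved_cqs; };
struct mlx4_quotas { int cq; };
struct mlx4_dev    { struct mlx4_caps caps; struct mlx4_quotas quotas; int is_slave; };

#define SPARE_EQE 0x80  /* headroom for async events */

/* Hypothetical helper: how many EQEs to request per EQ. */
static int eq_entries(const struct mlx4_dev *dev)
{
	/* A VF sizes by its own CQ quota, not the whole device's CQ count. */
	int ncqs = dev->is_slave ? dev->quotas.cq
				 : dev->caps.num_cqs - dev->caps.reserved_cqs;

	return ncqs + SPARE_EQE;
}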

-Jack

> Thanks,
> -- Dexuan
> 
