Date:   Mon, 27 Jan 2020 11:43:10 +0000
From:   Claudiu Manoil <claudiu.manoil@....com>
To:     Christophe JAILLET <christophe.jaillet@...adoo.fr>,
        "davem@...emloft.net" <davem@...emloft.net>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "kernel-janitors@...r.kernel.org" <kernel-janitors@...r.kernel.org>
Subject: RE: [PATCH] gianfar: Allocate the correct number of rx queues in
 'gfar_of_init()'

>-----Original Message-----
>From: Christophe JAILLET <christophe.jaillet@...adoo.fr>
>Sent: Sunday, January 26, 2020 11:20 AM
>To: Claudiu Manoil <claudiu.manoil@....com>; davem@...emloft.net
>Cc: netdev@...r.kernel.org; linux-kernel@...r.kernel.org; kernel-
>janitors@...r.kernel.org; Christophe JAILLET
><christophe.jaillet@...adoo.fr>
>Subject: [PATCH] gianfar: Allocate the correct number of rx queues in
>'gfar_of_init()'
>
>We can get values for rx and tx queues from "fsl,num_rx_queues" and
>"fsl,num_tx_queues". However, when 'alloc_etherdev_mq()' is called, the
>value for "tx" is used for both.
>
>Use 'alloc_etherdev_mqs()' instead.
>
>Signed-off-by: Christophe JAILLET <christophe.jaillet@...adoo.fr>
>---
>WARNING: This patch is purely speculative!
>
>I don't fully understand the code; tx and rx queues seem to be
>allocated by 'gfar_alloc_[rt]x_queues()' and handled via priv-> fields.
>I don't know the relationship between the queues provided by the core and
>the ones specifically handled in this driver.
>
>The call to 'netif_set_real_num_rx_queues()' a few lines below also looks
>spurious to me. If "fsl,num_rx_queues" > "fsl,num_tx_queues", it will
>return an error and things then look out of sync (i.e. 'priv->num_rx_queues'
>is set to a value bigger than what is allocated by the core, that is to say
>the one from 'priv->num_tx_queues').
>
>If my assumptions are correct, I guess that the call to
>'netif_set_real_num_rx_queues()' is useless.
>
>
>Sorry for the noise if I'm completely wrong.
>In such a case, some explanation would be appreciated.

Your patch is reasonable; you rightly noticed that something is amiss,
and we could reasonably add:
Fixes: fba4ed030cfa ("gianfar: Add Multiple Queue Support")
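
Concretely, the change amounts to something like this in gfar_of_init()
(a sketch of the idea, not a literal copy of your diff; note that
'alloc_etherdev_mq(priv_sz, n)' is just 'alloc_etherdev_mqs(priv_sz, n, n)',
so today the rx count silently follows the tx count):

	/* before: the rx subqueue count is silently taken from num_tx_qs */
	dev = alloc_etherdev_mq(sizeof(*priv), num_tx_qs);

	/* after: size the tx and rx subqueue arrays independently */
	dev = alloc_etherdev_mqs(sizeof(*priv), num_tx_qs, num_rx_qs);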

The fix doesn't change the behavior of the current mainline code, though.
That's because in mainline num_rx_qs is always equal to num_tx_qs, and both
can be either 1 or 2, depending on whether the platform has 1 or 2 CPUs.
The GFAR_MQ_POLLING mode option is never set, at least not in mainline (see
the comments around this define in the code). As it stands, GFAR_MQ_POLLING
can only be activated by adding an extra if statement to the driver, for
example to select it based on a special device tree compatible string.
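
For reference, the queue count selection in gfar_of_init() boils down to
roughly this (simplified from the driver source):

	if (of_device_is_compatible(np, "fsl,etsec2"))
		mode = MQ_MG_MODE;	/* multi-group (eTSEC2) */
	else
		mode = SQ_SG_MODE;	/* single queue, single group */

	if (mode == SQ_SG_MODE) {
		num_tx_qs = 1;
		num_rx_qs = 1;
	} else if (poll_mode == GFAR_SQ_POLLING) {
		/* always taken today: one rx/tx queue pair per
		 * interrupt group, i.e. per CPU (so 1 or 2)
		 */
		num_tx_qs = num_grps;
		num_rx_qs = num_grps;
	} else { /* GFAR_MQ_POLLING, currently unreachable */
		/* only here do the "fsl,num_[rt]x_queues" properties matter */
		of_property_read_u32(np, "fsl,num_tx_queues", &num_tx_qs);
		of_property_read_u32(np, "fsl,num_rx_queues", &num_rx_qs);
	}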

So the problem is that this option to support more than 2 Rx and 2 Tx queues
exists in the mainline code but cannot be reached without modifying the
driver. Supporting more than 2 queues showed considerable overhead in the
past, so the decision was made to support only one Rx/Tx queue pair per CPU
by default. However, there's no easy way to change these defaults at runtime
and switch to GFAR_MQ_POLLING, and so far I have received no feedback or
requests to support GFAR_MQ_POLLING upstream. So I see several options here:
1) remove the GFAR_MQ_POLLING code (something I was about to do 6 years ago,
   but I was prompted to leave it in);
2) come up with a way to activate it at runtime;
3) add support to activate it at probe time via new device tree
   properties/strings (not ideal; see the sketch below).
Any suggestions?
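
For 3), the change itself would be small; something like the following
sketch, where the "fsl,etsec2-mq" compatible string is made up purely for
illustration:

	/* hypothetical opt-in to GFAR_MQ_POLLING via a dedicated
	 * compatible string (the name is invented for this example)
	 */
	if (of_device_is_compatible(np, "fsl,etsec2-mq")) {
		mode = MQ_MG_MODE;
		poll_mode = GFAR_MQ_POLLING;
	}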

Thanks,
Claudiu
