Date:   Wed, 6 Feb 2019 15:36:28 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Cong Wang <xiyou.wangcong@...il.com>, netdev@...r.kernel.org
Cc:     Saeed Mahameed <saeedm@...lanox.com>,
        Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [Patch net-next v2] mlx5: use RCU lock in mlx5_eq_cq_get()



On 02/06/2019 03:00 PM, Cong Wang wrote:
> mlx5_eq_cq_get() is called in the IRQ handler, and the spinlock
> inside it sees heavy contention when we test a heavy workload
> with 60 RX queues and 80 CPUs, as is clearly shown in the flame
> graph.
> 
> In fact, radix_tree_lookup() is perfectly fine under the RCU read
> lock, so we don't have to take a spinlock on this hot path. This is
> very similar to commit 291c566a2891
> ("net/mlx4_core: Fix racy CQ (Completion Queue) free"). Slow paths
> are still serialized with the spinlock, and with synchronize_irq()
> it should be safe to move just the fast path to the RCU read lock.
> 
> This patch itself reduces the latency by about 50% for our memcached
> workload on the 4.14 kernel we tested. Upstream, as pointed out by
> Saeed, this spinlock was reworked in commit 02d92f790364
> ("net/mlx5: CQ Database per EQ"), so the difference could be smaller
> there.
> 
> Cc: Saeed Mahameed <saeedm@...lanox.com>
> Cc: Tariq Toukan <tariqt@...lanox.com>
> Acked-by: Saeed Mahameed <saeedm@...lanox.com>
> Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index ee04aab65a9f..7092457705a2 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -114,11 +114,11 @@ static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
>  	struct mlx5_cq_table *table = &eq->cq_table;
>  	struct mlx5_core_cq *cq = NULL;
>  
> -	spin_lock(&table->lock);
> +	rcu_read_lock();
>  	cq = radix_tree_lookup(&table->tree, cqn);
>  	if (likely(cq))
>  		mlx5_cq_hold(cq);

I suspect you need a variant that makes sure the refcount is not zero
before taking a reference.

( Typical RCU rules apply )

if (cq && !refcount_inc_not_zero(&cq->refcount))
	cq = NULL;
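
For the whole lookup, something like this (untested; the tail of the
function is not in the quote above, so the rcu_read_unlock() and
return here are assumed):

	static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
	{
		struct mlx5_cq_table *table = &eq->cq_table;
		struct mlx5_core_cq *cq;

		rcu_read_lock();
		cq = radix_tree_lookup(&table->tree, cqn);
		/* The CQ may be freed concurrently once its refcount drops
		 * to zero; only take a reference while it is still live. */
		if (cq && !refcount_inc_not_zero(&cq->refcount))
			cq = NULL;
		rcu_read_unlock();

		return cq;
	}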


See commit 6fa19f5637a6 ("rds: fix refcount bug in rds_sock_addref")
for a similar issue I fixed recently.


