Message-ID: <CAM_iQpUy0y_NqT82htx_D-3G-wpo4mfguvxk2SPt4d2+KjXetA@mail.gmail.com>
Date: Wed, 6 Feb 2019 16:04:44 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [Patch net-next v2] mlx5: use RCU lock in mlx5_eq_cq_get()
On Wed, Feb 6, 2019 at 3:36 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
>
> On 02/06/2019 03:00 PM, Cong Wang wrote:
> > mlx5_eq_cq_get() is called from the IRQ handler, and the spinlock
> > inside it sees heavy contention when we test a heavy workload
> > with 60 RX queues and 80 CPUs, as clearly shown in the flame
> > graph.
> >
> > In fact, radix_tree_lookup() is perfectly safe under the RCU read
> > lock, so we don't have to take a spinlock on this hot path. This is
> > much like commit 291c566a2891
> > ("net/mlx4_core: Fix racy CQ (Completion Queue) free"). Slow paths
> > are still serialized with the spinlock, and with synchronize_irq()
> > it should be safe to move just the fast path to the RCU read lock.
> >
> > This patch alone reduces latency by about 50% for our memcached
> > workload on the 4.14 kernel we tested. Upstream, as Saeed pointed
> > out, this spinlock was reworked in commit 02d92f790364
> > ("net/mlx5: CQ Database per EQ"), so the difference there could be
> > smaller.
> >
> > Cc: Saeed Mahameed <saeedm@...lanox.com>
> > Cc: Tariq Toukan <tariqt@...lanox.com>
> > Acked-by: Saeed Mahameed <saeedm@...lanox.com>
> > Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> > ---
> > drivers/net/ethernet/mellanox/mlx5/core/eq.c | 12 ++++++------
> > 1 file changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > index ee04aab65a9f..7092457705a2 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > @@ -114,11 +114,11 @@ static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
> > struct mlx5_cq_table *table = &eq->cq_table;
> > struct mlx5_core_cq *cq = NULL;
> >
> > - spin_lock(&table->lock);
> > + rcu_read_lock();
> > cq = radix_tree_lookup(&table->tree, cqn);
> > if (likely(cq))
> > mlx5_cq_hold(cq);
>
> I suspect that you need a variant that makes sure refcount is not zero.
>
> ( Typical RCU rules apply )
>
> if (cq && !refcount_inc_not_zero(&cq->refcount))
> cq = NULL;
>
>
> See commit 6fa19f5637a6c22bc0999596bcc83bdcac8a4fa6 ("rds: fix
> refcount bug in rds_sock_addref") for a similar issue I fixed
> recently.
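Just to make sure we are talking about the same thing, the variant
you suggest would look roughly like this (untested sketch):

	rcu_read_lock();
	cq = radix_tree_lookup(&table->tree, cqn);
	/* refuse to take a reference on a CQ whose refcount
	 * has already dropped to zero
	 */
	if (cq && !refcount_inc_not_zero(&cq->refcount))
		cq = NULL;
	rcu_read_unlock();

But I don't think the _not_zero check is needed here: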
synchronize_irq() is called before mlx5_cq_put(), so I don't
see how readers could observe a zero refcount: by the time the
final put runs, no IRQ handler can still be inside
mlx5_eq_cq_get().

For the rds case you mentioned, the release path doesn't wait
for readers; that is why it has to check against zero, and why
it is different from this one.
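To make the ordering concrete, the teardown side is roughly this
(simplified, not the exact upstream code):

	/* remove the CQ from the table; later lookups miss */
	spin_lock_irq(&table->lock);
	radix_tree_delete(&table->tree, cq->cqn);
	spin_unlock_irq(&table->lock);

	/* wait for any IRQ handler that may still be inside
	 * mlx5_eq_cq_get() to finish
	 */
	synchronize_irq(cq->irqn);

	/* only now can the refcount drop to zero */
	mlx5_cq_put(cq);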
Thanks.