Date:   Wed, 6 Feb 2019 09:15:32 -0800
From:   Cong Wang <xiyou.wangcong@...il.com>
To:     Saeed Mahameed <saeedm@...lanox.com>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [Patch net-next] mlx5: use RCU lock in mlx5_eq_cq_get()

On Wed, Feb 6, 2019 at 8:55 AM Saeed Mahameed <saeedm@...lanox.com> wrote:
> Hi Cong,
>
> The patch looks OK to me, but I really doubt you can hit contention
> on the latest upstream driver: we already have a spinlock per EQ,
> which means a spinlock per core. Each EQ (core) msix handler can only
> access one spinlock (its own), so I am surprised you got the
> contention. Maybe you are not running the latest upstream driver?

We are running the 4.14 stable release. Which commit changes the game
here? We can consider backporting it unless it is complicated.

Also, if you don't like this patch, we are happy to carry it on our own;
sometimes it isn't worth the time to push it upstream.

>
> what is the workload ?
>

It's a memcached RPC performance test, that is all I can tell.
(Apparently I have almost zero knowledge about memcached.)


> > > In fact, radix_tree_lookup() is perfectly fine with RCU read lock,
> > > we don't have to take a spinlock on this hot path. It is pretty
> > > much
> > > similar to commit 291c566a2891
> > > ("net/mlx4_core: Fix racy CQ (Completion Queue) free"). Slow paths
> > > are still serialized with the spinlock, and with synchronize_irq()
> > > it should be safe to just move the fast path to RCU read lock.
> > >
> > > This patch itself reduces the latency by about 50% with our
> > > workload.
> > >
> > > Cc: Saeed Mahameed <saeedm@...lanox.com>
> > > Cc: Tariq Toukan <tariqt@...lanox.com>
> > > Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> > > ---
> > >   drivers/net/ethernet/mellanox/mlx5/core/eq.c | 12 ++++++------
> > >   1 file changed, 6 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > > b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > > index ee04aab65a9f..7092457705a2 100644
> > > --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > > @@ -114,11 +114,11 @@ static struct mlx5_core_cq
> > > *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
> > >     struct mlx5_cq_table *table = &eq->cq_table;
> > >     struct mlx5_core_cq *cq = NULL;
> > >
> > > -   spin_lock(&table->lock);
> > > +   rcu_read_lock();
> > >     cq = radix_tree_lookup(&table->tree, cqn);
> > >     if (likely(cq))
> > >             mlx5_cq_hold(cq);
> > > -   spin_unlock(&table->lock);
> > > +   rcu_read_unlock();
> >
> > Thanks for your patch.
> >
> > I think we can improve it further by taking the if statement out of
> > the critical section.
> >
>
> No, mlx5_cq_hold() must stay under the RCU read lock, otherwise the cq
> might get freed before the irq handler gets a chance to increment its
> refcount.
>

Agreed.


Thanks.
