Message-ID: <1484350827.13165.54.camel@edumazet-glaptop3.roam.corp.google.com>
Date:   Fri, 13 Jan 2017 15:40:27 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Alexander Duyck <alexander.duyck@...il.com>
Cc:     David Miller <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        Erez Shitrit <erezsh@...lanox.com>,
        Eugenia Emantayev <eugenia@...lanox.com>,
        Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [PATCH net] mlx4: do not call napi_schedule() without care

On Fri, 2017-01-13 at 15:07 -0800, Alexander Duyck wrote:
> On Fri, Jan 13, 2017 at 8:39 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > From: Eric Dumazet <edumazet@...gle.com>
> >
> > Disable BH around the call to napi_schedule() to avoid following warning
> >
> > [   52.095499] NOHZ: local_softirq_pending 08
> > [   52.421291] NOHZ: local_softirq_pending 08
> > [   52.608313] NOHZ: local_softirq_pending 08
> >
> > Fixes: 8d59de8f7bb3 ("net/mlx4_en: Process all completions in RX rings after port goes up")
> > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > Cc: Erez Shitrit <erezsh@...lanox.com>
> > Cc: Eugenia Emantayev <eugenia@...lanox.com>
> > Cc: Tariq Toukan <tariqt@...lanox.com>
> > ---
> >  drivers/net/ethernet/mellanox/mlx4/en_netdev.c |    5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> > index 4910d9af19335d4b97d39760c163b41eecc26242..761f8b12399cab245abccc0f7d7f84fde742c14d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> > +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> > @@ -1748,8 +1748,11 @@ int mlx4_en_start_port(struct net_device *dev)
> >         /* Process all completions if exist to prevent
> >          * the queues freezing if they are full
> >          */
> > -       for (i = 0; i < priv->rx_ring_num; i++)
> > +       for (i = 0; i < priv->rx_ring_num; i++) {
> > +               local_bh_disable();
> >                 napi_schedule(&priv->rx_cq[i]->napi);
> > +               local_bh_enable();
> > +       }
> 
> Couldn't you save yourself a ton of trouble by wrapping the loop
> inside of the local_bh_disable/enable instead of wrapping them up
> inside the loop?  It just seems like it might be more efficient to
> schedule them and then process them as a block instead of doing it one
> at a time.
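That would look something like this, I suppose (untested sketch, not what was submitted):

	/* Hypothetical alternative: one BH-disabled section around the
	 * whole loop, so every queue is scheduled first and all the
	 * raised softirqs run together at local_bh_enable().
	 */
	local_bh_disable();
	for (i = 0; i < priv->rx_ring_num; i++)
		napi_schedule(&priv->rx_cq[i]->napi);
	local_bh_enable();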

What kind of trouble?

Given that the problem might be happening under a packet flood, I believe
it is much safer to do it the way I did.

Otherwise, we would have to process a ton of pending completions at
local_bh_enable() time, tying up softirq processing on one CPU.

I did this on purpose.

Batching can be dangerous, and this is exactly a case where we do not
want batching with, say, 64 queues.

This code runs at driver start, hardly a fast path.
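
Annotated for illustration (the comments here are explanatory, not part of
the patch):

	/* Chosen form: one short BH-disabled section per RX queue.
	 * napi_schedule() raises NET_RX_SOFTIRQ, and the matching
	 * local_bh_enable() runs the pending softirq immediately, so
	 * each queue's backlog is drained before the next queue is
	 * scheduled, instead of piling up work for all (say 64) queues
	 * behind a single local_bh_enable().
	 */
	for (i = 0; i < priv->rx_ring_num; i++) {
		local_bh_disable();
		napi_schedule(&priv->rx_cq[i]->napi);
		local_bh_enable();
	}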

