Message-ID: <20160718190323.GB9198@gmail.com>
Date: Mon, 18 Jul 2016 12:03:25 -0700
From: Brenden Blanco <bblanco@...mgrid.com>
To: Tom Herbert <tom@...bertland.com>
Cc: Thomas Graf <tgraf@...g.ch>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Saeed Mahameed <saeedm@....mellanox.co.il>,
Martin KaFai Lau <kafai@...com>, Ari Saha <as754m@....com>,
Or Gerlitz <gerlitz.or@...il.com>,
john fastabend <john.fastabend@...il.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH v8 04/11] net/mlx4_en: add support for fast rx drop bpf
program
On Mon, Jul 18, 2016 at 01:39:02PM +0200, Tom Herbert wrote:
> On Mon, Jul 18, 2016 at 11:10 AM, Thomas Graf <tgraf@...g.ch> wrote:
> > On 07/15/16 at 10:49am, Tom Herbert wrote:
[...]
> >> To me, an XDP program is just another attribute of an RX queue; it's
> >> really not special! We already have a very good infrastructure for
> >> managing multiqueue, and pretty much everything in the receive path
> >> operates at the queue level, not the device level -- we should follow
> >> that model.
> >
> > I agree with that but I would like to keep the current per net_device
> > atomic properties.
>
> I don't see that there are any synchronization guarantees when
> using xchg. For instance, if the pointer is set right after being read
> by a thread for one queue and right before being read by a thread for
> another queue, this could result in the old and new program running
> concurrently, or the old one running after the new. If we need to
> synchronize the operation across all queues, then the sequence
> ifdown, modify-config, ifup will work.
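
To spell that scenario out, here is a minimal userspace sketch (fake_prog,
queue_poll, and the globals below are illustrative stand-ins, not the mlx4
structures) of why a bare xchg on the program pointer leaves no clean
cut-over point: each queue samples the pointer on its own schedule, so one
queue can still be executing the old program while another already runs the
new one.

/* Illustration only: simplified stand-ins, not the mlx4 data structures. */
#include <pthread.h>
#include <stdio.h>

struct fake_prog { const char *name; };

static struct fake_prog old_prog = { "old" };
static struct fake_prog new_prog = { "new" };

/* Shared program pointer: the config path swaps it, each queue reads it. */
static struct fake_prog *prog = &old_prog;

static void *queue_poll(void *arg)
{
	/* Each RX queue takes its own snapshot of the pointer, analogous to
	 * reading priv->prog once per napi poll cycle. */
	struct fake_prog *p = __atomic_load_n(&prog, __ATOMIC_ACQUIRE);

	printf("queue %ld runs the %s program\n", (long)arg, p->name);
	return NULL;
}

int main(void)
{
	pthread_t q0, q1;

	pthread_create(&q0, NULL, queue_poll, (void *)0L);

	/* Swap the program while queue 0 may still be polling: nothing
	 * orders the swap against the per-queue reads, so queue 0 can keep
	 * running "old" while queue 1 already runs "new". */
	__atomic_exchange_n(&prog, &new_prog, __ATOMIC_ACQ_REL);

	pthread_create(&q1, NULL, queue_poll, (void *)1L);

	pthread_join(q0, NULL);
	pthread_join(q1, NULL);
	return 0;
}
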
The case you mentioned is a valid criticism. The reason I wanted to keep this
fast xchg around is that the full stop/start operation on mlx4 incurs a second
or more of downtime. I think something like the following should suffice to
give a clean cut between programs without bringing the whole port down, buffers
and all:
{
	struct bpf_prog *old_prog;
	bool port_up;
	int i;

	mutex_lock(&mdev->state_lock);

	/* Quiesce the RX path: clear port_up, then wait for any NAPI poll
	 * already in flight on each RX queue, so nothing is still executing
	 * the old program when it is swapped out. */
	port_up = priv->port_up;
	priv->port_up = false;
	for (i = 0; i < priv->rx_ring_num; i++)
		napi_synchronize(&priv->rx_cq[i]->napi);

	old_prog = xchg(&priv->prog, prog);
	if (old_prog)
		bpf_prog_put(old_prog);

	priv->port_up = port_up;
	mutex_unlock(&mdev->state_lock);
}
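
The reader side this relies on would look roughly like the following (a
simplified paraphrase, not the verbatim mlx4_en RX code, assuming the poll
routine checks port_up early): the poll bails out while port_up is false,
and the napi_synchronize() pass above waits for any poll already in flight,
so no CQ can be mid-poll in the old program when the xchg happens.

/* Sketch of the per-queue poll side; names and layout are illustrative. */
static int rx_poll_sketch(struct mlx4_en_priv *priv, int budget)
{
	struct bpf_prog *prog;

	if (unlikely(!priv->port_up))
		return 0;		/* quiesced by the config path above */

	prog = READ_ONCE(priv->prog);	/* one snapshot for the whole poll */

	/* ... run prog, if set, on each received frame up to budget ... */

	return budget;
}

The point of doing it this way is that napi_synchronize() only waits for
in-flight polls; it does not tear down and reallocate the rings, which is
where the second-plus of downtime in the full stop/start comes from.
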
Thoughts?
>
> Tom