Date: Mon, 18 Jul 2016 19:45:10 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Tom Herbert <tom@...bertland.com>
Cc: Thomas Graf <tgraf@...g.ch>, Jesper Dangaard Brouer <brouer@...hat.com>,
	Brenden Blanco <bblanco@...mgrid.com>, "David S. Miller" <davem@...emloft.net>,
	Linux Kernel Network Developers <netdev@...r.kernel.org>,
	Jamal Hadi Salim <jhs@...atatu.com>, Saeed Mahameed <saeedm@....mellanox.co.il>,
	Martin KaFai Lau <kafai@...com>, Ari Saha <as754m@....com>,
	Or Gerlitz <gerlitz.or@...il.com>, john fastabend <john.fastabend@...il.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>, Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH v8 04/11] net/mlx4_en: add support for fast rx drop bpf program

On Mon, Jul 18, 2016 at 03:07:01PM +0200, Tom Herbert wrote:
> On Mon, Jul 18, 2016 at 2:48 PM, Thomas Graf <tgraf@...g.ch> wrote:
> > On 07/18/16 at 01:39pm, Tom Herbert wrote:
> >> On Mon, Jul 18, 2016 at 11:10 AM, Thomas Graf <tgraf@...g.ch> wrote:
> >> > I agree with that, but I would like to keep the current per-net_device
> >> > atomic properties.
> >>
> >> I don't see that there are any synchronization guarantees when using
> >> xchg. For instance, if the pointer is set right after being read by a
> >> thread for one queue and right before being read by a thread for
> >> another queue, this could result in the old and new programs running
> >> concurrently, or the old one running after the new. If we need to
> >> synchronize the operation across all queues, then the sequence
> >> ifdown, modify-config, ifup will work.
> >
> > Right, there are no synchronization guarantees between threads and I
> > don't think that's needed. The guarantee that is provided is that if
> > I replace a BPF program, the replace either succeeds, in which case
> > all packets are processed by either the old or the new program, or
> > the replace fails, in which case the old program is left intact and
> > all packets still go through the old program.
> >
> > This is a nice atomic replacement principle which would be nice to
> > preserve.
>
> Sure, if the replace operation fails then the old program should remain
> in place. But xchg can't fail, so it seems like that part is just giving
> a false sense of security that program replacement is somehow
> synchronized across queues.

Good point. We do READ_ONCE at the beginning of NAPI, so we can keep
processing a bunch of packets on other cpus with the old program even
after the xchg is done. Then I guess we can have prog pointers in the
rings, and it only marginally increases the race. Why not, if it doesn't
increase the patch complexity...

Btw, we definitely want to avoid drain/start/stop or any other slow
operation during the prog xchg. When XDP is used for DoS mitigation, the
prog swap needs to be fast.
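[Editor's note: for readers following the thread, below is a minimal sketch of the per-ring scheme being discussed -- a BPF program pointer stored in each rx ring, swapped from the control path with xchg() and read exactly once at the start of each NAPI poll. It is not the actual mlx4 patch; the struct and function names (my_rx_ring, my_set_xdp_prog, my_napi_poll) and the refcount handling are illustrative assumptions.]

/*
 * Hedged sketch, not the real driver code: per-ring XDP prog pointer,
 * published with xchg() and consumed via READ_ONCE() once per poll,
 * so every packet within one poll round sees either the old program
 * or the new one, never a mix.
 */
#include <linux/compiler.h>
#include <linux/bpf.h>
#include <linux/filter.h>

struct my_rx_ring {
	struct bpf_prog *xdp_prog;	/* per-ring program pointer */
	/* ... descriptor ring state ... */
};

/*
 * Control path: atomically publish the new program on every ring.
 * xchg() itself cannot fail; validation/refcounting of the new
 * program is assumed to have happened before this point, which is
 * where the "atomic replace" guarantee really lives.
 */
static void my_set_xdp_prog(struct my_rx_ring **rings, int n_rings,
			    struct bpf_prog *new_prog)
{
	struct bpf_prog *old_prog;
	int i;

	for (i = 0; i < n_rings; i++) {
		old_prog = xchg(&rings[i]->xdp_prog, new_prog);
		if (old_prog)
			bpf_prog_put(old_prog);	/* drop the old program's ref */
	}
}

/*
 * Data path: read the pointer once at the start of the poll. A
 * concurrent swap only takes effect on this ring at the next NAPI
 * round, which is the "marginal race" mentioned above. No drain or
 * ring stop is needed, keeping the swap fast for the DoS use case.
 */
static int my_napi_poll(struct my_rx_ring *ring, int budget)
{
	struct bpf_prog *xdp_prog = READ_ONCE(ring->xdp_prog);
	int done = 0;

	while (done < budget /* && packets available on the ring */) {
		if (xdp_prog) {
			/* run xdp_prog on the frame; drop or pass
			 * based on its return code */
		}
		done++;
	}
	return done;
}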