Message-ID: <20160715033057.GA98180@ast-mbp.thefacebook.com>
Date:	Thu, 14 Jul 2016 20:30:59 -0700
From:	Alexei Starovoitov <alexei.starovoitov@...il.com>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Brenden Blanco <bblanco@...mgrid.com>, davem@...emloft.net,
	netdev@...r.kernel.org, Jamal Hadi Salim <jhs@...atatu.com>,
	Saeed Mahameed <saeedm@....mellanox.co.il>,
	Martin KaFai Lau <kafai@...com>, Ari Saha <as754m@....com>,
	Or Gerlitz <gerlitz.or@...il.com>, john.fastabend@...il.com,
	hannes@...essinduktion.org, Thomas Graf <tgraf@...g.ch>,
	Tom Herbert <tom@...bertland.com>,
	Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH v8 04/11] net/mlx4_en: add support for fast rx drop bpf
 program

On Thu, Jul 14, 2016 at 09:25:43AM +0200, Jesper Dangaard Brouer wrote:
> 
> I would really really like to see the XDP program associated with the
> RX ring queues, instead of a single XDP program covering the entire NIC.
> (Just move the bpf_prog pointer to struct mlx4_en_rx_ring)
> 
> So, why is this so important? It is a fundamental architectural choice.
> 
> With a single XDP program per NIC, we are no better than DPDK, where a
> single application monopolizes the entire NIC.  Recently netmap added
> support for running on a single specific queue[1].  This is the number
> one argument our customers give for not wanting to run DPDK: they would
> need to dedicate an entire NIC per high speed application.
> 
> As John Fastabend says, his NICs have thousands of queues, and he wants
> to bind applications to the queues.  This idea of binding queues to
> applications goes all the way back to Van Jacobson's 2006
> netchannels[2], where creating an application channel allows for a
> lock-free single-producer single-consumer (SPSC) queue directly into
> the application.  An XDP program "locked" to a single RX queue can make
> these optimizations; a global XDP program cannot.
> 
> 
> Why this change now, why can't this wait?
> 
> I'm starting to see more and more code assuming that a single global
> XDP program owns the NIC.  This will be harder and harder to cleanup.
> I'm fine with the first patch iteration only supporting setting the XDP
> program on all RX queues (e.g. returning -EOPNOTSUPP for specific
> queues).  I'm only requesting that the pointer be moved to struct
> mlx4_en_rx_ring, with appropriate refcnt handling.

Attaching the program to all rings at once is fundamental to correct
operation. As was pointed out in the past, a per-ring bpf_prog pointer
loses atomicity of the update: while the new program is being attached,
the old program is still running on the other rings.
That is not something user space can compensate for.
So for the current 'one prog for all rings' model we cannot do what
you're suggesting, but that doesn't mean we won't do a prog per ring
tomorrow. To get there, the other aspects need to be agreed upon before
we jump into implementation:
- what is the way for the program to know which ring it's running on?
  if there is no such way, then attaching the same prog to multiple
  rings is meaningless.
  we can easily extend 'struct xdp_md' in the future if we decide
  that it's worth doing.
- should we allow different programs to attach to different rings?
  we certainly can, but at this point there are only two XDP programs:
  the ILA router and the L4 load balancer. Both require a single
  program on all rings.
  Before we add a new feature, we need a real use case for it.
- if the program knows the rx ring, should it be able to specify the tx ring?
  It's doable, but it requires locking, and performance will tank.
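The atomicity point above can be illustrated with a toy model: a single NIC-wide pointer switches in one store, while per-ring pointers are updated one at a time, leaving a window where different rings run different programs. All names here are hypothetical; this is a sketch of the argument, not driver code:

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_RINGS 4   /* hypothetical ring count */

/* toy model: an int stands in for a struct bpf_prog */
int *ring_prog[NUM_RINGS];   /* hypothetical per-ring pointers */
int *nic_prog;               /* hypothetical single NIC-wide pointer */

/* per-ring attach: rings are updated one store at a time, so an
 * update observed mid-way leaves rings running different programs */
void attach_per_ring(int *prog)
{
    for (int i = 0; i < NUM_RINGS; i++)
        ring_prog[i] = prog;
}

/* NIC-wide attach: a single store switches every ring together */
void attach_nic(int *prog)
{
    nic_prog = prog;
}

/* true if every ring currently sees the same program */
bool rings_consistent(void)
{
    for (int i = 1; i < NUM_RINGS; i++)
        if (ring_prog[i] != ring_prog[0])
            return false;
    return true;
}
```

With the single pointer, any observer sees either the old program or the new one, never a mixture; with per-ring pointers, user space cannot prevent the mixed state during an update.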

> I'm starting to see more and more code assuming that a single global
> XDP program owns the NIC.  This will be harder and harder to cleanup.

The two XDP programs in the world today want to see all rings at once.
We don't need the extra complexity of figuring out the number of rings
and struggling with the lack of atomicity.
There is nothing to 'clean up' at this point.

The reason netmap/dpdk want to run on a given hw ring comes from
the problem that they cannot share the NIC with the stack.
XDP is different. It natively integrates with the stack.
XDP_PASS is a programmable way to indicate that a packet is a control
plane packet and should be passed into the stack and on into applications.
netmap/dpdk don't have that ability, so they have to resort to a
bifurcated driver model.
At this point I don't see a _real_ use case where we want different
bpf programs running on different rings, but as soon as one appears
we can certainly add support for it.
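As a sketch of what "programmable XDP_PASS" means, the verdict logic of a minimal filter can be written in plain C: drop an obviously unwanted UDP flood and pass everything else up the stack as potential control-plane traffic. The return codes, port number, and function name are hypothetical placeholders for illustration, not the kernel's XDP API:

```c
#include <stdint.h>
#include <stddef.h>

/* hypothetical return codes mirroring XDP's drop/pass verdicts */
enum { MY_XDP_DROP = 1, MY_XDP_PASS = 2 };

/* toy verdict in the spirit of an XDP filter: drop UDP packets to a
 * hypothetical flood target port, pass everything else to the stack */
int verdict(const uint8_t *pkt, size_t len)
{
    /* minimal parse; offsets assume an untagged Ethernet frame */
    if (len < 14 + 20 + 8)
        return MY_XDP_PASS;           /* too short to judge: let the stack decide */
    uint16_t ethertype = (uint16_t)((pkt[12] << 8) | pkt[13]);
    if (ethertype != 0x0800)
        return MY_XDP_PASS;           /* not IPv4 */
    if (pkt[14 + 9] != 17)
        return MY_XDP_PASS;           /* not UDP */
    size_t ihl = (size_t)(pkt[14] & 0x0f) * 4;
    if (len < 14 + ihl + 8)
        return MY_XDP_PASS;           /* truncated IP header/UDP */
    uint16_t dport = (uint16_t)((pkt[14 + ihl + 2] << 8) | pkt[14 + ihl + 3]);
    return dport == 9 ? MY_XDP_DROP : MY_XDP_PASS;  /* discard port */
}
```

The data-plane/control-plane split lives in the program itself, which is exactly what a bifurcated driver cannot express per-packet.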
