Date:   Tue, 25 Sep 2018 21:19:51 +0200
From:   Pablo Neira Ayuso <pablo@...filter.org>
To:     netdev@...r.kernel.org
Cc:     davem@...emloft.net, thomas.lendacky@....com, f.fainelli@...il.com,
        ariel.elior@...ium.com, michael.chan@...adcom.com,
        santosh@...lsio.com, madalin.bucur@....com,
        yisen.zhuang@...wei.com, salil.mehta@...wei.com,
        jeffrey.t.kirsher@...el.com, tariqt@...lanox.com,
        saeedm@...lanox.com, jiri@...lanox.com, idosch@...lanox.com,
        ganeshgr@...lsio.com, jakub.kicinski@...ronome.com,
        linux-net-drivers@...arflare.com, peppe.cavallaro@...com,
        alexandre.torgue@...com, joabreu@...opsys.com,
        grygorii.strashko@...com, andrew@...n.ch,
        vivien.didelot@...oirfairelinux.com
Subject: [PATCH RFC,net-next 00/10] add flow_rule infrastructure

Hi,

This patchset reworks the existing kernel representation for network
driver offloads, which is based on the cls_flower dissectors for the
rule matching side and on the TC action infrastructure for the action
side.

The proposed object that represents rules looks like this:

	struct flow_rule {
	        struct flow_match       match;
	        struct flow_action      action;
	};

The flow_match structure wraps Jiri Pirko's existing dissector-based
representation, available in cls_flower, for the matching side:

	struct flow_match {
		struct flow_dissector   *dissector;
		void                    *mask;
		void                    *key;
	};

The mask and key layouts are opaque: the dissector object provides the
used_keys flags - to check which rule selectors are in use - and the
offsets to the corresponding key and mask in the opaque container
structures.
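
For illustration only - this is a rough sketch using the existing
dissector helpers, not necessarily the exact code in this series -
fetching a selector from the opaque containers conceptually boils
down to:

        if (dissector_uses_key(match->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
                struct flow_dissector_key_basic *key, *mask;

                key  = skb_flow_dissector_target(match->dissector,
                                                 FLOW_DISSECTOR_KEY_BASIC,
                                                 match->key);
                mask = skb_flow_dissector_target(match->dissector,
                                                 FLOW_DISSECTOR_KEY_BASIC,
                                                 match->mask);
                /* key and mask now point into the opaque containers */
        }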

Then, the actions to be performed on matching packets are represented
through the flow_action object:

	struct flow_action {
		int                     num_keys;
		struct flow_action_key  *keys;
	};

This object comes with a num_keys field that specifies the number of
actions - an arbitrary number is supported; the driver imposes its own
restrictions on this - and the keys array that stores the
flow_action_key structures. Each flow action key looks like this:

	struct flow_action_key {
	        enum flow_action_key_id         id;
	        union {
	                u32                     chain_index;    /* FLOW_ACTION_KEY_GOTO */
	                struct net_device       *dev;           /* FLOW_ACTION_KEY_REDIRECT */
	                struct {                                /* FLOW_ACTION_KEY_VLAN */
	                        u16             vid;
	                        __be16          proto;
	                        u8              prio;
	                } vlan;
	                struct {                                /* FLOW_ACTION_KEY_PACKET_EDIT */
	                        enum flow_act_mangle_base htype;
	                        u32             offset;
	                        u32             mask;
	                        u32             val;
	                } mangle;
	                const struct ip_tunnel_info *tunnel;    /* FLOW_ACTION_KEY_TUNNEL_ENCAP */
	                u32                     csum_flags;     /* FLOW_ACTION_KEY_CSUM */
	                u32                     mark;           /* FLOW_ACTION_KEY_MARK */
	                u32                     queue_index;    /* FLOW_ACTION_KEY_QUEUE */
	        };
	};

Possible actions in this patchset are:

	enum flow_action_key_id {
	        FLOW_ACTION_KEY_ACCEPT          = 0,
	        FLOW_ACTION_KEY_DROP,
	        FLOW_ACTION_KEY_TRAP,
	        FLOW_ACTION_KEY_GOTO,
	        FLOW_ACTION_KEY_REDIRECT,
	        FLOW_ACTION_KEY_MIRRED,
	        FLOW_ACTION_KEY_VLAN_PUSH,
	        FLOW_ACTION_KEY_VLAN_POP,
	        FLOW_ACTION_KEY_VLAN_MANGLE,
	        FLOW_ACTION_KEY_TUNNEL_ENCAP,
	        FLOW_ACTION_KEY_TUNNEL_DECAP,
	        FLOW_ACTION_KEY_MANGLE,
	        FLOW_ACTION_KEY_ADD,
	        FLOW_ACTION_KEY_CSUM,
	        FLOW_ACTION_KEY_MARK,
	        FLOW_ACTION_KEY_WAKE,
	        FLOW_ACTION_KEY_QUEUE,
	};

These identifiers are based on what existing drivers can already do
with the existing TC actions.
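
A rough sketch of such a mapping, using the existing TC action
helpers (the actual translator is added in patch #4, this is just an
illustration):

        if (is_tcf_gact_shot(act)) {
                key->id = FLOW_ACTION_KEY_DROP;
        } else if (is_tcf_mirred_egress_redirect(act)) {
                key->id = FLOW_ACTION_KEY_REDIRECT;
                key->dev = tcf_mirred_dev(act);
        } else if (is_tcf_vlan(act) && tcf_vlan_action(act) == TCA_VLAN_ACT_PUSH) {
                key->id = FLOW_ACTION_KEY_VLAN_PUSH;
                key->vlan.vid = tcf_vlan_push_vid(act);
                key->vlan.proto = tcf_vlan_push_proto(act);
                key->vlan.prio = tcf_vlan_push_prio(act);
        }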

A common code pattern in the drivers to populate the hardware
intermediate representation looks like this:

        if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
                struct flow_match_ipv4_addrs match;

                flow_rule_match_ipv4_addrs(rule, &match);
                flow->l3_key.ipv4.daddr.s_addr = match.key->dst;
                flow->l3_mask.ipv4.daddr.s_addr = match.mask->dst;
                flow->l3_key.ipv4.saddr.s_addr = match.key->src;
                flow->l3_mask.ipv4.saddr.s_addr = match.mask->src;
        }

Then, the flow action parsing code should look like this:

        flow_action_for_each(i, act, flow_action) {
                switch (act->id) {
                case FLOW_ACTION_KEY_DROP:
                        actions->flags |= DRIVER_XYZ_ACTION_FLAG_DROP;
                        break;
                case ...:
                        break;
                default:
                        return -EOPNOTSUPP;
                }
        }
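
The flow_action_for_each() helper is expected to be a plain iterator
over the keys array, roughly along these lines (sketch, not
necessarily the exact definition):

        #define flow_action_for_each(i, act, actions)                  \
                for (i = 0, act = &(actions)->keys[0];                  \
                     i < (actions)->num_keys;                           \
                     act = &(actions)->keys[++i])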

A quick description of the patches:

Patch #1 introduces the flow_match structure and two new interfaces to
         check for rule selectors that are used and to fetch the key
         and the mask with one single function call. This patch also
         introduces the flow_rule structure to avoid a follow-up patch
         that would largely touch the same lines of code.

Patch #2 is a preparation patch for the mlx5e driver, for the packet
         edit parser.

Patch #3 introduces the flow_action infrastructure, as described above.

Patch #4 adds a function to translate the TC actions to the
         flow_action representation, from cls_flower.

Patch #5 adds infrastructure to fetch statistics into a container
         structure and to synchronize them to the TC actions from
         cls_flower. This is another preparation patch for patch #7.

Patch #6 converts the drivers to use the flow_action infrastructure.

Patch #7 stops exposing TC actions to drivers, now that they have
         been converted to use the flow_action infrastructure in
         patch #6.

Patch #8 adds support for the wake-up-on-lan and queue actions to the
         flow_action infrastructure; these are another two common
         actions supported by network interfaces, coming from the
         ethtool_rx_flow interface.

Patch #9 adds a function to translate from the ethtool_rx_flow_spec
         structure to the flow_rule representation. This is a simple
         translator, basic enough for its first client, the bcm_sf2
         driver, and it can easily be extended to support more
         selectors if a driver needs it.
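
A driver would then use it roughly as follows (sketch only, the
translator name below is illustrative, see patch #9 for the actual
interface):

        /* Hypothetical usage sketch, names are illustrative. */
        struct flow_rule *rule;

        rule = ethtool_rx_flow_spec_to_flow_rule(&cmd->fs);
        if (IS_ERR(rule))
                return PTR_ERR(rule);
        /* ... populate the hardware IR from rule->match and rule->action ... */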

Patch #10 updates bcm_sf2 to use this new translator function and
          updates its codebase to configure the hardware IR using the
          flow_action representation. This will allow later development
          of cls_flower support on top of the same backend code in the
          driver.

This patchset adds a new layer between drivers and the existing
software frontends, so it is a bit more code, but it is core
infrastructure common to everyone, and it comes with benefits for
driver developers:

1) No need to write ad-hoc driver code for each supported frontend:
   a single codebase populates the native hardware intermediate
   representation for each of the existing special-purpose packet
   classifiers such as TC, OVS, ethtool-rx-flow and net...

2) An independent API for driver offloads: using the existing TC
   action infrastructure makes it somewhat hard to see what features
   drivers currently support. Moreover, the TC software representation
   is no longer exposed to drivers, so future software changes to TC
   will not need to be propagated to drivers, where they might
   accidentally break the existing hardware offload code whenever the
   software frontend is extended with new features.

There is still room for more future work, such as introducing more
common infrastructure: most drivers use a hashtable, indexed by the
rule cookie, to represent the hardware table, and it should be
possible to share this.
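
A sketch of what such shared infrastructure might look like, based on
the existing rhashtable API (structure and field names here are
hypothetical):

        /* Hypothetical shared, cookie-indexed table of offloaded rules. */
        struct flow_offload_entry {
                struct rhash_head       node;
                unsigned long           cookie;
                /* driver private state follows */
        };

        static const struct rhashtable_params flow_offload_params = {
                .key_len        = sizeof(unsigned long),
                .key_offset     = offsetof(struct flow_offload_entry, cookie),
                .head_offset    = offsetof(struct flow_offload_entry, node),
                .automatic_shrinking = true,
        };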

I have done basic testing of this initial RFC, so more extensive
testing to make sure things don't break may still be needed. That
would happen at a later stage if feedback is positive.

Comments welcome as usual, thanks.

Pablo Neira Ayuso (10):
  flow_dissector: add flow_rule and flow_match structures and use them
  net/mlx5e: allow two independent packet edit actions
  flow_dissector: add flow action infrastructure
  cls_flower: add translator to flow_action representation
  cls_flower: add statistics retrieval infrastructure and use it
  drivers: net: use flow action infrastructure
  cls_flower: don't expose TC actions to drivers anymore
  flow_dissector: add wake-up-on-lan and queue to flow_action
  flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator
  dsa: bcm_sf2: use flow_rule infrastructure

 drivers/net/dsa/bcm_sf2_cfp.c                      | 311 ++++-----
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c       | 256 +++-----
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c   | 451 ++++++-------
 drivers/net/ethernet/intel/i40e/i40e_main.c        | 178 ++---
 drivers/net/ethernet/intel/i40evf/i40evf_main.c    | 194 +++---
 drivers/net/ethernet/intel/igb/igb_main.c          |  64 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 725 ++++++++++-----------
 drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c |   2 +-
 .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  | 260 ++++----
 drivers/net/ethernet/netronome/nfp/flower/action.c | 194 +++---
 drivers/net/ethernet/netronome/nfp/flower/match.c  | 416 ++++++------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 150 ++---
 drivers/net/ethernet/qlogic/qede/qede_filter.c     |  93 ++-
 include/net/flow_dissector.h                       | 185 ++++++
 include/net/pkt_cls.h                              |  20 +-
 net/core/flow_dissector.c                          | 341 ++++++++++
 net/sched/cls_flower.c                             | 151 ++++-
 17 files changed, 2199 insertions(+), 1792 deletions(-)

--
2.11.0
