Message-ID: <155057184028.20935.11848804617158437103.stgit@firesoul>
Date: Tue, 19 Feb 2019 11:24:00 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: netdev@...r.kernel.org, Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>
Subject: [PATCH bpf-next] bpf: add skb->queue_mapping write access from tc clsact

The skb->queue_mapping field already has read access, via
__sk_buff->queue_mapping. This patch allows tc qdisc clsact BPF programs
write access to queue_mapping, via tc_cls_act_is_valid_access.

It is already possible to change this via the TC filter action skbedit,
see tc-skbedit(8). Due to the lack of TC examples, let's show one:

# tc qdisc add dev ixgbe1 handle ffff: ingress
# tc filter add dev ixgbe1 parent ffff: matchall action skbedit queue_mapping 5
# tc filter list dev ixgbe1 parent ffff:

The most common mistake is forgetting that XPS (Transmit Packet Steering)
takes precedence over setting skb->queue_mapping. XPS is configured per
DEVICE, via a CPU hex mask written to
/sys/class/net/DEVICE/queues/tx-*/xps_cpus. To disable XPS for a queue,
set its mask to 00.
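
For example, to disable XPS on the first TX queue of ixgbe1 (repeat for
each tx-* queue as needed):

 # echo 00 > /sys/class/net/ixgbe1/queues/tx-0/xps_cpus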

The purpose of changing skb->queue_mapping is to influence the selection
of the net_device "txq" (struct netdev_queue), which in turn influences
the selection of the qdisc "root_lock" (via txq->qdisc->q.lock) and of
txq->_xmit_lock. When using the MQ qdisc, each txq->qdisc points to a
separate qdisc with its own locks, including HARD_TX_LOCK
(txq->_xmit_lock), allowing for CPU scalability.
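
The per-txq qdiscs require MQ as the root qdisc, which multiqueue drivers
get by default; it can also be set up explicitly (handle chosen
arbitrarily):

 # tc qdisc replace dev ixgbe2 root handle 100: mq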

Due to the lack of TC examples, let's also show how to attach clsact BPF
programs:

# tc qdisc add dev ixgbe2 clsact
# tc filter replace dev ixgbe2 egress bpf da obj XXX_kern.o sec tc_qmap2cpu
# tc filter list dev ixgbe2 egress
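
For completeness, a minimal sketch of what a program in the tc_qmap2cpu
section could look like (illustrative only, not part of this patch; the
object file name is elided above, and the fixed queue number is just a
placeholder):

  /* Steer all packets to TX queue 2 by writing skb->queue_mapping,
   * which this patch makes writable from tc clsact.
   */
  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>

  #define SEC(name) __attribute__((section(name), used))

  SEC("tc_qmap2cpu")
  int tc_qmap2cpu(struct __sk_buff *skb)
  {
          skb->queue_mapping = 2; /* placeholder queue index */
          return TC_ACT_OK;
  }

  char _license[] SEC("license") = "GPL";

Compiled with e.g. clang -O2 -target bpf -c, then attached as shown above.
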
Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
---
 net/core/filter.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 353735575204..d05ae8d05397 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -6238,6 +6238,7 @@ static bool tc_cls_act_is_valid_access(int off, int size,
 	case bpf_ctx_range(struct __sk_buff, tc_classid):
 	case bpf_ctx_range_till(struct __sk_buff, cb[0], cb[4]):
 	case bpf_ctx_range(struct __sk_buff, tstamp):
+	case bpf_ctx_range(struct __sk_buff, queue_mapping):
 		break;
 	default:
 		return false;
@@ -6642,9 +6643,16 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		break;
 
 	case offsetof(struct __sk_buff, queue_mapping):
-		*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
-				      bpf_target_off(struct sk_buff, queue_mapping, 2,
-						     target_size));
+		if (type == BPF_WRITE)
+			*insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
+					      bpf_target_off(struct sk_buff,
+							     queue_mapping,
+							     2, target_size));
+		else
+			*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
+					      bpf_target_off(struct sk_buff,
+							     queue_mapping,
+							     2, target_size));
 		break;
 
 	case offsetof(struct __sk_buff, vlan_present):