[<prev] [next>] [thread-next>] [day] [month] [year] [list]
Date: Fri, 15 Dec 2023 15:08:27 -0300
From: Victor Nogueira <>
Subject: [PATCH RFC net-next] net: sched: act_mirred: Extend the cpu mirred nest guard with an explicit loop ttl

As pointed out by Jamal in:

Mirred allows for infinite loops in certain use cases, such as the one below:

sudo ip netns add p4node
sudo ip link add p4port0 address 10:00:00:01:AA:BB type veth peer \
   port0 address 10:00:00:02:AA:BB

sudo ip link set dev port0 netns p4node
sudo ip a add dev p4port0
sudo ip neigh add dev p4port0 lladdr 10:00:00:02:aa:bb
sudo ip netns exec p4node ip a add dev port0
sudo ip netns exec p4node ip l set dev port0 up
sudo ip l set dev p4port0 up
sudo ip netns exec p4node tc qdisc add dev port0 clsact
sudo ip netns exec p4node tc filter add dev port0 ingress protocol ip \
   prio 10 matchall action mirred ingress redirect dev port0

ping -I p4port0 -c 1

To solve this, reintroduce a ttl variable attached to the skb (in
struct tc_skb_cb), which prevents infinite loops in use cases such as
the one described above.

The per-CPU nest variable (tcf_mirred_nest_level) is now used only for
deciding whether to call netif_rx or netif_receive_skb when sending the
packet to ingress.

Note that the ttl is incremented on every redirect/mirror, so with this
patch a chain of policies that redirects or mirrors across more than
MAX_REC_LOOP (4) devices will be considered a loop.

Signed-off-by: Victor Nogueira <>
 include/net/pkt_sched.h | 11 +++++++++++
 net/sched/act_mirred.c  | 11 +++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 9fa1d0794dfa..fb8234fd5324 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -282,6 +282,7 @@ struct tc_skb_cb {
 	u8 post_ct:1;
 	u8 post_ct_snat:1;
 	u8 post_ct_dnat:1;
+	u8 ttl:3;
 	u16 zone; /* Only valid if post_ct = true */
@@ -293,6 +294,16 @@ static inline struct tc_skb_cb *tc_skb_cb(const struct sk_buff *skb)
 	return cb;
 }
 
+static inline void tcf_ttl_set(struct sk_buff *skb, const u8 ttl)
+{
+	tc_skb_cb(skb)->ttl = ttl;
+}
+
+static inline u8 tcf_ttl_get(struct sk_buff *skb)
+{
+	return tc_skb_cb(skb)->ttl;
+}
+
 static inline bool tc_qdisc_stats_dump(struct Qdisc *sch,
 				       unsigned long cl,
 				       struct qdisc_walker *arg)
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index 0a711c184c29..42b267817f3c 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -29,7 +29,7 @@
 static LIST_HEAD(mirred_list);
 static DEFINE_SPINLOCK(mirred_list_lock);
-#define MIRRED_NEST_LIMIT    4
+#define MAX_REC_LOOP    4
 static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
 static bool tcf_mirred_is_act_redirect(int action)
@@ -233,7 +233,6 @@ TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
 	struct sk_buff *skb2 = skb;
 	bool m_mac_header_xmit;
 	struct net_device *dev;
-	unsigned int nest_level;
 	int retval, err = 0;
 	bool use_reinsert;
 	bool want_ingress;
@@ -243,9 +242,12 @@ TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
 	int m_eaction;
 	int mac_len;
 	bool at_nh;
+	u8 ttl;
-	nest_level = __this_cpu_inc_return(mirred_nest_level);
-	if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
+	__this_cpu_inc(mirred_nest_level);
+	ttl = tcf_ttl_get(skb);
+	if (unlikely(ttl + 1 > MAX_REC_LOOP)) {
 		net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
@@ -307,6 +309,7 @@ TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
 	skb2->skb_iif = skb->dev->ifindex;
 	skb2->dev = dev;
+	tcf_ttl_set(skb2, ttl + 1);
 	/* mirror is always swallowed */
 	if (is_redirect) {
