Message-ID: <aRSvnfdhO2G1DXJI@lore-desk>
Date: Wed, 12 Nov 2025 17:02:37 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Pablo Neira Ayuso <pablo@...filter.org>
Cc: "David S. Miller" <davem@...emloft.net>,
	David Ahern <dsahern@...nel.org>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Simon Horman <horms@...nel.org>,
	Jozsef Kadlecsik <kadlec@...filter.org>,
	Shuah Khan <shuah@...nel.org>, Andrew Lunn <andrew+netdev@...n.ch>,
	Phil Sutter <phil@....cc>, Florian Westphal <fw@...len.de>,
	netdev@...r.kernel.org, netfilter-devel@...r.kernel.org,
	coreteam@...filter.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH nf-next v9 2/3] net: netfilter: Add IPIP flowtable tx sw
 acceleration

> Hi Lorenzo,

Hi Pablo,

> 
> On Fri, Nov 07, 2025 at 12:14:47PM +0100, Lorenzo Bianconi wrote:
> [...]
> > @@ -565,8 +622,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
> >  
> >  	dir = tuplehash->tuple.dir;
> >  	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
> > +	other_tuple = &flow->tuplehash[!dir].tuple;
> >  
> > -	if (nf_flow_encap_push(skb, &flow->tuplehash[!dir].tuple) < 0)
> > +	if (nf_flow_encap_push(state->net, skb, other_tuple))
> >  		return NF_DROP;
> >  
> >  	switch (tuplehash->tuple.xmit_type) {
> > @@ -577,7 +635,9 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
> >  			flow_offload_teardown(flow);
> >  			return NF_DROP;
> >  		}
> > -		neigh = ip_neigh_gw4(rt->dst.dev, rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr));
> > +		dest = other_tuple->tun_num ? other_tuple->tun.src_v4.s_addr
> > +					    : other_tuple->src_v4.s_addr;
> 
> I think this can be simplified if my series uses ip_hdr(skb)->daddr
> for rt_nexthop(), see the attached patch. This would be fetched _before_
> pushing the tunnel and layer 2 encapsulation headers. Then there is
> no need to fetch other_tuple and check whether tun_num is greater than
> zero.
> 
> See my sketch patch; I am going to give this a try. If this is
> correct, I would need one more iteration from you.
> diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
> index 8b74fb34998e..ff2b6c16c715 100644
> --- a/net/netfilter/nf_flow_table_ip.c
> +++ b/net/netfilter/nf_flow_table_ip.c
> @@ -427,6 +427,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
>  	struct flow_offload *flow;
>  	struct neighbour *neigh;
>  	struct rtable *rt;
> +	__be32 ip_dst;
>  	int ret;
>  
>  	tuplehash = nf_flow_offload_lookup(&ctx, flow_table, skb);
> @@ -449,6 +450,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
>  
>  	dir = tuplehash->tuple.dir;
>  	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
> +	ip_dst = ip_hdr(skb)->daddr;

I agree this patch will simplify my series (thx :)), but I guess we should move
the ip_dst initialization after nf_flow_encap_push(), since we need to route the
traffic according to the tunnel dst IP address, right?
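
Something along these lines on top of your sketch (just to show the ordering I
mean, not even compile-tested, and mixing in the nf_flow_encap_push() signature
from patch 2/3 of my series):

	dir = tuplehash->tuple.dir;
	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
	other_tuple = &flow->tuplehash[!dir].tuple;

	if (nf_flow_encap_push(state->net, skb, other_tuple))
		return NF_DROP;

	/* Read daddr only after the encap push: for tunneled flows it then
	 * points to the outer (tunnel) destination used for the nexthop
	 * lookup below.
	 */
	ip_dst = ip_hdr(skb)->daddr;

	switch (tuplehash->tuple.xmit_type) {
	case FLOW_OFFLOAD_XMIT_NEIGH:
		...
		neigh = ip_neigh_gw4(rt->dst.dev, rt_nexthop(rt, ip_dst));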

Regards,
Lorenzo

>  
>  	switch (tuplehash->tuple.xmit_type) {
>  	case FLOW_OFFLOAD_XMIT_NEIGH:
> @@ -458,7 +460,7 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
>  			flow_offload_teardown(flow);
>  			return NF_DROP;
>  		}
> -		neigh = ip_neigh_gw4(rt->dst.dev, rt_nexthop(rt, flow->tuplehash[!dir].tuple.src_v4.s_addr));
> +		neigh = ip_neigh_gw4(rt->dst.dev, rt_nexthop(rt, ip_dst));
>  		if (IS_ERR(neigh)) {
>  			flow_offload_teardown(flow);
>  			return NF_DROP;

