Message-ID: <20241218154426.E4hsgTfF@linutronix.de>
Date: Wed, 18 Dec 2024 16:44:26 +0100
From: Sebastian Sewior <bigeasy@...utronix.de>
To: Steffen Klassert <steffen.klassert@...unet.com>
Cc: Alexei Starovoitov <alexei.starovoitov@...il.com>,
Network Development <netdev@...r.kernel.org>,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: xfrm in RT
On 2024-12-18 09:32:26 [+0100], Steffen Klassert wrote:
> On Tue, Dec 17, 2024 at 04:07:16PM -0800, Alexei Starovoitov wrote:
> > Hi,
> >
> > Looks like xfrm isn't friendly to PREEMPT_RT.
Thank you for the report.
> > xfrm_input_state_lookup() is doing:
> >
> > int cpu = get_cpu();
> > ...
> > spin_lock_bh(&net->xfrm.xfrm_state_lock);
>
> We just need the cpu as a lookup key, there is no need to
> stay on that cpu. So we can just do put_cpu()
> directly after we fetched the value.
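A minimal sketch of that variant, reusing the names from the function
in the diff below (untested):

	int cpu = get_cpu();

	state_cache_input = per_cpu_ptr(net->xfrm.state_cache_input, cpu);
	put_cpu();	/* the CPU number was only needed to pick the list */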
I would assume that the espX_gro_receive() caller is within NAPI. I
can't tell in which context xfrm_input() runs.
However, if you don't care about staying on the current CPU for the
whole time (the current get_cpu() -> put_cpu() span), you could do
something like
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 67ca7ac955a37..66b108a5b87d4 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -1116,9 +1116,8 @@ struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
 {
 	struct hlist_head *state_cache_input;
 	struct xfrm_state *x = NULL;
-	int cpu = get_cpu();
 
-	state_cache_input = per_cpu_ptr(net->xfrm.state_cache_input, cpu);
+	state_cache_input = raw_cpu_ptr(net->xfrm.state_cache_input);
 
 	rcu_read_lock();
 	hlist_for_each_entry_rcu(x, state_cache_input, state_cache_input) {
@@ -1150,7 +1149,6 @@ struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
 out:
 	rcu_read_unlock();
-	put_cpu();
 
 	return x;
 }
 EXPORT_SYMBOL(xfrm_input_state_lookup);
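As I understand it (a sketch of the reasoning, untested): raw_cpu_ptr()
resolves the pointer for whatever CPU the task currently runs on without
disabling preemption, so the spin_lock_bh() later in the function (a
sleeping lock on PREEMPT_RT) is no longer taken with preemption disabled:

	state_cache_input = raw_cpu_ptr(net->xfrm.state_cache_input);
	/*
	 * Preemption stays enabled, so the task may migrate and then walk
	 * another CPU's list. For a best-effort cache that is walked under
	 * rcu_read_lock() the worst case should be a cache miss and a
	 * fallback to the regular lookup.
	 */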
> I'll fix that,
Thank you.
> thanks!
Sebastian