Message-ID: <rq4kfr3ze5thlcqs3peuj4qktel4hv5svqwqdh7ywuvrex7xiu@vf45lxvtj4kr>
Date: Mon, 22 Apr 2024 12:48:19 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
Willem de Bruijn <willemb@...gle.com>, Neal Cardwell <ncardwell@...gle.com>, eric.dumazet@...il.com,
Jonathan Heathcote <jonathan.heathcote@....co.uk>, Soheil Hassas Yeganeh <soheil@...gle.com>
Subject: Re: [PATCH net] net: fix sk_memory_allocated_{add|sub} vs softirqs
On Sun, Apr 21, 2024 at 05:52:48PM +0000, Eric Dumazet wrote:
> Jonathan Heathcote reported a regression caused by the blamed commit
> on the aarch64 architecture.
>
> x86 happens to have irq-safe __this_cpu_add_return()
> and __this_cpu_sub(), but this is not generic.
>
> I think my confusion came from the "struct sock" argument,
> because these helpers are called with a locked socket.
> But the memory accounting is per-proto (and per-cpu after
> the blamed commit). We might cleanup these helpers later
> to directly accept a "struct proto *proto" argument.
>
> Switch to this_cpu_add_return() and this_cpu_xchg()
> operations, and get rid of preempt_disable()/preempt_enable() pairs.
>
> Fast path becomes a bit faster as a result :)
>
> Many thanks to Jonathan Heathcote for his awesome report and
> investigations.
>
> Fixes: 3cd3399dd7a8 ("net: implement per-cpu reserves for memory_allocated")
> Reported-by: Jonathan Heathcote <jonathan.heathcote@....co.uk>
> Closes: https://lore.kernel.org/netdev/VI1PR01MB42407D7947B2EA448F1E04EFD10D2@VI1PR01MB4240.eurprd01.prod.exchangelabs.com/
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Soheil Hassas Yeganeh <soheil@...gle.com>
Reviewed-by: Shakeel Butt <shakeel.butt@...ux.dev>