Date: Wed, 12 Jun 2024 13:18:29 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org, "David S. Miller"
 <davem@...emloft.net>, Daniel Bristot de Oliveira <bristot@...nel.org>,
 Boqun Feng <boqun.feng@...il.com>, Daniel Borkmann <daniel@...earbox.net>,
 Eric Dumazet <edumazet@...gle.com>, Frederic Weisbecker
 <frederic@...nel.org>, Ingo Molnar <mingo@...hat.com>, Jakub Kicinski
 <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Peter Zijlstra
 <peterz@...radead.org>, Thomas Gleixner <tglx@...utronix.de>, Waiman Long
 <longman@...hat.com>, Will Deacon <will@...nel.org>, Ben Segall
 <bsegall@...gle.com>, Daniel Bristot de Oliveira <bristot@...hat.com>,
 Dietmar Eggemann <dietmar.eggemann@....com>, Juri Lelli
 <juri.lelli@...hat.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
 <vschneid@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [PATCH v6 net-next 08/15] net: softnet_data: Make
 xmit.recursion per task.

On Wed, 12 Jun 2024 18:44:34 +0200
Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:

> Softirq is preemptible on PREEMPT_RT. Without a per-CPU lock in
> local_bh_disable() there is no guarantee that only one device is
> transmitting at a time. With preemption and multiple senders it is
> possible that the per-CPU recursion counter gets incremented by
> different threads and exceeds XMIT_RECURSION_LIMIT, leading to a
> false-positive recursion alert.
> 
> Instead of adding a lock to protect the per-CPU variable, it is simpler
> to make the counter per-task. Sending and receiving skbs always happens
> in thread context anyway.
> 
> Having a lock to protect the per-CPU counter would needlessly
> block/serialize two sending threads. It would also require a recursive
> lock to ensure that the owner can increment the counter further.
> 
> Make the recursion counter a task_struct member on PREEMPT_RT.

I'm curious as to what the harm would be in using a per_task counter
instead of per_cpu outside of PREEMPT_RT. That way, we wouldn't need
the #ifdef.
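
(Illustrative sketch, not part of the original message: if the
net_xmit_recursion field from this patch were added to task_struct
unconditionally, the helpers in net/core/dev.h could collapse to a
single variant along these lines.)

/*
 * Hypothetical, #ifdef-free variant: the counter always lives in
 * task_struct, for both PREEMPT_RT and non-RT builds.
 */
static inline bool dev_xmit_recursion(void)
{
	return unlikely(current->net_xmit_recursion > XMIT_RECURSION_LIMIT);
}

static inline void dev_xmit_recursion_inc(void)
{
	current->net_xmit_recursion++;
}

static inline void dev_xmit_recursion_dec(void)
{
	current->net_xmit_recursion--;
}

The trade-off would presumably be one extra byte in task_struct and a
current-> dereference on non-RT builds instead of a per-CPU access.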

-- Steve


> 
> Cc: Ben Segall <bsegall@...gle.com>
> Cc: Daniel Bristot de Oliveira <bristot@...hat.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Juri Lelli <juri.lelli@...hat.com>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Valentin Schneider <vschneid@...hat.com>
> Cc: Vincent Guittot <vincent.guittot@...aro.org>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  include/linux/netdevice.h | 11 +++++++++++
>  include/linux/sched.h     |  4 +++-
>  net/core/dev.h            | 20 ++++++++++++++++++++
>  3 files changed, 34 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index d20c6c99eb887..b5ec072ec2430 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -3223,7 +3223,9 @@ struct softnet_data {
>  #endif
>  	/* written and read only by owning cpu: */
>  	struct {
> +#ifndef CONFIG_PREEMPT_RT
>  		u16 recursion;
> +#endif
>  		u8  more;
>  #ifdef CONFIG_NET_EGRESS
>  		u8  skip_txqueue;
> @@ -3256,10 +3258,19 @@ struct softnet_data {
>  
>  DECLARE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);
>  
> +#ifdef CONFIG_PREEMPT_RT
> +static inline int dev_recursion_level(void)
> +{
> +	return current->net_xmit_recursion;
> +}
> +
> +#else
> +
>  static inline int dev_recursion_level(void)
>  {
>  	return this_cpu_read(softnet_data.xmit.recursion);
>  }
> +#endif
>  
>  void __netif_schedule(struct Qdisc *q);
>  void netif_schedule_queue(struct netdev_queue *txq);
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 61591ac6eab6d..a9b0ca72db55f 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -975,7 +975,9 @@ struct task_struct {
>  	/* delay due to memory thrashing */
>  	unsigned                        in_thrashing:1;
>  #endif
> -
> +#ifdef CONFIG_PREEMPT_RT
> +	u8				net_xmit_recursion;
> +#endif
>  	unsigned long			atomic_flags; /* Flags requiring atomic access. */
>  
>  	struct restart_block		restart_block;
> diff --git a/net/core/dev.h b/net/core/dev.h
> index b7b518bc2be55..2f96d63053ad0 100644
> --- a/net/core/dev.h
> +++ b/net/core/dev.h
> @@ -150,6 +150,25 @@ struct napi_struct *napi_by_id(unsigned int napi_id);
>  void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu);
>  
>  #define XMIT_RECURSION_LIMIT	8
> +
> +#ifdef CONFIG_PREEMPT_RT
> +static inline bool dev_xmit_recursion(void)
> +{
> +	return unlikely(current->net_xmit_recursion > XMIT_RECURSION_LIMIT);
> +}
> +
> +static inline void dev_xmit_recursion_inc(void)
> +{
> +	current->net_xmit_recursion++;
> +}
> +
> +static inline void dev_xmit_recursion_dec(void)
> +{
> +	current->net_xmit_recursion--;
> +}
> +
> +#else
> +
>  static inline bool dev_xmit_recursion(void)
>  {
>  	return unlikely(__this_cpu_read(softnet_data.xmit.recursion) >
> @@ -165,5 +184,6 @@ static inline void dev_xmit_recursion_dec(void)
>  {
>  	__this_cpu_dec(softnet_data.xmit.recursion);
>  }
> +#endif
>  
>  #endif
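
(For context, not part of the patch: roughly how these helpers are used
around the transmit path in __dev_queue_xmit() in net/core/dev.c,
paraphrased; skb, dev, txq, cpu and rc come from the surrounding
function, and error handling is abbreviated. On PREEMPT_RT this region
can be preempted by another sender on the same CPU, which is why a
per-CPU counter could spuriously exceed XMIT_RECURSION_LIMIT.)

	if (dev_xmit_recursion())
		goto recursion_alert;	/* nesting limit exceeded: drop */

	HARD_TX_LOCK(dev, txq, cpu);
	if (!netif_xmit_stopped(txq)) {
		/* Count the nesting only around the actual transmit. */
		dev_xmit_recursion_inc();
		skb = dev_hard_start_xmit(skb, dev, txq, &rc);
		dev_xmit_recursion_dec();
	}
	HARD_TX_UNLOCK(dev, txq);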

