Date:   Fri, 24 Sep 2021 07:30:30 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Jakub Kicinski <kuba@...nel.org>, davem@...emloft.net
Cc:     netdev@...r.kernel.org, eric.dumazet@...il.com, weiwan@...gle.com,
        xuanzhuo@...ux.alibaba.com
Subject: Re: [PATCH net-next] net: make napi_disable() symmetric with enable



On 9/23/21 9:02 PM, Jakub Kicinski wrote:
> Commit 3765996e4f0b ("napi: fix race inside napi_enable") fixed
> an ordering bug in napi_enable() and made napi_enable() diverge
> from napi_disable(). The state transitions done on disable are
> not symmetric to those done on enable.
> 
> There is no known bug in napi_disable(); this is just refactoring.
> 
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> Does this look like a reasonable cleanup?
> 
> TBH my preference would be to stick to the code we have in
> disable, and refactor enable back to individual atomic ops, just
> in the right order. I find the series of atomic ops far easier
> to read, and cmpxchg is not really required here.

I think the RT crowd does not like the cmpxchg(), but I guess now
that we have them in the fast path, we are a bit stuck.
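
For reference (not part of this patch), the enable side after commit
3765996e4f0b is roughly the following cmpxchg loop; quoted from memory,
so treat it as an approximation rather than the exact tree contents:

	void napi_enable(struct napi_struct *n)
	{
		unsigned long val, new;

		do {
			val = READ_ONCE(n->state);
			BUG_ON(!test_bit(NAPI_STATE_SCHED, &val));

			/* Clearing SCHED/NPSVC is what lets the napi run again. */
			new = val & ~(NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC);
			if (n->dev->threaded && n->thread)
				new |= NAPIF_STATE_THREADED;
		} while (cmpxchg(&n->state, val, new) != val);
	}

The disable path below mirrors that shape: one cmpxchg loop that sets
SCHED/NPSVC and clears THREADED/PREFER_BUSY_POLL in a single transition.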

> ---
>  net/core/dev.c | 18 ++++++++++++------
>  1 file changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 62ddd7d6e00d..0d297423b304 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -6900,19 +6900,25 @@ EXPORT_SYMBOL(netif_napi_add);
>  
>  void napi_disable(struct napi_struct *n)
>  {
> +	unsigned long val, new;
> +
>  	might_sleep();
>  	set_bit(NAPI_STATE_DISABLE, &n->state);
>  
> -	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
> -		msleep(1);
> -	while (test_and_set_bit(NAPI_STATE_NPSVC, &n->state))
> -		msleep(1);
> +	do {
> +		val = READ_ONCE(n->state);
> +		if (val & (NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC)) {
> +			msleep(1);

Patch seems good to me.

We could also replace this pessimistic msleep(1) with a more opportunistic
usleep_range(20, 200); see the (untested) sketch after the quoted hunk below.

> +			continue;
> +		}
> +
> +		new = val | NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC;
> +		new &= ~(NAPIF_STATE_THREADED | NAPIF_STATE_PREFER_BUSY_POLL);
> +	} while (cmpxchg(&n->state, val, new) != val);
>  
>  	hrtimer_cancel(&n->timer);
>  
> -	clear_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state);
>  	clear_bit(NAPI_STATE_DISABLE, &n->state);
> -	clear_bit(NAPI_STATE_THREADED, &n->state);
>  }
>  EXPORT_SYMBOL(napi_disable);
>  
> 
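
A rough, untested sketch of the usleep_range() variant mentioned above,
applied on top of this patch (illustrative only, not a submitted change):

	void napi_disable(struct napi_struct *n)
	{
		unsigned long val, new;

		might_sleep();
		set_bit(NAPI_STATE_DISABLE, &n->state);

		do {
			val = READ_ONCE(n->state);
			if (val & (NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC)) {
				/* Back off for 20-200us instead of a full jiffy. */
				usleep_range(20, 200);
				continue;
			}

			new = val | NAPIF_STATE_SCHED | NAPIF_STATE_NPSVC;
			new &= ~(NAPIF_STATE_THREADED | NAPIF_STATE_PREFER_BUSY_POLL);
		} while (cmpxchg(&n->state, val, new) != val);

		hrtimer_cancel(&n->timer);

		clear_bit(NAPI_STATE_DISABLE, &n->state);
	}

The only difference from the quoted hunk is the sleep primitive; the state
transitions themselves are unchanged.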
