Date:   Tue, 13 Apr 2021 10:56:42 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Hillf Danton <hdanton@...a.com>
CC:     Juergen Gross <jgross@...e.com>, <netdev@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, Jiri Kosina <JKosina@...e.com>
Subject: Re: [PATCH net v3] net: sched: fix packet stuck problem for lockless
 qdisc

On 2021/4/13 10:21, Hillf Danton wrote:
> On Mon, 12 Apr 2021 20:00:43  Yunsheng Lin wrote:
>>
>> Yes, the below patch seems to fix the data race described in
>> the commit log.
>> Then what is the difference between my patch and your patch below:)
> 
> Hehe, this is one of the tough questions of the past bunch of weeks.
> 
> If a seqcount can detect the race between skb enqueue and dequeue, then we
> can't see any excuse for not rolling back to the point without NOLOCK.

I am not sure I understood what you meant above.

As I understand it, the patch below is essentially the same as
your previous patch; the only difference I see is that it uses
qdisc->pad instead of the __QDISC_STATE_NEED_RESCHEDULE bit (see
the sketch below).
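
For reference, a rough sketch of the flag-based idea (illustrative
only, not the exact patch: the enqueue side would mark the qdisc
with set_bit(), and the bit name and placement are assumptions):

/* Atomically consume the "may have missed an skb" mark when the
 * current owner is about to stop running the qdisc, and reschedule
 * the qdisc if the mark was set. Unlike a plain counter such as
 * qdisc->pad, test_and_clear_bit() cannot lose a concurrent update.
 */
static inline void qdisc_run_end(struct Qdisc *qdisc)
{
	write_seqcount_end(&qdisc->running);
	if (qdisc->flags & TCQ_F_NOLOCK) {
		spin_unlock(&qdisc->seqlock);

		if (test_and_clear_bit(__QDISC_STATE_NEED_RESCHEDULE,
				       &qdisc->state))
			__netif_schedule(qdisc);
	}
}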

So instead of proposing another patch, it would be better if you
commented on my patch and made improvements upon that.

> 
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -632,6 +632,7 @@ static int pfifo_fast_enqueue(struct sk_
>  			return qdisc_drop(skb, qdisc, to_free);
>  	}
>  
> +	qdisc->pad++;

As has been mentioned:
Doing the update in pfifo_fast_enqueue() unconditionally does not
seem to be performance friendly, which is something my patch
tries to avoid as much as possible (see the sketch below).
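
A sketch of what doing it conditionally could look like (again
illustrative, mirroring the structure of the current
qdisc_run_begin(); details unrelated to the race are omitted):

/* Only the contended path pays anything: the bit is set only when
 * the trylock fails, i.e. when another CPU already holds seqlock
 * and may stop running the qdisc without seeing the skb we just
 * enqueued. The common uncontended enqueue path stays untouched.
 */
static inline bool qdisc_run_begin(struct Qdisc *qdisc)
{
	if (qdisc->flags & TCQ_F_NOLOCK) {
		if (!spin_trylock(&qdisc->seqlock)) {
			set_bit(__QDISC_STATE_NEED_RESCHEDULE,
				&qdisc->state);
			return false;
		}
	} else if (qdisc_is_running(qdisc)) {
		return false;
	}
	raw_write_seqcount_begin(&qdisc->running);
	return true;
}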

>  	qdisc_update_stats_at_enqueue(qdisc, pkt_len);
>  	return NET_XMIT_SUCCESS;
>  }
> @@ -642,6 +643,7 @@ static struct sk_buff *pfifo_fast_dequeu
>  	struct sk_buff *skb = NULL;
>  	int band;
>  
> +	qdisc->pad = 0;
>  	for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) {
>  		struct skb_array *q = band2list(priv, band);
>  
> --- a/include/net/sch_generic.h
> +++ b/include/net/sch_generic.h
> @@ -176,8 +176,12 @@ static inline bool qdisc_run_begin(struc
>  static inline void qdisc_run_end(struct Qdisc *qdisc)
>  {
>  	write_seqcount_end(&qdisc->running);
> -	if (qdisc->flags & TCQ_F_NOLOCK)
> +	if (qdisc->flags & TCQ_F_NOLOCK) {
>  		spin_unlock(&qdisc->seqlock);
> +
> +		if (qdisc->pad != 0)
> +			__netif_schedule(qdisc);
> +	}
>  }
>  
>  static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
