Message-ID: <20210419152946.3n7adsd355rfeoda@lion.mk-sys.cz>
Date: Mon, 19 Apr 2021 17:29:46 +0200
From: Michal Kubecek <mkubecek@...e.cz>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: davem@...emloft.net, kuba@...nel.org, olteanv@...il.com,
ast@...nel.org, daniel@...earbox.net, andriin@...com,
edumazet@...gle.com, weiwan@...gle.com, cong.wang@...edance.com,
ap420073@...il.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linuxarm@...neuler.org,
mkl@...gutronix.de, linux-can@...r.kernel.org, jhs@...atatu.com,
xiyou.wangcong@...il.com, jiri@...nulli.us, andrii@...nel.org,
kafai@...com, songliubraving@...com, yhs@...com,
john.fastabend@...il.com, kpsingh@...nel.org, bpf@...r.kernel.org,
jonas.bonn@...rounds.com, pabeni@...hat.com, mzhivich@...mai.com,
johunt@...mai.com, albcamus@...il.com, kehuan.feng@...il.com,
a.fatoum@...gutronix.de, atenart@...nel.org,
alexander.duyck@...il.com, hdanton@...a.com, jgross@...e.com,
JKosina@...e.com
Subject: Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for
lockless qdisc
On Fri, Apr 16, 2021 at 09:16:48AM +0800, Yunsheng Lin wrote:
> Lockless qdisc has the following concurrency problem:
>     cpu0                      cpu1
>      .                          .
> q->enqueue                      .
>      .                          .
> qdisc_run_begin()               .
>      .                          .
> dequeue_skb()                   .
>      .                          .
> sch_direct_xmit()               .
>      .                          .
>      .                     q->enqueue
>      .                  qdisc_run_begin()
>      .                return and do nothing
>      .                          .
> qdisc_run_end()                 .
>
> cpu1 enqueues an skb without calling __qdisc_run() because cpu0
> has not released the lock yet and spin_trylock() returns false
> for cpu1 in qdisc_run_begin(), and cpu0 does not see the skb
> enqueued by cpu1 when calling dequeue_skb(), because cpu1 may
> enqueue the skb after cpu0 calls dequeue_skb() and before
> cpu0 calls qdisc_run_end().
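For reference, a minimal userspace sketch of the pre-patch lockless path
described above, with a pthread spinlock standing in for qdisc->seqlock;
this is an illustration of the logic, not the actual kernel code:

#include <pthread.h>
#include <stdbool.h>

struct qdisc {
        pthread_spinlock_t seqlock;     /* stands in for qdisc->seqlock */
};

/* Pre-patch behaviour: a failed trylock just returns false, so a CPU
 * that has already enqueued an skb gives up and relies on the current
 * lock holder to see the packet -- which it will not, if it is already
 * past dequeue_skb(). */
static bool qdisc_run_begin(struct qdisc *q)
{
        return pthread_spin_trylock(&q->seqlock) == 0;
}

static void qdisc_run_end(struct qdisc *q)
{
        /* nothing here re-checks the queue, so the enqueue is stranded */
        pthread_spin_unlock(&q->seqlock);
}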
>
> Lockless qdisc has another concurrency problem when
> tx_action is involved:
>
> cpu0(serving tx_action)       cpu1               cpu2
>           .                     .                  .
>           .                q->enqueue              .
>           .              qdisc_run_begin()         .
>           .                dequeue_skb()           .
>           .                     .             q->enqueue
>           .                     .                  .
>           .               sch_direct_xmit()        .
>           .                     .          qdisc_run_begin()
>           .                     .        return and do nothing
>           .                     .                  .
> clear __QDISC_STATE_SCHED       .                  .
> qdisc_run_begin()               .                  .
> return and do nothing           .                  .
>           .                     .                  .
>           .               qdisc_run_end()          .
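The tx_action side of the diagram can be sketched the same way,
building on the sketch above; the sched field and net_tx_action_sketch()
are simplified stand-ins for __QDISC_STATE_SCHED and net_tx_action(),
not the kernel code:

#include <stdatomic.h>

/* Assume struct qdisc from the previous sketch additionally has:
 *         atomic_bool sched;           stand-in for __QDISC_STATE_SCHED
 */
static void net_tx_action_sketch(struct qdisc *q)
{
        /* SCHED is cleared before taking the lock ... */
        atomic_store(&q->sched, false);

        /* ... so when this trylock fails because cpu1 still holds the
         * lock, nobody remains scheduled to dequeue cpu2's packet. */
        if (qdisc_run_begin(q)) {
                /* dequeue and transmit packets */
                qdisc_run_end(q);
        }
}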
>
> This patch fixes the above data race by:
> 1. Test STATE_MISSED before doing spin_trylock().
> 2. If the first spin_trylock() returns false and STATE_MISSED was
>    not set before the first spin_trylock(), set STATE_MISSED and
>    retry another spin_trylock(), in case the other CPU may not see
>    STATE_MISSED after it releases the lock.
> 3. Reschedule if STATE_MISSED is set after the lock is released
>    at the end of qdisc_run_end().
>
> For the tx_action case, STATE_MISSED is also set when cpu1 is at
> the end of qdisc_run_end(), so tx_action will be rescheduled again
> to dequeue the skb enqueued by cpu2.
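Put together, a sketch of the resulting qdisc_run_begin()/qdisc_run_end()
pair, revising the earlier userspace sketch: the atomic missed flag stands
in for __QDISC_STATE_MISSED, reschedule() is a stub for __netif_schedule(),
and the placement of the MISSED test relative to the first trylock is one
plausible reading of the steps above, not necessarily the exact patch:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct qdisc {
        pthread_spinlock_t seqlock;
        atomic_bool missed;             /* stands in for __QDISC_STATE_MISSED */
};

static void reschedule(struct qdisc *q)
{
        (void)q;        /* in the kernel: __netif_schedule() re-runs tx_action */
}

static bool qdisc_run_begin(struct qdisc *q)
{
        if (pthread_spin_trylock(&q->seqlock) == 0)
                return true;

        /* MISSED already set: another CPU has done the set-and-retry
         * below, so the lock holder is guaranteed to reschedule. */
        if (atomic_load(&q->missed))
                return false;

        /* Set MISSED, then retry once: the lock holder may have
         * released the lock before it could see the new flag. */
        atomic_store(&q->missed, true);

        return pthread_spin_trylock(&q->seqlock) == 0;
}

static void qdisc_run_end(struct qdisc *q)
{
        pthread_spin_unlock(&q->seqlock);

        /* Somebody failed qdisc_run_begin() while we held the lock;
         * reschedule so their packets are not stranded. */
        if (atomic_load(&q->missed))
                reschedule(q);
}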
>
> Clear STATE_MISSED before retrying a dequeue when the dequeue
> returns NULL, in order to reduce the overhead of the above double
> spin_trylock() and of the __netif_schedule() call.
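A sketch of that dequeue-side optimization, continuing the sketch above;
dequeue_one() is a hypothetical stand-in for the actual per-qdisc dequeue,
and C11 seq_cst atomics provide the ordering the kernel would need an
explicit barrier for:

struct sk_buff;                         /* opaque here; real skbs in the kernel */
struct sk_buff *dequeue_one(struct qdisc *q);   /* hypothetical dequeue helper */

static struct sk_buff *dequeue_skb_sketch(struct qdisc *q)
{
        bool retried = false;
        struct sk_buff *skb;

retry:
        skb = dequeue_one(q);
        if (!skb && !retried && atomic_load(&q->missed)) {
                /* Clear MISSED before looking again: a concurrent
                 * enqueue either becomes visible to this retry, or it
                 * sets MISSED afresh and gets rescheduled -- so most
                 * MISSED handling is absorbed here instead of in the
                 * double trylock and __netif_schedule() paths. */
                atomic_store(&q->missed, false);
                retried = true;
                goto retry;
        }
        return skb;
}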
>
> The performance impact of this patch, tested using pktgen and a
> dummy netdev with the pfifo_fast qdisc attached:
>
> threads   without+this_patch   with+this_patch   delta
>    1          2.61Mpps            2.60Mpps       -0.3%
>    2          3.97Mpps            3.82Mpps       -3.7%
>    4          5.62Mpps            5.59Mpps       -0.5%
>    8          2.78Mpps            2.77Mpps       -0.3%
>   16          2.22Mpps            2.22Mpps       -0.0%
>
> Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking")
> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> Tested-by: Juergen Gross <jgross@...e.com>
> ---
> V4: Change STATE_NEED_RESCHEDULE to STATE_MISSED, mirroring
>     NAPI's NAPIF_STATE_MISSED, and add Juergen's "Tested-by"
>     tag, as there is only renaming and typo fixing between
>     V3 and V4.
> V3: Fix a compile error and a few comment typos, remove the
>     __QDISC_STATE_DEACTIVATED checking, and update the
>     performance data.
> V2: Avoid the overhead of fixing the data race as much as
>     possible.
As pointed out in the discussion on v3, this patch may result in
significantly higher CPU consumption with multiple threads competing
on a saturated outgoing device. I missed this submission, so I haven't
checked it yet, but given the description of the v3->v4 changes above,
it is quite likely that it suffers from the same problem.

Michal