Date: Thu, 10 Oct 2019 08:27:49 +0200
From: Jonas Bonn <jonas.bonn@...rounds.com>
To: Paolo Abeni <pabeni@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
"David S . Miller" <davem@...emloft.net>,
John Fastabend <john.fastabend@...il.com>
Subject: Re: Packet gets stuck in NOLOCK pfifo_fast qdisc
Hi Paolo,
On 09/10/2019 21:14, Paolo Abeni wrote:
> On Wed, 2019-10-09 at 08:46 +0200, Jonas Bonn wrote:
>> Hi,
>>
>> The lockless pfifo_fast qdisc has an issue with packets getting stuck in
>> the queue. What appears to happen is:
>>
>> i) Thread 1 holds the 'seqlock' on the qdisc and dequeues packets.
>> ii) Thread 1 dequeues the last packet in the queue.
>> iii) Thread 1 iterates through the qdisc->dequeue function again and
>> determines that the queue is empty.
>>
>> iv) Thread 2 queues up a packet. Since 'seqlock' is busy, it just
>> assumes the packet will be dequeued by whoever is holding the lock.
>>
>> v) Thread 1 releases 'seqlock'.
>>
>> After v), nobody will check if there are packets in the queue until a
>> new packet is enqueued. Thereby, the packet enqueued by Thread 2 may be
>> delayed indefinitely.
>
> I think you are right.
>
> It looks like this possible race is present since the initial lockless
> implementation - commit 6b3ba9146fe6 ("net: sched: allow qdiscs to
> handle locking")
>
> Anyhow, the race window looks quite tiny - I never observed that
> issue in my tests. Do you have a working reproducer?
Yes, it's reliably reproducible. We do network latency measurements and
we see latency spikes for the packets that get stuck in the queue.
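For reference, the interleaving looks roughly like this (a condensed,
non-compilable sketch of the NOLOCK xmit path; the function names are the
real kernel ones, but the flow is simplified):

/* Thread 1: runs the qdisc */
if (qdisc_run_begin(q)) {               /* grabs the 'seqlock' */
        while ((skb = dequeue_skb(q, ...)))
                sch_direct_xmit(skb, q, ...);
        /* queue now observed empty */

        /* Thread 2, meanwhile:
         *   q->enqueue(skb, q, &to_free);
         *   qdisc_run_begin(q) fails (seqlock busy), so thread 2
         *   returns, assuming the lock holder will see the packet.
         */

        qdisc_run_end(q);               /* drops the 'seqlock' without
                                         * re-checking the ring: the
                                         * packet from thread 2 sits in
                                         * the queue until the next
                                         * enqueue triggers a new run */
}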
>
> Something like the following code - completely untested - can possibly
> address the issue, but it's a bit rough and I would prefer not to add
> additional complexity to the lockless qdiscs. Can you please give it a
> spin?
Your change looks reasonable. I'll give it a try.
>
> Thanks,
>
> Paolo
> ---
> diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
> index 6a70845bd9ab..65a1c03330d6 100644
> --- a/include/net/pkt_sched.h
> +++ b/include/net/pkt_sched.h
> @@ -113,18 +113,23 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
> struct net_device *dev, struct netdev_queue *txq,
> spinlock_t *root_lock, bool validate);
>
> -void __qdisc_run(struct Qdisc *q);
> +int __qdisc_run(struct Qdisc *q);
>
> static inline void qdisc_run(struct Qdisc *q)
> {
> + int quota = 0;
> +
> if (qdisc_run_begin(q)) {
> /* NOLOCK qdisc must check 'state' under the qdisc seqlock
> * to avoid racing with dev_qdisc_reset()
> */
> if (!(q->flags & TCQ_F_NOLOCK) ||
> likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
> - __qdisc_run(q);
> + quota = __qdisc_run(q);
> qdisc_run_end(q);
> +
> + if (quota > 0 && q->flags & TCQ_F_NOLOCK && q->ops->peek(q))
> + __netif_schedule(q);
Not sure this is relevant, but there's a subtle difference in the way
that the underlying ptr_ring peeks at the queue head and checks whether
the queue is empty.
For peek it's:
READ_ONCE(r->queue[r->consumer_head]);
For is_empty it's:
!r->queue[READ_ONCE(r->consumer_head)];
The placement of the READ_ONCE differs between the two. I can't work out
whether this difference is significant or not. If it is, then perhaps
an is_empty() method is needed on the qdisc_ops?
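Something along these lines, purely as an illustration of what I mean
(neither the is_empty op nor pfifo_fast_is_empty() exists in the tree
today; this is an untested sketch that reuses the existing band2list()
and __skb_array_empty() helpers):

/* Hypothetical ->is_empty() hook on Qdisc_ops, so qdisc_run() could
 * re-check the queue with the "empty" semantics of ptr_ring instead of
 * going through ->peek().  It would be wired up in pfifo_fast_ops as
 * .is_empty = pfifo_fast_is_empty.
 */
static bool pfifo_fast_is_empty(struct Qdisc *qdisc)
{
	struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
	int band;

	for (band = 0; band < PFIFO_FAST_BANDS; band++) {
		struct skb_array *q = band2list(priv, band);

		if (!__skb_array_empty(q))
			return false;
	}
	return true;
}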
/Jonas