Message-ID: <465a540e-5296-32e7-f6a6-79942dfe2618@netrounds.com>
Date: Fri, 11 Oct 2019 02:39:48 +0200
From: Jonas Bonn <jonas.bonn@...rounds.com>
To: Paolo Abeni <pabeni@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
"David S . Miller" <davem@...emloft.net>,
John Fastabend <john.fastabend@...il.com>
Subject: Re: Packet gets stuck in NOLOCK pfifo_fast qdisc
Hi Paolo,
On 09/10/2019 21:14, Paolo Abeni wrote:
> Something like the following code - completely untested - can possibly
> address the issue, but it's a bit rough and I would prefer not adding
> additional complexity to the lockless qdiscs; can you please have a spin
> at it?
We've tested a couple of variants of this patch today, but unfortunately
it doesn't fix the problem of packets getting stuck in the queue.
A couple of comments:
i) On 5.4, there is the BYPASS path that also needs the same treatment,
as it essentially replicates the behaviour of qdisc_run, just without
the enqueue/dequeue steps.
ii) We are working a lot with the 4.19 kernel, so I backported the
patch to that version and tested there. There the solution should be
more robust, as the BYPASS path does not exist.
Unfortunately, in both cases we continue to see the issue of the "last
packet" getting stuck in the queue.
/Jonas
>
> Thanks,
>
> Paolo
> ---
> diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
> index 6a70845bd9ab..65a1c03330d6 100644
> --- a/include/net/pkt_sched.h
> +++ b/include/net/pkt_sched.h
> @@ -113,18 +113,23 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
> struct net_device *dev, struct netdev_queue *txq,
> spinlock_t *root_lock, bool validate);
>
> -void __qdisc_run(struct Qdisc *q);
> +int __qdisc_run(struct Qdisc *q);
>
> static inline void qdisc_run(struct Qdisc *q)
> {
> + int quota = 0;
> +
> if (qdisc_run_begin(q)) {
> /* NOLOCK qdisc must check 'state' under the qdisc seqlock
> * to avoid racing with dev_qdisc_reset()
> */
> if (!(q->flags & TCQ_F_NOLOCK) ||
> likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
> - __qdisc_run(q);
> + quota = __qdisc_run(q);
> qdisc_run_end(q);
> +
> + if (quota > 0 && q->flags & TCQ_F_NOLOCK && q->ops->peek(q))
> + __netif_schedule(q);
> }
> }
>
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 17bd8f539bc7..013480f6a794 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -376,7 +376,7 @@ static inline bool qdisc_restart(struct Qdisc *q, int *packets)
> return sch_direct_xmit(skb, q, dev, txq, root_lock, validate);
> }
>
> -void __qdisc_run(struct Qdisc *q)
> +int __qdisc_run(struct Qdisc *q)
> {
> int quota = dev_tx_weight;
> int packets;
> @@ -390,9 +390,10 @@ void __qdisc_run(struct Qdisc *q)
> quota -= packets;
> if (quota <= 0 || need_resched()) {
> __netif_schedule(q);
> - break;
> + return 0;
> }
> }
> + return quota;
> }
>
> unsigned long dev_trans_start(struct net_device *dev)
>