Message-ID: <CAF=yD-+RS1v5vYy5ynQLaioc-527YN4a-P_+fvJ5AxeWLdqmwA@mail.gmail.com>
Date: Tue, 14 Nov 2017 19:41:13 -0500
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: Daniel Borkmann <daniel@...earbox.net>,
Eric Dumazet <eric.dumazet@...il.com>, make0818@...il.com,
Network Development <netdev@...r.kernel.org>,
Jiří Pírko <jiri@...nulli.us>,
Cong Wang <xiyou.wangcong@...il.com>
Subject: Re: [RFC PATCH 06/17] net: sched: explicit locking in gso_cpu fallback
> /* use instead of qdisc->dequeue() for all qdiscs queried with ->peek() */
> static inline struct sk_buff *qdisc_dequeue_peeked(struct Qdisc *sch)
> {
> - struct sk_buff *skb = sch->gso_skb;
> + struct sk_buff *skb = skb_peek(&sch->gso_skb);
>
> if (skb) {
> - sch->gso_skb = NULL;
> + skb = __skb_dequeue(&sch->gso_skb);
> qdisc_qstats_backlog_dec(sch, skb);
> sch->q.qlen--;
In lockless qdiscs, can this race, so that another CPU pulls the skb off
sch->gso_skb between skb_peek() and __skb_dequeue(), and __skb_dequeue()
returns NULL? Same for its use in qdisc_peek_dequeued.
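
For instance, dequeueing directly and only touching the stats when an skb
was actually returned would at least avoid acting on a stale peek (rough,
untested sketch; it does not answer whether the list itself may be touched
here without the qdisc lock):

static inline struct sk_buff *qdisc_dequeue_peeked(struct Qdisc *sch)
{
	struct sk_buff *skb = __skb_dequeue(&sch->gso_skb);

	if (skb) {
		/* only account for the skb we actually dequeued */
		qdisc_qstats_backlog_dec(sch, skb);
		sch->q.qlen--;
	} else {
		skb = sch->dequeue(sch);
	}

	return skb;
}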
> -static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
> +static inline int __dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
> {
Perhaps dev_requeue_skb_qdisc_locked is more descriptive. Or adding a
lockdep_is_held(..) check would also document that the _locked variant
below is not just a lock/unlock wrapper around this inner function
(rough sketch below the quoted hunk).
> - q->gso_skb = skb;
> + __skb_queue_head(&q->gso_skb, skb);
> q->qstats.requeues++;
> qdisc_qstats_backlog_inc(q, skb);
> q->q.qlen++; /* it's still part of the queue */
> @@ -57,6 +56,30 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
> return 0;
> }
>
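E.g., something along these lines (sketch only; the name and the exact
annotation are just to illustrate, with lockdep_assert_held() being the
usual spelling of that check):

static inline int dev_requeue_skb_qdisc_locked(struct sk_buff *skb,
					       struct Qdisc *q)
{
	/* documents that callers hold the qdisc root lock */
	lockdep_assert_held(qdisc_lock(q));

	__skb_queue_head(&q->gso_skb, skb);
	q->qstats.requeues++;
	qdisc_qstats_backlog_inc(q, skb);
	q->q.qlen++;	/* it's still part of the queue */
	__netif_schedule(q);

	return 0;
}
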
> +static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
> +{
> + spinlock_t *lock = qdisc_lock(q);
> +
> + spin_lock(lock);
> + __skb_queue_tail(&q->gso_skb, skb);
Why does this requeue at the tail, unlike __dev_requeue_skb? (See the
sketch below the quoted function.)
> + spin_unlock(lock);
> +
> + qdisc_qstats_cpu_requeues_inc(q);
> + qdisc_qstats_cpu_backlog_inc(q, skb);
> + qdisc_qstats_cpu_qlen_inc(q);
> + __netif_schedule(q);
> +
> + return 0;
> +}
> +
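
If the intent is to keep the same retry ordering as the qdisc-locked path,
I would have expected the head here as well, roughly (untested sketch; if
the tail is deliberate, a comment explaining why would help):

static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
{
	spinlock_t *lock = qdisc_lock(q);

	spin_lock(lock);
	/* requeue at the head so this skb is retried first, matching
	 * __dev_requeue_skb
	 */
	__skb_queue_head(&q->gso_skb, skb);
	spin_unlock(lock);

	qdisc_qstats_cpu_requeues_inc(q);
	qdisc_qstats_cpu_backlog_inc(q, skb);
	qdisc_qstats_cpu_qlen_inc(q);
	__netif_schedule(q);

	return 0;
}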