Message-ID: <CAF=yD-Jkq3Fvqo8NkwNA4he1RWifFMVqjX11c_0z9FED9v60CQ@mail.gmail.com>
Date: Tue, 14 Nov 2017 21:04:01 -0500
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: Daniel Borkmann <daniel@...earbox.net>,
Eric Dumazet <eric.dumazet@...il.com>, make0818@...il.com,
Network Development <netdev@...r.kernel.org>,
Jiří Pírko <jiri@...nulli.us>,
Cong Wang <xiyou.wangcong@...il.com>
Subject: Re: [RFC PATCH 06/17] net: sched: explicit locking in gso_cpu fallback
>> -static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>> +static inline int __dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>> {
>
> Perhaps dev_requeue_skb_qdisc_locked is more descriptive. Or
> adding a lockdep_is_held(..) also documents that the __locked variant
> below is not just a lock/unlock wrapper around this inner function.
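A kernel-style sketch of that suggestion (not buildable standalone, and the name `dev_requeue_skb_qdisc_locked` is only the reviewer's proposed rename): a `lockdep_assert_held()` at the top documents and, with lockdep enabled, enforces the locking contract, instead of relying on the `__` prefix alone:

```c
static inline int dev_requeue_skb_qdisc_locked(struct sk_buff *skb,
					       struct Qdisc *q)
{
	lockdep_assert_held(qdisc_lock(q));

	__skb_queue_head(&q->gso_skb, skb);
	q->qstats.requeues++;
	qdisc_qstats_backlog_inc(q, skb);
	q->q.qlen++;	/* it's still part of the queue */
	/* ... */
}
```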
>
>> - q->gso_skb = skb;
>> + __skb_queue_head(&q->gso_skb, skb);
>> q->qstats.requeues++;
>> qdisc_qstats_backlog_inc(q, skb);
>> q->q.qlen++; /* it's still part of the queue */
>> @@ -57,6 +56,30 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>> return 0;
>> }
>>
>> +static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
>> +{
>> + spinlock_t *lock = qdisc_lock(q);
>> +
>> + spin_lock(lock);
>> + __skb_queue_tail(&q->gso_skb, skb);
>
> why does this requeue at the tail, unlike __dev_requeue_skb?

I guess that requeue has to queue at the tail in the lockless case,
and it does not matter in the qdisc_locked case, as then there can
only ever be at most one outstanding gso_skb.