Message-Id: <20160113.112016.995229320622519258.davem@davemloft.net>
Date: Wed, 13 Jan 2016 11:20:16 -0500 (EST)
From: David Miller <davem@...emloft.net>
To: john.fastabend@...il.com
Cc: daniel@...earbox.net, eric.dumazet@...il.com, jhs@...atatu.com,
aduyck@...antis.com, brouer@...hat.com, john.r.fastabend@...el.com,
netdev@...r.kernel.org
Subject: Re: [RFC PATCH 06/12] net: sched: support qdisc_reset on NOLOCK qdisc
From: John Fastabend <john.fastabend@...il.com>
Date: Wed, 30 Dec 2015 09:53:13 -0800
> case 2: dev_deactivate sequence. This can come from a user bringing
> the interface down, which causes the gso_skb list to be flushed
> and the qlen zeroed. At the moment this is protected by the
> qdisc lock, so while we clear the qlen/gso_skb fields we are
> guaranteed no new skbs are added. For the lockless case,
> though, this is not true. To resolve this, move the qdisc_reset
> call to after the new qdisc is assigned and a grace period has
> elapsed, to ensure no new skbs can be enqueued. Further,
> the RTNL lock is held, so we cannot get another call to
> activate the qdisc while the skb lists are being freed.
>
> Finally, fix qdisc_reset to handle the per-CPU stats and
> skb lists.
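
For readers following along, a minimal sketch of the shape such a reset
takes is below. The per-CPU field names (cpu_qstats, cpu_skb_list) and
the function name are placeholders for illustration, not the actual
fields from the series; the key point from the changelog is that this
may only run after the new qdisc is visible and a grace period has
elapsed, so no CPU can still be enqueuing into these lists.

/* Sketch only: cpu_qstats/cpu_skb_list are assumed per-CPU fields. */
static void qdisc_reset_nolock(struct Qdisc *q)
{
	int cpu;

	if (q->ops->reset)
		q->ops->reset(q);

	/* Drop the holdover gso_skb, as the locked path does. */
	kfree_skb(q->gso_skb);
	q->gso_skb = NULL;

	for_each_possible_cpu(cpu) {
		struct gnet_stats_queue *qs = per_cpu_ptr(q->cpu_qstats, cpu);
		struct sk_buff_head *list = per_cpu_ptr(q->cpu_skb_list, cpu);

		/* Safe only post-grace-period: no concurrent enqueuers. */
		__skb_queue_purge(list);
		qs->qlen = 0;
		qs->backlog = 0;
	}
}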
Just wanted to note that some setups are sensitive to device
register/deregister costs. This is why we batch register and
unregister operations in the core, so that the RCU grace period
is consolidated into one when we register/unregister a lot of
net devices.
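
As a rough sketch of what that batching buys (the stub helpers here
are invented stand-ins for the real per-device teardown steps, not
the core's actual functions):

/* Sketch only: one synchronize_net() covers the whole batch,
 * instead of one grace period per device.
 */
static void unregister_batch(struct list_head *head)
{
	struct net_device *dev;

	/* Stage 1: make every device invisible to readers. */
	list_for_each_entry(dev, head, unreg_list)
		unhook_dev_stub(dev);

	/* Pay for a single RCU grace period for all of them. */
	synchronize_net();

	/* Stage 2: now free state readers could have been using. */
	list_for_each_entry(dev, head, unreg_list)
		free_dev_state_stub(dev);
}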
If we will now incur a new per-device RCU grace period on unregister
when the qdisc is destroyed, it could cause a regression.