Message-Id: <20080812.221103.91087925.davem@davemloft.net>
Date: Tue, 12 Aug 2008 22:11:03 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: herbert@...dor.apana.org.au
Cc: jarkao2@...il.com, netdev@...r.kernel.org
Subject: Re: [PATCH] pkt_sched: Destroy gen estimators under rtnl_lock().
From: Herbert Xu <herbert@...dor.apana.org.au>
Date: Wed, 13 Aug 2008 14:30:19 +1000
> David Miller <davem@...emloft.net> wrote:
> >
> > The qdisc pointer traverses to the softirq handler, which can be run
> > in a process context (via ksoftirqd), and this pointer gets there
> > via the per-cpu ->output_queue.
>
> Here are two possible solutions:
>
> 1) The active way: use smp_call_function() to forcibly remove the qdiscs
> in question from each output_queue.
>
> 2) The passive way: Make dev_deactivate call yield() until no qdiscs
> are on an output_queue. This assumes there is some sort of dead
> flag detection on the output_queue side so it doesn't keep going
> forever.
Yes, we'll need some kind of dead flag it seems.
Another thing we can do is grab the __QDISC_STATE_RUNNING bit in
the yield loop.
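Something like this, perhaps (totally untested sketch; the
__QDISC_STATE_DEACTIVATED bit doesn't exist, it just stands in for
whatever dead flag we end up adding):

/* Sketch only: mark the qdisc dead, then wait until any in-progress
 * qdisc_run() on it has finished.
 */
static void qdisc_wait_quiesce(struct Qdisc *q)
{
	/* Hypothetical dead flag, checked on the output_queue side
	 * before the qdisc gets scheduled again.
	 */
	set_bit(__QDISC_STATE_DEACTIVATED, &q->state);

	/* Grabbing __QDISC_STATE_RUNNING ourselves means any
	 * concurrent qdisc_run() has completed, and with the dead
	 * flag visible nobody should start a new one.
	 */
	while (test_and_set_bit(__QDISC_STATE_RUNNING, &q->state))
		yield();

	clear_bit(__QDISC_STATE_RUNNING, &q->state);
}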
But actually, I think Jarek has a point.
The existing loop there in dev_deactivate() should work _iff_ we make
it look at the proper qdisc.
This is another case where I didn't transform things correctly. The
old code worked on dev->state since that's where we kept what used
to be __LINK_STATE_QDISC_RUNNING.
But now that state is kept in the qdisc itself, and since we just
zapped the active qdisc, the old one is in ->qdisc_sleeping, not
->qdisc.
So, just as in one of Jarek's patches, we should simply change
dev_queue->qdisc to dev_queue->qdisc_sleeping there, and that should
take care of the bulk of the issues.
Shouldn't it?
Hmmm... maybe we have to sample __QDISC_STATE_SCHED too.
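IOW, the test loop would end up looking something like this (untested
sketch; the helper name is made up, it's just whatever the existing
check in dev_deactivate() becomes):

/* Sketch: look at the qdisc we just put to sleep, not at the noop
 * qdisc that dev_deactivate() installed in ->qdisc, and treat a qdisc
 * that is still scheduled on an output_queue as busy too.
 */
static bool some_qdisc_is_busy(struct net_device *dev)
{
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++) {
		struct netdev_queue *dev_queue;
		spinlock_t *root_lock;
		struct Qdisc *q;
		int val;

		dev_queue = netdev_get_tx_queue(dev, i);
		q = dev_queue->qdisc_sleeping;	/* was ->qdisc */
		root_lock = qdisc_lock(q);

		spin_lock_bh(root_lock);
		val = test_bit(__QDISC_STATE_RUNNING, &q->state) ||
		      test_bit(__QDISC_STATE_SCHED, &q->state);
		spin_unlock_bh(root_lock);

		if (val)
			return true;
	}
	return false;
}

with dev_deactivate() still just doing:

	while (some_qdisc_is_busy(dev))
		yield();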