Message-ID: <20070924171411.36494656@freepuppy.rosehill>
Date: Mon, 24 Sep 2007 17:14:11 -0700
From: Stephen Hemminger <shemminger@...ux-foundation.org>
To: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
Cc: <hadi@...erus.ca>, "David Miller" <davem@...emloft.net>,
<krkumar2@...ibm.com>, <johnpol@....mipt.ru>,
<herbert@...dor.apana.org.au>, <kaber@...sh.net>,
<jagana@...ibm.com>, <Robert.Olsson@...a.slu.se>,
<rick.jones2@...com>, <xma@...ibm.com>, <gaagaan@...il.com>,
<netdev@...r.kernel.org>, <rdreier@...co.com>,
<mcarlson@...adcom.com>, <jeff@...zik.org>, <mchan@...adcom.com>,
<general@...ts.openfabrics.org>, <kumarkr@...ux.ibm.com>,
<tgraf@...g.ch>, <randy.dunlap@...cle.com>, <sri@...ibm.com>
Subject: Re: [PATCH 1/4] [NET_SCHED] explicit hold dev tx lock
On Mon, 24 Sep 2007 16:47:06 -0700
"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com> wrote:
> > On Mon, 2007-24-09 at 15:57 -0700, Waskiewicz Jr, Peter P wrote:
> >
> > > I've looked at that as a candidate to use. The lock for enqueue
> > > would be needed when actually placing the skb into the appropriate
> > > software queue for the qdisc, so it'd be quick.
> >
> > The enqueue is easy to comprehend. The single device queue
> > lock should suffice. The dequeue is interesting:
>
> We should make sure the locking is symmetric between enqueue and
> dequeue. If we use the single device queue lock on enqueue, then
> dequeue will also need to check that lock in addition to the
> individual queue lock. Those details are more trivial than making
> the actual dequeue efficient, though.
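
To make the enqueue side concrete, here is a minimal sketch of an
enqueue that relies only on the single device queue lock. The names
my_sched_data, my_enqueue and band_for_skb are made up for
illustration; none of this is from the actual patches:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/pkt_sched.h>

struct my_sched_data {
	struct sk_buff_head queues[8];	/* per-band software queues,
					 * initialized in ->init() */
};

/* Hypothetical band selection; a real qdisc would classify here. */
static int band_for_skb(const struct sk_buff *skb)
{
	return skb->priority & 0x7;
}

/* Caller (dev_queue_xmit) already holds dev->queue_lock, so the
 * insertion into the per-band software queue stays quick.
 */
static int my_enqueue(struct sk_buff *skb, struct Qdisc *sch)
{
	struct my_sched_data *q = qdisc_priv(sch);

	__skb_queue_tail(&q->queues[band_for_skb(skb)], skb);
	sch->q.qlen++;
	return NET_XMIT_SUCCESS;
}
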
>
> > Maybe you can point me to some doc or describe the dequeue aspect
> > to me; are you planning to have an array of tx locks, one per ring?
> > What is the policy for how the qdisc queues are locked and mapped
> > to the tx rings?
>
> The dequeue locking would be pushed into the qdisc itself. This is how
> I had it originally, and it did make the code more complex, but it was
> successful at breaking the heavily-contended queue_lock apart. I have a
> subqueue structure right now in netdev, which only has queue_state (for
> netif_{start|stop}_subqueue). This state is checked in sch_prio right
> now in the dequeue for both prio and rr. My approach is to add a
> queue_lock to that struct, so each queue allocated by the driver
> would have its own lock. Then in dequeue, that lock would be taken
> when the skb is about to be dequeued.
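
As a rough illustration of that struct change (the state field is what
the subqueue struct holds today for netif_{start|stop}_subqueue; the
spinlock is the proposed addition, and the field name is a guess, not
the actual patch):

struct net_device_subqueue {
	unsigned long	state;		/* queue_state bits */
	spinlock_t	queue_lock;	/* proposed: one lock per tx ring */
};
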
>
> The skb->queue_mapping field also maps directly to the queue index
> itself, so the queue's lock can be released easily outside of the
> dequeue function. The policy would be to use spin_trylock() in
> dequeue, so that dequeue can still do work if enqueue or another
> dequeue is busy. And the allocation of qdisc queues to device queues
> is assumed to be one-to-one (that's how the qdisc behaves now).
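
Putting those two policies together, a hedged sketch of such a dequeue
(reusing the hypothetical my_sched_data from the enqueue sketch above;
only spin_trylock() and skb->queue_mapping come straight from the
description, the rest is assumed naming):

/* Try each ring's lock; if enqueue or another dequeue holds it,
 * skip that ring and keep doing useful work on the others.
 */
static struct sk_buff *my_dequeue(struct Qdisc *sch)
{
	struct my_sched_data *q = qdisc_priv(sch);
	struct net_device *dev = sch->dev;
	struct sk_buff *skb;
	int band;

	for (band = 0; band < dev->egress_subqueue_count; band++) {
		if (netif_subqueue_stopped(dev, band))
			continue;
		if (!spin_trylock(&dev->egress_subqueue[band].queue_lock))
			continue;

		skb = __skb_dequeue(&q->queues[band]);
		if (skb) {
			/* queue_mapping records the ring, so the lock
			 * can be dropped later, outside dequeue.
			 */
			skb->queue_mapping = band;
			sch->q.qlen--;
			return skb;
		}
		spin_unlock(&dev->egress_subqueue[band].queue_lock);
	}
	return NULL;
}

The matching unlock would then happen after hard_start_xmit(), keyed
off skb->queue_mapping.
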
>
> I really just need to put my nose to the grindstone and get the patches
> together and to the list...stay tuned.
>
> Thanks,
> -PJ Waskiewicz
> -
Since we are redoing this, is there any way to make the whole TX path
more lockless? The existing model seems to be more of a monitor than
a real locking model.
--
Stephen Hemminger <shemminger@...ux-foundation.org>