Message-ID: <CA+mtBx8KfVQYcZFCJrQF__LJh7e0cRSPqSk1nn537Z_ATMjD9Q@mail.gmail.com>
Date: Thu, 17 Nov 2011 16:35:34 -0800
From: Tom Herbert <therbert@...gle.com>
To: Andy Fleming <afleming@...il.com>
Cc: Dave Taht <dave.taht@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: root_lock vs. device's TX lock
> Actually, I'm interested in circumventing *both* locks. Our SoC has
> some quite-versatile queueing infrastructure, such that (for many
> queueing setups) we can do all of the queueing in hardware, using
> per-cpu access portals. By hacking around the qdisc lock, and using a
> tx queue per core, we were able to achieve a significant speedup.
>
This was actually one of the motivations for my question. If we have
one TX queue per core and use a trivial mq-aware qdisc, for instance,
the locking becomes mostly overhead. I don't mind taking a lock once
per TX, but right now we're taking three! (root lock twice, and
device lock once).
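For reference, the lock choreography looks roughly like this
(paraphrased from __dev_xmit_skb() and sch_direct_xmit() in
net/sched/sch_generic.c; simplified, not verbatim):

	spin_lock(root_lock);		/* root lock, acquisition #1 */
	rc = q->enqueue(skb, q);	/* queue the packet */
	skb = dequeue_skb(q);		/* pull the next packet back out */
	spin_unlock(root_lock);		/* dropped around the device xmit */

	HARD_TX_LOCK(dev, txq, smp_processor_id());	/* device TX lock */
	if (!netif_tx_queue_frozen_or_stopped(txq))
		rc = dev_hard_start_xmit(skb, dev, txq);
	HARD_TX_UNLOCK(dev, txq);

	spin_lock(root_lock);		/* root lock, acquisition #2 */
	/* check xmit status, possibly requeue, continue qdisc_run */
	spin_unlock(root_lock);

So even on the uncontended fast path, every packet pays for two root
lock round trips plus the device TX lock.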
Even without one TX queue per core, I think the overhead savings may
still be there. Eric, I realize that a point of dropping the root
lock in sch_direct_xmit is to allow enqueuing to the qdisc and the
device xmit to proceed in parallel, but if you're using a trivial
qdisc then the time in the qdisc may be << the time in device xmit,
so the locking overhead could outweigh the gains from parallelism. At
the very least, this benefit is hugely variable depending on the
qdisc used.
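To put (purely illustrative) numbers on it: if a trivial qdisc's
enqueue+dequeue costs on the order of 100ns while the driver xmit
path costs ~1us, perfectly overlapping the two saves at most ~100ns
per packet, while the extra unlock/lock pair on the root lock adds
two more atomic operations on a possibly contended cacheline. Under
contention, that can easily eat the entire saving.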
Tom