Message-ID: <53F922D1.70409@gmail.com>
Date: Sat, 23 Aug 2014 16:25:05 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: David Miller <davem@...emloft.net>, netdev@...r.kernel.org
CC: therbert@...gle.com, jhs@...atatu.com, hannes@...essinduktion.org,
edumazet@...gle.com, jeffrey.t.kirsher@...el.com,
rusty@...tcorp.com.au
Subject: Re: [PATCH 0/3] Basic deferred TX queue flushing infrastructure.
On 08/23/2014 01:28 PM, David Miller wrote:
>
> Over time, and most recently at the Networking Workshop during
> Kernel Summit in Chicago, we have discussed the idea of having some
> way to optimize the transmit of multiple TX packets at a time.
>
> There are several areas of overhead that could be amortized with such
> schemes. One has to do with locking and transactional overhead, and
> another has to do with device-specific costs.
>
> This patch set is aimed more at the device-specific costs.
>
> Typically the driver queues up a packet in the device's TX queue and
> then has to do something to have the device start processing that new
> entry. Sometimes this is just an MMIO write to a "tail" register, and
> in other cases it can involve something as expensive as a hypervisor
> call.
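For concreteness, the per-packet kick in a typical driver looks roughly
like the sketch below (a hedged illustration only; foo_xmit, foo_ring
and tail_reg are made-up names, not taken from any real driver or from
this patch set):

static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct foo_ring *ring = netdev_priv(dev);

	/* Fill the next TX descriptor with this skb. */
	foo_map_and_queue(ring, skb);

	/* The per-packet cost being discussed: one MMIO write to the
	 * "tail" register (or a hypervisor call) to tell the device
	 * that new work is available.
	 */
	writel(ring->next_to_use, ring->tail_reg);

	return NETDEV_TX_OK;
}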
The MMIO write isn't an issue by itself, at least on the x86
architecture, since it is a posted write that doesn't stall the CPU;
the cost doesn't bite until you encounter the next locked operation.
So in perf traces this often shows up as a hit on the qdisc lock right
after completing a transmit. I've seen it account for around 20% of
CPU utilization when I was doing routing work with ixgbe.
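Deferring that write is what makes the batching pay off: queue several
packets, then write the tail once, so both the posted write and the
stall it later forces at a locked instruction are amortized over the
whole batch. Something along these lines (again just a sketch; the
"more" hint stands in for whatever interface the series actually
provides, and isn't its real signature):

static netdev_tx_t foo_xmit_deferred(struct sk_buff *skb,
				     struct net_device *dev, bool more)
{
	struct foo_ring *ring = netdev_priv(dev);

	/* Fill the next TX descriptor with this skb. */
	foo_map_and_queue(ring, skb);

	/* Only kick the device on the last packet of the batch, so the
	 * tail write happens once per batch instead of once per packet.
	 */
	if (!more)
		writel(ring->next_to_use, ring->tail_reg);

	return NETDEV_TX_OK;
}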
Thanks,
Alex