Message-Id: <20140521.155100.1364245684110064848.davem@davemloft.net>
Date: Wed, 21 May 2014 15:51:00 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: xii@...gle.com
Cc: netdev@...r.kernel.org, jasowang@...hat.com, mst@...hat.com,
maxk@....qualcomm.com, ncardwell@...gle.com, edumazet@...gle.com
Subject: Re: [PATCH v2] net-tun: restructure tun_do_read for better
sleep/wakeup efficiency
From: Xi Wang <xii@...gle.com>
Date: Fri, 16 May 2014 15:11:48 -0700
> tun_do_read always adds the current thread to the wait queue, even if a
> packet is ready to read. This is inefficient because both the sleeper and
> the waker have to acquire the wait queue spin lock when the packet rate
> is high.
>
> We restructure the read function and use common kernel networking
> routines to handle receive, sleep and wakeup. With this change,
> available packets are checked first before the reading thread is added
> to the wait queue.
>
> Ran performance tests with the following configuration:
>
> - my packet generator -> tap1 -> br0 -> tap0 -> my packet consumer
> - sender pinned to one core and receiver pinned to another core
> - sender sends small UDP packets (64 bytes total) as fast as it can
> - Sandy Bridge cores
> - throughput numbers are receiver-side goodput
>
> The results are
>
> baseline: 731k pkts/sec, cpu utilization at 1.50 cpus
> changed: 783k pkts/sec, cpu utilization at 1.53 cpus
>
> The performance difference is largely determined by packet rate and
> inter-cpu communication cost. For example, if the sender and
> receiver are pinned to different cpu sockets, the results are
>
> baseline: 558k pkts/sec, cpu utilization at 1.71 cpus
> changed: 690k pkts/sec, cpu utilization at 1.67 cpus
>
> Co-authored-by: Eric Dumazet <edumazet@...gle.com>
> Signed-off-by: Xi Wang <xii@...gle.com>
Applied to net-next, thanks.
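
For context on the restructuring the commit message describes, here is a
minimal kernel-style sketch of the check-before-sleep pattern (not the
actual tun_do_read() from the patch, which delegates to common kernel
networking receive helpers; the function and parameter names below are
illustrative only). The fast path dequeues a packet without touching the
wait queue at all, and the slow path re-checks the queue after
prepare_to_wait() so a wakeup that races with the check is not lost:

#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/wait.h>

/* Illustrative sketch only -- not the driver code from the patch. */
static struct sk_buff *read_one_packet(struct sk_buff_head *queue,
                                       wait_queue_head_t *wq, bool noblock)
{
        DEFINE_WAIT(wait);
        struct sk_buff *skb;

        for (;;) {
                /* Fast path: a queued packet means no wait queue traffic. */
                skb = skb_dequeue(queue);
                if (skb || noblock)
                        return skb;

                /*
                 * Slow path: register on the wait queue, then re-check the
                 * receive queue before sleeping so a packet queued (and a
                 * wakeup issued) in between is not missed.
                 */
                prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);
                skb = skb_dequeue(queue);
                if (skb || signal_pending(current)) {
                        finish_wait(wq, &wait);
                        return skb;
                }
                schedule();
                finish_wait(wq, &wait);
        }
}

The baseline behavior the commit message complains about corresponds to
going through the prepare_to_wait()/finish_wait() steps on every read, so
the sleeper and the waker serialize on the wait queue lock even when a
packet is already sitting in the receive queue.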
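
The test setup quoted above is straightforward to approximate from
userspace. A minimal sender along these lines (the pinned core number,
destination address/port and the 22-byte payload standing in for
"64 bytes total" on the wire are assumptions for illustration, not taken
from the patch) just blasts small UDP datagrams in a loop:

#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sched.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
        /* Pin the sender to one core, as in the test setup (core 0 is arbitrary). */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        sched_setaffinity(0, sizeof(set), &set);

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        /* Destination behind the tap/bridge path; address and port are placeholders. */
        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9000);
        inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr);

        /* 22-byte payload, roughly 64 bytes on the wire with Ethernet/IP/UDP headers. */
        char payload[22] = { 0 };

        for (;;) {
                if (sendto(fd, payload, sizeof(payload), 0,
                           (struct sockaddr *)&dst, sizeof(dst)) < 0)
                        perror("sendto");
        }
}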