Message-ID: <52B992EE.5030401@mellanox.com>
Date: Tue, 24 Dec 2013 15:58:06 +0200
From: Hadar Hen Zion <hadarh@...lanox.com>
To: Richard Cochran <richardcochran@...il.com>
CC: Shawn Bohrer <shawn.bohrer@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Or Gerlitz <ogerlitz@...lanox.com>,
Amir Vadai <amirv@...lanox.com>, <netdev@...r.kernel.org>,
<tomk@...advisors.com>, Shawn Bohrer <sbohrer@...advisors.com>
Subject: Re: [PATCH net-next 1/2] mlx4_en: Add PTP hardware clock
On 12/23/2013 8:48 PM, Richard Cochran wrote:
> On Sun, Dec 22, 2013 at 03:13:12PM +0200, Hadar Hen Zion wrote:
>
>> 2. Adding a spin lock in the data path reduces performance by 15% when
>> HW timestamping is enabled. I did some testing, and replacing
>> spin_lock_irqsave with read/write_lock_irqsave prevents the
>> performance decrease.
>
> Why do the spin locks cause such a bottleneck?
>
> Is there really that much lock contention in your test?
>
> Your figure of 15% seems awfully high. How did you arrive at that
> figure?
>
> Thanks,
> Richard
>
The spin locks cause such a bottleneck because I'm using multiple streams
in my performance test. The RSS mechanism scatters the streams across
multiple RX rings, and each RX ring is bound to a different CPU.
In this scenario the rings contend for the same clock lock on every
timestamped packet.
Performance drops from 37.8 Gbits/sec to 32.1 Gbits/sec when spin locks
are added and goes back to 37.8 Gbits/sec when using read/write locks.
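
For reference, here is a minimal sketch of the locking pattern I mean.
The struct and function names (hwclock_state, fill_hwtstamp,
clock_adjtime) are illustrative, not the exact mlx4_en symbols: the RX
rings only read the timecounter to convert a raw cycle count, so they
can share a read lock, while the rare PHC adjustment paths take the
write lock.

#include <linux/clocksource.h>
#include <linux/ktime.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/string.h>

struct hwclock_state {
	rwlock_t		lock;	/* protects the timecounter below */
	struct cyclecounter	cycles;	/* reads the free-running HW counter */
	struct timecounter	clock;	/* cycles -> nanoseconds conversion */
};

/* Hot path: every RX ring converts raw HW timestamps concurrently,
 * so a shared (read) lock avoids cross-ring contention. */
static void fill_hwtstamp(struct hwclock_state *st,
			  struct skb_shared_hwtstamps *hwts, u64 raw_ts)
{
	unsigned long flags;
	u64 nsec;

	read_lock_irqsave(&st->lock, flags);
	nsec = timecounter_cyc2time(&st->clock, raw_ts);
	read_unlock_irqrestore(&st->lock, flags);

	memset(hwts, 0, sizeof(*hwts));
	hwts->hwtstamp = ns_to_ktime(nsec);
}

/* Slow path: PHC adjustments are rare, so taking the exclusive (write)
 * lock here does not affect the data path. */
static int clock_adjtime(struct hwclock_state *st, s64 delta)
{
	unsigned long flags;
	u64 now;

	write_lock_irqsave(&st->lock, flags);
	now = timecounter_read(&st->clock);
	timecounter_init(&st->clock, &st->cycles, now + delta);
	write_unlock_irqrestore(&st->lock, flags);

	return 0;
}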
Thanks,
Hadar