Message-ID: <573E08E3.3020200@hpe.com>
Date: Thu, 19 May 2016 11:41:39 -0700
From: Rick Jones <rick.jones2@....com>
To: Alexander Duyck <alexander.duyck@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev <netdev@...r.kernel.org>,
Alexander Duyck <aduyck@...antis.com>
Subject: Re: [RFC] net: remove busylock
On 05/19/2016 11:03 AM, Alexander Duyck wrote:
> On Thu, May 19, 2016 at 10:08 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> With HTB qdisc, here are the numbers for 200 concurrent TCP_RR, on a host with 48 hyperthreads.
...
>>
>> That would be an 8% increase.
>
> The main point of the busy lock is to deal with the bulk throughput
> case, not the latency case which would be relatively well behaved.
> The problem wasn't really related to lock bouncing slowing things
> down. It was the fairness between the threads that was killing us
> because the dequeue needs to have priority.
Quibbledrift... While the origins of the netperf TCP_RR test center on
measuring latency, I'm not sure I'd call 200 of them running
concurrently a latency test. Indeed it may be neither fish nor fowl,
but it will certainly be exercising the basic packet send/receive path
rather fully and is likely a reasonable proxy for aggregate small packet
performance.
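For reference, an aggregate run like the one discussed might be launched along these lines (a sketch; NETSERVER_HOST and the instance/duration values are placeholders for your own setup):

```shell
# Launch many concurrent netperf TCP_RR instances against one netserver
# host to exercise aggregate small-packet send/receive performance.
NETSERVER_HOST=${NETSERVER_HOST:-192.168.1.1}
INSTANCES=${INSTANCES:-200}
DURATION=${DURATION:-30}

if ! command -v netperf >/dev/null 2>&1; then
	echo "netperf not installed; skipping"
else
	for i in $(seq 1 "$INSTANCES"); do
		# -t TCP_RR: request/response test; -l: duration in seconds;
		# -P 0: suppress banners so per-instance output can be summed.
		netperf -H "$NETSERVER_HOST" -t TCP_RR -l "$DURATION" -P 0 &
	done
	wait
fi
```

Summing the transaction rates across instances then gives the aggregate figure being compared in this thread.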
happy benchmarking,
rick jones