Message-ID: <CAOGi=dOLCQ4BtzpsSEO-XVepFbypvj7qJ4_d2=bcp4KYRKJ5RQ@mail.gmail.com>
Date: Mon, 23 Nov 2015 17:41:54 +0800
From: Ling Ma <ling.ma.program@...il.com>
To: Waiman Long <waiman.long@....com>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
linux-kernel@...r.kernel.org, Ling <ling.ml@...baba-inc.com>
Subject: Re: Improve spinlock performance by moving work to one core
Hi Longman,
Attached are the user-space test application thread.c and the kernel patch
spinlock-test.patch, based on kernel 4.3.0-rc4.
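
For context, the "moving work to one core" idea in the subject line is in
the spirit of combining/delegation locks such as flat combining: waiters
publish their requests and the current lock holder executes all of them,
so the protected data stays hot in one core's cache. The toy user-space
sketch below illustrates only that general idea; it is not the attached
patch, and every name in it (delegate, reqs, worker) is made up for
illustration.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    100000

/* One request slot per thread: 1 = work posted, 0 = done. */
static struct request {
    atomic_int pending;
} reqs[NTHREADS];

static pthread_spinlock_t lock;
static unsigned long counter;   /* the protected state */

/* Post a request, then either combine (if we win the lock) or spin
 * until some other combiner has executed our request for us. */
static void delegate(int me)
{
    atomic_store(&reqs[me].pending, 1);

    while (atomic_load(&reqs[me].pending)) {
        if (pthread_spin_trylock(&lock) == 0) {
            /* We are the combiner: drain everyone's pending
             * work on this core. */
            for (int i = 0; i < NTHREADS; i++) {
                if (atomic_load(&reqs[i].pending)) {
                    counter++;  /* the delegated operation */
                    atomic_store(&reqs[i].pending, 0);
                }
            }
            pthread_spin_unlock(&lock);
        }
    }
}

static void *worker(void *arg)
{
    for (int i = 0; i < ITERS; i++)
        delegate((int)(long)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("counter = %lu (expect %d)\n", counter, NTHREADS * ITERS);
    return 0;
}

The point of the pattern is that whichever thread holds the lock drains
all posted requests, so consecutive critical sections run on one core
instead of bouncing the protected cache line between cores.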
We ran thread.c with the kernel patch applied, testing the original and
the new spinlock respectively. perf top -G shows that, with the original
spinlock, thread.c causes the cache_alloc_refill and cache_flusharray
functions to spend ~25% of their time on the lock; after introducing the
new spinlock in those two functions, the cost drops to ~22%.

The printed throughput figures also show that the new spinlock improves
performance by about 15% (93841765576 / 81036259588) on an E5-2699 v3.
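
For reference, thread.c itself is in the attachment and is not reproduced
here; the following is a minimal sketch of the kind of spinlock stress
loop such a test program might use, where the thread count, iteration
count, and the use of a pthread spinlock are assumptions rather than
details taken from the attachment:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define ITERS    (1 << 20)

static pthread_spinlock_t lock;
static unsigned long counter;

/* Each thread hammers one shared lock around a tiny critical
 * section, so lock contention dominates the profile under perf. */
static void *worker(void *arg)
{
    for (long i = 0; i < ITERS; i++) {
        pthread_spin_lock(&lock);
        counter++;
        pthread_spin_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("total = %lu\n", counter);
    return 0;
}

Built with "gcc -O2 -pthread", a loop like this is enough to push the
lock to the top of perf top -G, which is the kind of profile the figures
above describe.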
Appreciate your comments.
Thanks
Ling
2015-11-07 1:38 GMT+08:00 Waiman Long <waiman.long@....com>:
>
> On 11/05/2015 11:28 PM, Ling Ma wrote:
>>
>> Longman
>>
>> Thanks for your suggestion.
>> We will look for a real scenario to test. Could you please suggest
>> some benchmarks for spinlocks?
>>
>> Regards
>> Ling
>>
>>
>
> The kernel has been well optimized for most common workloads, so spinlock contention is usually not a performance bottleneck. There are still corner cases with heavy spinlock contention.
>
> I used a spinlock loop microbenchmark like what you are doing, as well as AIM7 for application-level testing.
>
> Cheers,
> Longman
>
>
Download attachment "spinlock-test.patch" of type "application/octet-stream" (24652 bytes)
View attachment "thread.c" of type "text/x-csrc" (2150 bytes)