Message-ID: <1F10D321-2EB5-4546-96BB-0ABEC7638D6E@oracle.com>
Date: Thu, 29 Jun 2023 22:19:36 +0000
From: Saeed Mirzamohammadi <saeed.mirzamohammadi@...cle.com>
To: Chen Yu <yu.c.chen@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>
CC: Ingo Molnar <mingo@...hat.com>,
"peterz@...radead.org" <peterz@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"zhangqiao22@...wei.com" <zhangqiao22@...wei.com>
Subject: Re: Reporting a performance regression in sched/fair on Unixbench
Shell Scripts with commit a53ce18cacb4
> On Jun 21, 2023, at 9:41 AM, Saeed Mirzamohammadi <saeed.mirzamohammadi@...cle.com> wrote:
>
> Hi Chen, Vincent,
>
>> On Jun 13, 2023, at 11:37 PM, Chen Yu <yu.c.chen@...el.com> wrote:
>>
>> On 2023-06-13 at 19:35:55 +0000, Saeed Mirzamohammadi wrote:
>>> Hi Vincent,
>>>
>>>> On Jun 9, 2023, at 9:52 AM, Vincent Guittot <vincent.guittot@...aro.org> wrote:
>>>>
>>>> Hi Saeed,
>>>>
>>>> On Fri, 9 Jun 2023 at 00:48, Saeed Mirzamohammadi
>>>> <saeed.mirzamohammadi@...cle.com> wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> I’m reporting a regression of up to 8% with Unixbench Shell Scripts benchmarks after the following commit:
>>>>>
>>>>> Commit Data:
>>>>> commit-id : a53ce18cacb477dd0513c607f187d16f0fa96f71
>>>>> subject : sched/fair: Sanitize vruntime of entity being migrated
>>>>> author : vincent.guittot@...aro.org
>>>>> author date : 2023-03-17 16:08:10
>>>>>
>>>>>
>>>>> We have observed this on our v5.4 and v4.14 kernels; we have not yet tested 5.15, but I expect the same.
>>>>
>>>> It would be good to confirm that the regression is present on v6.3,
>>>> where the patch was originally merged. It could be that there is a
>>>> hidden dependency on other patches introduced since v5.4.
>>>
>>> Regression is present on v6.3 as well, examples:
>>> ub_gcc_224copies_Shell_Scripts_8_concurrent: ~6%
>>> ub_gcc_224copies_Shell_Scripts_16_concurrent: ~8%
>>> ub_gcc_448copies_Shell_Scripts_1_concurrent: ~2%
>
> Apologies for the confusion; I should correct the v6.3 upstream result above. v6.3 doesn’t have any regression.
> v6.3.y -> no regression
> v5.15.y -> no regression
> v5.4.y -> 5-8% regression.
A gentle reminder: is there any recommendation for the v5.4.y and v4.14.y regression? Thanks!
>
>
>>>>
>>>>
>>>>>
>>>>> ub_gcc_1copy_Shell_Scripts_1_concurrent : -0.01%
>>>>> ub_gcc_1copy_Shell_Scripts_8_concurrent : -0.1%
>>>>> ub_gcc_1copy_Shell_Scripts_16_concurrent : -0.12%
>>>>> ub_gcc_56copies_Shell_Scripts_1_concurrent : -2.29%
>>>>> ub_gcc_56copies_Shell_Scripts_8_concurrent : -4.22%
>>>>> ub_gcc_56copies_Shell_Scripts_16_concurrent : -4.23%
>>>>> ub_gcc_224copies_Shell_Scripts_1_concurrent : -5.54%
>>>>> ub_gcc_224copies_Shell_Scripts_8_concurrent : -8%
>>>>> ub_gcc_224copies_Shell_Scripts_16_concurrent : -7.05%
>>>>> ub_gcc_448copies_Shell_Scripts_1_concurrent : -6.4%
>>>>> ub_gcc_448copies_Shell_Scripts_8_concurrent : -8.35%
>>>>> ub_gcc_448copies_Shell_Scripts_16_concurrent : -7.09%
>>>>>
>>>>> Link to unixbench:
>>>>> github.com/kdlucas/byte-unixbench
>>>>
>>>> I tried to reproduce the problem with v6.3 on my system, but I don't
>>>> see any difference with or without the patch.
>>>>
>>>> Do you have more details on your setup? Number of CPUs and topology?
>>>>
>>> model name : Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
>>>
>>> Topology (NUMA node distances):
>>> node 0 1
>>> 0: 10 21
>>> 1: 21 10
>>>
>>> Architecture: x86_64
>>> CPU op-mode(s): 32-bit, 64-bit
>>> CPU(s): 56
>>> On-line CPU(s) list: 0-55
>>> Thread(s) per core: 2
>>> Core(s) per socket: 14
>>> Socket(s): 2
>>> NUMA node(s): 2
>>>
>> Tested on a similar platform, an E5-2697 v2 @ 2.70GHz with 2 nodes and
>> 24 cores/48 CPUs in total; however, I could not reproduce the issue.
>> Since the regression was reported mainly for the 224- and 448-copy cases
>> on your platform, I tested unixbench shell1 with 4 x 48 = 192 copies.
>>
>>
>> a53ce18cacb477dd 213acadd21a080fc8cda8eebe6d
>> ---------------- ---------------------------
>> %stddev %change %stddev
>> \ | \
>> 21304 +0.5% 21420 unixbench.score
>> 632.43 +0.0% 632.44 unixbench.time.elapsed_time
>> 632.43 +0.0% 632.44 unixbench.time.elapsed_time.max
>> 11837046 -4.7% 11277727 unixbench.time.involuntary_context_switches
>> 864713 +0.1% 865914 unixbench.time.major_page_faults
>> 9600 +4.0% 9984 unixbench.time.maximum_resident_set_size
>> 8.433e+08 +0.6% 8.48e+08 unixbench.time.minor_page_faults
>> 4096 +0.0% 4096 unixbench.time.page_size
>> 3741 +1.1% 3783 unixbench.time.percent_of_cpu_this_job_got
>> 18341 +1.3% 18572 unixbench.time.system_time
>> 5323 +0.6% 5353 unixbench.time.user_time
>> 78197044 -3.1% 75791701 unixbench.time.voluntary_context_switches
>> 57178573 +0.4% 57399061 unixbench.workload
>>
>> There is not much difference whether a53ce18cacb477dd is applied or not.
>>
>>
>>
>>
>>
>> a2e90611b9f425ad 829c1651e9c4a6f78398d3e6765
>> ---------------- ---------------------------
>> %stddev %change %stddev
>> \ | \
>> 19985 +8.6% 21697 unixbench.score
>> 632.64 -0.0% 632.53 unixbench.time.elapsed_time
>> 632.64 -0.0% 632.53 unixbench.time.elapsed_time.max
>> 11453985 +3.7% 11880259 unixbench.time.involuntary_context_switches
>> 818996 +3.1% 844681 unixbench.time.major_page_faults
>> 9600 +0.0% 9600 unixbench.time.maximum_resident_set_size
>> 7.911e+08 +8.4% 8.575e+08 unixbench.time.minor_page_faults
>> 4096 +0.0% 4096 unixbench.time.page_size
>> 3767 -0.4% 3752 unixbench.time.percent_of_cpu_this_job_got
>> 18873 -2.4% 18423 unixbench.time.system_time
>> 4960 +7.1% 5313 unixbench.time.user_time
>> 75436000 +10.8% 83581483 unixbench.time.voluntary_context_switches
>> 53553404 +8.7% 58235303 unixbench.workload
>>
>> Previously, when 829c1651e9c4a6f was introduced, there was an 8.6% improvement,
>> and this improvement remains with a53ce18cacb477dd applied.
>>
>> Can you send the full test script so I can have a try locally?
>
> Thanks for testing this. For the v5.4.y kernel (not for v6.3.y or v5.15.y), there is an 8% regression with the following test: ub_gcc_448copies_Shell_Scripts_8_concurrent
> That’s ’shell8’ with ‘-c 448’ copies passed as an argument.
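>
> A rough sketch of the invocation, assuming the stock byte-unixbench Run
> wrapper (our harness wraps this, so the exact flags may differ):
>
>     # fetch and build the benchmark
>     git clone https://github.com/kdlucas/byte-unixbench.git
>     cd byte-unixbench/UnixBench && make
>
>     # Shell Scripts (8 concurrent) with 448 parallel copies --
>     # the case that regresses by ~8% on our v5.4.y kernel
>     ./Run shell8 -c 448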
>
> Thanks,
> Saeed
>
>>
>> thanks,
>> Chenyu