Message-ID: <7131ec02-39ae-fe8a-e3d6-171a6e6c8103@arm.com>
Date: Tue, 28 Jul 2020 10:09:29 +0100
From: Lukasz Luba <lukasz.luba@....com>
To: vincent.donnefort@....com
Cc: mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org, dietmar.eggemann@....com,
valentin.schneider@....com
Subject: Re: [PATCH] sched/fair: provide u64 read for 32-bits arch helper
Hi Vincent,
On 7/27/20 11:59 AM, vincent.donnefort@....com wrote:
> From: Vincent Donnefort <vincent.donnefort@....com>
>
> Introduce two macro helpers, u64_32read() and u64_32read_set_copy(), to
> factorize the u64 min_vruntime and last_update_time reads on a 32-bit
> architecture. The new helpers encapsulate the smp_rmb() and smp_wmb()
> synchronization and therefore incur a small penalty in set_task_rq_fair()
> and init_cfs_rq().
>
> The choice of using a macro over an inline function is driven by the
> conditional u64 variable copy declarations.
>
> #ifndef CONFIG_64BIT
> u64 [min_vruntime|last_update_time]_copy;
> #endif
>
> Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
>
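For context, the pattern these helpers factorize is the usual
copy+barrier scheme for tearing-free u64 reads on 32-bit: the writer
stores the value, issues smp_wmb(), then updates the copy; the reader
loads the copy, issues smp_rmb(), then loads the value and retries
until the two match. Roughly like this (my sketch of the idea, not
necessarily the exact hunks from the patch):

	#ifndef CONFIG_64BIT
	#define u64_32read(val, copy)					\
	({								\
		u64 _val, _val_copy;					\
									\
		do {							\
			_val_copy = (copy);				\
			/* paired with the smp_wmb() in the setter */	\
			smp_rmb();					\
			_val = (val);					\
		} while (_val != _val_copy);				\
									\
		_val;							\
	})

	#define u64_32read_set_copy(val, copy)				\
	do {								\
		/* paired with the smp_rmb() in the reader */		\
		smp_wmb();						\
		(copy) = (val);						\
	} while (0)
	#else
	/* On 64-bit the u64 access cannot tear, so no copy is needed. */
	#define u64_32read(val, copy)		(val)
	#define u64_32read_set_copy(val, copy)	do { } while (0)
	#endif

On 64-bit the helpers compile down to a plain access and a no-op, so
only 32-bit pays for the barriers.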
I've run it on a 32-bit ARM big.LITTLE platform (Odroid XU3) with
CONFIG_FAIR_GROUP_SCHED and CONFIG_CGROUP_SCHED enabled.
I haven't observed any issues while booting the kernel or running
hackbench, and the performance of the affected function
set_task_rq_fair() is the same.
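For reference, the numbers below come from the ftrace function
profiler. The sequence boils down to roughly the following (assuming
CONFIG_FUNCTION_PROFILER=y; the exact hackbench invocation may
differ):

	# cd /sys/kernel/debug/tracing
	# echo set_task_rq_fair > set_ftrace_filter
	# echo 1 > function_profile_enabled
	# hackbench                       # run the workload
	# echo 0 > function_profile_enabled
	# cat trace_stat/function?        # per-CPU profile stats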
Function profile results after the hackbench test:
--------w/ Vincent's patch + performance cpufreq gov------------
root@...oid:/sys/kernel/debug/tracing# cat trace_stat/function?
(one row per per-CPU file, trace_stat/function0..7)

Function           Hit    Time          Avg        s^2
--------           ---    ----          ---        ---
set_task_rq_fair   4068   3753.185 us   0.922 us    1.239 us
set_task_rq_fair   4492   4180.133 us   0.930 us    2.318 us
set_task_rq_fair   4208   3991.386 us   0.948 us   13.552 us
set_task_rq_fair   4753   4432.231 us   0.932 us    5.875 us
set_task_rq_fair   7980   5037.096 us   0.631 us    1.690 us
set_task_rq_fair   8143   5078.515 us   0.623 us    2.930 us
set_task_rq_fair   9721   6477.904 us   0.666 us    2.425 us
set_task_rq_fair   7743   4896.002 us   0.632 us    1.188 us
-----------w/o Vincent's patch + performance cpufreq gov------------
root@...oid:/sys/kernel/debug/tracing# cat trace_stat/function?
Function           Hit    Time          Avg        s^2
--------           ---    ----          ---        ---
set_task_rq_fair   6525   5830.450 us   0.893 us    3.213 us
set_task_rq_fair   6646   6069.444 us   0.913 us    9.651 us
set_task_rq_fair   5988   5646.133 us   0.942 us    7.685 us
set_task_rq_fair   5883   5390.103 us   0.916 us   29.283 us
set_task_rq_fair   3844   2561.186 us   0.666 us    0.933 us
set_task_rq_fair   5515   3491.011 us   0.633 us    9.845 us
set_task_rq_fair   6947   4808.522 us   0.692 us    5.822 us
set_task_rq_fair   4530   2810.000 us   0.620 us    0.554 us
Tested-by: Lukasz Luba <lukasz.luba@....com>
Regards,
Lukasz