Message-Id: <20170315021431.13107-4-andi@firstfloor.org>
Date: Tue, 14 Mar 2017 19:14:27 -0700
From: Andi Kleen <andi@...stfloor.org>
To: akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
peterz@...radead.org
Subject: [PATCH 3/7] sched: Out of line __update_load_avg
From: Andi Kleen <ak@...ux.intel.com>
__update_load_avg() is a complex function, and it is called from multiple
places. Whether or not it is inlined is unlikely to make any measurable
difference to its run time.
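For context, __always_inline forces a separate copy of the function body
into every call site, while a plain static function lets the compiler emit
a single shared out-of-line copy. A minimal user-space sketch of that
trade-off (the helper names are illustrative, not taken from the kernel):

#include <stdio.h>

/* Forced inline: every call site carries its own expanded copy of the
 * body, so a large function multiplies its size by its number of callers. */
static inline __attribute__((__always_inline__)) int heavy_calc_inlined(int x)
{
	return x * x + x / 3;		/* stands in for a large body */
}

/* Plain static: the compiler is free to keep one out-of-line copy that
 * all callers branch to, shrinking the total text size. */
static int heavy_calc_outlined(int x)
{
	return x * x + x / 3;
}

int main(void)
{
	printf("%d %d\n", heavy_calc_inlined(5), heavy_calc_outlined(7));
	return 0;
}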
Out-of-lining it saves around 13k of text in my kernel:
   text    data      bss      dec     hex filename
9083992 5367600 11116544 25568136 1862388 vmlinux-before-load-avg
9070166 5367600 11116544 25554310 185ed86 vmlinux-load-avg
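(The before/after figures look like size(1) output for the two vmlinux
images; a per-symbol breakdown of where the ~13k goes could also be
obtained with scripts/bloat-o-meter from the kernel tree, comparing the
two builds.)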
Cc: peterz@...radead.org
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dea138964b91..78ace89cd481 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2848,7 +2848,7 @@ static u32 __compute_runnable_contrib(u64 n)
* load_avg = u_0` + y*(u_0 + u_1*y + u_2*y^2 + ... )
* = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}]
*/
-static __always_inline int
+static int
__update_load_avg(u64 now, int cpu, struct sched_avg *sa,
unsigned long weight, int running, struct cfs_rq *cfs_rq)
{
--
2.9.3