Message-ID: <1407778307.14059.12.camel@j-VirtualBox>
Date: Mon, 11 Aug 2014 10:31:47 -0700
From: Jason Low <jason.low2@...com>
To: bsegall@...gle.com
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
Waiman Long <Waiman.Long@...com>, Mel Gorman <mgorman@...e.de>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Rik van Riel <riel@...hat.com>,
Aswin Chandramouleeswaran <aswin@...com>,
Chegu Vinod <chegu_vinod@...com>,
Scott J Norton <scott.norton@...com>, pjt@...gle.com,
jason.low2@...com
Subject: Re: [PATCH] sched: Reduce contention in update_cfs_rq_blocked_load
On Mon, 2014-08-04 at 13:52 -0700, bsegall@...gle.com wrote:
>
> That said, it might be better to remove force_update for this function,
> or make it just reduce the minimum to /64 or something. If the test is
> easy to run it would be good to see what it's like just removing the
> force_update param for this function to see if it's worth worrying
> about or if the zero case catches ~all the perf gain.
Hi Ben,

I removed force_update from __update_cfs_rq_tg_load_contrib() and it
reduced the overhead much further. I saw up to a 20x reduction in system
overhead from update_cfs_rq_blocked_load() when running some of the AIM7
workloads with this change.
-----
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..7a6e18b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2352,8 +2352,7 @@ static inline u64 __synchronize_entity_decay(struct sched_entity *se)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
-						   int force_update)
+static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq)
 {
 	struct task_group *tg = cfs_rq->tg;
 	long tg_contrib;
@@ -2361,7 +2360,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
 	tg_contrib -= cfs_rq->tg_load_contrib;
 
-	if (force_update || abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
+	if (abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
 		atomic_long_add(tg_contrib, &tg->load_avg);
 		cfs_rq->tg_load_contrib += tg_contrib;
 	}
@@ -2436,8 +2435,7 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
 }
 #else /* CONFIG_FAIR_GROUP_SCHED */
-static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
-						   int force_update) {}
+static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 					    struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
@@ -2537,7 +2535,7 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 		cfs_rq->last_decay = now;
 	}
 
-	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
+	__update_cfs_rq_tg_load_contrib(cfs_rq);
 }
 
 /* Add the load generated by se into cfs_rq's child load-average */
--