Message-ID: <20120604150040.GD25126@linux.vnet.ibm.com>
Date: Mon, 4 Jun 2012 20:30:40 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>,
Prashanth Nageshappa <prashanth@...ux.vnet.ibm.com>,
mingo@...nel.org, LKML <linux-kernel@...r.kernel.org>,
roland@...nel.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] sched: balance_cpu to consider other cpus in its group
as target of (pinned) task migration
* Mike Galbraith <efault@....de> [2012-06-04 16:41:35]:
> But high priority SCHED_OTHER tasks do not hog the CPU, they get their
> fair share as defined by the user.
Consider this case: a system with 2 cores (each with 2 threads) and 3
cgroups:
A (1024) -> has 2 tasks (A0, A1)
B (2048) -> has 2 tasks (B0, B1)
C (1024) -> has 1 task (C0 - pinned to CPUs 1,2)
With 2048 of the 4096 total shares, (B0, B1) collectively are eligible
to consume 2 full cpus worth of bandwidth, (A0, A1) together are
eligible to consume 1 full-cpu worth of bandwidth and finally C0 is
eligible to get 1 full-cpu worth of bandwidth.
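For concreteness, the pinning of C0 amounts to nothing more than a cpu
affinity mask; a minimal userspace sketch (assuming the 1024/2048/1024
shares are configured separately via each group's cpu.shares) could
look like:

/*
 * Minimal sketch: confine the calling task (C0 in the example) to
 * CPUs 1 and 2.  Group shares are assumed to be set up separately
 * through the cpu cgroup controller.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(1, &mask);	/* allow CPU1 */
	CPU_SET(2, &mask);	/* allow CPU2 */

	if (sched_setaffinity(0, sizeof(mask), &mask)) {
		perror("sched_setaffinity");
		exit(1);
	}

	/* ... C0's work would run here, confined to cpus 1-2 ... */
	return 0;
}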
Currently C0 is sleeping, as a result of which the tasks could be spread as:
CPU0 -> A0
CPU1 -> A1
CPU2 -> B0
CPU3 -> B1
Now C0 wakes up and lands on CPU2 (which was its prev_cpu).
CPU0 -> A0
CPU1 -> A1
CPU2 -> B0, C0
CPU3 -> B1
Ideally CPU1 needs to pull C0 to itself (while A1 moves to CPU0). Do
you agree with that? I doubt that happens, because of how CPU0 does load
balance on behalf of both itself and CPU1 (and thus fails to pull C0 to
its core).
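For reference, the test that blocks such a pull is the affinity check
at the top of can_migrate_task() in kernel/sched/fair.c; a simplified
sketch (paraphrased, statistics and cache-hot handling trimmed):

/*
 * Simplified sketch of the affinity check in can_migrate_task();
 * statistics and cache-hot logic trimmed.
 */
static int can_migrate_task(struct task_struct *p, struct lb_env *env)
{
	/*
	 * env->dst_cpu is the cpu running this load balance (CPU0 in
	 * the example above, balancing on behalf of its whole core).
	 * C0 is allowed only on cpus 1-2, so it is rejected here even
	 * though CPU1 in the same core could legally run it.
	 */
	if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p)))
		return 0;

	/* running-task and cache-hot checks omitted ... */
	return 1;
}

Hence the proposal to let balance_cpu consider other cpus in its group
as the target of (pinned) task migration.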
- vatsa