Message-Id: <994de702-c934-0e6f-c811-af32f07e5b6d@de.ibm.com>
Date: Tue, 20 Sep 2016 09:43:01 +0200
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...nel.org>, Tejun Heo <tj@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity =
589) when starting KVM guests
On 09/19/2016 03:40 PM, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.
I am still trying to get an opinion from Tejun on why moving vcpus within
their cpuset causes sched domain rebuilds, but
Acked-by: Christian Borntraeger <borntraeger@...ibm.com>
for such a patch.
>
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> }
> rcu_read_unlock();
>
> - if (rq) {
> + if (rq && sched_debug_enabled) {
> pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
>
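For reference, a minimal sketch of how sched_debug_enabled is presumably wired up
in the kernel of this era (the exact file and surrounding layout are assumptions
here, not taken from the thread): it is a debug flag enabled via an early boot
parameter, so with the patch above the span/capacity line is only printed when
"sched_debug" is passed on the kernel command line.

	#ifdef CONFIG_SCHED_DEBUG
	/* Set when "sched_debug" is given on the kernel command line. */
	static __read_mostly int sched_debug_enabled;

	static int __init sched_debug_setup(char *str)
	{
		sched_debug_enabled = 1;
		return 0;
	}
	early_param("sched_debug", sched_debug_setup);
	#endif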