Message-ID: <7f2d2e7c-f3a4-7ffb-9e48-96866d681714@arm.com>
Date: Mon, 19 Sep 2016 14:58:54 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Peter Zijlstra <peterz@...radead.org>,
Christian Borntraeger <borntraeger@...ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>, Tejun Heo <tj@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity =
589) when starting KVM guests
On 19/09/16 14:40, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.

I haven't tested it in KVM guests with a libvirt env.
This message makes sense for systems with asymmetric compute
capacities (ARM big.LITTLE), i.e. for a setup where cpu_capacity =
1024 (a logical CPU w/o SMT) can't be assumed for the big CPUs.
It also tells you that you're running in an SMT environment (2 HW
threads, hence 589), but this is probably less important.
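
For reference, here is a tiny standalone sketch of where the 589
comes from, assuming the default arch_scale_cpu_capacity() behaviour
for SD_SHARE_CPUCAPACITY domains and the default smt_gain of 1178
(~15% above SCHED_CAPACITY_SCALE); values quoted from memory, so
please double check against kernel/sched/sched.h and sd_init():

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL	/* capacity of one CPU w/o SMT */
#define SMT_GAIN		1178UL	/* default sd->smt_gain, ~15% above scale */

int main(void)
{
	unsigned long span_weight = 2;	/* 2 HW threads per core */

	/* mirrors the default arch_scale_cpu_capacity() for an SMT domain */
	printf("max cpu_capacity = %lu\n", SMT_GAIN / span_weight); /* 589 */

	return 0;
}
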
Guarding it with sched_debug_enabled makes sense for this.
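
For completeness, sched_debug_enabled is (quoting roughly from
memory, so treat this as a sketch rather than the exact mainline
code) only set when booting with the 'sched_debug' command line
parameter:

#ifdef CONFIG_SCHED_DEBUG
static int sched_debug_enabled;

static int __init sched_debug_setup(char *str)
{
	sched_debug_enabled = 1;

	return 0;
}
early_param("sched_debug", sched_debug_setup);
#endif

So with your change the span/max cpu_capacity message would only show
up when booting with sched_debug.
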
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> }
> rcu_read_unlock();
>
> - if (rq) {
> + if (rq && sched_debug_enabled) {
> pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
>