Message-Id: <12d8250b-e7d0-9d6f-3ab6-fdfd65a59133@de.ibm.com>
Date: Mon, 19 Sep 2016 16:01:25 +0200
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...nel.org>, Tejun Heo <tj@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity =
589) when starting KVM guests
On 09/19/2016 03:40 PM, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.
That would certainly make the message go away (and would, e.g., also be
good for the cpu hotplug case).
I am still asking myself why cgroup cpuset really needs to rebuild
the scheduling domains when a vcpu thread is moved.
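For context, roughly how such a cpuset update ends up in
build_sched_domains() and prints the line once per rebuilt domain span.
The function names are from kernel/cpuset.c and kernel/sched/core.c of
this era, but the body below is a simplified sketch rather than the
literal upstream code:

/* Simplified sketch, not the literal kernel code. */
static void rebuild_sched_domains_locked(void)
{
	struct sched_domain_attr *attr;
	cpumask_var_t *doms;
	int ndoms;

	/* Derive the new partitioning from the current cpuset hierarchy. */
	ndoms = generate_sched_domains(&doms, &attr);

	/*
	 * Tear down the old domains and build a fresh set per partition;
	 * partition_sched_domains() calls build_sched_domains() for every
	 * new span, which is where the "span: ... (max cpu_capacity = ...)"
	 * pr_info() lives.
	 */
	partition_sched_domains(ndoms, doms, attr);
}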
>
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> }
> rcu_read_unlock();
>
> - if (rq) {
> + if (rq && sched_debug_enabled) {
> pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
>
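For reference, with that change the pr_info() only fires when
sched_debug_enabled is set, which (assuming the 4.8-era definitions in
kernel/sched/core.c, shown here simplified rather than verbatim) happens
only when the kernel is booted with the "sched_debug" parameter:

#ifdef CONFIG_SCHED_DEBUG
/* Off by default; flipped on only via the "sched_debug" boot parameter. */
static __read_mostly int sched_debug_enabled;

static int __init sched_debug_setup(char *str)
{
	sched_debug_enabled = 1;
	return 0;
}
early_param("sched_debug", sched_debug_setup);
#endif

So the "span: ..." line would still be available for debugging by adding
sched_debug to the kernel command line, while a normal KVM guest start
(or cpu hotplug) would stay quiet.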