Message-ID: <487F1B38.3050100@cn.fujitsu.com>
Date: Thu, 17 Jul 2008 18:13:12 +0800
From: Li Zefan <lizf@...fujitsu.com>
To: Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
CC: Paul Jackson <pj@....com>, LKML <linux-kernel@...r.kernel.org>,
Paul Menage <menage@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Lai Jiangshan <laijs@...fujitsu.com>
Subject: Re: [RFC] [PATCH] cpuset: fix wrong calculation of relax domain level
Hidetoshi Seto wrote:
> Li Zefan wrote:
>> When multiple cpusets overlap in their 'cpus' and hence form a single
>> sched domain, the largest sched_relax_domain_level among them should be
>> used. But when top_cpuset's sched_load_balance is set, its
>> sched_relax_domain_level is used regardless of the other sub-cpusets'.
>>
>> There are several proposals to solve this:
>>
>> 1) Traverse the cpuset hierarchy to find the largest relax_domain_level
>> in rebuild_sched_domains(). But cpuset currently skips the hierarchy
>> traversal when top_cpuset.sched_load_balance is set.
>>
>> 2) Remember the largest relax_domain_level when we update a cpuset's
>> sched_load_balance, sched_relax_domain_level and cpus. This should
>> work, but seems a bit tricky and a bit ugly. (As this patch shows)
>>
>> 3) Don't treat this as a bug, but document this behavior.
>
> I think 1) is the correct way.
>
> There was a special short path for the top_cpuset case, but it has since
> been removed. I think there is no need to treat the top_cpuset as a VIP,
> so 2) is excessive.
>
If we all agree on this, I'll send a new patch to fix it.
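
For reference, here is a minimal user-space sketch of what approach 1)
boils down to: walk the cpuset tree and keep the largest requested
relax_domain_level seen among load-balancing cpusets. The names here
(struct cpuset_node, max_relax_domain_level()) are made up purely for
illustration; the real change would operate on struct cpuset and
struct sched_domain_attr in kernel/cpuset.c, not on this toy structure.

	/*
	 * Illustrative model only, not kernel code.  A hypothetical
	 * cpuset_node stands in for struct cpuset.
	 */
	#include <stdio.h>

	struct cpuset_node {
		int relax_domain_level;		/* -1 means no request */
		int sched_load_balance;		/* asks for load balancing? */
		int nr_children;
		struct cpuset_node **children;
	};

	/* Return the largest relax_domain_level found in the subtree. */
	static int max_relax_domain_level(const struct cpuset_node *cs)
	{
		int max = cs->sched_load_balance ? cs->relax_domain_level : -1;
		int i;

		for (i = 0; i < cs->nr_children; i++) {
			int child = max_relax_domain_level(cs->children[i]);
			if (child > max)
				max = child;
		}
		return max;
	}

	int main(void)
	{
		/* top_cpuset with one child requesting a larger relax level */
		struct cpuset_node child = { .relax_domain_level = 2,
					     .sched_load_balance = 1 };
		struct cpuset_node *kids[] = { &child };
		struct cpuset_node top = { .relax_domain_level = -1,
					   .sched_load_balance = 1,
					   .nr_children = 1,
					   .children = kids };

		printf("effective relax_domain_level = %d\n",
		       max_relax_domain_level(&top));	/* prints 2 */
		return 0;
	}

With the child requesting level 2 and top_cpuset left at -1, the
traversal yields 2 instead of silently using top_cpuset's value, which
is the behavior the bug report is about.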