Message-ID: <CADZ9YHiJ4MYFkWqikKu0qHOsBYGEPYvC2JL5wE3iCUH6vTMKcA@mail.gmail.com>
Date: Wed, 24 Jul 2013 14:01:35 +0600
From: Rakib Mullick <rakib.mullick@...il.com>
To: Michael Wang <wangyun@...ux.vnet.ibm.com>
Cc: mingo@...nel.org, peterz@...radead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: update_top_cache_domain only at the times of
building sched domain.
On Wed, Jul 24, 2013 at 9:26 AM, Michael Wang
<wangyun@...ux.vnet.ibm.com> wrote:
> Hi, Rakib
>
> On 07/24/2013 01:42 AM, Rakib Mullick wrote:
>> Currently, update_top_cache_domain() is called whenever a sched domain is built or destroyed. But the
>> callpath below shows that both happen on the same path, so the call to update_top_cache_domain() can be
>> skipped while destroying sched domains and made only when building them.
>>
>> partition_sched_domains()
>> detach_destroy_domain()
>> cpu_attach_domain()
>> update_top_cache_domain()
>
> IMHO, cpu_attach_domain() and update_top_cache_domain() should be
> paired, below patch will open a window which 'rq->sd == NULL' while
> 'sd_llc != NULL', isn't it?
>
> I don't think we have the promise that before we rebuild the stuff
> correctly, no one will utilize 'sd_llc'...
>
I never said that. My point is different. partition_sched_domains() works as:
- destroying the existing sched domains (if the previous and new domains differ)
- building the new partition

To do the first step it needs to detach all the CPUs in those domains;
detaching makes each CPU fall back to its default root domain. In this
context (the one I proposed to skip), updating the top cache domain
takes the highest-flagged domain to set up sd_llc_id, or falls back to
the CPU itself. Whatever is done there gets overwritten (the top cache
domain is updated again) while building the new partition. So why do it
the first time? Hope you understand my point.
Thanks,
Rakib.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/