Message-ID: <c85e242d55da1f12419e2c2dc2bfa3fc942a848e.camel@linux.intel.com>
Date: Thu, 30 Oct 2025 13:07:38 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: K Prateek Nayak <kprateek.nayak@....com>, "Chen, Yu C"
 <yu.c.chen@...el.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>, Juri Lelli	
 <juri.lelli@...hat.com>, Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel
 Gorman <mgorman@...e.de>,  Valentin Schneider	 <vschneid@...hat.com>,
 Madadi Vineeth Reddy <vineethr@...ux.ibm.com>, Hillf Danton
 <hdanton@...a.com>, Shrikanth Hegde <sshegde@...ux.ibm.com>, Jianyong Wu	
 <jianyong.wu@...look.com>, Yangyu Chen <cyy@...self.name>, Tingyin Duan	
 <tingyin.duan@...il.com>, Vern Hao <vernhao@...cent.com>, Len Brown	
 <len.brown@...el.com>, Aubrey Li <aubrey.li@...el.com>, Zhao Liu	
 <zhao1.liu@...el.com>, Chen Yu <yu.chen.surf@...il.com>, Adam Li	
 <adamli@...amperecomputing.com>, Tim Chen <tim.c.chen@...el.com>, 
	linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
 "Gautham R . Shenoy" <gautham.shenoy@....com>, Ingo Molnar
 <mingo@...hat.com>
Subject: Re: [PATCH 15/19] sched/fair: Respect LLC preference in task
 migration and detach
On Thu, 2025-10-30 at 09:49 +0530, K Prateek Nayak wrote:
> Hello Tim,
> 
> On 10/30/2025 2:39 AM, Tim Chen wrote:
> > > > I suppose you are suggesting that the threshold for stopping task detachment
> > > > should be higher. With the above can_migrate_llc() check, I suppose we have
> > > > raised the threshold for stopping "task detachment"?
> > > 
> > > Say the LLC is under heavy load and we only have overloaded groups.
> > > can_migrate_llc() would return "mig_unrestricted" since
> > > fits_llc_capacity() would return false.
> > > 
> > > Since we are under "migrate_load", sched_balance_find_src_rq() has
> > > returned the CPU with the highest load which could very well be the
> > > CPU with a large number of preferred LLC tasks.
> > > 
> > > sched_cache_enabled() is still true and when detach_tasks() reaches
> > > one of these preferred llc tasks (which comes at the very end of the
> > > tasks list), 
> > > we break out even if env->imbalance > 0 leaving
> > 
> > Yes, but at least one task has been detached to even out the load (making
> > forward progress), and the remaining tasks all wish to stay in the current
> > LLC and would prefer not to be moved. My thought was to not even out all
> > the load in one shot by pulling more tasks out of their preferred LLC.
> > If an imbalance still remains, we'll address it in the next load balance.
> 
> In that case, can we spoof an LBF_ALL_PINNED for the case where we start
In the code chunk (with the fix I mentioned in my last reply):
+#ifdef CONFIG_SCHED_CACHE
+		/*
+		 * Don't detach more tasks if the remaining tasks want
+		 * to stay. We know the remaining tasks all prefer the
+		 * current LLC, because after order_tasks_by_llc(), the
+		 * tasks that prefer the current LLC are at the tail of
+		 * the list. The inhibition of detachment is to avoid too
+		 * many tasks being migrated out of the preferred LLC.
+		 */
+		if (sched_cache_enabled() && detached && p->preferred_llc != -1 &&
+		    llc_id(env->src_cpu) == p->preferred_llc &&
+		    llc_id(env->dst_cpu) != p->preferred_llc)
+			break;
We have already pulled at least one task when we stop detaching, because we
know that all the remaining tasks want to stay in their current LLC.
"detached" is non-zero when we break, so LBF_ALL_PINNED would already have
been cleared. We will only exit the detach_tasks() loop with LBF_ALL_PINNED
set when there are truly no tasks that can be moved, i.e. when it is
genuinely an all-pinned case. So we should not be causing problems with
LBF_ALL_PINNED.
Tim
> hitting a preferred task. That way, the main lb loop will goto redo and
> try to find another busy CPU to pull tasks from.
> 
> > 
> > Pulling tasks more slowly when we come to tasks that preferred to stay (if possible)
> > would also help to prevent tasks bouncing between LLC.
> > 
> > Tim
> > 