Message-ID: <20221116095740.GA29859@blackbody.suse.cz>
Date: Wed, 16 Nov 2022 10:57:40 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Josh Don <joshdon@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth
On Tue, Nov 15, 2022 at 07:01:31PM -0800, Josh Don <joshdon@...gle.com> wrote:
> After more thought, I realized that we can't reuse the throttled_list
> list_head, since that would potentially break the lockless traversal
> of a concurrent list_for_each_entry_rcu() (ie. if we removed the
> element from the throttled list and then added it to the CSD list).
I see, the concurrent RCU traversal is a valid point for the two heads.
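To spell out how I read the two heads (just a sketch; throttled_list is
the existing member, the second member's name below is my guess, not
necessarily what the patch uses):

    struct cfs_rq {
            ...
            /* on cfs_b->throttled_cfs_rq, may be walked locklessly
             * with list_for_each_entry_rcu() */
            struct list_head        throttled_list;
            /* on rq->cfsb_csd_list, queueing the async unthrottle */
            struct list_head        throttled_csd_list;
            ...
    };

With only one head, the list_del() + list_add() onto rq->cfsb_csd_list
would rewrite ->next under a lockless walker of cfs_b->throttled_cfs_rq,
which is exactly the breakage you describe, so the separate head makes
sense to me.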
What does that mean for the SCHED_WARN_ON in __unthrottle_cfs_rq_async()?
IIUC, if concurrency on the cfs_b->throttled_cfs_rq list is expected
(hence I'm not sure about the SCHED_WARN_ON), then it may happen that
__unthrottle_cfs_rq_async() is called on a cfs_rq that's already on
rq->cfsb_csd_list (there's still the rq lock, but it's only held inside
the cfs_b->throttled_cfs_rq iteration).
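I.e. roughly this interleaving (pseudo-C sketch using the names from
this thread, plus the guessed throttled_csd_list member; it's my
reading of the flow, not the patch verbatim):

    /* CPU A, inside something like distribute_cfs_runtime(cfs_b): */
    struct cfs_rq *cfs_rq;
    struct rq_flags rf;

    rcu_read_lock();
    list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq, throttled_list) {
            struct rq *rq = rq_of(cfs_rq);

            rq_lock_irqsave(rq, &rf);
            /* adds cfs_rq to rq->cfsb_csd_list and kicks the CSD */
            __unthrottle_cfs_rq_async(cfs_rq);
            rq_unlock_irqrestore(rq, &rf);
    }
    rcu_read_unlock();

    /*
     * CPU B can run the same loop concurrently; if the cfs_rq is still
     * on cfs_b->throttled_cfs_rq, it calls __unthrottle_cfs_rq_async()
     * on it a second time.  The rq lock only serializes the two calls,
     * it doesn't prevent the second one, so a
     * SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)) style
     * check would fire on what looks like the expected concurrency.
     */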
Thanks,
Michal