Message-ID: <CABk29Nu0-oKggR3MyfzJotznvrvFL-wpiSqBKyG1jhqy-wRXEw@mail.gmail.com>
Date: Wed, 16 Nov 2022 13:45:31 -0800
From: Josh Don <joshdon@...gle.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth
On Wed, Nov 16, 2022 at 1:57 AM Michal Koutný <mkoutny@...e.com> wrote:
>
> What does this mean for the SCHED_WARN_ON in __unthrottle_cfs_rq_async()?
>
> IIUC, if concurrent access to the cfs_b->throttled_cfs_rq list is
> expected (hence I'm not sure about the SCHED_WARN_ON), then it may
> happen that __unthrottle_cfs_rq_async is called on a cfs_rq that's
> already on rq->cfsb_csd_list (the rq lock is still held, but it only
> helps inside the cfs_b->throttled_cfs_rq iteration).
It catches the case where we would call unthrottle_cfs_rq_async() on a
given cfs_rq again before we've had a chance to process the previous
call. This should never happen, because currently we only call this
from the distribution handler, and there we skip cfs_rqs that are
already queued for unthrottle (that's the
if (!list_empty(&cfs_rq->throttled_csd_list)) check).
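
Roughly, the two paths fit together like this (a simplified sketch of
what I described above, not the patch verbatim; locking details and the
actual CSD kick are elided):

	/* distribution handler: walk the throttled list */
	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
				throttled_list) {
		...
		/* Already queued for async unthrottle -> skip */
		if (!list_empty(&cfs_rq->throttled_csd_list))
			continue;
		...
		unthrottle_cfs_rq_async(cfs_rq);
	}

	/*
	 * __unthrottle_cfs_rq_async(): because of the skip above, the
	 * cfs_rq should never already be on the per-rq CSD list here.
	 */
	if (SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)))
		return;
	list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);

So hitting the warning would mean some new caller queued the cfs_rq
without going through that skip check.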
>
> Thanks,
> Michal