Message-ID: <20200414105220.GL20713@hirez.programming.kicks-ass.net>
Date: Tue, 14 Apr 2020 12:52:20 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Josh Don <joshdon@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Paul Turner <pjt@...gle.com>,
Huaixin Chang <changhuaixin@...ux.alibaba.com>,
Phil Auld <pauld@...head.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] sched: eliminate bandwidth race between throttling
and distribution
On Fri, Apr 10, 2020 at 03:52:07PM -0700, Josh Don wrote:
> -/* returns 0 on failure to allocate runtime */
> +/* returns 0 on failure to allocate runtime, called with cfs_b->lock held */

That's a gross mis-spelling of lockdep_assert_held(); and since I was
editing things anyway it now looks like so:

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4587,11 +4587,13 @@ static inline struct cfs_bandwidth *tg_c
 	return &tg->cfs_bandwidth;
 }
 
-/* returns 0 on failure to allocate runtime, called with cfs_b->lock held */
+/* returns 0 on failure to allocate runtime */
 static int __assign_cfs_rq_runtime(struct cfs_bandwidth *cfs_b,
 				   struct cfs_rq *cfs_rq, u64 target_runtime)
 {
-	u64 amount = 0, min_amount;
+	u64 min_amount, amount = 0;
+
+	lockdep_assert_held(&cfs_b->lock);
 
 	/* note: this is a positive sum as runtime_remaining <= 0 */
 	min_amount = target_runtime - cfs_rq->runtime_remaining;
@@ -4616,12 +4618,11 @@ static int __assign_cfs_rq_runtime(struc
 /* returns 0 on failure to allocate runtime */
 static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
 {
-	int ret;
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
+	int ret;
 
 	raw_spin_lock(&cfs_b->lock);
-	ret = __assign_cfs_rq_runtime(cfs_b, cfs_rq,
-				      sched_cfs_bandwidth_slice());
+	ret = __assign_cfs_rq_runtime(cfs_b, cfs_rq, sched_cfs_bandwidth_slice());
 	raw_spin_unlock(&cfs_b->lock);
 
 	return ret;
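
For readers unfamiliar with the idiom being discussed above: lockdep_assert_held()
turns a locking precondition into a runtime check whenever CONFIG_LOCKDEP is
enabled, rather than leaving it as a comment that can go stale. A minimal,
hypothetical sketch of the pattern follows; struct foo, foo_consume() and
__foo_consume() are made-up names for illustration and are not part of the
patch.

#include <linux/lockdep.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical structure, for illustration only. */
struct foo {
        raw_spinlock_t lock;
        u64 budget;
};

/* Internal helper; the caller must hold foo->lock. */
static void __foo_consume(struct foo *foo, u64 amount)
{
        /* With CONFIG_LOCKDEP=y this warns if foo->lock is not held. */
        lockdep_assert_held(&foo->lock);

        foo->budget -= amount;
}

static void foo_consume(struct foo *foo, u64 amount)
{
        raw_spin_lock(&foo->lock);
        __foo_consume(foo, amount);
        raw_spin_unlock(&foo->lock);
}

The assertion documents the same requirement the deleted comment did, but it
is enforced on every call in lockdep builds and compiles away otherwise.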