Message-ID: <20170504173138.GA7288@htj.duckdns.org>
Date: Thu, 4 May 2017 13:31:38 -0400
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
"linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Chris Mason <clm@...com>,
"kernel-team@...com" <kernel-team@...com>
Subject: Re: [PATCH 1/2] sched/fair: Use task_groups instead of
leaf_cfs_rq_list to walk all cfs_rqs
Hello, Peter.
On Thu, May 04, 2017 at 03:31:22PM +0200, Peter Zijlstra wrote:
> Yes we can hit an (almost) dead cfs_rq, but poking the bandwidth
> variables thereof is harmless.
>
> unthrottle_cfs_rq() also stops doing anything much when it finds the
> cfs_rq is empty, which must be the case if we're removing it.
Yeah, if you're okay with calling the functions on dead cfs_rq's, just
wrapping with rcu_read_lock should be enough.
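For readers following along: the pattern being discussed looks roughly like the sketch below (not the actual patch; names follow kernel/sched/fair.c, and the per-cfs_rq body is abbreviated). The point is that the walk iterates the global task_groups list under rcu_read_lock() rather than rq->leaf_cfs_rq_list, so cgroup destruction no longer needs to take every rq->lock:

```c
/* Sketch only -- walk every task_group's cfs_rq on this CPU under
 * RCU. A concurrently dying cfs_rq may be visited, but as noted
 * above, poking its bandwidth state is harmless. */
static void unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct task_group *tg;

	lockdep_assert_held(&rq->lock);	/* mostly documentation */

	rcu_read_lock();
	list_for_each_entry_rcu(tg, &task_groups, list) {
		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

		if (!cfs_rq->runtime_enabled)
			continue;

		/* unthrottle_cfs_rq() does little if cfs_rq is empty,
		 * which it must be if it is being removed. */
		if (cfs_rq_throttled(cfs_rq))
			unthrottle_cfs_rq(cfs_rq);
	}
	rcu_read_unlock();
}
```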
> I don't know Paul's opinion on RCU GPs happening while stop_machine(),
> but just in case he feels that's fair game, I did add the
> rcu_read_lock() thingies.
>
> The lockdep assert is mostly documentation, to more easily see
> it is indeed held when we get there.
>
> I left print_cfs_stats using the leaf list, no point in printing stuff
> that's empty.
>
> This way we can avoid taking all RQ locks on cgroup destruction.
Looks good to me.
Thanks.
--
tejun