Message-ID: <20170501162405.gws4fyib67q4iezx@hirez.programming.kicks-ass.net>
Date:   Mon, 1 May 2017 18:24:05 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        "linux-kernel@...r.kernel.org"
        <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>,
        "kernel-team@...com" <kernel-team@...com>
Subject: Re: [PATCH 1/2] sched/fair: Use task_groups instead of
 leaf_cfs_rq_list to walk all cfs_rqs

On Tue, Apr 25, 2017 at 05:40:39PM -0700, Tejun Heo wrote:

> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4644,23 +4644,32 @@ static void destroy_cfs_bandwidth(struct
>  
>  static void __maybe_unused update_runtime_enabled(struct rq *rq)
>  {
> -	struct cfs_rq *cfs_rq;
> +	struct task_group *tg;
>  
> -	for_each_leaf_cfs_rq(rq, cfs_rq) {
> -		struct cfs_bandwidth *cfs_b = &cfs_rq->tg->cfs_bandwidth;
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(tg, &task_groups, list) {
> +		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
> +		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> +
> +		if (!cfs_rq->online)
> +			continue;
>  
>  		raw_spin_lock(&cfs_b->lock);
>  		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
>  		raw_spin_unlock(&cfs_b->lock);
>  	}
> +	rcu_read_unlock();
>  }
>  
>  static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>  {
> -	struct cfs_rq *cfs_rq;
> +	struct task_group *tg;
>  
> -	for_each_leaf_cfs_rq(rq, cfs_rq) {
> -		if (!cfs_rq->runtime_enabled)
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(tg, &task_groups, list) {
> +		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> +
> +		if (!cfs_rq->online || !cfs_rq->runtime_enabled)
>  			continue;
>  
>  		/*
> @@ -4677,6 +4686,7 @@ static void __maybe_unused unthrottle_of
>  		if (cfs_rq_throttled(cfs_rq))
>  			unthrottle_cfs_rq(cfs_rq);
>  	}
> +	rcu_read_unlock();
>  }

Note that both of these are called with rq->lock held. I don't think we
actually need to add rcu_read_lock() here: holding rq->lock already
fully serializes against your ->online = 0 store, which is likewise
done under rq->lock.
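
That is, something like so (entirely untested sketch of what I mean;
the lockdep_assert_held() is only there to document the rq->lock
assumption):

static void __maybe_unused update_runtime_enabled(struct rq *rq)
{
	struct task_group *tg;

	lockdep_assert_held(&rq->lock);

	/*
	 * ->online is only cleared while holding rq->lock, so the
	 * check below is stable without an extra rcu_read_lock().
	 */
	list_for_each_entry_rcu(tg, &task_groups, list) {
		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

		if (!cfs_rq->online)
			continue;

		raw_spin_lock(&cfs_b->lock);
		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
		raw_spin_unlock(&cfs_b->lock);
	}
}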

Also, arguably you can keep using for_each_leaf_cfs_rq() for the
unthrottle path, because I don't think we can (or should?) remove a
throttled group from the leaf list -- it's not empty after all.
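
That is, the existing loop (untested sketch, body essentially as it is
today) should still visit every throttled cfs_rq:

static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct cfs_rq *cfs_rq;

	/*
	 * A throttled cfs_rq is not empty, so it should still be on
	 * the leaf list and the walk below reaches it.
	 */
	for_each_leaf_cfs_rq(rq, cfs_rq) {
		if (!cfs_rq->runtime_enabled)
			continue;

		/* ensure some valid runtime and stop new throttling */
		cfs_rq->runtime_remaining = 1;
		cfs_rq->runtime_enabled = 0;

		if (cfs_rq_throttled(cfs_rq))
			unthrottle_cfs_rq(cfs_rq);
	}
}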

Then again, this is CPU hotplug code and nobody much cares about its
performance, and it's better to be safe than sorry, so yes, use the
task_groups list to make sure you reach all groups.
