Date:	Wed, 14 Mar 2012 13:41:50 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Konstantin Khlebnikov <khlebnikov@...nvz.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 1/2] kernel: cgroup: push rcu read locking from
 css_is_ancestor() to callsite

On Tue 28-02-12 15:14:48, Johannes Weiner wrote:
> Library functions should not grab locks when the callsites can do it,
> even if the lock nests like the rcu read-side lock does.
> 
> Push the rcu_read_lock() from css_is_ancestor() to its single user,
> mem_cgroup_same_or_subtree() in preparation for another user that may
> already hold the rcu read-side lock.
> 
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>

Acked-by: Michal Hocko <mhocko@...e.cz>
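The pattern being acked here — the library helper documents that the caller must hold the lock, and the single callsite takes it — can be sketched outside the kernel. This is a minimal illustration, not kernel code: the real rcu_read_lock()/rcu_read_unlock() are replaced by a hypothetical depth counter so the lock discipline is checkable, and the ancestor test is a stand-in.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the RCU read-side lock: a depth counter
 * lets this sketch verify that the helper runs under the lock. */
static int rcu_depth;
static void rcu_read_lock_sketch(void)   { rcu_depth++; }
static void rcu_read_unlock_sketch(void) { rcu_depth--; }

/* Library helper after the patch: it no longer takes the lock itself,
 * it merely requires that the caller already holds it. */
static bool css_is_ancestor_sketch(int child_depth, int root_depth)
{
	assert(rcu_depth > 0);	/* caller must hold the read-side lock */
	return child_depth >= root_depth;	/* placeholder ancestor test */
}

/* Callsite, mirroring mem_cgroup_same_or_subtree(): the lock is taken
 * once around the helper, which also permits callers that already
 * hold the read-side lock to call the helper directly. */
static bool same_or_subtree_sketch(int child_depth, int root_depth)
{
	bool ret;

	rcu_read_lock_sketch();
	ret = css_is_ancestor_sketch(child_depth, root_depth);
	rcu_read_unlock_sketch();
	return ret;
}
```

Because the rcu read-side lock nests, the old code worked even when the caller held the lock; the point of the patch is that the helper should not decide that for its callers.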

> ---
>  kernel/cgroup.c |   20 ++++++++++----------
>  mm/memcontrol.c |   14 +++++++++-----
>  2 files changed, 19 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 4be474d..9003bd8 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -4841,7 +4841,7 @@ EXPORT_SYMBOL_GPL(css_depth);
>   * @root: the css supposed to be an ancestor of the child.
>   *
>   * Returns true if "root" is an ancestor of "child" in its hierarchy. Because
> - * this function reads css->id, this use rcu_dereference() and rcu_read_lock().
> + * this function reads css->id, the caller must hold rcu_read_lock().
>   * But, considering usual usage, the csses should be valid objects after test.
>  * Assuming that the caller will do some action to the child if this
>  * returns true, the caller must take "child"'s reference count.
> @@ -4853,18 +4853,18 @@ bool css_is_ancestor(struct cgroup_subsys_state *child,
>  {
>  	struct css_id *child_id;
>  	struct css_id *root_id;
> -	bool ret = true;
>  
> -	rcu_read_lock();
>  	child_id  = rcu_dereference(child->id);
> +	if (!child_id)
> +		return false;
>  	root_id = rcu_dereference(root->id);
> -	if (!child_id
> -	    || !root_id
> -	    || (child_id->depth < root_id->depth)
> -	    || (child_id->stack[root_id->depth] != root_id->id))
> -		ret = false;
> -	rcu_read_unlock();
> -	return ret;
> +	if (!root_id)
> +		return false;
> +	if (child_id->depth < root_id->depth)
> +		return false;
> +	if (child_id->stack[root_id->depth] != root_id->id)
> +		return false;
> +	return true;
>  }
>  
>  void free_css_id(struct cgroup_subsys *ss, struct cgroup_subsys_state *css)
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e4be95a..b4622fb 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1047,12 +1047,16 @@ struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
>  static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
>  		struct mem_cgroup *memcg)
>  {
> -	if (root_memcg != memcg) {
> -		return (root_memcg->use_hierarchy &&
> -			css_is_ancestor(&memcg->css, &root_memcg->css));
> -	}
> +	bool ret;
>  
> -	return true;
> +	if (root_memcg == memcg)
> +		return true;
> +	if (!root_memcg->use_hierarchy)
> +		return false;
> +	rcu_read_lock();
> +	ret = css_is_ancestor(&memcg->css, &root_memcg->css);
> +	rcu_read_unlock();
> +	return ret;
>  }
>  
>  int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg)
> -- 
> 1.7.7.6
> 
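The css->id test that the patch flattens into early returns can be illustrated with a hypothetical miniature of struct css_id: each node records its depth and the ids of all its ancestors (including itself) in a stack indexed by depth, so "root is an ancestor of child" reduces to one bounds check and one array lookup. The struct and names below are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_DEPTH 8

/* Hypothetical miniature of struct css_id:
 * stack[d] holds the id of this node's ancestor at depth d. */
struct id_node {
	int id;
	int depth;
	int stack[MAX_DEPTH];
};

/* Same shape as css_is_ancestor() after the patch: a chain of early
 * returns instead of one accumulated condition.  root is an ancestor
 * of child (or the same node) iff child's ancestor stack records
 * root's id at root's depth. */
static bool is_ancestor(const struct id_node *child,
			const struct id_node *root)
{
	if (child->depth < root->depth)
		return false;
	if (child->stack[root->depth] != root->id)
		return false;
	return true;
}
```

The early-return form also makes the NULL checks on the rcu_dereference()d ids cheap to order correctly: in the patched css_is_ancestor(), child->id is tested before root->id is even dereferenced.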

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic