Message-ID: <87sf2klez8.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 26 Jan 2024 15:40:27 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Gregory Price <gourry.memverge@...il.com>
Cc: linux-mm@...ck.org,  linux-kernel@...r.kernel.org,
  linux-doc@...r.kernel.org,  linux-fsdevel@...r.kernel.org,
  linux-api@...r.kernel.org,  corbet@....net,  akpm@...ux-foundation.org,
  gregory.price@...verge.com,  honggyu.kim@...com,  rakie.kim@...com,
  hyeongtak.ji@...com,  mhocko@...nel.org,  vtavarespetr@...ron.com,
  jgroves@...ron.com,  ravis.opensrc@...ron.com,  sthanneeru@...ron.com,
  emirakhur@...ron.com,  Hasan.Maruf@....com,  seungjun.ha@...sung.com,
  hannes@...xchg.org,  dan.j.williams@...el.com
Subject: Re: [PATCH v3 4/4] mm/mempolicy: change cur_il_weight to atomic and
 carry the node with it

Gregory Price <gourry.memverge@...il.com> writes:

> In the prior patch, we carry only the current weight for a weighted
> interleave round across calls through the allocator path.
>
> node = next_node_in(current->il_prev, pol->nodemask)
> pol->cur_il_weight <--- this weight applies to the above node
>
> This separation of data can cause a race condition.
>
> If a cgroup-initiated task migration or mems_allowed change occurs
> from outside the context of the task, the weight can become stale,
> meaning we may end up using that weight to allocate memory on the
> wrong node.
>
> Example:
>   1) task A sets (cur_il_weight = 8) and (current->il_prev) to
>      node0. node1 is the next set bit in pol->nodemask
>   2) rebind event occurs, removing node1 from the nodemask.
>      node2 is now the next set bit in pol->nodemask
>      cur_il_weight is now stale.
>   3) allocation occurs, next_node_in(il_prev, nodes) returns
>      node2. cur_il_weight is now applied to the wrong node.
>
> The upper level allocator logic must still enforce mems_allowed,
> so this isn't dangerous, but it is inaccurate.
>
> Just clearing the weight is insufficient, as it creates two more
> race conditions.  The root of the issue is the separation of weight
> and node data between nodemask and cur_il_weight.
>
> To solve this, update cur_il_weight to be an atomic_t, and place the
> node that the weight applies to in the upper bits of the field:
>
> atomic_t cur_il_weight
> 	node bits 31:8
> 	weight bits 7:0
>
> Now retrieving or clearing the active interleave node and weight
> is a single atomic operation, and we are not dependent on the
> potentially changing state of (pol->nodemask) to determine what
> node the weight applies to.
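
FWIW, with the packed layout, fetch-and-clear can indeed be done in one
shot via atomic_xchg().  A minimal sketch (untested, variable names are
made up here):

	/* atomically fetch the packed node+weight and reset it to 0 */
	int packed = atomic_xchg(&pol->cur_il_weight, 0);
	int node = (unsigned int)packed >> 8;	/* node in bits 31:8 */
	u8 weight = packed & 0xFF;		/* weight in bits 7:0 */
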
>
> Two special observations:
> - if the weight is non-zero, cur_il_weight must *always* carry a
>   valid node number, i.e., it cannot be NUMA_NO_NODE (-1).

IIUC, we don't need that; "MAX_NUMNODES-1" is used instead.

>   This is because we steal the bottom byte of the field for the weight.
>
> - MAX_NUMNODES is presently limited to 1024 or less on every
>   architecture. This would permanently limit MAX_NUMNODES to
>   an absolute maximum of (1 << 24) to avoid overflows.
>
> Per some reading and discussion, it appears that max nodes is
> limited to 1024 so that the zone type still fits in the page flags.
> This method therefore seemed preferable to the alternative of
> trying to make all or part of mempolicy RCU protected, which may
> not be possible, since mempolicy is often referenced in code paths
> that call operations which may sleep.
>
> Signed-off-by: Gregory Price <gregory.price@...verge.com>
> ---
>  include/linux/mempolicy.h |  2 +-
>  mm/mempolicy.c            | 93 +++++++++++++++++++++++++--------------
>  2 files changed, 61 insertions(+), 34 deletions(-)
>
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index c644d7bbd396..8108fc6e96ca 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -56,7 +56,7 @@ struct mempolicy {
>  	} w;
>  
>  	/* Weighted interleave settings */
> -	u8 cur_il_weight;
> +	atomic_t cur_il_weight;

If we use this field for node and weight, why not change the field name?
For example, cur_wil_node_weight.

>  };
>  
>  /*
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 5a517511658e..41b5fef0a6f5 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -321,7 +321,7 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
>  	policy->mode = mode;
>  	policy->flags = flags;
>  	policy->home_node = NUMA_NO_NODE;
> -	policy->cur_il_weight = 0;
> +	atomic_set(&policy->cur_il_weight, 0);
>  
>  	return policy;
>  }
> @@ -356,6 +356,7 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
>  		tmp = *nodes;
>  
>  	pol->nodes = tmp;
> +	atomic_set(&pol->cur_il_weight, 0);
>  }
>  
>  static void mpol_rebind_preferred(struct mempolicy *pol,
> @@ -973,8 +974,10 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
>  			*policy = next_node_in(current->il_prev, pol->nodes);
>  		} else if (pol == current->mempolicy &&
>  				(pol->mode == MPOL_WEIGHTED_INTERLEAVE)) {
> -			if (pol->cur_il_weight)
> -				*policy = current->il_prev;
> +			int cweight = atomic_read(&pol->cur_il_weight);
> +
> +			if (cweight & 0xFF)
> +				*policy = cweight >> 8;

Please define some helper functions or macros instead of operating on
the bits directly.
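Something like the following, as an untested sketch (the names are only
suggestions):

	/* weight lives in bits 7:0, node in bits 31:8 */
	static inline int wil_node(int packed)
	{
		return (unsigned int)packed >> 8;
	}

	static inline u8 wil_weight(int packed)
	{
		return packed & 0xFF;
	}

	static inline int wil_pack(int node, u8 weight)
	{
		return (int)(((unsigned int)node << 8) | weight);
	}

Then the hunk above could read, e.g.:

	int cweight = atomic_read(&pol->cur_il_weight);

	if (wil_weight(cweight))
		*policy = wil_node(cweight);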

>  			else
>  				*policy = next_node_in(current->il_prev,
>  						       pol->nodes);

If we record the current node in pol->cur_il_weight, why do we still
need current->il_prev?  Can we use pol->cur_il_weight alone?  And if
so, we could even make current->il_prev a union.
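
For the union idea, something like this in task_struct, purely as an
untested illustration:

	union {
		/* MPOL_INTERLEAVE: previous interleave node */
		short il_prev;
		/* MPOL_WEIGHTED_INTERLEAVE: packed node and weight */
		atomic_t wil_node_weight;
	};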

--
Best Regards,
Huang, Ying

[snip]
