lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20100304045315.GP8653@laptop>
Date:	Thu, 4 Mar 2010 15:53:15 +1100
From:	Nick Piggin <npiggin@...e.de>
To:	Miao Xie <miaox@...fujitsu.com>
Cc:	David Rientjes <rientjes@...gle.com>,
	Lee Schermerhorn <lee.schermerhorn@...com>,
	Paul Menage <menage@...gle.com>,
	Linux-Kernel <linux-kernel@...r.kernel.org>,
	Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 4/4] cpuset,mm: use rwlock to protect task->mempolicy
 and mems_allowed

On Wed, Mar 03, 2010 at 06:52:39PM +0800, Miao Xie wrote:
> if MAX_NUMNODES > BITS_PER_LONG, loading/storing task->mems_allowed or mems_allowed in
> task->mempolicy are not atomic operations, and the kernel page allocator gets an empty
> mems_allowed when updating task->mems_allowed or mems_allowed in task->mempolicy. So we
> use a rwlock to protect them to fix this probelm.

Oh, and something else I'm also concerned about:

If  MAX_NUMNODES <= BITS_PER_LONG then these locks are a noop.

> +#define read_mem_lock_irqsave(p, flags)		do { (void)(flags); } while (0)
> +
> +#define read_mem_unlock_irqrestore(p, flags)	do { (void)(flags); } while (0)
> +
> +/* Be used to protect task->mempolicy and mems_allowed when user reads them */

However you are appearing to use them for more than just atomically
loading of the nodemasks.

> @@ -2447,11 +2503,14 @@ void cpuset_unlock(void)
>  int cpuset_mem_spread_node(void)
>  {
>  	int node;
> +	unsigned long flags;
>  
> +	read_mem_lock_irqsave(current, flags);
>  	node = next_node(current->cpuset_mem_spread_rotor, current->mems_allowed);
>  	if (node == MAX_NUMNODES)
>  		node = first_node(current->mems_allowed);
>  	current->cpuset_mem_spread_rotor = node;
> +	read_mem_unlock_irqrestore(current, flags);
>  	return node;
>  }
>  EXPORT_SYMBOL_GPL(cpuset_mem_spread_node);

If you are worried about doing this kind of atomic RMW on the mask, then
you cannot make the lock a noop. So if you're nooping the lock in this
way then you really need to cuddle it neatly around loading of the mask.

Once you do that, it would be trivial to use a seqlock.

...

> @@ -1381,8 +1434,16 @@ static struct mempolicy *get_vma_policy(struct task_struct *task,
>  		} else if (vma->vm_policy)
>  			pol = vma->vm_policy;
>  	}
> +	if (!pol) {
> +		read_mem_lock_irqsave(task, irqflags);
> +		pol = task->mempolicy;
> +		mpol_get(pol);
> +		read_mem_unlock_irqrestore(task, irqflags);
> +	}
> +
>  	if (!pol)
>  		pol = &default_policy;
> +
>  	return pol;
>  }

And a couple of others. It looks like you're using it here to guarantee
existence of the mempolicy.... Did you mean read_mempolicy_lock? Or do
you have another problem (there seems to be several cases of this).
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ