Date:	Tue, 6 Mar 2012 14:54:51 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Mel Gorman <mgorman@...e.de>
Cc:	David Rientjes <rientjes@...gle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Christoph Lameter <cl@...ux.com>,
	Miao Xie <miaox@...fujitsu.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cpuset: mm: Reduce large amounts of memory barrier
 related damage v2

On Tue, 6 Mar 2012 22:42:01 +0000
Mel Gorman <mgorman@...e.de> wrote:

> /*
>  * get_mems_allowed is required when making decisions involving mems_allowed
>  * such as during page allocation. mems_allowed can be updated in parallel
>  * and, depending on the new value, an operation can fail, potentially
>  * causing a process failure. A retry loop with get_mems_allowed and
>  * put_mems_allowed prevents these artificial failures.
>  */
> static inline unsigned int get_mems_allowed(void)
> {
>         return read_seqcount_begin(&current->mems_allowed_seq);
> }
> 
> /*
>  * If this returns false, the operation that took place after get_mems_allowed
>  * may have failed. It is up to the caller to retry the operation if
>  * appropriate.
>  */
> static inline bool put_mems_allowed(unsigned int seq)
> {
>         return !read_seqcount_retry(&current->mems_allowed_seq, seq);
> }
> 
> ?

lgtm ;)
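
For the record, the usage pattern this enables is a retry loop around the
allocation path. A minimal sketch only; gfp_mask, order and the
alloc_pages() call stand in for whatever the real caller does:

	struct page *page;
	unsigned int cpuset_mems_cookie;

	do {
		/* Snapshot the mems_allowed sequence counter */
		cpuset_mems_cookie = get_mems_allowed();
		page = alloc_pages(gfp_mask, order);
		/*
		 * Retry only if the allocation failed while mems_allowed
		 * was being rewritten underneath us.
		 */
	} while (!put_mems_allowed(cpuset_mems_cookie) && !page);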

> > > -static inline void put_mems_allowed(void)
> > > +/*
> > > + * If this returns false, the operation that took place after get_mems_allowed
> > > + * may have failed. It is up to the caller to retry the operation if
> > > + * appropriate
> > > + */
> > > +static inline bool put_mems_allowed(unsigned int seq)
> > >  {
> > > -	/*
> > > -	 * Ensure that mems_allowed and mempolicy are read before
> > > -	 * mems_allowed_change_disable is decremented.
> > > -	 *
> > > -	 * That way the write-side task knows that a read-side task is
> > > -	 * still reading mems_allowed or mempolicy, and will not clear
> > > -	 * old bits in the nodemask.
> > > -	 */
> > > -	smp_mb();
> > > -	--ACCESS_ONCE(current->mems_allowed_change_disable);
> > > +	return !read_seqcount_retry(&current->mems_allowed_seq, seq);
> > >  }
> > >  
> > >  static inline void set_mems_allowed(nodemask_t nodemask)
> > 
> > How come set_mems_allowed() still uses task_lock()?
> >
> 
> Consistency.
> 
> The task_lock is taken by kernel/cpuset.c when updating
> mems_allowed, so it is taken here as well. That said, it is
> unnecessary here, since the two places where set_mems_allowed() is
> used are not going to race. In the unlikely event that
> set_mems_allowed() gets another user, there is no harm in leaving
> the task_lock as it is. It's not in a hot path of any description.

But shouldn't set_mems_allowed() bump mems_allowed_seq?
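
Something like the below, presumably. A sketch only, assuming the write
side keeps nesting inside task_lock the way kernel/cpuset.c does:

	static inline void set_mems_allowed(nodemask_t nodemask)
	{
		task_lock(current);
		/* Tell readers in the get/put critical section to retry */
		write_seqcount_begin(&current->mems_allowed_seq);
		current->mems_allowed = nodemask;
		write_seqcount_end(&current->mems_allowed_seq);
		task_unlock(current);
	}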