Date:	Fri, 02 Mar 2012 22:25:29 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Christoph Lameter <cl@...ux.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Miao Xie <miaox@...fujitsu.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cpuset: mm: Remove memory barrier damage from the page allocator

On Fri, 2012-03-02 at 17:43 +0000, Mel Gorman wrote:
> 
> I considered using a seqlock but it isn't cheap. The read side is heavy
> with the possibility that it starts spinning, and it incurs a read barrier
> (looking at read_seqbegin() here). The retry block incurs another read
> barrier, so basically it would be no better than what is there currently
> (which, at a 4% performance hit, sucks).
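
(For reference, the seqlock read side being described looks roughly like
this; mems_lock is a hypothetical seqlock_t guarding mems_allowed, not an
existing field:)

  seqlock_t mems_lock;  /* hypothetical: would guard mems_allowed */
  unsigned seq;

  do {
    /* read_seqbegin() spins while a write is in progress, then smp_rmb() */
    seq = read_seqbegin(&mems_lock);
    /* ... read mems_allowed ... */
    /* read_seqretry() issues another smp_rmb() and rechecks the sequence */
  } while (read_seqretry(&mems_lock, seq));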

Use seqcount.

Also, the write side doesn't really matter: changing mems_allowed
should be rare and is an 'expensive' operation anyway.
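
(A minimal sketch of that write side, assuming a seqcount_t mems_seq field
added to task_struct as in the read-side example below; the helper name is
made up:)

  /* rare, already-expensive path: bump the sequence around the update */
  static void update_task_mems_allowed(struct task_struct *tsk,
                                       const nodemask_t *newmems)
  {
    write_seqcount_begin(&tsk->mems_seq);  /* seq++, then smp_wmb() */
    tsk->mems_allowed = *newmems;
    write_seqcount_end(&tsk->mems_seq);    /* smp_wmb(), then seq++ */
  }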

For the read side you can do:

again:
  /* snapshot the mems_allowed sequence; one smp_rmb(), no lock taken */
  seq = read_seqcount_begin(&current->mems_seq);

  page = do_your_allocator_muck();

  /* only recheck the sequence when the allocation failed: a concurrent
   * mems_allowed change may be why, so retry with the new mask */
  if (!page && read_seqcount_retry(&current->mems_seq, seq))
    goto again;

  if (!page)
    oom();

That way, you only have one smp_rmb() in your fast path,
read_seqcount_begin() doesn't spin (barring the unlikely case of catching
a writer mid-update), and you only incur the second smp_rmb() when you've
completely failed to allocate anything.

smp_rmb() is basically free on x86; other archs will incur some overhead,
but you need a barrier, as Christoph pointed out.
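
(On x86, ignoring the old CONFIG_X86_PPRO_FENCE errata case, smp_rmb() is
just a compiler barrier, since x86 doesn't reorder loads against loads:)

  /* approximately what <asm/barrier.h> does on x86 */
  #define barrier()  __asm__ __volatile__("" : : : "memory")
  #define smp_rmb()  barrier()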