Date:	Fri, 15 Nov 2013 18:47:45 +0100
From:	Andrea Arcangeli <aarcange@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Khalid Aziz <khalid.aziz@...cle.com>,
	Pravin Shelar <pshelar@...ira.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Ben Hutchings <bhutchings@...arflare.com>,
	Christoph Lameter <cl@...ux.com>,
	Johannes Weiner <jweiner@...hat.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Andi Kleen <andi@...stfloor.org>,
	Minchan Kim <minchan@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 0/3] mm: hugetlbfs: fix hugetlbfs optimization v2

Hi,

1/3 is a bugfix, so it should be applied more urgently. 1/3 is not as
fast as the current upstream code in the extreme hugetlbfs + directio
8GB/sec benchmark (but 3/3 should fill the gap later). The code is
identical to the one I posted in v1, just rebased on upstream, and was
developed in collaboration with Khalid, who already tested it.

2/3 and 3/3 have had very little testing yet; they are incremental
optimizations. 2/3 is minor and almost certainly worth applying later.

3/3 instead complicates things a bit and adds more branches to the THP
fast paths, so it should only be applied if the hugetlbfs + directio
benchmarks show that it is very worthwhile (that has not been verified
yet). If it's not worthwhile, 3/3 should be dropped (and the gap
should be filled some other way, if the gap is not caused by the
_mapcount mangling as I guessed). Ideally this should bring even more
performance than the current upstream code, as the current upstream
code still increases the _mapcount in gup_fast by mistake, while this
also eliminates the locked op on the tail page cacheline in gup_fast
(which is required for correctness too).
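
To illustrate the cost argument, here is a userspace model only (the
struct and function names below are made up, this is not the kernel
code): pinning a tail page THP-style takes a locked op on the head
_count plus one on the tail _mapcount cacheline, while a compound page
that can never be split only needs the head one, which is roughly what
3/3 aims for on hugetlbfs and slab pages.

/*
 * Userspace model only: my_page, tail_pin_thp_style and
 * tail_pin_nosplit_style are hypothetical names, not kernel code.
 */
#include <stdatomic.h>
#include <stdio.h>

struct my_page {
	atomic_int count;	/* models page->_count */
	atomic_int mapcount;	/* models page->_mapcount */
	struct my_page *head;	/* compound head, NULL for the head itself */
};

/*
 * THP-style tail pin: one locked op on the head _count plus one on the
 * tail _mapcount (two cachelines).  The tail _mapcount is what later
 * lets split_huge_page_refcount redistribute the head refcounts to the
 * right tail pages.
 */
static void tail_pin_thp_style(struct my_page *tail)
{
	atomic_fetch_add(&tail->head->count, 1);
	atomic_fetch_add(&tail->mapcount, 1);
}

/*
 * Pin for a compound page that can never be split (hugetlbfs, slab):
 * a single locked op on the head cacheline is enough.
 */
static void tail_pin_nosplit_style(struct my_page *tail)
{
	atomic_fetch_add(&tail->head->count, 1);
}

int main(void)
{
	struct my_page head, tail;

	atomic_init(&head.count, 1);	/* head pages carry the real refcount */
	atomic_init(&head.mapcount, 0);
	head.head = NULL;

	atomic_init(&tail.count, 0);	/* tail _count stays 0 */
	atomic_init(&tail.mapcount, 0);
	tail.head = &head;

	tail_pin_thp_style(&tail);
	tail_pin_nosplit_style(&tail);

	printf("head _count=%d tail _mapcount=%d\n",
	       atomic_load(&head.count), atomic_load(&tail.mapcount));
	return 0;
}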

As a side note: the _mapcount refcounting on tail pages is only needed
for THP, as it is fundamental information required for
split_huge_page_refcount to be able to distribute the head refcounts
during the split. And it is done on _mapcount instead of _count,
because using _count would clash badly with the get_page_unless_zero
speculative pagecache accesses.
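
For reference, a minimal userspace sketch (hypothetical names, plain
C11 atomics) of the inc-not-zero pattern that get_page_unless_zero
relies on, and of why it depends on tail pages keeping their _count
untouched:

/*
 * Userspace sketch only: inc_not_zero models what the kernel's
 * get_page_unless_zero()/atomic_inc_not_zero() do on page->_count.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Take a reference only if somebody else already holds one. */
static bool inc_not_zero(atomic_int *count)
{
	int old = atomic_load(count);

	while (old != 0) {
		if (atomic_compare_exchange_weak(count, &old, old + 1))
			return true;
		/* a failed CAS reloads 'old'; retry */
	}
	return false;
}

int main(void)
{
	atomic_int head_count, tail_count;

	atomic_init(&head_count, 1);	/* real refcount lives in the head */
	atomic_init(&tail_count, 0);	/* tail _count is left at 0 */

	/* A speculative lookup that lands on the head succeeds... */
	printf("head: %s\n", inc_not_zero(&head_count) ? "pinned" : "rejected");

	/*
	 * ...and fails cleanly on a tail page.  If gup pins were stored in
	 * the tail _count instead of the tail _mapcount, this could succeed
	 * on tails and the split logic could no longer account for the
	 * references correctly.
	 */
	printf("tail: %s\n", inc_not_zero(&tail_count) ? "pinned" : "rejected");
	return 0;
}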

Andrea Arcangeli (3):
  mm: hugetlbfs: fix hugetlbfs optimization
  mm: hugetlb: use get_page_foll in follow_hugetlb_page
  mm: tail page refcounting optimization for slab and hugetlbfs

 include/linux/mm.h |  30 ++++++++-
 mm/hugetlb.c       |  19 +++++-
 mm/internal.h      |   3 +-
 mm/swap.c          | 187 ++++++++++++++++++++++++++++++++++-------------------
 4 files changed, 170 insertions(+), 69 deletions(-)

