Date:	Tue, 19 Nov 2013 23:52:52 +0100
From:	Andrea Arcangeli <aarcange@...hat.com>
To:	Khalid Aziz <khalid.aziz@...cle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Pravin Shelar <pshelar@...ira.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Ben Hutchings <bhutchings@...arflare.com>,
	Christoph Lameter <cl@...ux.com>,
	Johannes Weiner <jweiner@...hat.com>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Andi Kleen <andi@...stfloor.org>,
	Minchan Kim <minchan@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 0/3] mm: hugetlbfs: fix hugetlbfs optimization v2

Hi Khalid,

On Tue, Nov 19, 2013 at 01:27:22PM -0700, Khalid Aziz wrote:
> > Block size        3.12         3.12+patch 1      3.12+patch 1,2,3
> >                          (read throughput, MB/sec)
> > ----------        ----         ------------      ----------------
> > 1M                8467           8114              7648
> > 64K               4049           4043              4175
> >
> > Performance numbers with 64K reads look good but there is further
> > deterioration with 1M reads.
> >
> > --
> > Khalid
> 
> Hi Andrea,
> 
> I found that a background task running on my test server had influenced 
> the performance numbers for 1M reads. I cleaned that problem up and 
> re-ran the test. I am seeing 8456 MB/sec with all three patches applied, 
> so the 1M number is looking good as well.

Good news, thanks!

1/3 should go in -mm, I think, as it fixes many problems.

The rest can be applied with lower priority; it's not as urgent.

In the meantime I've also tried to optimize it further, as I didn't
think it was fully right yet, so I could send another patchset. I
haven't changed 1/3 and I don't plan to change it, and I kept 3/3 at
the end since it's a bit more complex than the rest.

I basically removed a few more atomic ops from each put_page/get_page
for both hugetlbfs and slab, and the important thing is that they're
zero-cost changes for the non-hugetlbfs/slab fast paths, so they're
probably worth it.
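
To make the zero-cost fast-path point concrete, here is a minimal
userspace toy model of the idea (not the actual patches): the normal,
non-tail page path stays a single atomic op, while tail pages of
hugetlbfs/slab compounds take a reference on the head page and skip
the extra per-tail bookkeeping that generic compound pages need. All
names here (toy_page, toy_get_page, toy_put_page, the tail and
slab_or_hugetlb flags) are invented for illustration and only loosely
model the real PageTail()/get_page()/put_page() logic.

/*
 * Toy userspace sketch -- NOT the kernel code.
 * Build with: cc -std=c11 toy_refcount.c
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	atomic_int refcount;
	bool tail;		/* models a PageTail()-style check */
	bool slab_or_hugetlb;	/* tail belongs to a slab/hugetlbfs compound */
	struct toy_page *head;	/* head page of the compound, if tail */
};

static void toy_get_page(struct toy_page *page)
{
	if (!page->tail) {
		/* Fast path: unchanged, still a single atomic op. */
		atomic_fetch_add(&page->refcount, 1);
		return;
	}
	if (page->slab_or_hugetlb) {
		/*
		 * The head pins all its tails here, so one atomic op on
		 * the head is enough; the extra tail accounting is skipped.
		 */
		atomic_fetch_add(&page->head->refcount, 1);
		return;
	}
	/* Generic compound tails: keep the extra per-tail op as a stand-in. */
	atomic_fetch_add(&page->head->refcount, 1);
	atomic_fetch_add(&page->refcount, 1);
}

static void toy_put_page(struct toy_page *page)
{
	struct toy_page *target = page->tail ? page->head : page;

	if (page->tail && !page->slab_or_hugetlb)
		atomic_fetch_sub(&page->refcount, 1);	/* mirror the extra op */

	if (atomic_fetch_sub(&target->refcount, 1) == 1)
		printf("last reference dropped, would free here\n");
}

int main(void)
{
	struct toy_page head = { .refcount = 1 };
	struct toy_page tail = { .refcount = 0, .tail = true,
				 .slab_or_hugetlb = true, .head = &head };

	toy_get_page(&tail);	/* only touches head.refcount */
	toy_put_page(&tail);
	assert(atomic_load(&head.refcount) == 1);
	return 0;
}

The point of the sketch is only the branch structure: the !tail branch
is byte-for-byte what it was before, so pages outside hugetlbfs/slab
pay nothing for the optimization.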
