Message-ID: <wbbieqyyjqy7ulbta6muzepxwxi6galwvhjdxpqaqbeljzpcer@dpeoqrbkl5p2>
Date: Mon, 12 Aug 2024 06:49:00 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Yin Fengwei <fengwei.yin@...el.com>
Cc: David Hildenbrand <david@...hat.com>, 
	kernel test robot <oliver.sang@...el.com>, Peter Xu <peterx@...hat.com>, oe-lkp@...ts.linux.dev, 
	lkp@...el.com, linux-kernel@...r.kernel.org, 
	Andrew Morton <akpm@...ux-foundation.org>, Huacai Chen <chenhuacai@...nel.org>, 
	Jason Gunthorpe <jgg@...dia.com>, Matthew Wilcox <willy@...radead.org>, 
	Nathan Chancellor <nathan@...nel.org>, Ryan Roberts <ryan.roberts@....com>, 
	WANG Xuerui <kernel@...0n.name>, linux-mm@...ck.org, ying.huang@...el.com, feng.tang@...el.com
Subject: Re: [linus:master] [mm] c0bff412e6: stress-ng.clone.ops_per_sec
 -2.9% regression

On Mon, Aug 12, 2024 at 12:43:08PM +0800, Yin Fengwei wrote:
> Hi David,
> 
> On 8/1/24 09:44, David Hildenbrand wrote:
> > On 01.08.24 15:37, Mateusz Guzik wrote:
> > > On Thu, Aug 1, 2024 at 3:34 PM David Hildenbrand <david@...hat.com>
> > > wrote:
> > > > 
> > > > On 01.08.24 15:30, Mateusz Guzik wrote:
> > > > > On Thu, Aug 01, 2024 at 08:49:27AM +0200, David Hildenbrand wrote:
> > > > > > Yes indeed. fork() can be extremely sensitive to each
> > > > > > added instruction.
> > > > > > 
> > > > > > I even pointed out to Peter why I didn't add the
> > > > > > PageHuge check in there
> > > > > > originally [1].
> > > > > > 
> > > > > > "Well, and I didn't want to have runtime-hugetlb checks in
> > > > > > PageAnonExclusive code called on certainly-not-hugetlb code paths."
> > > > > > 
> > > > > > 
> > > > > > We now have to do a page_folio(page) and then test for hugetlb.
> > > > > > 
> > > > > >       return folio_test_hugetlb(page_folio(page));
> > > > > > 
> > > > > > Nowadays, folio_test_hugetlb() will be faster than it was
> > > > > > at the time of c0bff412e6, so maybe at least part of the
> > > > > > overhead is gone.
> > > > > > 
> > > > > 
> > > > > I'll note page_folio expands to a call to _compound_head.
> > > > > 
> > > > > While _compound_head is declared inline, it ends up being big
> > > > > enough that the compiler decides to emit a real function
> > > > > instead, and real function calls are not particularly cheap.
> > > > > 
> > > > > I had a brief look with a profiler myself, and for
> > > > > single-threaded usage the function is quite high up there even
> > > > > though it bails out at the first branch -- that is to say,
> > > > > there is definitely performance lost to having a function call
> > > > > instead of an inlined branch.
> > > > > 
> > > > > The routine is deinlined because of a call to page_fixed_fake_head,
> > > > > which itself is annotated with always_inline.
> > > > > 
> > > > > This is of course patchable with minor shoveling.
> > > > > 
> > > > > I did not go for it because stress-ng results were too unstable for me
> > > > > to confidently state win/loss.
> > > > > 
> > > > > But should you want to whack the regression, this is what I would look
> > > > > into.
> > > > > 
> > > > 
> > > > This might improve it, at least for small folios I guess:
> Do you want us to test this change? Or do you have further
> optimization ongoing? Thanks.

I verified the thing below boots, but I have no idea about performance.
If it helps, it can be massaged later from a style perspective.

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5769fe6e4950..2d5d61ab385b 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -194,34 +194,13 @@ enum pageflags {
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 
-/*
- * Return the real head page struct iff the @page is a fake head page, otherwise
- * return the @page itself. See Documentation/mm/vmemmap_dedup.rst.
- */
+const struct page *_page_fixed_fake_head(const struct page *page);
+
 static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
 {
 	if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
 		return page;
-
-	/*
-	 * Only addresses aligned with PAGE_SIZE of struct page may be fake head
-	 * struct page. The alignment check aims to avoid access the fields (
-	 * e.g. compound_head) of the @page[1]. It can avoid touch a (possibly)
-	 * cold cacheline in some cases.
-	 */
-	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
-	    test_bit(PG_head, &page->flags)) {
-		/*
-		 * We can safely access the field of the @page[1] with PG_head
-		 * because the @page is a compound page composed with at least
-		 * two contiguous pages.
-		 */
-		unsigned long head = READ_ONCE(page[1].compound_head);
-
-		if (likely(head & 1))
-			return (const struct page *)(head - 1);
-	}
-	return page;
+	return _page_fixed_fake_head(page);
 }
 #else
 static inline const struct page *page_fixed_fake_head(const struct page *page)
@@ -235,7 +214,7 @@ static __always_inline int page_is_fake_head(const struct page *page)
 	return page_fixed_fake_head(page) != page;
 }
 
-static inline unsigned long _compound_head(const struct page *page)
+static __always_inline unsigned long _compound_head(const struct page *page)
 {
 	unsigned long head = READ_ONCE(page->compound_head);
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 829112b0a914..3fbc00db607a 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -19,6 +19,33 @@
 #include <asm/tlbflush.h>
 #include "hugetlb_vmemmap.h"
 
+/*
+ * Return the real head page struct iff the @page is a fake head page, otherwise
+ * return the @page itself. See Documentation/mm/vmemmap_dedup.rst.
+ */
+const struct page *_page_fixed_fake_head(const struct page *page)
+{
+	/*
+	 * Only addresses aligned with PAGE_SIZE of struct page may be fake head
+	 * struct page. The alignment check aims to avoid access the fields (
+	 * e.g. compound_head) of the @page[1]. It can avoid touch a (possibly)
+	 * cold cacheline in some cases.
+	 */
+	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
+	    test_bit(PG_head, &page->flags)) {
+		/*
+		 * We can safely access the field of the @page[1] with PG_head
+		 * because the @page is a compound page composed with at least
+		 * two contiguous pages.
+		 */
+		unsigned long head = READ_ONCE(page[1].compound_head);
+
+		if (likely(head & 1))
+			return (const struct page *)(head - 1);
+	}
+	return page;
+}
+
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *

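To spell out the intent of the split: callers of page_folio() and
friends get the READ_ONCE() plus bit-0 test inlined again, the
hugetlb_optimize_vmemmap_key static branch stays inline as well, and
only kernels that actually enable the vmemmap optimization at runtime
pay for the now out-of-line _page_fixed_fake_head() call.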