Date:   Thu, 3 Sep 2020 16:32:54 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Anshuman Khandual <anshuman.khandual@....com>,
        David Hildenbrand <david@...hat.com>,
        Matthew Wilcox <willy@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/4] mm/pageblock: mitigate cmpxchg false sharing in
 pageblock flags



On 2020/9/3 3:24 PM, Mel Gorman wrote:
> On Thu, Sep 03, 2020 at 03:01:20PM +0800, Alex Shi wrote:
>> pageblock_flags is accessed a long at a time. Since each pageblock's
>> flags occupy just 4 bits, one long covers 8 (on 32-bit machines) or 16
>> pageblocks' flags, so setting one pageblock's flags has to cmpxchg
>> against 7 or 15 other pageblocks' flags. That can mean a long wait for
>> the cmpxchg to succeed.
>>
>> If we changed the pageblock_flags storage to char, we could use a
>> char-sized cmpxchg that covers only 2 pageblocks' flags, which would
>> relieve the false sharing in cmpxchg.
>>
>> Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
> 
> Page block types were not known to change at a high enough frequency to
> cause a measurable performance drop. If anything, the performance hit
> from pageblocks is in the lookup paths, which are far more frequent.

Yes, it is not a hot path. But it is still a meaningful change: the
cmpxchg-level false sharing is logically wrong even if it is not costly.
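
To make the collision domain concrete, here is a minimal userspace sketch
of the two granularities (plain C using the GCC/Clang __atomic builtins
rather than the kernel helpers; pb_set_long()/pb_set_char() are made-up
names for illustration):

/*
 * Each pageblock's flags occupy PB_BITS bits.  With long-sized storage,
 * updating one field must cmpxchg the whole word, so a concurrent update
 * to any of the other 7 (32-bit) or 15 (64-bit) fields sharing that word
 * forces a retry.  With char-sized storage, only the single neighbouring
 * field in the same byte can interfere.
 */
#define PB_BITS		4
#define PB_MASK		((1UL << PB_BITS) - 1)
#define PB_PER_LONG	(sizeof(long) * 8 / PB_BITS)

/* long-granularity update: retries on any neighbour in the word */
static void pb_set_long(unsigned long *word, int idx, unsigned long val)
{
	int shift = (idx % PB_PER_LONG) * PB_BITS;
	unsigned long old = *word, new;

	do {
		new = (old & ~(PB_MASK << shift)) | (val << shift);
	} while (!__atomic_compare_exchange_n(word, &old, new, 0,
				__ATOMIC_RELAXED, __ATOMIC_RELAXED));
}

/* char-granularity update: at most one neighbour field can collide */
static void pb_set_char(unsigned char *byte, int idx, unsigned char val)
{
	int shift = (idx & 1) * PB_BITS;	/* 2 fields per byte */
	unsigned char old = *byte, new;

	do {
		new = (old & ~(PB_MASK << shift)) | (val << shift);
	} while (!__atomic_compare_exchange_n(byte, &old, new, 0,
				__ATOMIC_RELAXED, __ATOMIC_RELAXED));
}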


> 
> What was the workload you were running that altered pageblocks at a high
> enough frequency for collisions to occur when updating adjacent
> pageblocks?
> 

I ran thpscale with THP's defrag setting at 'always'. The stddev of the
Amean results is much larger than the very small reduction in average run
time.

But the remaining patch 4 (below) shows the cmpxchg retries dropping from
thousands to hundreds or fewer.

Subject: [PATCH v4 4/4] add cmpxchg tracing

Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
---
 include/trace/events/pageblock.h | 30 ++++++++++++++++++++++++++++++
 mm/page_alloc.c                  |  4 ++++
 2 files changed, 34 insertions(+)
 create mode 100644 include/trace/events/pageblock.h

diff --git a/include/trace/events/pageblock.h b/include/trace/events/pageblock.h
new file mode 100644
index 000000000000..003c2d716f82
--- /dev/null
+++ b/include/trace/events/pageblock.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM pageblock
+
+#if !defined(_TRACE_PAGEBLOCK_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_PAGEBLOCK_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(hit_cmpxchg,
+
+	TP_PROTO(char byte),
+
+	TP_ARGS(byte),
+
+	TP_STRUCT__entry(
+		__field(char, byte)
+	),
+
+	TP_fast_assign(
+		__entry->byte = byte;
+	),
+
+	TP_printk("%d", __entry->byte)
+);
+
+#endif /* _TRACE_PAGEBLOCK_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8b65d83d8be6..a6d7159295bc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -509,6 +509,9 @@ static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned
  * @pfn: The target page frame number
  * @mask: mask of bits that the caller is interested in
  */
+#define CREATE_TRACE_POINTS
+#include <trace/events/pageblock.h>
+
 void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 					unsigned long pfn,
 					unsigned long mask)
@@ -532,6 +535,7 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 		if (byte == old_byte)
 			break;
 		byte = old_byte;
+		trace_hit_cmpxchg(byte);
 	}
 }

--
1.8.3.1
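
For reference, with this patch applied the retries should be observable at
run time through the standard tracefs interface (paths assume tracefs is
mounted at /sys/kernel/tracing):

  echo 1 > /sys/kernel/tracing/events/pageblock/hit_cmpxchg/enable
  cat /sys/kernel/tracing/trace_pipe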
