Message-ID: <87h6ueu0ae.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Tue, 21 Mar 2023 13:43:37 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     "Liu, Yujie" <yujie.liu@...el.com>
Cc:     lkp <lkp@...el.com>, "bharata@....com" <bharata@....com>,
        "Yin, Fengwei" <fengwei.yin@...el.com>,
        "willy@...radead.org" <willy@...radead.org>,
        "mike.kravetz@...cle.com" <mike.kravetz@...cle.com>,
        "shy828301@...il.com" <shy828301@...il.com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "xhao@...ux.alibaba.com" <xhao@...ux.alibaba.com>,
        "Tang, Feng" <feng.tang@...el.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "oe-lkp@...ts.linux.dev" <oe-lkp@...ts.linux.dev>,
        "ziy@...dia.com" <ziy@...dia.com>,
        "zhengjun.xing@...ux.intel.com" <zhengjun.xing@...ux.intel.com>,
        "osalvador@...e.de" <osalvador@...e.de>,
        "baolin.wang@...ux.alibaba.com" <baolin.wang@...ux.alibaba.com>,
        "minchan@...nel.org" <minchan@...nel.org>,
        "42.hyeyoo@...il.com" <42.hyeyoo@...il.com>,
        "apopple@...dia.com" <apopple@...dia.com>
Subject: Re: [linus:master] [migrate_pages] 7e12beb8ca:
 vm-scalability.throughput -3.4% regression

"Liu, Yujie" <yujie.liu@...el.com> writes:

> Hi Ying,
>
> On Mon, 2023-03-20 at 15:58 +0800, Huang, Ying wrote:
>> Hi, Yujie,
>>
>> kernel test robot <yujie.liu@...el.com> writes:
>>
>> > Hello,
>> >
>> > FYI, we noticed a -3.4% regression of vm-scalability.throughput due to commit:
>> >
>> > commit: 7e12beb8ca2ac98b2ec42e0ea4b76cdc93b58654 ("migrate_pages: batch flushing TLB")
>> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>> >
>> > in testcase: vm-scalability
>> > on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz (Cascade Lake) with 128G memory
>> > with following parameters:
>> >
>> >         runtime: 300s
>> >         size: 512G
>> >         test: anon-cow-rand-mt
>> >         cpufreq_governor: performance
>> >
>> > test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
>> > test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
>> >
>> >
>> > If you fix the issue, kindly add the following tags:
>> > > Reported-by: kernel test robot <yujie.liu@...el.com>
>> > > Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@intel.com
>> >
>>
>> Thanks a lot for the report!  Can you check whether the debug patch
>> below resolves the regression?
>
> We've tested the patch and found the throughput score was partially
> restored from -3.6% to -1.4%, still with a slight performance drop.
> Please check the detailed data as follows:

Good!  Thanks for your detailed data!

>       0.09 ± 17%      +1.2        1.32 ±  7%      +0.4        0.45 ± 21%  perf-profile.children.cycles-pp.flush_tlb_func

It appears that the previous debug patch effectively reduces the
unnecessary TLB flushing.  However, the batched flush (a full TLB flush)
is still slower than the non-batched flush (flushing a single page).

Can you try the debug patch below to check whether it resolves the
regression completely?  It applies on top of the previous debug patch.

Best Regards,
Huang, Ying

---------------------------8<-----------------------------------------
From b36b662c80652447d7374faff1142a941dc9d617 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@...el.com>
Date: Mon, 20 Mar 2023 15:38:12 +0800
Subject: [PATCH] dbg, migrate_pages: don't batch flushing for single page
 migration

---
 mm/migrate.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 98f1c11197a8..7271209c1a03 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1113,8 +1113,8 @@ static void migrate_folio_done(struct folio *src,
 static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
 			       unsigned long private, struct folio *src,
 			       struct folio **dstp, int force, bool avoid_force_lock,
-			       enum migrate_mode mode, enum migrate_reason reason,
-			       struct list_head *ret)
+			       bool batch_flush, enum migrate_mode mode,
+			       enum migrate_reason reason, struct list_head *ret)
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
@@ -1253,7 +1253,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
 		/* Establish migration ptes */
 		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
 			       !folio_test_ksm(src) && !anon_vma, src);
-		try_to_migrate(src, TTU_BATCH_FLUSH);
+		try_to_migrate(src, batch_flush ? TTU_BATCH_FLUSH : 0);
 		page_was_mapped = 1;
 	}
 
@@ -1641,6 +1641,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
 	bool no_split_folio_counting = false;
 	bool avoid_force_lock;
+	bool batch_flush = !list_is_singular(from);
 
 retry:
 	rc_saved = 0;
@@ -1690,7 +1691,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 
 			rc = migrate_folio_unmap(get_new_page, put_new_page, private,
 						 folio, &dst, pass > 2, avoid_force_lock,
-						 mode, reason, ret_folios);
+						 batch_flush, mode, reason, ret_folios);
 			/*
 			 * The rules are:
 			 *	Success: folio will be freed
@@ -1804,7 +1805,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	stats->nr_failed_pages += nr_retry_pages;
 move:
 	/* Flush TLBs for all unmapped folios */
-	try_to_unmap_flush();
+	if (batch_flush)
+		try_to_unmap_flush();
 
 	retry = 1;
 	for (pass = 0;
-- 
2.30.2
