Message-Id: <20241018035304.1050135-1-zhengyejian@huaweicloud.com>
Date: Fri, 18 Oct 2024 11:53:04 +0800
From: Zheng Yejian <zhengyejian@...weicloud.com>
To: sj@...nel.org,
	akpm@...ux-foundation.org,
	sieberf@...zon.com,
	shakeel.butt@...ux.dev,
	foersleo@...zon.de
Cc: damon@...ts.linux.dev,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	zhengyejian@...weicloud.com
Subject: [PATCH] mm/damon/vaddr: Fix issue in damon_va_evenly_split_region()

According to the logic of damon_va_evenly_split_region(), at least the
following split cases currently do not meet expectations:

  Suppose DAMON_MIN_REGION=0x1000,
  Case1: Split [0x0, 0x1100) into 1 piece, then the result is
         actually [0x0, 0x1000), NOT the expected [0x0, 0x1100)!
  Case2: Split [0x0, 0x3000) into 2 pieces, then the result is
         actually 3 regions:
           [0x0, 0x1000), [0x1000, 0x2000), [0x2000, 0x3000)
         but NOT the expected 2 regions:
           [0x0, 0x1000), [0x1000, 0x3000)!

The root cause is that when damon_va_evenly_split_region() calculates
the size of each split piece:

  `sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION);`

both the division and the ALIGN_DOWN may lose precision, so repeatedly
splitting off pieces of size 'sz_piece' from the original 'start'
toward 'end' causes:
  1. For Case1 above, the 'end' of the single split piece is aligned
     down but never restored to the original 'end'!
  2. For Case2 above, more pieces are split out than expected!
Both behaviors are reproduced by the standalone sketch below.

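For reference, here is a minimal user-space sketch (hypothetical code,
not part of this patch) that mirrors the pre-patch size computation
and loop bound to reproduce both cases; ALIGN_DOWN is assumed to
expand to the usual x & ~(a - 1) for a power-of-two 'a':

  #include <stdio.h>

  #define DAMON_MIN_REGION 0x1000UL
  #define ALIGN_DOWN(x, a) ((x) & ~((a) - 1)) /* 'a': power of two */

  /* Mirror of the pre-patch splitting logic, printing each piece. */
  static void old_split(unsigned long start, unsigned long end,
                        unsigned int nr_pieces)
  {
          unsigned long sz_orig = end - start;
          unsigned long sz_piece = ALIGN_DOWN(sz_orig / nr_pieces,
                                              DAMON_MIN_REGION);
          unsigned long s;
          unsigned int n = 1;

          printf("split [%#lx, %#lx) into %u: [%#lx, %#lx)",
                 start, end, nr_pieces, start, start + sz_piece);
          /* old loop bound: stop once a whole piece no longer fits */
          for (s = start + sz_piece; s + sz_piece <= end; s += sz_piece) {
                  printf(" [%#lx, %#lx)", s, s + sz_piece);
                  n++;
          }
          /* the old 'complement' step only fixed 'end' if the loop ran */
          printf(" -> %u piece(s)%s\n", n,
                 n == 1 ? ", 'end' NOT restored to orig_end" : "");
  }

  int main(void)
  {
          old_split(0x0, 0x1100, 1); /* Case1: yields [0x0, 0x1000) */
          old_split(0x0, 0x3000, 2); /* Case2: yields 3 pieces */
          return 0;
  }
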
To fix it, this patch:
- Simply returns 0 when asked to split into 1 piece;
- Counts the pieces as they are split and ensures no more than
  'nr_pieces' are produced;
- Adds the above two cases to damon_test_split_evenly().

BTW, when running the kunit tests, DAMON_MIN_REGION is currently
redefined as 1, so the ALIGN_DOWN cases above may not be exercised,
since every value is already aligned to 1, as the snippet below
illustrates.
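
A quick sanity check (again assuming ALIGN_DOWN expands to
x & ~(a - 1)):

  /* with the kunit redefinition, alignment is a no-op: */
  #define DAMON_MIN_REGION 1UL
  #define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
  _Static_assert(ALIGN_DOWN(0x1100UL, DAMON_MIN_REGION) == 0x1100UL,
                 "aligning to 1 preserves every value");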

After this patch, the damon-operations tests pass:

 # ./tools/testing/kunit/kunit.py run damon-operations
 [...]
 ============== damon-operations (6 subtests) ===============
 [PASSED] damon_test_three_regions_in_vmas
 [PASSED] damon_test_apply_three_regions1
 [PASSED] damon_test_apply_three_regions2
 [PASSED] damon_test_apply_three_regions3
 [PASSED] damon_test_apply_three_regions4
 [PASSED] damon_test_split_evenly
 ================ [PASSED] damon-operations =================

Fixes: 3f49584b262c ("mm/damon: implement primitives for the virtual memory address spaces")
Signed-off-by: Zheng Yejian <zhengyejian@...weicloud.com>
---
 mm/damon/tests/vaddr-kunit.h |  2 ++
 mm/damon/vaddr.c             | 13 +++++++++----
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index a339d117150f..b9a03e4e29e5 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -300,6 +300,8 @@ static void damon_test_split_evenly(struct kunit *test)
 	damon_test_split_evenly_fail(test, 0, 100, 0);
 	damon_test_split_evenly_succ(test, 0, 100, 10);
 	damon_test_split_evenly_succ(test, 5, 59, 5);
+	damon_test_split_evenly_succ(test, 4, 6, 1);
+	damon_test_split_evenly_succ(test, 0, 3, 2);
 	damon_test_split_evenly_fail(test, 5, 6, 2);
 }
 
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 08cfd22b5249..1f3cebd20829 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -67,10 +67,14 @@ static int damon_va_evenly_split_region(struct damon_target *t,
 	unsigned long sz_orig, sz_piece, orig_end;
 	struct damon_region *n = NULL, *next;
 	unsigned long start;
+	int i;
 
 	if (!r || !nr_pieces)
 		return -EINVAL;
 
+	if (nr_pieces == 1)
+		return 0;
+
 	orig_end = r->ar.end;
 	sz_orig = damon_sz_region(r);
 	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION);
@@ -79,9 +83,11 @@ static int damon_va_evenly_split_region(struct damon_target *t,
 		return -EINVAL;
 
 	r->ar.end = r->ar.start + sz_piece;
+	/* origin region will be updated as the first one after splitting */
+	i = 1;
+	n = r;
 	next = damon_next_region(r);
-	for (start = r->ar.end; start + sz_piece <= orig_end;
-			start += sz_piece) {
+	for (start = r->ar.end; i < nr_pieces; start += sz_piece, i++) {
 		n = damon_new_region(start, start + sz_piece);
 		if (!n)
 			return -ENOMEM;
@@ -89,8 +95,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
 		r = n;
 	}
 	/* complement last region for possible rounding error */
-	if (n)
-		n->ar.end = orig_end;
+	n->ar.end = orig_end;
 
 	return 0;
 }
-- 
2.25.1