Message-Id: <20240507023103.781816-1-linan666@huaweicloud.com>
Date: Tue,  7 May 2024 10:31:03 +0800
From: linan666@...weicloud.com
To: song@...nel.org,
	axboe@...nel.dk
Cc: linux-raid@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-block@...r.kernel.org,
	linan666@...weicloud.com,
	yukuai3@...wei.com,
	yi.zhang@...wei.com,
	houtao1@...wei.com,
	yangerkun@...wei.com
Subject: [PATCH] md: Revert "md: Fix overflow in is_mddev_idle"

From: Li Nan <linan122@...wei.com>

This reverts commit 3f9f231236ce7e48780d8a4f1f8cb9fae2df1e4e.

Using 64-bit for 'sync_io' is unnecessary from the gendisk side. The
overflow has no functional impact beyond a UBSAN warning: 'sync_io' is
only ever used to compute differences between samples, so wraparound
cancels out. Avoiding the overflow would require additional
calculations and checks that are not necessary, so just keep using
32-bit for 'sync_io'.
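
As an illustration (not part of this patch), a minimal standalone
sketch of why the wraparound is benign, assuming the IO issued between
two samples of the counter fits in 32 bits: the counters here are only
compared by difference, and unsigned 32-bit subtraction is performed
modulo 2^32, so the wrap cancels out.

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t last = 0xfffffff0u;	/* sample taken just before wrap */
		uint32_t curr = last + 100u;	/* 100 new events, counter wrapped */

		/*
		 * Unsigned subtraction is modulo 2^32, so the wrap cancels
		 * and the delta is still 100. Doing the subtraction in
		 * unsigned and casting afterwards also avoids the signed
		 * overflow that UBSAN would flag.
		 */
		int32_t delta = (int32_t)(curr - last);

		printf("delta=%d\n", delta);
		return delta == 100 ? 0 : 1;
	}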

Signed-off-by: Li Nan <linan122@...wei.com>
---
 drivers/md/md.h        | 4 ++--
 include/linux/blkdev.h | 2 +-
 drivers/md/md.c        | 7 +++----
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/md/md.h b/drivers/md/md.h
index 029dd0491a36..ca085ecad504 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -51,7 +51,7 @@ struct md_rdev {
 
 	sector_t sectors;		/* Device size (in 512bytes sectors) */
 	struct mddev *mddev;		/* RAID array if running */
-	long long last_events;		/* IO event timestamp */
+	int last_events;		/* IO event timestamp */
 
 	/*
 	 * If meta_bdev is non-NULL, it means that a separate device is
@@ -622,7 +622,7 @@ extern void mddev_unlock(struct mddev *mddev);
 static inline void md_sync_acct(struct block_device *bdev, unsigned long nr_sectors)
 {
 	if (blk_queue_io_stat(bdev->bd_disk->queue))
-		atomic64_add(nr_sectors, &bdev->bd_disk->sync_io);
+		atomic_add(nr_sectors, &bdev->bd_disk->sync_io);
 }
 
 static inline void md_sync_acct_bio(struct bio *bio, unsigned long nr_sectors)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c854d5a6a6fe..41e995ce4bff 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -174,7 +174,7 @@ struct gendisk {
 	struct list_head slave_bdevs;
 #endif
 	struct timer_rand_state *random;
-	atomic64_t sync_io;		/* RAID */
+	atomic_t sync_io;		/* RAID */
 	struct disk_events *ev;
 
 #ifdef CONFIG_BLK_DEV_ZONED
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 00bbafcd27bb..aff9118ff697 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8577,7 +8577,7 @@ static int is_mddev_idle(struct mddev *mddev, int init)
 {
 	struct md_rdev *rdev;
 	int idle;
-	long long curr_events;
+	int curr_events;
 
 	idle = 1;
 	rcu_read_lock();
@@ -8587,9 +8587,8 @@ static int is_mddev_idle(struct mddev *mddev, int init)
 		if (!init && !blk_queue_io_stat(disk->queue))
 			continue;
 
-		curr_events =
-			(long long)part_stat_read_accum(disk->part0, sectors) -
-			atomic64_read(&disk->sync_io);
+		curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
+			      atomic_read(&disk->sync_io);
 		/* sync IO will cause sync_io to increase before the disk_stats
 		 * as sync_io is counted when a request starts, and
 		 * disk_stats is counted when it completes.
-- 
2.39.2

