Message-ID: <335a26a9.11bb7.156d6db9097.Coremail.aaronlee0817@163.com>
Date:   Mon, 29 Aug 2016 23:12:08 +0800 (CST)
From:   aaronlee0817 <aaronlee0817@....com>
To:     tg <tg@...nel.org>, "jens axboe" <axboe@...nel.dk>,
        "ming lin" <mlin@...nel.org>
Cc:     cgroup <cgroup@...r.kernel.org>,
        linux-block <linux-block@...r.kernel.org>,
        "linux kernel mailing list" <linux-kernel@...r.kernel.org>,
        "shaohua li" <shli@...nel.org>,
        "yanzi.zhang" <yanzi.zhang@...sung.com>,
        "zhen1.zhang" <zhen1.zhang@...sung.com>,
        "jiale0817.li" <jiale0817.li@...sung.com>
Subject: [AFC] cgroup: Fix block throttle bio more than once

Hi Tejun,

A few months ago we sent you an email describing a problem where blkio throttling
charges a split bio more than once, which makes the actual bandwidth smaller than the
configured limit. We tried to fix this by adding a BIO_SPLIT bit to the bio flags field.
Recently we ran some further tests and found that the last patch we sent to you causes
another problem, this time with iops throttling: the actual iops deviate from the
configured limit, because for iops each fragment of a split bio should be counted,
not only the bio as a whole.

Based on our testing, we propose the patch below, which fixes both the bps and the
iops throttling problems, and would like to ask for your comments. Thanks.

From eb1b1c754d4b267405b5a0d62f8b3f7f7b85df8d Mon Sep 17 00:00:00 2001
From: Jiale Li <aaronlee0817@....com>
Date: Mon, 29 Aug 2016 10:25:47 -0400
Subject: [PATCH] Fix block throttle bio more than once

Signed-off-by: Jiale Li <aaronlee0817@....com>
---
 block/blk-merge.c          | 1 +
 block/blk-throttle.c       | 9 ++++++++-
 include/linux/blk-cgroup.h | 6 ++++--
 include/linux/blk_types.h  | 1 +
 4 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7b17a65..074f2bd 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -190,6 +190,7 @@ void blk_queue_split(struct request_queue *q, struct bio **bio,
 
 		bio_chain(split, *bio);
 		trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
+		bio_set_flag(*bio, BIO_SPLIT);
 		generic_make_request(*bio);
 		*bio = split;
 	}
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 4ffde95..53a7d67 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -721,6 +721,12 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
 	u64 bytes_allowed, extra_bytes, tmp;
 	unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
 
+	if (bio_flagged(bio, BIO_SPLIT)) {
+		if (wait)
+			*wait = 0;
+		return true;
+	}
+
 	jiffy_elapsed = jiffy_elapsed_rnd = jiffies - tg->slice_start[rw];
 
 	/* Slice has just started. Consider one slice interval */
@@ -817,7 +823,8 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
 	bool rw = bio_data_dir(bio);
 
 	/* Charge the bio to the group */
-	tg->bytes_disp[rw] += bio->bi_iter.bi_size;
+	if (!bio_flagged(bio, BIO_SPLIT))
+		tg->bytes_disp[rw] += bio->bi_iter.bi_size;
 	tg->io_disp[rw]++;
 
 	/*
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index c02e669..24dc09b 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -19,6 +19,7 @@
 #include <linux/radix-tree.h>
 #include <linux/blkdev.h>
 #include <linux/atomic.h>
+#include <linux/blk_types.h>
 
 /* percpu_counter batch for blkg_[rw]stats, per-cpu drift doesn't matter */
 #define BLKG_STAT_CPU_BATCH	(INT_MAX / 2)
@@ -713,8 +714,9 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
 
 	if (!throtl) {
 		blkg = blkg ?: q->root_blkg;
-		blkg_rwstat_add(&blkg->stat_bytes, bio->bi_rw,
-				bio->bi_iter.bi_size);
+		if (!bio_flagged(bio, BIO_SPLIT))
+			blkg_rwstat_add(&blkg->stat_bytes, bio->bi_rw,
+					bio->bi_iter.bi_size);
 		blkg_rwstat_add(&blkg->stat_ios, bio->bi_rw, 1);
 	}
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index b294780..e0e418f 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -120,6 +120,7 @@ struct bio {
 #define BIO_QUIET	6	/* Make BIO Quiet */
 #define BIO_CHAIN	7	/* chained bio, ->bi_remaining in effect */
 #define BIO_REFFED	8	/* bio has elevated ->bi_cnt */
+#define BIO_SPLIT	9	/* bio has been split */
 
 /*
  * Flags starting here get preserved by bio_reset() - this includes
-- 
1.9.1
