Message-Id: <20220706093146.1961598-1-yukuai1@huaweicloud.com>
Date: Wed, 6 Jul 2022 17:31:46 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: agk@...hat.com, snitzer@...nel.org, dm-devel@...hat.com,
mpatocka@...hat.com
Cc: linux-kernel@...r.kernel.org, yukuai3@...wei.com,
yukuai1@...weicloud.com, yi.zhang@...wei.com
Subject: [PATCH] dm writecache: fix inaccurate reads/writes stats
From: Yu Kuai <yukuai3@...wei.com>

Test procedures:
1) format a dm writecache device with 4k blocksize.
2) flush cache.
3) cache 1G data through write.
4) clear stats.
5) read 2G data with bs=1m.
6) read stats.

Expected result:
cache hit ratio is 50%.

Test result:
stats: 0, 1011345, 749201, 0, 263168, 262144, 0, 0, 0, 0, 0, 0, 0, 0
ratio is 99% (262144/263168)
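
The observed numbers follow from the per-io accounting described below. As an illustration only (the function name and constants are invented for this sketch, not dm-writecache code), the pre-fix counters for this test can be modeled as:

```c
/* Model of the pre-fix "reads" counter for the test above:
 * 4k cache blocks, 1 GiB cached, 2 GiB read with bs=1m. */
static const unsigned long long BLOCKS_PER_IO = (1024 * 1024) / 4096; /* 256 */
static const unsigned long long TOTAL_IOS = 2048;  /* 2 GiB / 1 MiB */
static const unsigned long long CACHED_IOS = 1024; /* 1 GiB / 1 MiB */

unsigned long long reads_before_fix(void)
{
	unsigned long long reads = 0;

	/* cache hit: accounted once per 4k entry -> 256 per io */
	reads += CACHED_IOS * BLOCKS_PER_IO;   /* 262144 */
	/* cache miss: accounted only once per bio */
	reads += TOTAL_IOS - CACHED_IOS;       /* + 1024 */
	return reads;                          /* 263168 */
}
```

This reproduces reads = 263168 against read_hits = 262144, i.e. the reported ~99% hit ratio instead of the expected 50%.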
Reads are accounted differently for cache hits and cache misses:
1) On a cache hit, reads is incremented for each entry, so reads and
read_hits both increase by 256 for each io in the above test.
2) On a cache miss, reads is incremented only once, so reads increases
by just 1 for each io in the above test.
The writes_around case has the same problem. Fix both by accounting the
appropriate number of reads/writes in writecache_map_remap_origin().
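
The extra accounting added below can be sketched as follows, assuming byte units for the remapped size (ROUND_UP here is a plain stand-in for the kernel's round_up() macro):

```c
/* Sketch of the miss_count arithmetic in the patch: for a remapped
 * span of `bytes`, count one read/write per cache block, minus the
 * one already accounted on entry to the map function. */
#define ROUND_UP(x, y) ((((x) + (y) - 1) / (y)) * (y))

unsigned long long extra_miss_count(unsigned long long bytes,
				    unsigned long long block_size,
				    unsigned int block_size_bits)
{
	return (ROUND_UP(bytes, block_size) >> block_size_bits) - 1;
}
```

For a fully missed 1 MiB bio with 4 KiB blocks this yields 255 extra increments, so each miss io accounts 256 reads total, matching the per-entry accounting on the hit path.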
Fixes: e3a35d03407c ("dm writecache: add event counters")
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
drivers/md/dm-writecache.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
index d74c5a7a0ab4..c2c6c3a023dd 100644
--- a/drivers/md/dm-writecache.c
+++ b/drivers/md/dm-writecache.c
@@ -1329,16 +1329,29 @@ enum wc_map_op {
WC_MAP_ERROR,
};
-static enum wc_map_op writecache_map_remap_origin(struct dm_writecache *wc, struct bio *bio,
- struct wc_entry *e)
+static enum wc_map_op writecache_map_remap_origin(struct dm_writecache *wc,
+ struct bio *bio,
+ struct wc_entry *e, bool read)
{
+ sector_t next_boundary;
+ unsigned long long miss_count;
+
if (e) {
- sector_t next_boundary =
+ next_boundary =
read_original_sector(wc, e) - bio->bi_iter.bi_sector;
if (next_boundary < bio->bi_iter.bi_size >> SECTOR_SHIFT)
dm_accept_partial_bio(bio, next_boundary);
+ } else {
+ next_boundary = bio->bi_iter.bi_size;
}
+ miss_count = (round_up(next_boundary, wc->block_size) >>
+ wc->block_size_bits) - 1;
+ if (read)
+ wc->stats.reads += miss_count;
+ else
+ wc->stats.writes += miss_count;
+
return WC_MAP_REMAP_ORIGIN;
}
@@ -1366,7 +1379,7 @@ static enum wc_map_op writecache_map_read(struct dm_writecache *wc, struct bio *
map_op = WC_MAP_REMAP;
}
} else {
- map_op = writecache_map_remap_origin(wc, bio, e);
+ map_op = writecache_map_remap_origin(wc, bio, e, true);
}
return map_op;
@@ -1458,7 +1471,8 @@ static enum wc_map_op writecache_map_write(struct dm_writecache *wc, struct bio
direct_write:
wc->stats.writes_around++;
e = writecache_find_entry(wc, bio->bi_iter.bi_sector, WFE_RETURN_FOLLOWING);
- return writecache_map_remap_origin(wc, bio, e);
+ return writecache_map_remap_origin(wc, bio, e,
+ false);
}
wc->stats.writes_blocked_on_freelist++;
writecache_wait_on_freelist(wc);
--
2.31.1