Message-ID: <55535D9C.3050706@fb.com>
Date: Wed, 13 May 2015 10:20:12 -0400
From: Jens Axboe <axboe@...com>
To: Jeff Moyer <jmoyer@...hat.com>, Shaohua Li <shli@...com>
CC: <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] blk: don't account discard request size
On 05/13/2015 09:10 AM, Jeff Moyer wrote:
> Shaohua Li <shli@...com> writes:
>
>> In a workload with discard requests, the IO throughput is generally much
>> higher than expected, which is quite confusing when checking iostat. A
>> discard request doesn't really write data to the drive, so don't account it.
>>
>> Signed-off-by: Shaohua Li <shli@...com>
>> ---
>> block/blk-core.c | 6 +++++-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index fd154b9..0128d18 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -2138,7 +2138,11 @@ EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
>>
>> void blk_account_io_completion(struct request *req, unsigned int bytes)
>> {
>> - if (blk_do_io_stat(req)) {
>> + /*
>> + * A discard request doesn't really write @bytes to the drive,
>> + * so don't account it.
>> + */
>> + if (blk_do_io_stat(req) && !(req->cmd_flags & REQ_DISCARD)) {
>> const int rw = rq_data_dir(req);
>> struct hd_struct *part;
>> int cpu;
>
> I think you want to modify __get_request to not set REQ_IO_STAT for
> discard requests. This patch will still account the start of I/O, which
> means in_flight will be off.
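Something like the below in __get_request(), I assume - just an untested
sketch, the exact surrounding context may differ:

	/*
	 * Never mark discards for io stats, so that neither the start nor
	 * the completion of the request is accounted and in_flight stays
	 * balanced.
	 */
	if (blk_queue_io_stat(q) && !(rw_flags & REQ_DISCARD))
		rw_flags |= REQ_IO_STAT;

in place of the current unconditional REQ_IO_STAT set.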
That would be better. But I'm still not sure we want to turn off
accounting for discards entirely. For mixed write/discard workloads the
current numbers are definitely confusing, but the better option would be
to account a discard as a discard rather than as a write. Preferably in
a way that doesn't break existing tools, yet lets them get updated to
support it.
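Roughly this kind of thing, purely illustrative (the helper and the third
stat bucket don't exist today, names made up):

	/* map a request to a stat bucket: 0 = read, 1 = write, 2 = discard */
	static inline int rq_stat_group(struct request *rq)
	{
		if (rq->cmd_flags & REQ_DISCARD)
			return 2;
		return rq_data_dir(rq);
	}

and then in blk_account_io_completion():

	const int sgrp = rq_stat_group(req);

	part_stat_add(cpu, part, sectors[sgrp], bytes >> 9);

with the per-partition sectors[]/ios[] arrays grown from 2 to 3 entries and
the new numbers appended at the end of the /proc/diskstats and sysfs stat
output, so existing tools keep parsing while updated ones can pick up the
discard fields.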
--
Jens Axboe