Message-ID: <47D0873B.6090705@gmail.com>
Date: Fri, 07 Mar 2008 09:07:23 +0900
From: Tejun Heo <htejun@...il.com>
To: Jens Axboe <jens.axboe@...cle.com>
CC: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
James.Bottomley@...senPartnership.com, bharrosh@...asas.com,
efault@....de, tomof@....org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-ide@...r.kernel.org,
linux-scsi@...r.kernel.org, jgarzik@...ox.com, bzolnier@...il.com
Subject: Re: [PATCH] blk: missing add of padded bytes to io completion byte count

Jens Axboe wrote:
>>> If we want the use paradigm shared between block and driver, then I
>>> think the best approach is to keep all the bios the same (so don't
>>> adjust them for padding), but do adjust in blk_rq_map_sg(). That way
>>> we can recover the padding and draining unwind information by
>>> comparing with the bio.
>> Adjusting only sg in blk_rq_map_sg (like drain) looks much
>> better. This works with libata for me.
>
> Looks like a much better solution to me. Anyone have any valid
> objections to moving the padding to sg map time?
Not necessarily objections but some concerns.
* As completion is done in bio terms, it makes completion from LLDs a
bit cumbersome, but this is unavoidable if we break sum(bio) == sum(sg)
(see the sketch below).
* I've been wondering why we are not using sg chain / table or whatever
directly in bios, and maybe rq_map_sg can go away in the future.
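
For reference, the sg-map-time adjustment under discussion would look
roughly like the below (a sketch only, not the actual patch;
q->dma_pad_mask and rq->extra_len are assumed names here):

/*
 * Sketch: pad the final sg entry at blk_rq_map_sg() time, leaving the
 * bios untouched.  Assumes the LLD has set q->dma_pad_mask and that the
 * extra bytes are remembered in a rq->extra_len field.
 */
static void blk_pad_last_sg(struct request_queue *q, struct request *rq,
			    struct scatterlist *last_sg)
{
	unsigned int pad = (q->dma_pad_mask + 1 -
			    (last_sg->length & q->dma_pad_mask)) &
			   q->dma_pad_mask;

	last_sg->length += pad;		/* sum(sg) now exceeds sum(bio)... */
	rq->extra_len += pad;		/* ...by an amount we must track */
}

Once this runs, sum(sg) exceeds sum(bio) by rq->extra_len, which is
exactly what makes partial completion in bio terms awkward on the LLD
side.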
How about separating out the padding / draining adjustment into a
separate interface? Say, blk_rq_apply_extra() and blk_rq_undo_extra(),
and make it the responsibility of the LLD which requested
padding/draining to apply and undo the adjustments? It can undo the
adjustments when it returns the request to its upper layer. If rq
completion is handled by the upper layer, it will do the right thing.
If rq completion is handled by the LLD, it can see the bio it wants to
see.
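
In rough terms, something like the following (the function names come
from the proposal above; the signatures and bodies are only a sketch
and assume the block layer records the extra byte count in a
hypothetical rq->extra_len without touching the bios or the sg list):

/* Sketch of the proposed helpers -- not an existing API.  The LLD
 * passes in the last sg entry it got back from blk_rq_map_sg(). */
static void blk_rq_apply_extra(struct request *rq,
			       struct scatterlist *last_sg)
{
	/* the LLD that asked for padding/draining grows the final segment */
	last_sg->length += rq->extra_len;
}

static void blk_rq_undo_extra(struct request *rq,
			      struct scatterlist *last_sg)
{
	/* restore sum(sg) == sum(bio) before handing the request back to
	 * the upper layer or completing it in bio terms */
	last_sg->length -= rq->extra_len;
}

A driver like libata would then call blk_rq_apply_extra() right after
blk_rq_map_sg() and blk_rq_undo_extra() before completing or requeueing
the request.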
Thanks.
--
tejun