Date:	Sat, 08 Mar 2008 10:06:30 +0900
From:	Tejun Heo <htejun@...il.com>
To:	FUJITA Tomonori <tomof@....org>
CC:	jens.axboe@...cle.com, fujita.tomonori@....ntt.co.jp,
	James.Bottomley@...senPartnership.com, bharrosh@...asas.com,
	efault@....de, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-ide@...r.kernel.org,
	linux-scsi@...r.kernel.org, jgarzik@...ox.com, bzolnier@...il.com
Subject: Re: [PATCH] blk: missing add of padded bytes to io completion byte
   count

FUJITA Tomonori wrote:
> On Fri, 07 Mar 2008 09:07:23 +0900
> Tejun Heo <htejun@...il.com> wrote:
> 
>> Jens Axboe wrote:
>>>>> If we want the use paradigm shared between block and driver, then I
>>>>> think the best approach is to keep all the bios the same (so not adjust
>>>>> for padding), but do adjust in the blk_rq_map_sg().  That way we have
>>>>> the padding and draining unwind information by comparing with the bio.
>>>> Adjusting only sg in blk_rq_map_sg (like drain) looks much
>>>> better. This works with libata for me.
>>> Looks like a much better solution to me. Anyone have any valid
>>> objections against moving the padding to the sg map time?
>> Not necessarily objections but some concerns.
>>
>> * As completion is done in bio terms, it makes completion from LLDs a
>> bit cumbersome, but this is unavoidable if we break sum(bio) == sum(sg).
> 
> What do you mean? How does sum(bio) affect LLDs?

LLDs which loop over sg entries trying to complete the rq incrementally
will see the rq going away sooner than they expect.
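
For concreteness, a minimal sketch of the completion pattern I mean
(hypothetical LLD code, not from any real driver): it completes
sg_dma_len() bytes per entry, so once the pad bytes are accounted only
in the sg list, the loop hands blk_end_request() more bytes than the
bios carry and the rq finishes before the last entry.

#include <linux/blkdev.h>
#include <linux/scatterlist.h>

static void lld_complete_per_sg(struct request *rq,
				struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		/* sum of sg_dma_len() == sum(bio) + pad bytes */
		if (!blk_end_request(rq, 0, sg_dma_len(sg)))
			return;	/* rq completed before the loop is done */
	}
}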

>> * I've been wondering why we are not using sg chain / table or whatever
>> directly in bios and maybe rq_map_sg can go away in future.
> 
> You mean that LLDs use bios directly? For me, sg and bio have very
> different objectives and it's a clean layer separation.

Actually the other way around: the block layer would use sg instead of
bio_vec in bio.  Layer separation doesn't necessarily require copying
the same information into a differently formatted data structure.  I'm
not sure it would be a clean win, though.  Requests hang around in the
scheduler queue for a while, and bio_vec is smaller than scatterlist,
so carrying sg from the start costs memory.

The thing is that, to me, blk_rq_map_sg() doesn't really look
necessary; the mapping can be done just as well when the request is
fetched from the queue by the block driver. (continued below...)
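
Roughly what I mean (a sketch only; lld_sglist, lld_issue and the
array size are made up): the driver builds the sg view itself when it
fetches the request, using the current elv_next_request() style
interface, so the padded view exists only on the driver side.

static struct scatterlist lld_sglist[128];	/* per-device in reality */
static void lld_issue(struct request *rq, struct scatterlist *sgl,
		      int nents);		/* hardware-specific */

static void lld_request_fn(struct request_queue *q)
{
	struct request *rq;
	int nents;

	while ((rq = elv_next_request(q)) != NULL) {
		blkdev_dequeue_request(rq);
		/* build the sg view here; pad/drain entries would be
		 * appended at this point, leaving the bios untouched */
		nents = blk_rq_map_sg(q, rq, lld_sglist);
		lld_issue(rq, lld_sglist, nents);
	}
}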

>> How about separating out the padding / draining adjustment into a
>> separate interface?  Say, blk_rq_apply_extra() and blk_rq_undo_extra()
>> and make it the responsibility of the LLD which requested
>> padding/draining to apply and undo the adjustments?  It can undo the
>> adjustments when it returns the request to its upper layer.  If rq
>> completion is handled by upper layer, it will do the right thing.  If rq
>> completion is handled by LLD, it can see the bio it wants to see.
> 
> If possible, I'd like to avoid creating APIs for them. I think that
> the current approach is much better than such APIs.

And so, continuing from above: I'm not too sure whether putting more
mechanism into blk_rq_map_sg() is a good idea either.
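
For reference, the apply/undo pairing proposed above would amount to
roughly the following.  Only a sketch: these helpers were never
written, and treating rq->extra_len as the pad/drain byte count is my
assumption here.

static inline void blk_rq_apply_extra(struct request *rq)
{
	/* fold pad/drain bytes into the count the LLD completes against */
	rq->data_len += rq->extra_len;
}

static inline void blk_rq_undo_extra(struct request *rq)
{
	/* back to bio terms before the rq is returned upward */
	rq->data_len -= rq->extra_len;
}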

Thanks.

-- 
tejun