Date:	Wed, 31 Oct 2007 13:13:53 +0200
From:	Boaz Harrosh <bharrosh@...asas.com>
To:	Jeff Garzik <jeff@...zik.org>
CC:	Jens Axboe <jens.axboe@...cle.com>, linux-ide@...r.kernel.org,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] "killing" sg_last(), and discussion

On Wed, Oct 31 2007 at 12:29 +0200, Jeff Garzik <jeff@...zik.org> wrote:
> Boaz Harrosh wrote:
>> On Wed, Oct 31 2007 at 10:49 +0200, Jeff Garzik <jeff@...zik.org> wrote:
>>> I looked into killing sg_last(), but really, this is the best its gonna
>>> get (moving sg_last to libata-core.c).
>>>
>>> You could maybe kill one use with caching, but in the other sg_last()
>>> callsites there isn't another s/g loop we can stick a "last_sg = sg;"
>>> into.
>>>
>>> libata is stuck because we undertake the highly unusual operation of
>>> fiddling with the final S/G element, to enforce 32-bit alignment.
>>>
>>> Of course we could eliminate all that nasty fiddling/padding
>>> completely, including sg_last(), if other areas of the kernel would
>>> guarantee ahead of time that buffer lengths are always a multiple
>>> of 4........
>>>
>>> 	Jeff
>>>
>> OK Now I'm confused. I thought that ULD's can give you SG's 
>> that are actually longer than bufflen and that, at the end, the 
>> bufflen should govern the transfer length.
>>
>> Now FS_PC commands are sector aligned so you do not have
>> problems with that.
>>
>> The BLOCK_PC commands have 2 main sources that I know of
>> one is sg && bsg from user mode that can easily enforce
>> 4 bytes alignment. The second is kernel services which 80%
>> of these are done by scsi_execute(). All These can be found
>> and fixed. Starting with scsi_execute(). Another place can be
>> blk_rq_map_sg(), since all IO's are bio based. It can enforce 
>> alignment too.
>>
>> I would start by sticking a WARN_ON(qc->pad_len) and
>> see if it triggers, what are the sources of that.
> 
> The whole qc->pad_len etc. machinery was added because it solved 
> problems in the field with ATAPI devices.  So sr or some userland 
> application is sending lengths that are not padded to 32-bit boundary, 
> probably because plenty of trivial commands can send or return odd 
> amounts of data.
> 
> This used to be irrelevant, but now with SATA, even PIO data xfer 
> (normally what is used for non-READ/WRITE CDBs) must be 32-bit aligned 
> because both SATA DMA and SATA PIO are converted into dword-based SATA 
> FIS's on the wire.
> 
> 	Jeff
> 
> 
> 
Two things:

1. Then why not fix blk_rq_map_sg() to enforce the alignment? Also, I bet
that these "problems in the field" are from pre-2.6.18 kernels, and this
is no longer the case. Why not put in that WARN_ON(qc->pad_len) and prove me
wrong?

2. Just checking bufflen is enough. Since you are already assuming that the
first SG's offset is aligned, then if the last SG's length is unaligned so is
bufflen. (You are already assuming that the SGs' total length matches bufflen.)

Boaz