Date:	Mon, 28 Jun 2010 17:14:28 +0900
From:	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
To:	hch@....de
Cc:	fujita.tomonori@....ntt.co.jp, snitzer@...hat.com, axboe@...nel.dk,
	dm-devel@...hat.com, James.Bottomley@...e.de,
	linux-kernel@...r.kernel.org, martin.petersen@...cle.com,
	akpm@...ux-foundation.org, linux-scsi@...r.kernel.org
Subject: Re: [PATCH 1/2] block: fix leaks associated with discard request
 payload

On Mon, 28 Jun 2010 09:57:38 +0200
Christoph Hellwig <hch@....de> wrote:

> On Sun, Jun 27, 2010 at 09:32:07PM +0900, FUJITA Tomonori wrote:
> > On Sun, 27 Jun 2010 13:07:12 +0200
> > Christoph Hellwig <hch@....de> wrote:
> > 
> > > > How about this?
> > > 
> > > As I tried to explain before, this utterly confuses the I/O completion
> > > path.  With the patch applied even a simple mkfs.xfs that issues discard
> > > just hangs.
> > 
> > Weird. I just tried mkfs.xfs against scsi_debug with my block patches
> > (I saw one discard command); it seemed to work fine.
> 
> I've tracked it down to the call to scsi_requeue_command in scsi_end_request.
> When the command is marked BLOCK_PC we'll just get it back as such in
> ->prep_fn next time, but now it's reverting to the previous state.

If scsi_end_request() calls scsi_requeue_command(), the command has a
leftover (i.e. it hasn't finished transferring all of its data), right?
Did you hit that condition with discard commands?

BLOCK_PC requests don't hit this case, since blk_end_request() always
returns false for PC requests.
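
To make that completion/requeue logic concrete, here is a minimal,
self-contained user-space model. The names mirror the real kernel
routines (blk_end_request(), scsi_requeue_command(), scsi_end_request()),
but the request struct, byte accounting, and function bodies are
simplified illustrative stand-ins under the assumptions described above,
not the actual kernel implementation.

#include <stdbool.h>
#include <stdio.h>

enum req_type { REQ_TYPE_FS, REQ_TYPE_BLOCK_PC };

struct request {
	enum req_type	type;
	unsigned int	data_len;	/* bytes still outstanding */
};

/*
 * Model of blk_end_request(): report whether the request still has bytes
 * left after 'done' bytes have completed.  For BLOCK_PC requests it always
 * reports "fully completed" (returns false), which is the point above:
 * they never leave a leftover to requeue.
 */
static bool blk_end_request(struct request *rq, unsigned int done)
{
	if (rq->type == REQ_TYPE_BLOCK_PC) {
		rq->data_len = 0;
		return false;
	}
	rq->data_len -= (done < rq->data_len) ? done : rq->data_len;
	return rq->data_len != 0;	/* true => leftover remains */
}

/* Model of scsi_requeue_command(): the request passes ->prep_fn again. */
static void scsi_requeue_command(struct request *rq)
{
	printf("requeued with %u bytes left; will hit ->prep_fn again\n",
	       rq->data_len);
}

/* Model of scsi_end_request(): requeue only when a leftover remains. */
static void scsi_end_request(struct request *rq, unsigned int done)
{
	if (blk_end_request(rq, done))
		scsi_requeue_command(rq);
	else
		printf("request fully completed\n");
}

int main(void)
{
	struct request fs = { REQ_TYPE_FS, 8192 };
	struct request pc = { REQ_TYPE_BLOCK_PC, 8192 };

	scsi_end_request(&fs, 4096);	/* leftover -> requeued */
	scsi_end_request(&pc, 4096);	/* BLOCK_PC -> never requeued */
	return 0;
}

Running it, only the filesystem-style request takes the requeue path; the
BLOCK_PC one completes outright, so the ->prep_fn state problem described
above can only arise for requests that report leftovers.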


> While I see the problems with leaking resources in that case, I still
> can't quite explain the hang I see.

Is there any way to reproduce the hang without SSD drives?