Message-ID: <20091203103245.GA5023@quack.suse.cz>
Date:	Thu, 3 Dec 2009 11:32:46 +0100
From:	Jan Kara <jack@...e.cz>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Jan Kara <jack@...e.cz>, Mike Galbraith <gleep@....de>,
	James Y Knight <foom@...m.net>,
	LKML <linux-kernel@...r.kernel.org>, linux-ext4@...r.kernel.org
Subject: Re: writev data loss bug in (at least) 2.6.31 and 2.6.32pre8 x86-64

On Thu 03-12-09 06:28:25, Nick Piggin wrote:
> On Wed, Dec 02, 2009 at 08:04:26PM +0100, Jan Kara wrote:
> > >   When using writev, the page we copy from is not paged in (while when we
> > > use an ordinary write, it is). This difference might be worth investigating
> > > on its own (as it is likely to heavily impact writev performance) but it is
> > > irrelevant for us now - we should handle it without data corruption anyway.
> >   I've looked into why the writev writes reliably fail. The reason is that
> > iov_iter_fault_in_readable() faults in only the first IO buffer. Because
> > that buffer is just 600 bytes long, the following
> > iov_iter_copy_from_user_atomic() copies only those 600 bytes and
> > block_write_end() sets the number of copied bytes to 0. Thus we restart the
> > write and do it one iov per iteration, which succeeds. So everything works
> > as designed, only it gets inefficient in this particular case.
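  To illustrate, the pattern that triggers this is roughly the userspace
sketch below (the file names and the 64 KB second segment are made up;
it assumes source.dat already exists and is at least 1 MB):

#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/uio.h>

int main(void)
{
	int src = open("source.dat", O_RDONLY);
	int dst = open("dest.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	/* Map the source read-only; nothing is paged in until touched. */
	char *map = mmap(NULL, 1 << 20, PROT_READ, MAP_PRIVATE, src, 0);
	if (src < 0 || dst < 0 || map == MAP_FAILED)
		return 1;
	/* Small first segment, larger second one: only the first one gets
	 * prefaulted, so the atomic copy in the kernel comes up short. */
	struct iovec iov[2] = {
		{ .iov_base = map,	 .iov_len = 600 },
		{ .iov_base = map + 600, .iov_len = 65536 },
	};
	ssize_t n = writev(dst, iov, 2);
	printf("writev returned %zd\n", n);
	return 0;
}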
> Yep, this would be right. We could actually do more prefaulting; I
> think I was being a little over-conservative, worried about earlier
> pages being unmapped before we were able to consume them... but being
> too worried about that case means optimizing an unusual case that is
> probably performing badly anyway at the expense of more common
> patterns.
  Yeah, IMHO the optimal approach would be to fault in enough buffers to
fill one page (although we may want to impose some upper bound on the number
of pages we are willing to fault in - say 1 MB of data or so).
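  Something like the following completely untested sketch against the
current iov_iter (the helper name and the 1 MB cap are mine):

/*
 * Like iov_iter_fault_in_readable(), but walk as many segments as
 * needed to cover 'bytes' instead of stopping after the first one.
 * Assumes the caller has limited 'bytes' to iov_iter_count(i), so we
 * never run off the end of the iovec array.
 */
#define MAX_PREFAULT	(1024 * 1024)	/* never fault in more than 1 MB */

static int iov_iter_fault_in_readable_multi(struct iov_iter *i, size_t bytes)
{
	const struct iovec *iov = i->iov;
	size_t skip = i->iov_offset;

	bytes = min_t(size_t, bytes, MAX_PREFAULT);
	while (bytes) {
		char __user *buf = iov->iov_base + skip;
		size_t len = min_t(size_t, bytes, iov->iov_len - skip);

		if (fault_in_pages_readable(buf, len))
			return -EFAULT;
		bytes -= len;
		iov++;
		skip = 0;
	}
	return 0;
}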

> Anyway, what I was doing to test this code when I wrote it was to
> inject random failures into the user copy functions. I guess it could
> be useful to merge this into the error injection framework?
  Yes, that would definitely be useful. This bug was exceptionally easy to
track down because it was easily reproducible. But otherwise this path is
almost never taken and bugs in it are hard to debug, so the injection would
give it much better testing coverage. I once spent about a month debugging a
bug in reiserfs causing data corruption in this path - mainly because it took
a few days to reproduce and I didn't know what could possibly be triggering
it...
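  Something along the lines of the existing failslab / fail_page_alloc
fault attributes would do, I guess. A rough sketch (the "fail_usercopy"
name is made up):

#include <linux/fault-inject.h>

static DECLARE_FAULT_ATTR(fail_usercopy);

static int __init setup_fail_usercopy(char *str)
{
	return setup_fault_attr(&fail_usercopy, str);
}
__setup("fail_usercopy=", setup_fail_usercopy);

/* Then at the top of iov_iter_copy_from_user_atomic(): */
	if (should_fail(&fail_usercopy, bytes))
		return 0;	/* pretend the whole user copy faulted */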

								Honza