Message-ID: <20100601155417.GA7425@quack.suse.cz>
Date:	Tue, 1 Jun 2010 17:54:17 +0200
From:	Jan Kara <jack@...e.cz>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Dave Chinner <david@...morbit.com>, linux-kernel@...r.kernel.org,
	xfs@....sgi.com, linux-fsdevel@...r.kernel.org,
	linux-ext4@...r.kernel.org, tytso@....edu, jens.axboe@...cle.com
Subject: Re: [PATCH 6/6] writeback: limit write_cache_pages integrity
 scanning to current EOF

On Thu 27-05-10 14:33:41, Andrew Morton wrote:
> On Tue, 25 May 2010 20:54:12 +1000
> Dave Chinner <david@...morbit.com> wrote:
> 
> > From: Dave Chinner <dchinner@...hat.com>
> > 
> > sync can currently take a really long time if a concurrent writer is
> > extending a file. The problem is that the dirty pages on the address
> > space grow in the same direction as write_cache_pages scans, so if
> > the writer keeps ahead of writeback, the writeback will not
> > terminate until the writer stops adding dirty pages.
> 
> <looks at Jens>
> 
> That really was a pretty basic bug.  It's writeback 101 to test that case :(
  The code has had this livelock since Nick fixed data integrity issues in
write_cache_pages, which was (after some digging) commit 05fe478d ("mm:
write_cache_pages integrity fix") in January 2009. Jens just kept the code
as it was...
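  To make the livelock concrete, here is a toy userspace model of it (an
illustration only, not kernel code and not the actual patch; the page array,
the sizes and the appender_step() helper are made up for the example). The
uncapped scan only finishes because the appender eventually runs out of
room, while sampling EOF once at the start of the sync bounds the work up
front:

#include <stdbool.h>
#include <stdio.h>

#define NPAGES_MAX 1000000              /* the appender eventually stops here */

static bool dirty[NPAGES_MAX];
static long nr_pages = 1000;            /* file size in pages at sync time */

/* A writer appending to the file: a few new dirty pages per scan step. */
static void appender_step(void)
{
        for (int i = 0; i < 4 && nr_pages < NPAGES_MAX; i++)
                dirty[nr_pages++] = true;
}

/* Scan and "write back" dirty pages while the writer extends the file. */
static long scan(bool cap_at_start_eof)
{
        long end = nr_pages;            /* EOF sampled when the sync starts */
        long written = 0;

        for (long idx = 0; idx < (cap_at_start_eof ? end : nr_pages); idx++) {
                if (dirty[idx]) {
                        dirty[idx] = false;
                        written++;
                }
                appender_step();        /* concurrent extension of the file */
        }
        return written;
}

int main(void)
{
        long wrote;

        for (long i = 0; i < nr_pages; i++)
                dirty[i] = true;

        wrote = scan(true);
        printf("capped scan:   wrote %ld pages, file now %ld pages\n",
               wrote, nr_pages);

        wrote = scan(false);
        printf("uncapped scan: wrote %ld pages, file now %ld pages\n",
               wrote, nr_pages);
        return 0;
}

The capped scan writes the 1000 pages that were dirty when it started and
stops; the uncapped one keeps finding the pages the appender adds and only
finishes when the appender hits the array bound, the toy stand-in for "the
writer stops adding dirty pages".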

...
> That being said, I think the patch is insufficient.  If I create an
> enormous (possibly sparse) file with a 16TB hole (or a run of clean
> pages) in the middle and then start busily writing into that hole (run
> of clean pages), the problem will still occur.
> 
> One obvious fix for that (a) would be to add another radix-tree tag and
> do two passes across the radix-tree.
> 
> Another fix (b) would be to track the number of dirty pages per
> address_space, and only write that number of pages.
> 
> Another fix would be to work out how the code handled this situation
> before we broke it, and restore that in some fashion.  I guess fix (b)
> above kinda does that.
  (b) does not work for data integrity sync (see the changelog of the
above-mentioned commit). I sent a patch doing (a) in February, but you in
particular raised concerns that it might be too expensive... Since it does
have some cost (although I was not able to measure any performance impact)
and I didn't know of a better solution, I just postponed the patches. But I
guess it's time to revive the series and maybe we'll get further with it.
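
  For reference, the idea behind (a) is roughly the following two-pass
scheme (a sketch only, not the actual radix-tree patches; the arrays and the
writer_step() helper are made up for the example): the first pass tags every
page that is dirty when the integrity sync starts, the second pass writes
only the tagged pages, so pages dirtied by a concurrent writer, e.g. one
busily filling a hole just ahead of the scan, are left for a later sync:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NPAGES 4096                     /* the scanned range, in pages */

static bool dirty[NPAGES];
static bool towrite[NPAGES];            /* stand-in for the extra radix-tree tag */

/* Reset state: only the first n pages are dirty when the sync starts. */
static void dirty_first(long n)
{
        memset(dirty, 0, sizeof(dirty));
        memset(towrite, 0, sizeof(towrite));
        for (long i = 0; i < n; i++)
                dirty[i] = true;
}

/* A writer busily dirtying a few pages just ahead of the scan cursor. */
static void writer_step(long cursor)
{
        for (long i = cursor + 1; i < cursor + 5 && i < NPAGES; i++)
                dirty[i] = true;
}

/* Single pass: write whatever is dirty.  The writer keeps feeding it. */
static long sync_untagged(void)
{
        long written = 0;

        for (long idx = 0; idx < NPAGES; idx++) {
                if (dirty[idx]) {
                        dirty[idx] = false;     /* "write back" the page */
                        written++;
                }
                writer_step(idx);
        }
        return written;
}

/* Two passes: tag what is dirty now, then write only the tagged pages. */
static long sync_tagged(void)
{
        long written = 0;

        for (long idx = 0; idx < NPAGES; idx++)         /* pass 1: tag */
                towrite[idx] = dirty[idx];

        for (long idx = 0; idx < NPAGES; idx++) {       /* pass 2: write */
                if (towrite[idx]) {
                        towrite[idx] = false;
                        dirty[idx] = false;
                        written++;
                }
                writer_step(idx);       /* newly dirtied pages wait for the next sync */
        }
        return written;
}

int main(void)
{
        dirty_first(16);
        printf("untagged sync wrote %ld pages\n", sync_untagged());

        dirty_first(16);
        printf("tagged sync wrote   %ld pages\n", sync_tagged());
        return 0;
}

With 16 pages dirty at the start, the untagged sync ends up writing all 4096
pages in the range because the writer stays ahead of it, while the tagged
sync writes exactly the 16 pages it set out to write.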

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
