Message-ID: <20100608053831.GR26335@laptop>
Date: Tue, 8 Jun 2010 15:38:31 +1000
From: Nick Piggin <npiggin@...e.de>
To: Dave Chinner <david@...morbit.com>
Cc: torvalds@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, xfs@....sgi.com,
akpm@...ux-foundation.org
Subject: Re: [PATCH 6/6] writeback: limit write_cache_pages integrity scanning to current EOF
On Tue, Jun 08, 2010 at 10:38:07AM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@...hat.com>
>
> sync can currently take a really long time if a concurrent writer is
> extending a file. The problem is that the dirty pages on the address
> space grow in the same direction as write_cache_pages scans, so if
> the writer keeps ahead of writeback, the writeback will not
> terminate until the writer stops adding dirty pages.
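
For readers following along, the scan loop in question looks roughly
like this (a trimmed sketch of 2.6.34-era write_cache_pages() in
mm/page-writeback.c, not the full function). Each pagevec lookup picks
up whatever is dirty at or beyond the current index, so a writer that
dirties new pages faster than we clean old ones keeps the lookup
returning work:

	/*
	 * Sketch of the write_cache_pages() scan loop (trimmed).
	 * For a whole-file integrity sync, "end" covers everything,
	 * so pages dirtied beyond the current index keep extending
	 * the scan.
	 */
	pgoff_t index = wbc->range_start >> PAGE_CACHE_SHIFT;
	pgoff_t end = wbc->range_end >> PAGE_CACHE_SHIFT;	/* inclusive */

	while (!done && index <= end) {
		nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
				PAGECACHE_TAG_DIRTY,
				min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1);
		if (nr_pages == 0)
			break;	/* never taken while the appender stays ahead */
		/* ... lock, recheck dirty, and ->writepage each page ... */
	}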
>
> For a data integrity sync, we only need to write the pages dirty at
> the time we start the writeback, so we can stop scanning once we get
> to the page that was at the end of the file at the time the scan
> started.
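
Concretely, that amounts to a one-time cap on "end" before the scan
begins. A sketch (the exact sync_mode/range_end test here is my
reading of the changelog, not necessarily the patch verbatim):

	/*
	 * Data integrity sync: only pages that were dirty within EOF
	 * when the scan started need to be written, so cap the scan
	 * there rather than chasing a concurrent appender.
	 * (Sketch; this condition is an assumption from the changelog.)
	 */
	if (wbc->sync_mode == WB_SYNC_ALL && wbc->range_end == LLONG_MAX)
		end = i_size_read(mapping->host) >> PAGE_CACHE_SHIFT;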
>
> This prevents operations such as copying a large file from keeping
> sync from completing, as sync will not write back pages that were
> dirtied after it started. This does not impact the existing
> integrity guarantees: any page (old or new) that was dirty within
> the EOF range at the start of the scan will still be captured.
>
> This patch will not prevent sync from blocking on large writes into
> holes.
The writes don't have to be into holes to cause this starvation
problem, do they?
> That requires more complex intervention, while this patch only
> addresses the common append case of this sync holdoff.
Jan's tagging patch looks pretty good to me and isn't so complex.
I think we should just take that.
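
For context, the tagging approach avoids the livelock differently:
before the scan, every currently-dirty page in the range is retagged,
and the scan then looks up only retagged pages, so pages dirtied
afterwards are simply not part of this pass. A sketch
(tag_pages_for_writeback() and PAGECACHE_TAG_TOWRITE are the names
from Jan's series, used illustratively here):

	/*
	 * Sketch of the tag-and-write scheme: mark the pages that are
	 * dirty *now*, then write back only those, so later dirtiers
	 * cannot extend the scan.
	 */
	if (wbc->sync_mode == WB_SYNC_ALL)
		tag_pages_for_writeback(mapping, index, end);	/* DIRTY -> TOWRITE */

	while (!done && index <= end) {
		nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
				PAGECACHE_TAG_TOWRITE,
				min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1);
		if (nr_pages == 0)
			break;
		/* ... write each page as before ... */
	}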