Message-ID: <20110426121751.GB5114@quack.suse.cz>
Date:	Tue, 26 Apr 2011 14:17:51 +0200
From:	Jan Kara <jack@...e.cz>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, Jan Kara <jack@...e.cz>,
	Mel Gorman <mel@...ux.vnet.ibm.com>,
	Dave Chinner <david@...morbit.com>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mel@....ul.ie>,
	Itaru Kitayama <kitayama@...bb4u.ne.jp>,
	Minchan Kim <minchan.kim@...il.com>,
	Linux Memory Management List <linux-mm@...ck.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/6] writeback: sync expired inodes first in background
 writeback

On Sun 24-04-11 11:15:31, Wu Fengguang wrote:
> > One of the many requirements for writeback is that if userspace is
> > continually dirtying pages in a particular file, that shouldn't cause
> > the kupdate function to concentrate on that file's newly-dirtied pages,
> > neglecting pages from other files which were less-recently dirtied. 
> > (and dirty nodes, etc).
> 
> Sadly, I do find old pages that the flusher never gets a chance to
> catch and write out.
  What kind of load do you use?

> In the below case, if the task dirties pages fast enough at the end of
> file, writeback_index will never get a chance to wrap back. There may
> be various variations of this case.
> 
> file head
> [          ***                        ==>***************]==>
>            old pages          writeback_index            fresh dirties
> 
> Ironically the current kernel relies on pageout() to catch these
> old pages, which is not only inefficient, but also not reliable.
> If a full LRU walk takes an hour, the old pages may stay dirty
> for an hour.
  Well, the kupdate behavior has always been best-effort. We have always
tried to handle the common cases well, but not to solve all of them. Unless
we want to track the dirty age of every page (which we don't, because it's
too expensive), there is no way to make syncing of old pages fully reliable
in all cases, short of doing data-integrity-style writeback for the whole
inode - but that could create new problems by stalling other files for too
long, I suspect.

> We may have to do (conditional) tagged ->writepages to safeguard users
> from losing data they'd expect to have been written hours ago.
  Well, if the file is continuously written (and in your case it must even
be continuously growing), I'd be content if we handled the common case of
linear append well (which happens for log files etc.). If we can do well for
more cases, even better, but I'd be cautious not to disrupt other, more
common cases.

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
