Message-Id: <201008181452.05047.knikanth@suse.de>
Date: Wed, 18 Aug 2010 14:52:04 +0530
From: Nikanth Karthikesan <knikanth@...e.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Wu Fengguang <fengguang.wu@...el.com>,
Bill Davidsen <davidsen@....com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Jens Axboe <axboe@...nel.dk>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.cz>
Subject: Re: [RFC][PATCH] Per file dirty limit throttling
On Tuesday 17 August 2010 13:54:35 Peter Zijlstra wrote:
> On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
> > Oh, nice. Per-task limit is an elegant solution, which should help
> > in most of the common cases.
> >
> > But I just wonder what happens, when
> > 1. The dirtier is multiple co-operating processes
> > 2. Some app like a shell script that repeatedly calls dd with seek and
> > skip? People do this for data deduplication, sparse skipping, etc.
> > 3. The app dies and comes back again. Like a VM that is rebooted, and
> > continues writing to a disk backed by a file on the host.
> >
> > Do you think, in those cases this might still be useful?
>
> Those cases do indeed defeat the current per-task-limit, however I think
> the solution to that is to limit the amount of writeback done by each
> blocked process.
>
Blocked on what? Sorry, I do not understand.
Thanks
Nikanth
> Jan Kara had some good ideas in that department.
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/