Message-ID: <20100521160014.GC3412@quack.suse.cz>
Date: Fri, 21 May 2010 18:00:14 +0200
From: Jan Kara <jack@...e.cz>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: Zan Lynx <zlynx@....org>, lwoodman@...hat.com,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, Nick Piggin <npiggin@...e.de>,
Jan Kara <jack@...e.cz>
Subject: Re: RFC: dirty_ratio back to 40%
On Fri 21-05-10 10:11:59, KOSAKI Motohiro wrote:
> > > So, I'd prefer to restore the default rather than have both Red Hat and
> > > SUSE apply exactly the same distro-specific patch, because we can easily
> > > imagine other users will face the same issue in the future.
> >
> > On desktop systems the low dirty limits help maintain interactive feel.
> > Users expect applications that are saving data to be slow. They do not
> > like it when every application in the system randomly comes to a halt
> > because of one program stuffing data up to the dirty limit.
>
> really?
> Do you mean our per-task dirty limit wouldn't work?
>
> If so, I think we need to fix it. IOW, a sane per-task dirty limit seems
> an independent issue from the per-system dirty limit.
Well, I don't know about any per-task dirty limits. What function
implements them? Any application that dirties a single page can be caught
and forced to call balance_dirty_pages() and do writeback.
But generally what we observe on a desktop with lots of dirty memory is
that an application needs to allocate memory (either private memory or page
cache), and that triggers direct reclaim, so the allocation takes a long
time to finish and thus the application is sluggish...
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR