Message-ID: <20090428090916.GC17038@localhost>
Date: Tue, 28 Apr 2009 17:09:16 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Elladan <elladan@...imo.com>, linux-kernel@...r.kernel.org,
linux-mm <linux-mm@...ck.org>, Rik van Riel <riel@...hat.com>
Subject: Re: Swappiness vs. mmap() and interactive response
On Tue, Apr 28, 2009 at 09:48:39AM +0200, Peter Zijlstra wrote:
> On Tue, 2009-04-28 at 14:35 +0900, KOSAKI Motohiro wrote:
> > (cc to linux-mm and Rik)
> >
> >
> > > Hi,
> > >
> > > So, I just set up Ubuntu Jaunty (using Linux 2.6.28) on a quad core phenom box,
> > > and then I did the following (with XFS over LVM):
> > >
> > > mv /500gig/of/data/on/disk/one /disk/two
> > >
> > > This quickly caused the system to. grind.. to... a.... complete..... halt.
> > > Basically every UI operation, including the mouse in Xorg, started experiencing
> > > multi-second lags and delays. This made the system essentially unusable --
> > > for example, just flipping to the window where the "mv" command was running
> > > took 10 seconds on more than one occasion. Basically a "click and get coffee"
> > > interface.
> >
> > I have some questions and requests.
> >
> > 1. Please post your /proc/meminfo.
> > 2. Does the above copy cause tons of swap-out? IOW, does your disk
> >    read much faster than it writes?
> > 3. Does the cache limitation of memcgroup solve this problem?
> > 4. Which disk holds your /bin and /usr/bin?
> >
>
> FWIW I fundamentally object to 3 as being a solution.
>
> I still think the idea of read-ahead driven drop-behind is a good one,
> alas last time we brought that up people thought differently.
The semi-drop-behind is a great idea for the desktop - putting
just-accessed pages at the end of the LRU. However, I'm still afraid
it vastly changes the caching behavior and won't work as well as
expected for server workloads - shall we verify this?
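As a point of comparison, a userspace copy can approximate drop-behind
today with posix_fadvise() - a minimal sketch, where the 8MB drop
interval and the trimmed error handling are my own simplifications:

/*
 * Userspace approximation of drop-behind: after each chunk of a big
 * sequential read, tell the kernel we are done with those pages so
 * they can be reclaimed ahead of more useful cache.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK	(8 << 20)	/* drop cached pages every ~8MB (assumed) */

int main(int argc, char **argv)
{
	char buf[1 << 16];
	off_t done = 0;
	ssize_t n;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;
	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		done += n;
		if (done % CHUNK < (off_t)sizeof(buf))
			posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}
	close(fd);
	return 0;
}

A kernel-side drop-behind would of course give the same effect to
unmodified applications - that is the point of doing it in the VM.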
Back to this big-cp-hurts-responsiveness issue. Background write
requests can easily pass the io scheduler's obstacles and fill up
the disk queue. Then every read request has to wait for 10+ writes
- leading to a 10x slowdown of major page faults.
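Back-of-envelope, assuming a ~8ms average service time per request on
a typical SATA disk of the day: a major fault that arrives behind 10
queued writes waits (10 + 1) * 8ms ~= 88ms before being serviced,
versus ~8ms on an idle queue - roughly the 10x above.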
I reached this conclusion based on recent CFQ code reviews, and will
bring up a queue-depth-limiting patch for further experiments.
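The shape of that patch, as a standalone sketch - every name and
number below is a hypothetical stand-in, not actual CFQ code:

/*
 * Cap the number of async writes allowed in the driver at once, so a
 * sync read never queues behind a deep wall of writes.
 */
#include <stdbool.h>

struct disk_sched {
	int writes_in_driver;	/* async writes already dispatched */
	int max_async_depth;	/* proposed cap, e.g. 2-4 (assumed) */
};

/* Gate on the dispatch path: hold further writes once the cap is hit. */
static bool may_dispatch_write(const struct disk_sched *ds)
{
	return ds->writes_in_driver < ds->max_async_depth;
}

int main(void)
{
	struct disk_sched ds = { .writes_in_driver = 4, .max_async_depth = 4 };

	return may_dispatch_write(&ds) ? 0 : 1;	/* 1 here: hold it back */
}

With such a cap in place, a read arriving at a busy disk waits behind
at most max_async_depth writes instead of a full device queue.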
Thanks,
Fengguang