Message-Id: <1160850046.13212.30.camel@Homer.simpson.net>
Date: Sat, 14 Oct 2006 18:20:46 +0000
From: Mike Galbraith <efault@....de>
To: Matthias Dahl <mlkernel@...tal-soul.de>
Cc: Paolo Ornati <ornati@...twebnet.it>, Jens Axboe <axboe@...e.de>,
linux-kernel@...r.kernel.org
Subject: Re: sluggish system responsiveness under higher IO load
Greetings,
On Sat, 2006-10-14 at 16:39 +0200, Matthias Dahl wrote:
> On Friday 06 October 2006 17:58, Paolo Ornati wrote:
>
> > I used to have this type of problem and 2.6.19-rc1 looks much better
> > than 2.6.18.
> >
> > I'm using CONFIG_PREEMPT + CONFIG_PREEMPT_BKL, CFQ i/o scheduler
> > and /proc/sys/vm/swappiness = 20.
>
> I will give 2.6.19 a test in a few weeks, when the dust of all the changes has
> settled a bit. :-)
>
> As Mike Galbraith suggested, I ran some tests renicing the
> IO-intensive applications. This indeed makes a hell of a difference. Currently I
> am renicing everything that causes a lot of disk IO to a nice of 19. Even
> though this doesn't fix it completely, the occasional short hangs have become
> less common.
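For concreteness, the renicing described above would look something
like the below ("updatedb" is just a stand-in for whichever IO-heavy
job is actually running):

    # Drop the IO hog's cpu priority to the minimum...
    renice -n 19 -p $(pidof updatedb)
    # ...and, if you're using CFQ, its IO priority can be dropped too
    # (idle class: only served when the disk is otherwise unused):
    ionice -c3 -p $(pidof updatedb)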
(I probably should have been more verbose in my suggestion;)
What I actually suggested was that you try renicing the application you
were experiencing sluggishness with to -10, and then retry your IO
interference test, to see whether you were hitting scheduling latency or
something else. For example, if your GL application uses lots of cpu,
it will likely not be classified as interactive and can end up in the
expired array; at that point an IO task running at interactive status
can do a long burst of heavy cpu usage and keep your application off
the cpu for quite a while. The intent of renicing your application to
-10 was to keep it at interactive status, above the heavy IO tasks.
(If it sleeps at all, that should work. There are other scenarios too,
but they're less likely than this one.)
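Concretely, the test I had in mind was something along these lines
("mygame" is only a placeholder for your GL application; negative nice
values need root):

    # Keep the latency-sensitive task above the IO jobs:
    renice -n -10 -p $(pidof mygame)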
If running IO at nice 19 more or less fixes your problem, I think we can
assume that you are having scheduling troubles, so the thing to do is to
grab some top snapshots showing cpu distribution during a problem
period.
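Something like this, run during one of the hangs, would do (-b is
batch mode; that's 30 snapshots, 2 seconds apart):

    top -b -d 2 -n 30 > top-snapshots.log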
-Mike