Message-ID: <20100804095811.GC2326@arachsys.com>
Date: Wed, 4 Aug 2010 10:58:12 +0100
From: Chris Webb <chris@...chsys.com>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Minchan Kim <minchan.kim@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: Over-eager swapping

Wu Fengguang <fengguang.wu@...el.com> writes:
> This is interesting. Why is it waiting for 1m here? Are there high CPU
> loads? Would you do a
>
> echo t > /proc/sysrq-trigger
>
> and show us the dmesg?

Annoyingly, magic-sysrq isn't compiled into these kernels. Is there another
way I can get this info for you? Replacing the kernels on the machines is a
painful job, as I have to give the clients running on them quite a bit of
notice of the reboot, and I haven't been able to reproduce the problem on a
test machine.
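
One thing I could try in the meantime is walking /proc by hand and dumping
the kernel stacks of anything stuck in uninterruptible sleep, which I'd
guess covers the interesting part of a sysrq-t dump. This is only a sketch,
and assumes these kernels have CONFIG_STACKTRACE enabled so that
/proc/<pid>/stack exists:

  # Dump the kernel stack of every task in D (uninterruptible) state.
  # Needs root; /proc/<pid>/stack needs CONFIG_STACKTRACE.
  for pid in /proc/[0-9]*; do
    state=$(awk '/^State:/ {print $2}' "$pid/status" 2>/dev/null)
    name=$(awk '/^Name:/ {print $2}' "$pid/status" 2>/dev/null)
    if [ "$state" = D ]; then
      echo "=== ${pid#/proc/} ($name) ==="
      cat "$pid/stack" 2>/dev/null
    fi
  done

Would output from something like that be useful to you?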

I also think swap use is much better following a reboot, and only starts to
spiral out of control after the machines have been running for a week or
so.
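
It might help to log swap usage over time so I can see exactly when it
starts to climb. Something crude like this would probably do (the log path
is just an example):

  # Sample SwapTotal/SwapFree from /proc/meminfo every five minutes with a
  # timestamp, so the point where swap use takes off shows up in the log.
  while sleep 300; do
    printf '%s ' "$(date '+%F %T')"
    awk '/^Swap(Total|Free):/ {printf "%s %s %s  ", $1, $2, $3}
         END {print ""}' /proc/meminfo
  done >> swap-usage.log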

However, you're right that the CPU loads on these machines are typically
quite high. The large number of kvm virtual machines they run means that
loads of eight or even sixteen in /proc/loadavg are not unusual, and these
loads are higher while there's swap in use than after it has been removed.
I assume this is mostly down to increased IO wait, as that figure rises
significantly in top.
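
To put a rough number on that rather than eyeballing top, I could sample
the iowait field of /proc/stat over an interval, along these lines:

  # Rough iowait percentage over five seconds: read the summary "cpu" line
  # of /proc/stat twice and compare the iowait column (5th counter) against
  # the sum of the main counters. Approximate only, since the irq/softirq/
  # steal counters in the remainder of the line are ignored.
  read -r _ u1 n1 s1 i1 w1 rest1 < /proc/stat
  sleep 5
  read -r _ u2 n2 s2 i2 w2 rest2 < /proc/stat
  total=$(( (u2-u1) + (n2-n1) + (s2-s1) + (i2-i1) + (w2-w1) ))
  echo "iowait: $(( 100 * (w2-w1) / total ))%"

(vmstat 5 shows much the same thing in its wa column.)
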
Cheers,
Chris.