Message-ID: <AANLkTimC1z0MwTxUjxED7N1-R4D_YXtvnPSbiKXdR+4W@mail.gmail.com>
Date: Tue, 3 Aug 2010 13:47:36 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Chris Webb <chris@...chsys.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Subject: Re: Over-eager swapping
On Tue, Aug 3, 2010 at 1:28 PM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> On Tue, Aug 03, 2010 at 12:09:18PM +0800, Minchan Kim wrote:
>> On Tue, Aug 3, 2010 at 12:31 PM, Chris Webb <chris@...chsys.com> wrote:
>> > Minchan Kim <minchan.kim@...il.com> writes:
>> >
>> >> Another possibility is _zone_reclaim_ in NUMA.
>> >> Your working set has many anonymous page.
>> >>
>> >> The zone_reclaim set priority to ZONE_RECLAIM_PRIORITY.
>> >> It can make reclaim mode to lumpy so it can page out anon pages.
>> >>
>> >> Could you show me /proc/sys/vm/[zone_reclaim_mode/min_unmapped_ratio] ?
>> >
>> > Sure, no problem. On the machine with the /proc/meminfo I showed earlier,
>> > these are
>> >
>> > # cat /proc/sys/vm/zone_reclaim_mode
>> > 0
>> > # cat /proc/sys/vm/min_unmapped_ratio
>> > 1
>>
>> if zone_reclaim_mode is zero, it doesn't swap out anon_pages.
>
> If there are lots of order-1 or higher allocations, anonymous pages
> will be randomly evicted, regardless of their LRU ages. This is
I thought the swapped-out amount (i.e., 3G) was too big for lumpy reclaim alone to explain.
But it's possible. :)
> probably another factor why the users claim. Are there easy ways to
> confirm this other than patching the kernel?
cat /proc/buddyinfo can help?
Off-topic:
It would be better to add a new vmstat counter that counts entries into lumpy reclaim mode.
Pseudo code:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0f9f624..d10ff4e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1641,7 +1641,7 @@ out:
 	}
 }
 
-static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc)
+static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc, struct zone *zone)
 {
 	/*
 	 * If we need a large contiguous chunk of memory, or have
@@ -1654,6 +1654,9 @@ static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc)
 		sc->lumpy_reclaim_mode = 1;
 	else
 		sc->lumpy_reclaim_mode = 0;
+
+	if (sc->lumpy_reclaim_mode)
+		inc_zone_state(zone, NR_LUMPY);
 }
 
 /*
@@ -1670,7 +1673,7 @@ static void shrink_zone(int priority, struct zone *zone,
 
 	get_scan_count(zone, sc, nr, priority);
 
-	set_lumpy_reclaim_mode(priority, sc);
+	set_lumpy_reclaim_mode(priority, sc, zone);
 
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
 					nr[LRU_INACTIVE_FILE]) {
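(Still pseudocode: for the above to build, NR_LUMPY would also need to be declared as a zone stat and given a name for /proc/vmstat, roughly like this; the placement within the enum and the "nr_lumpy" name are just illustrative.)

/* include/linux/mmzone.h: add to enum zone_stat_item */
	NR_LUMPY,	/* times lumpy reclaim mode was entered */

/* mm/vmstat.c: add the matching entry to vmstat_text[] */
	"nr_lumpy",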
--
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/