Message-ID: <28c262360907050751t1fccbf4t4ace572b4e003a13@mail.gmail.com>
Date: Sun, 5 Jul 2009 23:51:46 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: Wu Fengguang <fengguang.wu@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 4/5] add isolate pages vmstat
On Sun, Jul 5, 2009 at 9:23 PM, KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:
>> On Sun, Jul 05, 2009 at 05:25:32PM +0800, KOSAKI Motohiro wrote:
>> > Subject: [PATCH] add isolate pages vmstat
>> >
>> > If the system has plenty of threads or processes, concurrent reclaim can
>> > isolate a very large number of pages.
>> > Unfortunately, the current /proc/meminfo and OOM log can't show it.
>> >
>> > This patch provides a way to show this information.
>> >
>> >
>> > How to reproduce
>> > -----------------------
>> > % ./hackbench 140 process 1000
>> > => causes OOM
>> >
>> > Active_anon:4419 active_file:120 inactive_anon:1418
>> > inactive_file:61 unevictable:0 isolated:45311
>> > ^^^^^
>> > dirty:0 writeback:580 unstable:0
>> > free:27 slab_reclaimable:297 slab_unreclaimable:4050
>> > mapped:221 kernel_stack:5758 pagetables:28219 bounce:0
>> >
>> >
>> >
>> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
>> > ---
>> > drivers/base/node.c | 2 ++
>> > fs/proc/meminfo.c | 2 ++
>> > include/linux/mmzone.h | 1 +
>> > mm/page_alloc.c | 6 ++++--
>> > mm/vmscan.c | 4 ++++
>> > mm/vmstat.c | 2 +-
>> > 6 files changed, 14 insertions(+), 3 deletions(-)
>> >
>> > Index: b/fs/proc/meminfo.c
>> > ===================================================================
>> > --- a/fs/proc/meminfo.c
>> > +++ b/fs/proc/meminfo.c
>> > @@ -65,6 +65,7 @@ static int meminfo_proc_show(struct seq_
>> > "Active(file): %8lu kB\n"
>> > "Inactive(file): %8lu kB\n"
>> > "Unevictable: %8lu kB\n"
>> > + "IsolatedPages: %8lu kB\n"
>> > "Mlocked: %8lu kB\n"
>> > #ifdef CONFIG_HIGHMEM
>> > "HighTotal: %8lu kB\n"
>> > @@ -109,6 +110,7 @@ static int meminfo_proc_show(struct seq_
>> > K(pages[LRU_ACTIVE_FILE]),
>> > K(pages[LRU_INACTIVE_FILE]),
>> > K(pages[LRU_UNEVICTABLE]),
>> > + K(global_page_state(NR_ISOLATED)),
>>
>> Glad to see you renamed it to NR_ISOLATED :)
>> But for the user visible name, how about IsolatedLRU?
>
> Ah, nice. below is update patch.
>
> Changelog
> ----------------
> since v1
> - rename "IsolatedPages" to "IsolatedLRU"
>
>
> =================================
> Subject: [PATCH] add isolate pages vmstat
>
> If the system has plenty of threads or processes, concurrent reclaim can
> isolate a very large number of pages.
> Unfortunately, the current /proc/meminfo and OOM log can't show it.
>
> This patch provides a way to show this information.
>
>
> How to reproduce
> -----------------------
> % ./hackbench 140 process 1000
> => causes OOM
>
> Active_anon:4419 active_file:120 inactive_anon:1418
> inactive_file:61 unevictable:0 isolated:45311
> ^^^^^
> dirty:0 writeback:580 unstable:0
> free:27 slab_reclaimable:297 slab_unreclaimable:4050
> mapped:221 kernel_stack:5758 pagetables:28219 bounce:0
>
>
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> Acked-by: Wu Fengguang <fengguang.wu@...el.com>
> ---
> drivers/base/node.c | 2 ++
> fs/proc/meminfo.c | 2 ++
> include/linux/mmzone.h | 1 +
> mm/page_alloc.c | 6 ++++--
> mm/vmscan.c | 4 ++++
> mm/vmstat.c | 2 +-
> 6 files changed, 14 insertions(+), 3 deletions(-)
>
> Index: b/fs/proc/meminfo.c
> ===================================================================
> --- a/fs/proc/meminfo.c
> +++ b/fs/proc/meminfo.c
> @@ -65,6 +65,7 @@ static int meminfo_proc_show(struct seq_
> "Active(file): %8lu kB\n"
> "Inactive(file): %8lu kB\n"
> "Unevictable: %8lu kB\n"
> + "IsolatedLRU: %8lu kB\n"
> "Mlocked: %8lu kB\n"
> #ifdef CONFIG_HIGHMEM
> "HighTotal: %8lu kB\n"
> @@ -109,6 +110,7 @@ static int meminfo_proc_show(struct seq_
> K(pages[LRU_ACTIVE_FILE]),
> K(pages[LRU_INACTIVE_FILE]),
> K(pages[LRU_UNEVICTABLE]),
> + K(global_page_state(NR_ISOLATED)),
> K(global_page_state(NR_MLOCK)),
> #ifdef CONFIG_HIGHMEM
> K(i.totalhigh),
> Index: b/include/linux/mmzone.h
> ===================================================================
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -100,6 +100,7 @@ enum zone_stat_item {
> NR_BOUNCE,
> NR_VMSCAN_WRITE,
> NR_WRITEBACK_TEMP, /* Writeback using temporary buffers */
> + NR_ISOLATED, /* Temporary isolated pages from lru */
> #ifdef CONFIG_NUMA
> NUMA_HIT, /* allocated in intended node */
> NUMA_MISS, /* allocated in non intended node */
> Index: b/mm/page_alloc.c
> ===================================================================
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2116,8 +2116,7 @@ void show_free_areas(void)
> }
>
> printk("Active_anon:%lu active_file:%lu inactive_anon:%lu\n"
> - " inactive_file:%lu"
> - " unevictable:%lu"
> + " inactive_file:%lu unevictable:%lu isolated:%lu\n"
Looks good.
I have one suggestion.
I know this patch came from David's OOM problem a few days ago.
I think the total number of pages isolated across all LRUs doesn't help us much.
It only explains why [in]active[anon/file] is zero.
How about adding the number of isolated pages per LRU?
IsolatedPages(file)
IsolatedPages(anon)
That would let us know the exact number for each LRU.
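
For example, a very rough sketch of what I mean (the NR_ISOLATED_ANON /
NR_ISOLATED_FILE names and the printed field names below are only my
illustration, not something from your patch):

enum zone_stat_item {
	...
	NR_ISOLATED_ANON,	/* anon pages temporarily off the LRU */
	NR_ISOLATED_FILE,	/* file pages temporarily off the LRU */
	...
};

/* fs/proc/meminfo.c: one line per LRU type instead of one total */
	"IsolatedLRU(anon): %8lu kB\n"
	"IsolatedLRU(file): %8lu kB\n"
	...
	K(global_page_state(NR_ISOLATED_ANON)),
	K(global_page_state(NR_ISOLATED_FILE)),

/*
 * mm/vmscan.c: account per LRU type around isolation, e.g. in
 * shrink_inactive_list().  "file" is already 0 for anon and 1 for
 * file LRUs there, so NR_ISOLATED_ANON + file picks the right counter.
 */
	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
	...
	/* and decrease it again when the pages go back onto the LRU */
	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);

Then show_free_areas() and the OOM dump could print isolated(anon) and
isolated(file) separately instead of a single isolated: value.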
--
Kind regards,
Minchan Kim