Message-ID: <20130320185618.GC970@dhcp22.suse.cz>
Date: Wed, 20 Mar 2013 19:56:18 +0100
From: Michal Hocko <mhocko@...e.cz>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch] mm, hugetlb: include hugepages in meminfo

On Wed 20-03-13 11:46:12, David Rientjes wrote:
> On Wed, 20 Mar 2013, Michal Hocko wrote:
>
> > On Tue 19-03-13 17:18:12, David Rientjes wrote:
> > > Particularly in oom conditions, it's troublesome that hugetlb memory is
> > > not displayed. All other meminfo that is emitted will not add up to what
> > > is expected, and there is no artifact left in the kernel log to show that
> > > a potentially significant amount of memory is actually allocated as
> > > hugepages which are not available to be reclaimed.
> >
> > Yes, I like the idea. It's bitten me already in the past.
> >
> > The only objection I have is that you print only default_hstate. You
> > just need to wrap your for_each_node_state by for_each_hstate to do
> > that. With that applied, feel free to add my
> > Acked-by: Michal Hocko <mhocko@...e.cz>
> >
>
> I didn't do this because the non-default hstates aren't already exported
> in /proc/meminfo, and since we've made an effort to reduce the amount of
> information the oom killer emits at oom kill time to avoid spamming the
> kernel log, I only print the default hstate.

I do not see how this would make the output excessive. If you do not
want too many lines in the output, the hstate loop can be pushed inside
the node loop so that the number of lines stays per node, the same as
you are proposing, except with complete information.

Besides that, we are talking about a handful of hstates.
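
Something like the following is what I have in mind - a rough, untested
sketch only; the helper name hugetlb_show_meminfo() and the exact hstate
field names are my assumptions from mm/hugetlb.c, not anything quoted in
this thread:

void hugetlb_show_meminfo(void)
{
	struct hstate *h;
	int nid;

	/* One line per node and per hstate; there are only a few hstates. */
	for_each_node_state(nid, N_MEMORY)
		for_each_hstate(h)
			pr_info("Node %d hugepages_total=%u hugepages_free=%u hugepages_surp=%u hugepages_size=%lukB\n",
				nid,
				h->nr_huge_pages_node[nid],
				h->free_huge_pages_node[nid],
				h->surplus_huge_pages_node[nid],
				1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
}

This just wraps the per-node pr_info() from your patch with
for_each_hstate(), so every configured page size shows up in the oom
dump at the cost of at most a few extra lines per node.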
--
Michal Hocko
SUSE Labs