Message-ID: <20121211033659.GA16230@one.firstfloor.org>
Date: Tue, 11 Dec 2012 04:36:59 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Xishi Qiu <qiuxishi@...wei.com>
Cc: Andi Kleen <andi@...stfloor.org>,
Fengguang Wu <fengguang.wu@...el.com>,
Simon Jeons <simon.jeons@...il.com>,
Wanpeng Li <liwanp@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
WuJianguo <wujianguo@...wei.com>,
Liujiang <jiang.liu@...wei.com>, Vyacheslav.Dubeyko@...wei.com,
Borislav Petkov <bp@...en8.de>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, wency@...fujitsu.com,
Hanjun Guo <guohanjun@...wei.com>
Subject: Re: [PATCH V2] MCE: fix an error of mce_bad_pages statistics
> "There are not so many free pages in a typical server system", sorry I don't
> quite understand it.
Linux tries to keep most memory in caches. As Linus puts it, "free memory is
bad memory."
>
> buffered_rmqueue()
> prep_new_page()
> check_new_page()
> bad_page()
>
> If we allocate 2^10 pages and one of them is a poisoned page, then the whole 4MB
> of memory will be dropped.
prep_new_page() is only called on whatever is actually allocated. MAX_ORDER is
much smaller than 2^10.

If you allocate a large-order page then yes, the complete page is dropped.
This is generally true in hwpoison today. It would be one possible area of
improvement (probably mostly if 1GB pages become more common than they are
today).
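To make the mechanics concrete, here is a much-simplified user-space sketch of
that path. It is not the real mm/page_alloc.c code; check_new_page() and
prep_new_page() below are stand-ins that only model the "one bad page spoils
the whole block" behaviour being discussed:

/*
 * Simplified model: the per-page checks run only over the 1 << order
 * pages of the block that was just allocated, and a single bad page
 * causes the caller to give up on the whole block.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12                    /* assume 4KB pages */

struct page {
    bool hwpoison;                       /* stand-in for PG_hwpoison */
};

/* stand-in for check_new_page(): is this page usable? */
static bool check_new_page(const struct page *page)
{
    return !page->hwpoison;
}

/*
 * Stand-in for prep_new_page(): check every page of the block that was
 * just pulled from the buddy lists.  Returns true if the whole block is
 * good, false if the caller must drop it and try again.
 */
static bool prep_new_page(const struct page *block, unsigned int order)
{
    for (unsigned long i = 0; i < (1UL << order); i++)
        if (!check_new_page(&block[i]))
            return false;                /* one bad page spoils the block */
    return true;
}

int main(void)
{
    unsigned int order = 10;             /* 2^10 pages = 4MB with 4KB pages */
    static struct page block[1 << 10];

    block[123].hwpoison = true;          /* one poisoned page in the block */

    if (!prep_new_page(block, order))
        printf("dropping the whole %lu KB block because of one bad page\n",
               (1UL << order) << (PAGE_SHIFT - 10));
    return 0;
}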
It's usually not a problem: most allocations are small order, systems
generally see very few memory errors, and even the largest MAX_ORDER pages
are a small fraction of the total memory.
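For a rough sense of scale, assuming the common x86-64 defaults of 4KB pages
and MAX_ORDER 11: the largest buddy block is 2^10 pages = 4MB, so on a 16GB
machine losing one such block costs about 4MB / 16384MB, i.e. roughly 0.025%
of memory.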
If you lose larger amounts of memory, you usually quickly hit something
that HWPoison cannot handle.
-Andi