Message-ID: <1401275180.2497.5.camel@perseus.fritz.box>
Date: Wed, 28 May 2014 19:06:20 +0800
From: Ian Kent <raven@...maw.net>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrea Righi <andrea@...terlinux.com>,
Eric Dumazet <eric.dumazet@...il.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Hendrik Brueckner <brueckner@...ux.vnet.ibm.com>,
Thorsten Diehl <thorsten.diehl@...ibm.com>,
"Elliott, Robert (Server Storage)" <Elliott@...com>
Subject: Re: [PATCH 1/2] fs: proc/stat: use num_online_cpus() for buffer size
On Wed, 2014-05-28 at 10:59 +0200, Heiko Carstens wrote:
> The number of bytes contained within /proc/stat depends on the number
> of online cpus, not on the number of possible cpus.
>
> This reduces the number of bytes requested for the initial buffer allocation
> within stat_open(), which is usually way too high: for nr_possible_cpus()
> == 256 cpus it would result in an order 4 allocation.
>
> Order 4 allocations however may fail if memory is fragmented and we would
> end up with an unreadable /proc/stat file:
>
> [62129.701569] sadc: page allocation failure: order:4, mode:0x1040d0
> [62129.701573] CPU: 1 PID: 192063 Comm: sadc Not tainted 3.10.0-123.el7.s390x #1
> [...]
> [62129.701586] Call Trace:
> [62129.701588] ([<0000000000111fbe>] show_trace+0xe6/0x130)
> [62129.701591] [<0000000000112074>] show_stack+0x6c/0xe8
> [62129.701593] [<000000000020d356>] warn_alloc_failed+0xd6/0x138
> [62129.701596] [<00000000002114d2>] __alloc_pages_nodemask+0x9da/0xb68
> [62129.701598] [<000000000021168e>] __get_free_pages+0x2e/0x58
> [62129.701599] [<000000000025a05c>] kmalloc_order_trace+0x44/0xc0
> [62129.701602] [<00000000002f3ffa>] stat_open+0x5a/0xd8
> [62129.701604] [<00000000002e9aaa>] proc_reg_open+0x8a/0x140
> [62129.701606] [<0000000000273b64>] do_dentry_open+0x1bc/0x2c8
> [62129.701608] [<000000000027411e>] finish_open+0x46/0x60
> [62129.701610] [<000000000028675a>] do_last+0x382/0x10d0
> [62129.701612] [<0000000000287570>] path_openat+0xc8/0x4f8
> [62129.701614] [<0000000000288bde>] do_filp_open+0x46/0xa8
> [62129.701616] [<000000000027541c>] do_sys_open+0x114/0x1f0
> [62129.701618] [<00000000005b1c1c>] sysc_tracego+0x14/0x1a
>
> Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
> ---
> fs/proc/stat.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/proc/stat.c b/fs/proc/stat.c
> index 9d231e9e5f0e..3898ca5f1e92 100644
> --- a/fs/proc/stat.c
> +++ b/fs/proc/stat.c
> @@ -184,7 +184,7 @@ static int show_stat(struct seq_file *p, void *v)
>
> static int stat_open(struct inode *inode, struct file *file)
> {
> - size_t size = 1024 + 128 * num_possible_cpus();
> + size_t size = 1024 + 128 * num_online_cpus();
Yes, I thought of this too when I was looking at the problem, but was
concerned about the number of online cpus changing during the read.
If a system can hotplug cpus then I guess we don't care much about the
number of cpus increasing during the read; we'll just see incorrect data
once. But what would happen if some cpus were removed? Do we even care
about that case?
> char *buf;
> struct seq_file *m;
> int res;
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/