Message-Id: <20120120145545.bcf5c76f.akpm@linux-foundation.org>
Date:	Fri, 20 Jan 2012 14:55:45 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Glauber Costa <glommer@...allels.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Russell King - ARM Linux <linux@....linux.org.uk>,
	Paul Turner <pjt@...gle.com>
Subject: Re: [PATCH] proc:  speedup /proc/stat handling

On Fri, 20 Jan 2012 16:59:24 +0100
Eric Dumazet <eric.dumazet@...il.com> wrote:

> On a typical 16-cpu machine, "cat /proc/stat" gives more than 4096
> bytes of output, and is slow:
> 
> # strace -T -o /tmp/STRACE cat /proc/stat | wc -c
> 5826
> # grep "cpu " /tmp/STRACE
> read(0, "cpu  1949310 19 2144714 12117253"..., 32768) = 5826 <0.001504>
> 
> 
> That's partly because show_stat() must be called twice, since the initial
> buffer size is too small (4096 bytes for fewer than 32 possible cpus)
> 
> Fix this by:
> 
> 1) Taking nr_irqs into account in the initial buffer sizing.
> 
> 2) Using ksize() to make better use of the initial buffer.
> 
> 3) Reducing the bloat of the "intr ..." line:
>    Don't output trailing " 0" values at the end of the irq range.

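For context on the "called twice" cost mentioned above: seq_file retries
the ->show() callback with a larger buffer whenever the output overflows
the current one.  A simplified paraphrase of that retry path in
fs/seq_file.c (illustrative only, not the literal kernel code):

	while (1) {
		err = m->op->show(m, p);	/* show_stat() here */
		if (err)
			break;
		if (m->count < m->size)
			break;			/* output fit, done */
		/* overflow: discard output, double the buffer, try again */
		kfree(m->buf);
		m->size <<= 1;
		m->buf = kmalloc(m->size, GFP_KERNEL);
		if (!m->buf)
			return -ENOMEM;
		m->count = 0;
	}

So an undersized initial buffer means show_stat() runs twice per read,
which is what the larger initial sizing in the patch is meant to avoid.
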
Change 3) is the worrisome one, mainly because the number of fields in
the `intr' line can now increase over time (yes?).  So if a monitoring
program were to read this line once and use the result to size an
internal buffer, then after a while it might start to drop information
or to suffer buffer overruns.
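
To make that concrete, here is a hypothetical monitor doing exactly
that: it counts the fields of the "intr" line once at startup and sizes
a fixed array from the count.  If trailing " 0" counters are suppressed
at startup but those interrupts start firing later, subsequent parses
overrun the array (everything below is illustrative, not taken from any
real tool):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* size the per-irq array from the first read of /proc/stat */
	static unsigned long *alloc_irq_counters(int *nfields)
	{
		static char line[65536];
		FILE *f = fopen("/proc/stat", "r");
		int n = 0;

		if (!f)
			return NULL;
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "intr ", 5)) {
				for (char *p = line; *p; p++)
					if (*p == ' ')
						n++;
				break;
			}
		}
		fclose(f);
		*nfields = n;
		/* fixed size based on today's field count */
		return calloc(n, sizeof(unsigned long));
	}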

> An alternative to 1) would be to remember the largest m->count reached
> in show_stat()
> 
>
> ...
>
> @@ -157,14 +171,17 @@ static int show_stat(struct seq_file *p, void *v)
>  
>  static int stat_open(struct inode *inode, struct file *file)
>  {
> -	unsigned size = 4096 * (1 + num_possible_cpus() / 32);
> +	unsigned size = 1024 + 128 * num_possible_cpus();
>  	char *buf;
>  	struct seq_file *m;
>  	int res;
>  
> +	/* minimum size to display a 0 count per interrupt : 2 bytes */
> +	size += 2 * nr_irqs;
> +
>  	/* don't ask for more than the kmalloc() max size */
> -	if (size > KMALLOC_MAX_SIZE)
> -		size = KMALLOC_MAX_SIZE;
> +	size = min_t(unsigned, size, KMALLOC_MAX_SIZE);
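
The ksize() part of the patch is elided above.  The general technique:
kmalloc() often rounds a request up to the next slab size, and ksize()
reports the usable size of the object it actually returned, so the
seq_file buffer can be credited with that slack rather than wasting it.
A generic illustration of the idea, not the actual hunk:

	buf = kmalloc(size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* credit seq_file with what kmalloc really gave us, not just 'size' */
	m->buf = buf;
	m->size = ksize(buf);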

The change looks reasonable; however, the use of KMALLOC_MAX_SIZE in the
existing code is worrisome.  If `size' ever gets that large, there's a
decent chance that the kmalloc() will simply fail, and a better chance
that it will cause a lot of VM scanning activity, including disk
writeout.

But I've never seen anyone report problems in this area, so shrug.
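
If that ever did bite, one possible mitigation (a sketch only, not
something this patch proposes) would be to make the big allocation
opportunistic and fall back to a single page, letting seq_file grow the
buffer the slow way in the unlikely case that matters:

	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
	if (!buf) {
		/* don't force reclaim for a huge buffer; start small and
		 * let seq_file's doubling cope with the oversized read */
		size = PAGE_SIZE;
		buf = kmalloc(size, GFP_KERNEL);
	}
	if (!buf)
		return -ENOMEM;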
