Date:	Tue, 29 Jan 2013 16:24:31 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	xtu4 <xiaobing.tu@...el.com>
Cc:	linux-kernel@...r.kernel.org, guifang.tang@...el.com,
	linX.z.chen@...el.com,
	Arve Hjønnevåg <arve@...roid.com>
Subject: Re: Avoid high-order memory allocation with kmalloc when reading
 a large seq file

On Tue, 29 Jan 2013 14:14:14 +0800
xtu4 <xiaobing.tu@...el.com> wrote:

> @@ -209,8 +209,17 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos)
>           if (m->count < m->size)
>               goto Fill;
>           m->op->stop(m, p);
> -        kfree(m->buf);
> -        m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
> +        if (m->size > 2 * PAGE_SIZE) {
> +            vfree(m->buf);
> +        } else
> +            kfree(m->buf);
> +        m->size <<= 1;
> +        if (m->size > 2 * PAGE_SIZE) {
> +            m->buf = vmalloc(m->size);
> +        } else
> +            m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
> +
> +
>           if (!m->buf)
>               goto Enomem;
>           m->count = 0;
> @@ -325,7 +334,10 @@ EXPORT_SYMBOL(seq_lseek);

The conventional way of doing this is to attempt the kmalloc with
__GFP_NOWARN and if that failed, fall back to vmalloc().

Using vmalloc is generally not a good thing, mainly because of
fragmentation issues, but for short-lived allocations like this, that
shouldn't be too bad.

But really, the binder code is being obnoxious here and it would be
best to fix it up.  Please identify with some care which part of the
binder code is causing this problem.  binder_stats_show(), from a
guess?  It looks like that function's output size is proportional to
the number of processes on binder_procs?  If so, there is no upper
bound, is there?  Problem!

btw, binder_debug_no_lock should just go away.  That list needs
locking.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
