Date:	Thu, 31 Jan 2013 06:19:15 +0000
From:	"Tu, Xiaobing" <xiaobing.tu@...el.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Tang, Guifang" <guifang.tang@...el.com>,
	"Chen, LinX Z" <linx.z.chen@...el.com>,
	Arve Hjønnevåg <arve@...roid.com>
Subject: RE: Avoid high order memory allocating with kmalloc, when read
 large seq file

Hi Andrew,
  Thank you very much for the info. On Android, reading /sys/kernel/debug/binder/proc/xxx (where xxx is a process id) triggers a high-order kmalloc.
But we can't limit the size of the binder info, because we need it to debug binder-related issues.
I have re-sent the patch. What do you think about using vmalloc instead of kmalloc for high-order allocations? Memory fragmentation should not be an issue, because such memory is freed very quickly.

Br
Xiaobing


-----Original Message-----
From: Andrew Morton [mailto:akpm@...ux-foundation.org] 
Sent: Wednesday, January 30, 2013 8:25 AM
To: Tu, Xiaobing
Cc: linux-kernel@...r.kernel.org; Tang, Guifang; Chen, LinX Z; Arve Hjønnevåg
Subject: Re: Avoid high order memory allocating with kmalloc, when read large seq file

On Tue, 29 Jan 2013 14:14:14 +0800
xtu4 <xiaobing.tu@...el.com> wrote:

> @@ -209,8 +209,15 @@ ssize_t seq_read(struct file *file, char __user 
> *buf, size_t size, loff_t *ppos)
>           if (m->count < m->size)
>               goto Fill;
>           m->op->stop(m, p);
> -        kfree(m->buf);
> -        m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
> +        if (m->size > 2 * PAGE_SIZE)
> +            vfree(m->buf);
> +        else
> +            kfree(m->buf);
> +        m->size <<= 1;
> +        if (m->size > 2 * PAGE_SIZE)
> +            m->buf = vmalloc(m->size);
> +        else
> +            m->buf = kmalloc(m->size, GFP_KERNEL);
>           if (!m->buf)
>               goto Enomem;
>           m->count = 0;
> @@ -325,7 +334,10 @@ EXPORT_SYMBOL(seq_lseek);

The conventional way of doing this is to attempt the kmalloc with __GFP_NOWARN and if that failed, fall back to vmalloc().

Using vmalloc is generally not a good thing, mainly because of fragmentation issues, but for short-lived allocations like this, that shouldn't be too bad.

But really, the binder code is being obnoxious here and it would be best to fix it up.  Please identify with some care which part of the binder code is causing this problem.  binder_stats_show(), at a guess?  It looks like that function's output size is proportional to the number of processes on binder_procs?  If so, there is no upper bound, is there?  Problem!

btw, binder_debug_no_lock should just go away.  That list needs locking.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
