Date:	Fri, 24 Apr 2009 01:13:28 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Markus Metzger <markus.t.metzger@...el.com>
Cc:	mingo@...e.hu, a.p.zijlstra@...llo.nl, markus.t.metzger@...il.com,
	roland@...hat.com, eranian@...glemail.com, oleg@...hat.com,
	juan.villacis@...el.com, ak@...ux.jf.intel.com,
	linux-kernel@...r.kernel.org, tglx@...utronix.de, hpa@...or.com
Subject: Re: [rfc 2/2] x86, bts: use physically non-contiguous trace buffer

On Fri, 24 Apr 2009 10:00:55 +0200 Markus Metzger <markus.t.metzger@...el.com> wrote:

> Use vmalloc to allocate the branch trace buffer.
> 
> Peter Zijlstra suggested using vmalloc rather than kmalloc to
> allocate the potentially multi-page branch trace buffer.

The changelog provides no reason for this change.  It should do so.

> Is there a way to have vmalloc allocate a physically non-contiguous
> buffer for test purposes? Ideally, the memory area would have big
> holes in it with sensitive data in between so I would know immediately
> when this is overwritten.

I suppose you could allocate the pages by hand and then vmap() them.
Allocating twice as many pages as you need and then freeing every second
one should make them physically holey.
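
Untested sketch of what I mean, for a debug build only.  The helper name
alloc_holey_buffer() and the page-array bookkeeping are made up for
illustration, not an existing interface:

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Illustrative only -- not an existing kernel helper, just the
 * allocate-2x-and-free-every-other-page idea spelled out.
 */
static void *alloc_holey_buffer(unsigned int npages, struct page ***keepp)
{
	struct page **all, **keep;
	void *buf = NULL;
	unsigned int i, got = 0;

	all = kcalloc(2 * npages, sizeof(*all), GFP_KERNEL);
	keep = kcalloc(npages, sizeof(*keep), GFP_KERNEL);
	if (!all || !keep)
		goto out;

	/* Allocate twice as many pages as we actually need. */
	for (got = 0; got < 2 * npages; got++) {
		all[got] = alloc_page(GFP_KERNEL);
		if (!all[got])
			goto out;
	}

	/*
	 * Keep every second page and free its neighbour: the kept pages
	 * can no longer form one physically contiguous range.
	 */
	for (i = 0; i < npages; i++) {
		keep[i] = all[2 * i];
		__free_page(all[2 * i + 1]);
		all[2 * i + 1] = NULL;
	}

	/* Map the holey page set into one virtually contiguous buffer. */
	buf = vmap(keep, npages, VM_MAP, PAGE_KERNEL);
	if (buf) {
		*keepp = keep;	/* caller tears down with vunmap() + __free_page() */
		keep = NULL;
	}
out:
	if (!buf)
		for (i = 0; i < got; i++)
			if (all[i])
				__free_page(all[i]);
	kfree(all);
	kfree(keep);
	return buf;
}

The freed partner pages go straight back to the page allocator, so under a
bit of memory load they should get reused for other data, which would give
you the "sensitive data in between" you were after.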

> --- a/arch/x86/kernel/ptrace.c
> +++ b/arch/x86/kernel/ptrace.c
> @@ -22,6 +22,7 @@
>  #include <linux/seccomp.h>
>  #include <linux/signal.h>
>  #include <linux/workqueue.h>
> +#include <linux/vmalloc.h>
>  
>  #include <asm/uaccess.h>
>  #include <asm/pgtable.h>
> @@ -626,7 +627,7 @@ static int alloc_bts_buffer(struct bts_c
>  	if (err < 0)
>  		return err;
>  
> -	buffer = kzalloc(size, GFP_KERNEL);
> +	buffer = vmalloc(size);
>  	if (!buffer)
>  		goto out_refund;
>  
> @@ -646,7 +647,7 @@ static inline void free_bts_buffer(struc
>  	if (!context->buffer)
>  		return;
>  
> -	kfree(context->buffer);
> +	vfree(context->buffer);
>  	context->buffer = NULL;
>  

The patch looks like a regression to me.  vmalloc memory is slower to
allocate, slower to free, slower to access and can exhaust or fragment
the vmalloc arena.  Confused.
