Message-Id: <20240113223824.3e9eed42cf10748e4255afde@kernel.org>
Date: Sat, 13 Jan 2024 22:38:24 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Vincent Donnefort <vdonnefort@...gle.com>
Cc: rostedt@...dmis.org, linux-kernel@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
 kernel-team@...roid.com
Subject: Re: [PATCH v11 1/5] ring-buffer: Zero ring-buffer sub-buffers

On Thu, 11 Jan 2024 16:17:08 +0000
Vincent Donnefort <vdonnefort@...gle.com> wrote:

> In preparation for the ring-buffer memory mapping, where each sub-buffer
> will be accessible to user-space, zero all the page allocations.
> 
> Signed-off-by: Vincent Donnefort <vdonnefort@...gle.com>

Looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>

Thank you!

> 
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 173d2595ce2d..db73e326fa04 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1466,7 +1466,8 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
>  
>  		list_add(&bpage->list, pages);
>  
> -		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu), mflags,
> +		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
> +					mflags | __GFP_ZERO,
>  					cpu_buffer->buffer->subbuf_order);
>  		if (!page)
>  			goto free_pages;
> @@ -1551,7 +1552,8 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
>  
>  	cpu_buffer->reader_page = bpage;
>  
> -	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, cpu_buffer->buffer->subbuf_order);
> +	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
> +				cpu_buffer->buffer->subbuf_order);
>  	if (!page)
>  		goto fail_free_reader;
>  	bpage->page = page_address(page);
> @@ -5525,7 +5527,8 @@ ring_buffer_alloc_read_page(struct trace_buffer *buffer, int cpu)
>  	if (bpage->data)
>  		goto out;
>  
> -	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_NORETRY,
> +	page = alloc_pages_node(cpu_to_node(cpu),
> +				GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO,
>  				cpu_buffer->buffer->subbuf_order);
>  	if (!page) {
>  		kfree(bpage);
> -- 
> 2.43.0.275.g3460e3d667-goog
> 
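
For illustration only, a minimal sketch of the zeroed-allocation pattern the
patch adopts. The helper name and surrounding context below are hypothetical
and not part of the patch; only the alloc_pages_node()/__GFP_ZERO usage
mirrors the diff above.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/topology.h>

/*
 * Hypothetical helper: allocate a sub-buffer worth of pages on the node
 * of the given CPU, asking the page allocator to clear them so no stale
 * kernel data can leak if the pages are later mapped to user-space.
 */
static void *alloc_zeroed_subbuf(int cpu, unsigned int order)
{
	struct page *page;

	/* __GFP_ZERO makes the allocator return already-zeroed pages. */
	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
				order);
	if (!page)
		return NULL;

	return page_address(page);
}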


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
