Message-ID: <20240821115636.3546f684@gandalf.local.home>
Date: Wed, 21 Aug 2024 11:56:36 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Vincent Donnefort <vdonnefort@...gle.com>
Cc: mhiramat@...nel.org, linux-kernel@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
 kernel-team@...roid.com, david@...hat.com
Subject: Re: [PATCH v2] ring-buffer: Align meta-page to sub-buffers for
 improved TLB usage

On Fri, 28 Jun 2024 11:46:11 +0100
Vincent Donnefort <vdonnefort@...gle.com> wrote:

> diff --git a/tools/testing/selftests/ring-buffer/map_test.c b/tools/testing/selftests/ring-buffer/map_test.c
> index a9006fa7097e..4bb0192e43f3 100644
> --- a/tools/testing/selftests/ring-buffer/map_test.c
> +++ b/tools/testing/selftests/ring-buffer/map_test.c
> @@ -228,6 +228,20 @@ TEST_F(map, data_mmap)
>  	data = mmap(NULL, data_len, PROT_READ, MAP_SHARED,
>  		    desc->cpu_fd, meta_len);
>  	ASSERT_EQ(data, MAP_FAILED);
> +
> +	/* Verify meta-page padding */
> +	if (desc->meta->meta_page_size > getpagesize()) {
> +		void *addr;
> +
> +		data_len = desc->meta->meta_page_size;
> +		data = mmap(NULL, data_len,
> +			    PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
> +		ASSERT_NE(data, MAP_FAILED);
> +
> +		addr = (void *)((unsigned long)data + getpagesize());
> +		ASSERT_EQ(*((int *)addr), 0);

Should we make this a test that the entire page is zero?

		for (int i = desc->meta->meta_struct_len; i < desc->meta->meta_page_size; i += sizeof(int))
			ASSERT_EQ(*(int *)((char *)data + i), 0);

?

> +		munmap(data, data_len);
> +	}
>  }
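
I.e. the whole padding check could look something like this (just a rough,
untested sketch, using only what the test already uses: desc->cpu_fd,
meta_struct_len, meta_page_size and the kselftest ASSERT_*() macros):

	if (desc->meta->meta_page_size > getpagesize()) {
		size_t len = desc->meta->meta_page_size;
		char *meta_map;

		meta_map = mmap(NULL, len, PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
		ASSERT_NE((void *)meta_map, MAP_FAILED);

		/* Everything past the meta struct should read back as zero */
		for (size_t i = desc->meta->meta_struct_len; i < len; i++)
			ASSERT_EQ(meta_map[i], 0);

		munmap(meta_map, len);
	}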

Also, looking at the init: if for some reason (which I highly doubt will
ever happen) meta_struct_len becomes bigger than page_size, we should
update the init section to:

	/* Handle the case where meta_struct_len is greater than page size */
	if (page_size < desc->meta->meta_struct_len) {
		/* meta_page_size is >= meta_struct_len */
		page_size = desc->meta->meta_page_size;
		munmap(desc->meta, page_size);
		map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
		if (map == MAP_FAILED)
			return -errno;
		desc->meta = (struct trace_buffer_meta *)map;
	}
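
For context, assuming the init currently just does a single page-size mmap()
of the meta page before reading the struct (that surrounding shape is my
assumption and untested; only the if-block above is the actual suggestion),
the whole path would look roughly like:

	int page_size = getpagesize();
	void *map;

	/* Map one page, enough to read the meta struct sizes */
	map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
	if (map == MAP_FAILED)
		return -errno;

	desc->meta = (struct trace_buffer_meta *)map;

	/* Handle the case where meta_struct_len is greater than page size */
	if (page_size < desc->meta->meta_struct_len) {
		/* meta_page_size is >= meta_struct_len */
		page_size = desc->meta->meta_page_size;
		munmap(desc->meta, page_size);
		map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
		if (map == MAP_FAILED)
			return -errno;
		desc->meta = (struct trace_buffer_meta *)map;
	}

	return 0;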

-- Steve
