Message-ID: <aPulPxKvWiCQcKz5@black.igk.intel.com>
Date: Fri, 24 Oct 2025 18:11:43 +0200
From: Andy Shevchenko <andriy.shevchenko@...el.com>
To: Justinien Bouron <jbouron@...zon.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Baoquan He <bhe@...hat.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
	Petr Mladek <pmladek@...e.com>,
	Mario Limonciello <mario.limonciello@....com>,
	Marcos Paulo de Souza <mpdesouza@...e.com>,
	Steven Chen <chenste@...ux.microsoft.com>,
	Yan Zhao <yan.y.zhao@...el.com>, Alexander Graf <graf@...zon.com>,
	kexec@...ts.infradead.org, linux-kernel@...r.kernel.org,
	Gunnar Kudrjavets <gunnarku@...zon.com>
Subject: Re: [PATCH v3] kexec_core: Remove superfluous page offset handling
 in segment loading

On Fri, Oct 24, 2025 at 08:50:09AM -0700, Justinien Bouron wrote:
> During kexec_segment loading, when copying the content of the segment
> (i.e. kexec_segment::kbuf or kexec_segment::buf) to its associated
> pages, kimage_load_{cma,normal,crash}_segment handle the case where
> the physical address of the segment is not page-aligned, e.g. in
> kimage_load_normal_segment:
> ```
> 	page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
> 	// ...
> 	ptr = kmap_local_page(page);
> 	// ...
> 	ptr += maddr & ~PAGE_MASK;
> 	mchunk = min_t(size_t, mbytes,
> 		PAGE_SIZE - (maddr & ~PAGE_MASK));
> 	// ^^^^ Non page-aligned segments handled here ^^^
> 	// ...
> 	if (image->file_mode)
> 		memcpy(ptr, kbuf, uchunk);
> 	else
> 		result = copy_from_user(ptr, buf, uchunk);
> ```
> (similar logic is present in kimage_load_{cma,crash}_segment).
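> 
> For illustration, here is a minimal standalone sketch (userspace, not
> kernel code, assuming 4 KiB pages) of what the offset arithmetic above
> computes:
> ```
> #include <stdio.h>
> 
> #define PAGE_SIZE 4096UL
> #define PAGE_MASK (~(PAGE_SIZE - 1))
> 
> int main(void)
> {
> 	unsigned long maddr = 0x1234;            /* not page-aligned */
> 	unsigned long off = maddr & ~PAGE_MASK;  /* 0x234, offset into the page */
> 	unsigned long chunk = PAGE_SIZE - off;   /* bytes left in this page */
> 
> 	printf("offset=%#lx, first chunk=%lu bytes\n", off, chunk);
> 	/* With a page-aligned maddr, off == 0 and chunk == PAGE_SIZE. */
> 	return 0;
> }
> ```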
> 
> This is actually not needed because, prior to their loading, all
> kexec_segments first go through a vetting step in
> `sanity_check_segment_list`, which rejects any segment that is not
> page-aligned:
> ```
> 	for (i = 0; i < nr_segments; i++) {
> 		unsigned long mstart, mend;
> 		mstart = image->segment[i].mem;
> 		mend   = mstart + image->segment[i].memsz;
> 		// ...
> 		if ((mstart & ~PAGE_MASK) || (mend & ~PAGE_MASK))
> 			return -EADDRNOTAVAIL;
> 		// ...
> 	}
> ```
> If `sanity_check_segment_list` finds a non-page-aligned segment, the
> whole kexec load is aborted and no segment is loaded.
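> 
> To make the consequence explicit: once a segment starts page-aligned,
> the destination address stays page-aligned across the whole copy loop,
> since every iteration then advances by a full page. A hypothetical
> userspace sketch of that invariant (mirroring the loop shape above,
> not the actual kernel source; note that memsz is page-aligned too per
> the check above, so in the kernel even the last chunk is a full page):
> ```
> #include <assert.h>
> #include <stddef.h>
> 
> #define PAGE_SIZE 4096UL
> #define PAGE_MASK (~(PAGE_SIZE - 1))
> 
> static void load_segment(unsigned long maddr, size_t mbytes)
> {
> 	assert((maddr & ~PAGE_MASK) == 0);  /* guaranteed by the vetting */
> 
> 	while (mbytes) {
> 		/* offset is always 0, so this is just min(mbytes, PAGE_SIZE) */
> 		size_t mchunk = mbytes < PAGE_SIZE ? mbytes : PAGE_SIZE;
> 
> 		/* ... copy mchunk bytes into the page backing maddr ... */
> 
> 		maddr += mchunk;
> 		mbytes -= mchunk;
> 		/* still aligned, except possibly after the final chunk */
> 		assert(mbytes == 0 || (maddr & ~PAGE_MASK) == 0);
> 	}
> }
> 
> int main(void)
> {
> 	load_segment(0x100000, 3 * PAGE_SIZE);  /* three full pages */
> 	return 0;
> }
> ```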
> 
> This means that `kimage_load_{cma,normal,crash}_segment` never
> actually have to handle non-page-aligned segments:
> `(maddr & ~PAGE_MASK) == 0` always holds, whether the segment comes
> from a file (i.e. the `kexec_file_load` syscall), from a user-space
> buffer (i.e. the `kexec_load` syscall) or was created by the kernel
> through `kexec_add_buffer`. In the latter case, `kexec_add_buffer`
> itself enforces the page alignment:
> ```
> 	/* Ensure minimum alignment needed for segments. */
> 	kbuf->memsz = ALIGN(kbuf->memsz, PAGE_SIZE);
> 	kbuf->buf_align = max(kbuf->buf_align, PAGE_SIZE);
> ```
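> 
> With alignment guaranteed on all three paths, the chunk computation
> quoted earlier collapses to a plain min(). A small userspace sketch of
> the before/after (illustrative only, not the exact patch hunks):
> ```
> #include <assert.h>
> #include <stddef.h>
> 
> #define PAGE_SIZE 4096UL
> #define PAGE_MASK (~(PAGE_SIZE - 1))
> #define MIN(a, b) ((a) < (b) ? (a) : (b))
> 
> /* Chunk size as computed before the patch... */
> static size_t chunk_old(unsigned long maddr, size_t mbytes)
> {
> 	return MIN(mbytes, PAGE_SIZE - (maddr & ~PAGE_MASK));
> }
> 
> /* ...and after: with maddr page-aligned, the offset term drops out. */
> static size_t chunk_new(size_t mbytes)
> {
> 	return MIN(mbytes, PAGE_SIZE);
> }
> 
> int main(void)
> {
> 	/* The two computations agree for any page-aligned address. */
> 	for (size_t n = 1; n <= 3 * PAGE_SIZE; n += 509)
> 		assert(chunk_old(0x5000, n) == chunk_new(n));
> 	return 0;
> }
> ```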

> Signed-off-by: Justinien Bouron <jbouron@...zon.com>
> Reviewed-by: Gunnar Kudrjavets <gunnarku@...zon.com>
> ---
> Changes since v1:
> 	- Reworked commit message as requested by Baoquan He
> 	  <bhe@...hat.com>
> 	- Removed accidental whitespace change
> 	- v1 Link: https://lore.kernel.org/lkml/20250910163116.49148-1-jbouron@amazon.com/
> 
> Changes since v2:
> 	- Removed unused variable in kimage_load_cma_segment() which was
> 	  causing a warning and failing build with `make W=1`. Thanks
> 	  Andy Shevchenko for finding this issue
> 	- v2 Link: https://lore.kernel.org/lkml/20250929160220.47616-1-jbouron@amazon.com/

At least this version has the leftovers removed, thanks!
FWIW,

Reviewed-by: Andy Shevchenko <andriy.shevchenko@...el.com>

-- 
With Best Regards,
Andy Shevchenko


