Message-ID: <aMKZUY/zg31qN+68@MiWiFi-R3L-srv>
Date: Thu, 11 Sep 2025 17:41:37 +0800
From: Baoquan He <bhe@...hat.com>
To: Justinien Bouron <jbouron@...zon.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Petr Mladek <pmladek@...e.com>,
Mario Limonciello <mario.limonciello@....com>,
Marcos Paulo de Souza <mpdesouza@...e.com>,
Alexander Graf <graf@...zon.com>,
Steven Chen <chenste@...ux.microsoft.com>,
Yan Zhao <yan.y.zhao@...el.com>, kexec@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Gunnar Kudrjavets <gunnarku@...zon.com>
Subject: Re: [PATCH] kexec_core: Remove superfluous page offset handling in
segment loading
On 09/10/25 at 09:31am, Justinien Bouron wrote:
> Kexec does not accept segments for which the destination address is not
> page aligned. Therefore there is no need for page offset handling when
> loading segments.
Do you mean that, because kexec_add_buffer() already aligns memsz and
buf_align to PAGE_SIZE, the destination address is always page aligned?
If so, that had better be explained in the log:
int kexec_add_buffer(struct kexec_buf *kbuf)
{
......
/* Ensure minimum alignment needed for segments. */
kbuf->memsz = ALIGN(kbuf->memsz, PAGE_SIZE);
kbuf->buf_align = max(kbuf->buf_align, PAGE_SIZE);
kbuf->cma = NULL;
......
}
>
> Signed-off-by: Justinien Bouron <jbouron@...zon.com>
> Reviewed-by: Gunnar Kudrjavets <gunnarku@...zon.com>
> ---
> kernel/kexec_core.c | 13 ++++---------
> 1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index 31203f0bacaf..7d4c9eebea79 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -761,9 +761,7 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
> while (mbytes) {
> size_t uchunk, mchunk;
>
> - ptr += maddr & ~PAGE_MASK;
> - mchunk = min_t(size_t, mbytes,
> - PAGE_SIZE - (maddr & ~PAGE_MASK));
> + mchunk = min_t(size_t, mbytes, PAGE_SIZE);
I am not so eager to remove this, since keeping it offers a little value
as defensive programming. That said, I am not opposed to the change, as
the offset handling is truly unnecessary for now.
> uchunk = min(ubytes, mchunk);
>
> if (uchunk) {
> @@ -815,6 +813,7 @@ static int kimage_load_normal_segment(struct kimage *image, int idx)
> mbytes = segment->memsz;
> maddr = segment->mem;
>
> +
> if (image->segment_cma[idx])
> return kimage_load_cma_segment(image, idx);
>
> @@ -840,9 +839,7 @@ static int kimage_load_normal_segment(struct kimage *image, int idx)
> ptr = kmap_local_page(page);
> /* Start with a clear page */
> clear_page(ptr);
> - ptr += maddr & ~PAGE_MASK;
> - mchunk = min_t(size_t, mbytes,
> - PAGE_SIZE - (maddr & ~PAGE_MASK));
> + mchunk = min_t(size_t, mbytes, PAGE_SIZE);
> uchunk = min(ubytes, mchunk);
>
> if (uchunk) {
> @@ -905,9 +902,7 @@ static int kimage_load_crash_segment(struct kimage *image, int idx)
> }
> arch_kexec_post_alloc_pages(page_address(page), 1, 0);
> ptr = kmap_local_page(page);
> - ptr += maddr & ~PAGE_MASK;
> - mchunk = min_t(size_t, mbytes,
> - PAGE_SIZE - (maddr & ~PAGE_MASK));
> + mchunk = min_t(size_t, mbytes, PAGE_SIZE);
> uchunk = min(ubytes, mchunk);
> if (mchunk > uchunk) {
> /* Zero the trailing part of the page */
> --
> 2.43.0
>
>