Message-ID: <2026020150-nerd-unweave-929a@gregkh>
Date: Sun, 1 Feb 2026 14:05:24 +0100
From: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
To: Sai Ritvik Tanksalkar <stanksal@...due.edu>
Cc: "kees@...nel.org" <kees@...nel.org>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"gpiccoli@...lia.com" <gpiccoli@...lia.com>,
"anton.vorontsov@...aro.org" <anton.vorontsov@...aro.org>,
"linux-hardening@...r.kernel.org" <linux-hardening@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH] pstore/ram: fix buffer overflow in
persistent_ram_save_old()
On Sun, Feb 01, 2026 at 12:59:24PM +0000, Sai Ritvik Tanksalkar wrote:
> persistent_ram_save_old() can be called multiple times for the same
> persistent_ram_zone (e.g., via ramoops_pstore_read -> ramoops_get_next_prz
> for PSTORE_TYPE_DMESG records).
>
> Currently, the function only allocates prz->old_log when it is NULL,
> but it unconditionally updates prz->old_log_size to the current buffer
> size and then performs memcpy_fromio() using this new size. If the
> buffer size has grown since the first allocation (which can happen
> across different kernel boot cycles), this leads to:
>
> 1. A heap buffer overflow (OOB write) in the memcpy_fromio() calls.
> 2. A subsequent OOB read when ramoops_pstore_read() accesses the buffer
> using the incorrect (larger) old_log_size.
>
> The KASAN splat would look similar to:
> BUG: KASAN: slab-out-of-bounds in ramoops_pstore_read+0x...
> Read of size N at addr ... by task ...
>
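[ For illustration, a minimal userspace model of the pattern described
  above; the names here are hypothetical and this is not the pstore
  code itself, it only mimics the allocate-once / update-size-every-call
  behaviour so the overflow is easy to see:

	#include <stdlib.h>
	#include <string.h>

	struct zone {
		char *old_log;
		size_t old_log_size;
	};

	static void save_old(struct zone *z, const char *src, size_t size)
	{
		if (!z->old_log)		/* allocated only on the first call */
			z->old_log = calloc(1, size);
		if (!z->old_log)
			return;

		z->old_log_size = size;		/* but the size is updated on every call */
		memcpy(z->old_log, src, size);	/* overflows if size grew since the first call */
	}

	int main(void)
	{
		struct zone z = { 0 };
		char first[16] = "first boot";
		char second[64] = "second, larger record";

		save_old(&z, first, sizeof(first));	/* allocates 16 bytes */
		save_old(&z, second, sizeof(second));	/* writes 64 bytes into that 16-byte buffer */

		free(z.old_log);
		return 0;
	}

  Built with -fsanitize=address, the second call reports a
  heap-buffer-overflow analogous to the KASAN splat quoted above. ]
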
> Fix this by freeing and reallocating the buffer when the new size
> exceeds the previously allocated size. This ensures old_log always has
> sufficient space for the data being copied.
>
> Fixes: 201e4aca5aa1 ("pstore/ram: Should update old dmesg buffer before reading")
> Cc: stable@...r.kernel.org
> Signed-off-by: Pwnverse <stanksal@...due.edu>
> ---
>  fs/pstore/ram_core.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
> index f1848cdd6d34..8df813a42a41 100644
> --- a/fs/pstore/ram_core.c
> +++ b/fs/pstore/ram_core.c
> @@ -298,6 +298,14 @@ void persistent_ram_save_old(struct persistent_ram_zone *prz)
>  	if (!size)
>  		return;
> 
> +	/*
> +	 * If the existing buffer is too small, free it so a new one is
> +	 * allocated. This can happen when persistent_ram_save_old() is
> +	 * called multiple times with different buffer sizes.
> +	 */
> +	if (prz->old_log && prz->old_log_size < size)
> +		persistent_ram_free_old(prz);
> +
>  	if (!prz->old_log) {
>  		persistent_ram_ecc_old(prz);
>  		prz->old_log = kvzalloc(size, GFP_KERNEL);
> --
> 2.43.0

Hi,

This is the friendly patch-bot of Greg Kroah-Hartman. You have sent him
a patch that has triggered this response. He used to manually respond
to these common problems, but in order to save his sanity (he kept
writing the same thing over and over, yet to different people), I was
created. Hopefully you will not take offence and will fix the problem
in your patch and resubmit it so that it can be accepted into the Linux
kernel tree.

You are receiving this message because of the following common error(s)
as indicated below:

- Your patch is malformed (tabs converted to spaces, linewrapped, etc.)
  and can not be applied. Please read the file,
  Documentation/process/email-clients.rst in order to fix this.

- It looks like you did not use your "real" name for the patch on either
  the Signed-off-by: line, or the From: line (both of which have to
  match). Please read the kernel file,
  Documentation/process/submitting-patches.rst for how to do this
  correctly.

- This looks like a new version of a previously submitted patch, but you
  did not list below the --- line any changes from the previous version.
  Please read the section entitled "The canonical patch format" in the
  kernel file, Documentation/process/submitting-patches.rst for what
  needs to be done here to properly describe this.

If you wish to discuss this problem further, or you have questions about
how to resolve this issue, please feel free to respond to this email and
Greg will reply once he has dug out from the pending patches received
from other developers.

thanks,

greg k-h's patch email bot