Message-ID: <2026020124-flashbulb-stumble-f24a@gregkh>
Date: Sun, 1 Feb 2026 09:18:40 +0100
From: Greg KH <gregkh@...uxfoundation.org>
To: 余昊铖 <3230100410@....edu.cn>
Cc: security@...nel.org, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

On Sat, Jan 31, 2026 at 09:21:23PM +0800, 余昊铖 wrote:
> From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
> From: 0ne1r0s <yuhaocheng035@...il.com>
> Date: Sat, 31 Jan 2026 21:16:52 +0800
> Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
>
> The issue is caused by a race condition between mmap() and event
> teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
> map_range() after the mmap_mutex is released. If another thread
> closes the event or detaches the buffer during this window, the
> reference count of rb can drop to zero, leading to a UAF or
> refcount saturation when map_range() or subsequent logic attempts
> to use it.
>
> Fix this by extending the scope of mmap_mutex to cover the entire
> setup process, including map_range(), ensuring the buffer remains
> valid until the mapping is complete.
>
> Signed-off-by: 0ne1r0s <yuhaocheng035@...il.com>
> ---
> kernel/events/core.c | 42 +++++++++++++++++++++---------------------
> 1 file changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2c35acc2722b..7c93f7d057cb 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>  		ret = perf_mmap_aux(vma, event, nr_pages);
>  		if (ret)
>  			return ret;
> -	}
> -
> -	/*
> -	 * Since pinned accounting is per vm we cannot allow fork() to copy our
> -	 * vma.
> -	 */
> -	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> -	vma->vm_ops = &perf_mmap_vmops;
>  
> -	mapped = get_mapped(event, event_mapped);
> -	if (mapped)
> -		mapped(event, vma->vm_mm);
> -
> -	/*
> -	 * Try to map it into the page table. On fail, invoke
> -	 * perf_mmap_close() to undo the above, as the callsite expects
> -	 * full cleanup in this case and therefore does not invoke
> -	 * vmops::close().
> -	 */
> -	ret = map_range(event->rb, vma);
> -	if (ret)
> -		perf_mmap_close(vma);
> +	/*
> +	 * Since pinned accounting is per vm we cannot allow fork() to copy our
> +	 * vma.
> +	 */
> +	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> +	vma->vm_ops = &perf_mmap_vmops;
> +
> +	mapped = get_mapped(event, event_mapped);
> +	if (mapped)
> +		mapped(event, vma->vm_mm);
> +
> +	/*
> +	 * Try to map it into the page table. On fail, invoke
> +	 * perf_mmap_close() to undo the above, as the callsite expects
> +	 * full cleanup in this case and therefore does not invoke
> +	 * vmops::close().
> +	 */
> +	ret = map_range(event->rb, vma);
> +	if (ret)
> +		perf_mmap_close(vma);
> +	}
>  
>  	return ret;
>  }
> --
> 2.51.0
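
For reference, the locking pattern the commit message argues for, namely
taking the mutex before touching the ring buffer and keeping it held until
map_range() has finished, can be illustrated with a small userspace sketch.
This is only a sketch and not the kernel code: buf_sketch, buf_lock,
setup_mapping() and teardown() are made-up names, and a pthread mutex
stands in for mmap_mutex. It shows why holding one lock across both the
reference take and the later use of the buffer closes the window in which
a concurrent teardown could free it.

/*
 * Hypothetical userspace sketch of the race described above: pthreads
 * stand in for mmap_mutex, buf for the ring buffer, setup_mapping()
 * for the perf_mmap() path and teardown() for the event-close path.
 * Build with: cc -pthread sketch.c   (error handling kept minimal)
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct buf_sketch {
	int refcount;	/* protected by buf_lock in this sketch */
	int *pages;	/* stands in for the ring-buffer pages  */
};

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static struct buf_sketch *buf;

/* Drop one reference; free the buffer when the last one goes away. */
static void buf_put_locked(void)
{
	if (buf && --buf->refcount == 0) {
		free(buf->pages);
		free(buf);
		buf = NULL;
	}
}

/* Event-close path: may run at any time on another thread. */
static void *teardown(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&buf_lock);
	buf_put_locked();
	pthread_mutex_unlock(&buf_lock);
	return NULL;
}

/*
 * mmap path: the lock covers both taking the reference and using the
 * pages, so teardown() cannot free the buffer in between, which is the
 * property the commit message wants from the extended mmap_mutex scope.
 */
static int setup_mapping(void)
{
	int ret = -1;

	pthread_mutex_lock(&buf_lock);
	if (buf) {
		buf->refcount++;	/* keep the buffer alive     */
		buf->pages[0] = 1;	/* stands in for map_range() */
		ret = 0;
	}
	pthread_mutex_unlock(&buf_lock);
	return ret;
}

int main(void)
{
	pthread_t t;

	buf = calloc(1, sizeof(*buf));
	buf->pages = calloc(4, sizeof(int));
	buf->refcount = 1;

	pthread_create(&t, NULL, teardown, NULL);
	if (setup_mapping() == 0)
		puts("mapped while the buffer was still live");
	pthread_join(t, NULL);

	/* Drop the reference taken by setup_mapping(), if it got one. */
	pthread_mutex_lock(&buf_lock);
	buf_put_locked();
	pthread_mutex_unlock(&buf_lock);
	return 0;
}

In the kernel the analogous lock is the event's mmap_mutex; the commit
message proposes keeping it held until map_range() has returned instead
of dropping it before the mapping is set up.
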
Can you turn this into a patch we can apply (properly sent, real name
used, etc.) so that the maintainers can review it and apply it
correctly?
Also, be sure to send this to the correct people; I don't think that
the ext4 developers care that much about perf :)
thanks,
greg k-h