Message-ID: <CAAoXzSpxpOswM7TAZu5QZ+JdyL2V4SOehF_Kg3N9p29C5JZD4A@mail.gmail.com>
Date: Mon, 9 Feb 2026 23:26:36 +0800
From: 余昊铖 <yuhaocheng035@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: acme@...nel.org, security@...nel.org, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, gregkh@...uxfoundation.org
Subject: Re: [PATCH v2] perf/core: Fix refcount bug and potential UAF in perf_mmap
These explanations and patches were indeed generated by an LLM, but
I reviewed them against the kernel code and the C reproducer before
sending this email, and modified anything I deemed unreasonable. I
believe this patch is meaningful and that there is a real issue in
the kernel code; that is why I sent it out.
Below is my own thinking:
In the C reproducer, these four system calls are the core of the
problem. Specifically, the third syscall takes r[0] as its group
argument (with PERF_FLAG_FD_OUTPUT), establishing a shared ring
buffer. The fourth syscall then mmaps r[1] with an unusual flag
combination, which is likely what makes this bug triggerable.
r[0] = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
               /*cpu=*/1ul, /*group=*/(intptr_t)-1,
               /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul,
        /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul);
r[1] = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
               /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
        /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
        /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1],
        /*offset=*/0ul);
The sequence is as follows: r[0] enters perf_mmap() first. It
acquires the mutex, executes perf_mmap_rb(), releases the mutex, and
then calls map_range(). If map_range() fails, the function enters
perf_mmap_close(), which calls ring_buffer_put() and drops the
refcount to 0.
At this moment, r[1] also enters perf_mmap() and attempts to attach
to r[0]'s ring buffer. Because the mutex is released during r[0]'s
execution of map_range(), the second mmap can acquire the mutex, see
the rb pointer shared with r[0] before it is cleared, and attempt to
increment the refcount on a buffer that is already being destroyed.
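To make the interleaving concrete, here is the window as I understand
it (a schematic timeline only; I have simplified the locking that
perf_mmap_close() does internally, and the exact call that trips the
refcount warning is my reading of the syzkaller report):

    CPU0: mmap(r[0])                     CPU1: mmap(r[1])
    ----------------                     ----------------
    perf_mmap()
      mutex_lock(&event->mmap_mutex)
      perf_mmap_rb()   /* event->rb set */
      mutex_unlock(&event->mmap_mutex)
      map_range()      /* fails */
                                         perf_mmap()
                                           mutex_lock(&event->mmap_mutex)
                                           /* observes event->rb != NULL */
      perf_mmap_close()
        ring_buffer_put()
        /* rb->refcount drops to 0 */
                                           /* takes a reference on rb:
                                              refcount_t addition on 0,
                                              potential use-after-free */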
So I think simply extending the scope of the mutex so that it also
covers perf_mmap_close() could solve this problem.
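In pseudocode, the shape I have in mind is roughly this (a sketch of
the intent only, not the literal patch: it ignores the aux path,
assumes the locking is hoisted out of perf_mmap_rb(), and
perf_mmap_close() as it stands acquires mmap_mutex itself, so the
cleanup call would need a variant that does not retake the lock):

    mutex_lock(&event->mmap_mutex);
    ret = perf_mmap_rb(vma, event, nr_pages);
    if (!ret) {
        ret = map_range(event->rb, vma);
        if (ret)
            perf_mmap_close(vma); /* must not retake mmap_mutex */
    }
    mutex_unlock(&event->mmap_mutex);

That way the window between publishing event->rb and the failure
cleanup is closed for any other mmap of the same buffer.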
> On Tue, Feb 03, 2026 at 12:20:56AM +0800, yuhaocheng035@...il.com wrote:
> > From: Haocheng Yu <yuhaocheng035@...il.com>
> >
> > Syzkaller reported a refcount_t: addition on 0; use-after-free warning
> > in perf_mmap.
> >
> > The issue is caused by a race condition between a failing mmap() setup
> > and a concurrent mmap() on a dependent event (e.g., using output
> > redirection).
> >
> > In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
> > event->rb with the mmap_mutex held. The mutex is then released to
> > perform map_range().
> >
> > If map_range() fails, perf_mmap_close() is called to clean up.
> > However, since the mutex was dropped, another thread attaching to
> > this event (via inherited events or output redirection) can acquire
> > the mutex, observe the valid event->rb pointer, and attempt to
> > increment its reference count. If the cleanup path has already
> > dropped the reference count to zero, this results in a
> > use-after-free or refcount saturation warning.
> >
> > Fix this by extending the scope of mmap_mutex to cover the
> > map_range() call. This ensures that ring buffer initialization
> > and mapping (or cleanup on failure) happen atomically,
> > effectively preventing other threads from accessing a
> > half-initialized or dying ring buffer.
>
> And you're sure this time? To me it feels a bit like talking to an LLM.
>
> I suppose there is nothing wrong with having an LLM process syzkaller
> output and even have it propose patches, but before you send it out an
> actual human should get involved and apply critical thinking skills.
>
> Just throwing stuff at a maintainer and hoping he does the thinking for
> you is not appreciated.
>
> > Reported-by: kernel test robot <lkp@...el.com>
> > Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
> > Signed-off-by: Haocheng Yu <yuhaocheng035@...il.com>
> > ---
> > kernel/events/core.c | 38 +++++++++++++++++++-------------------
> > 1 file changed, 19 insertions(+), 19 deletions(-)
> >
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 2c35acc2722b..abefd1213582 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> > ret = perf_mmap_aux(vma, event, nr_pages);
> > if (ret)
> > return ret;
> > - }
> >
> > - /*
> > - * Since pinned accounting is per vm we cannot allow fork() to copy our
> > - * vma.
> > - */
> > - vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> > - vma->vm_ops = &perf_mmap_vmops;
> > + /*
> > + * Since pinned accounting is per vm we cannot allow fork() to copy our
> > + * vma.
> > + */
> > + vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
> > + vma->vm_ops = &perf_mmap_vmops;
> >
> > - mapped = get_mapped(event, event_mapped);
> > - if (mapped)
> > - mapped(event, vma->vm_mm);
> > + mapped = get_mapped(event, event_mapped);
> > + if (mapped)
> > + mapped(event, vma->vm_mm);
> >
> > - /*
> > - * Try to map it into the page table. On fail, invoke
> > - * perf_mmap_close() to undo the above, as the callsite expects
> > - * full cleanup in this case and therefore does not invoke
> > - * vmops::close().
> > - */
> > - ret = map_range(event->rb, vma);
> > - if (ret)
> > - perf_mmap_close(vma);
> > + /*
> > + * Try to map it into the page table. On fail, invoke
> > + * perf_mmap_close() to undo the above, as the callsite expects
> > + * full cleanup in this case and therefore does not invoke
> > + * vmops::close().
> > + */
> > + ret = map_range(event->rb, vma);
> > + if (ret)
> > + perf_mmap_close(vma);
> > + }
> >
> > return ret;
> > }
> >
> > base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
> > --
> > 2.51.0
> >