Message-ID: <8c31f7bd-871d-4a38-ad15-a16a116e1f39@amd.com>
Date: Thu, 19 Dec 2024 15:03:40 +0530
From: Ravi Bangoria <ravi.bangoria@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "mingo@...nel.org" <mingo@...nel.org>,
 "lucas.demarchi@...el.com" <lucas.demarchi@...el.com>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "willy@...radead.org" <willy@...radead.org>,
 "acme@...nel.org" <acme@...nel.org>,
 "namhyung@...nel.org" <namhyung@...nel.org>,
 "mark.rutland@....com" <mark.rutland@....com>,
 "alexander.shishkin@...ux.intel.com" <alexander.shishkin@...ux.intel.com>,
 "jolsa@...nel.org" <jolsa@...nel.org>,
 "irogers@...gle.com" <irogers@...gle.com>,
 "adrian.hunter@...el.com" <adrian.hunter@...el.com>,
 "kan.liang@...ux.intel.com" <kan.liang@...ux.intel.com>,
 Ravi Bangoria <ravi.bangoria@....com>
Subject: Re: [PATCH 19/19] perf: Make perf_pmu_unregister() useable

Hi Peter,

>>> In any case, below sequence of operations triggers a splat when
>>> perf_mmap_close() tries to access event->rb, event->pmu etc. which
>>> were already freed by __pmu_detach_event().
>>>
>>> Sequence:
>>>
>>>     Kernel                       Userspace
>>>     ------                       ---------
>>>     perf_pmu_register()
>>>                                 fd = perf_event_open()
>>>                                 p = mmap(fd)
>>>     perf_pmu_unregister()
>>>                                 munmap(p)
>>>                                 close(fd)
>>
>> Right, let me go have a look. Thanks!
> 
> Bah, that's a right mess indeed, however did I miss all that.
> 
> The easiest solution is probably to leave the RB around on detach, but
> now I need to remember why I did that in the first place :/
> 
> Oh.. I think I mostly did that to serialize against perf_mmap(), which
> should reject creating further maps. But we can do that without actually
> detaching the RB -- we only need to acquire and release mmap_mutex.
> 
> Ah, there's that perf_event_stop() inside of ring_buffer_attach(), that
> must not happen after detach, obviously. So that must be dealt with.
> 
> Hmm, also if we leave ->rb around, then we need to deal with
> perf_event_set_output(), someone could try and redirect their things
> into our buffer -- which isn't technically broken, but still weird.
> 
> Something like the below.
> 
> How did you test; perf-fuzzer or something?

I prepared a simple test that does pmu register(), unregister() and
"perf record" in parallel. It's quite dirty; I'll clean it up and
share it here.

> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1742,7 +1742,7 @@ static inline bool needs_branch_stack(st
>  
>  static inline bool has_aux(struct perf_event *event)
>  {
> -	return event->pmu->setup_aux;
> +	return event->pmu && event->pmu->setup_aux;
>  }
>  
>  static inline bool has_aux_action(struct perf_event *event)
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5409,7 +5409,6 @@ static void _free_event(struct perf_even
>  	security_perf_event_free(event);
>  
>  	if (event->rb) {
> -		WARN_ON_ONCE(!event->pmu);
>  		/*
>  		 * Can happen when we close an event with re-directed output.
>  		 *
> @@ -12023,7 +12022,10 @@ static void __pmu_detach_event(struct pm
>  	 */
>  	scoped_guard (mutex, &event->mmap_mutex) {
>  		WARN_ON_ONCE(pmu->event_unmapped);
> -		ring_buffer_attach(event, NULL);
> +		/*
> +		 * Mostly an empty lock sequence, such that perf_mmap(), which
> +		 * relies on mmap_mutex, is sure to observe the state change.
> +		 */
>  	}
>  
>  	perf_event_free_bpf_prog(event);
> @@ -12823,6 +12825,9 @@ perf_event_set_output(struct perf_event
>  		goto unlock;
>  
>  	if (output_event) {
> +		if (output_event->state <= PERF_EVENT_STATE_REVOKED)
> +			goto unlock;
> +
>  		/* get the rb we want to redirect to */
>  		rb = ring_buffer_get(output_event);
>  		if (!rb)

I needed this additional diff on top of your change; with it, my test
survives. The perf_mmap_close() change seems correct, but I'm not sure
about the perf_mmap() one. I'll inspect the code further.

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6540,7 +6540,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	bool detach_rest = false;
 
 	/* FIXIES vs perf_pmu_unregister() */
-	if (event->pmu->event_unmapped)
+	if (event->pmu && event->pmu->event_unmapped)
 		event->pmu->event_unmapped(event, vma->vm_mm);
 
 	/*
@@ -6873,7 +6873,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &perf_mmap_vmops;
 
-	if (!ret && event->pmu->event_mapped)
+	if (!ret && event->pmu && event->pmu->event_mapped)
 		event->pmu->event_mapped(event, vma->vm_mm);
 
 	return ret;

Thanks,
Ravi
