Message-ID: <20250812104018.660347811@infradead.org>
Date: Tue, 12 Aug 2025 12:39:01 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: tglx@...utronix.de
Cc: linux-kernel@...r.kernel.org,
peterz@...radead.org,
torvalds@...uxfoundation.org,
mingo@...nel.org,
namhyung@...nel.org,
acme@...hat.com,
kees@...nel.org,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Subject: [PATCH v3 03/15] perf: Split out VM accounting

From: Thomas Gleixner <tglx@...utronix.de>

Similar to the mlock limit calculation, the VM accounting is required for
both the ringbuffer and the AUX buffer allocations.

To prepare for splitting them out into separate functions, move the
accounting into a helper function.

Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Link: https://lkml.kernel.org/r/20250811070620.527392167@linutronix.de
---
kernel/events/core.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6962,10 +6962,17 @@ static bool perf_mmap_calc_limits(struct
 	return locked <= lock_limit || !perf_is_paranoid() || capable(CAP_IPC_LOCK);
 }
 
+static void perf_mmap_account(struct vm_area_struct *vma, long user_extra, long extra)
+{
+	struct user_struct *user = current_user();
+
+	atomic_long_add(user_extra, &user->locked_vm);
+	atomic64_add(extra, &vma->vm_mm->pinned_vm);
+}
+
 static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct perf_event *event = file->private_data;
-	struct user_struct *user = current_user();
 	unsigned long vma_size, nr_pages;
 	long user_extra = 0, extra = 0;
 	struct mutex *aux_mutex = NULL;
@@ -7136,9 +7143,7 @@ static int perf_mmap(struct file *file,
 unlock:
 	if (!ret) {
-		atomic_long_add(user_extra, &user->locked_vm);
-		atomic64_add(extra, &vma->vm_mm->pinned_vm);
-
+		perf_mmap_account(vma, user_extra, extra);
 		atomic_inc(&event->mmap_count);
 	} else if (rb) {
 		/* AUX allocation failed */
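
[Not part of the patch, just an illustrative sketch of where the split is
heading: once the ringbuffer and AUX buffer allocations live in their own
functions, both paths can share the accounting helper added above. The
function names perf_mmap_rb() and perf_mmap_aux() below are assumptions
for illustration only, not code from this series.]

/*
 * Hypothetical split-out allocation paths; names and signatures are
 * assumed. Both end with the same shared accounting step.
 */
static int perf_mmap_rb(struct vm_area_struct *vma, struct perf_event *event,
			long user_extra, long extra)
{
	/* ... allocate and install the ringbuffer ... */

	/* Shared VM accounting, common to both allocation paths. */
	perf_mmap_account(vma, user_extra, extra);
	return 0;
}

static int perf_mmap_aux(struct vm_area_struct *vma, struct perf_event *event,
			 long user_extra, long extra)
{
	/* ... allocate the AUX buffer ... */

	perf_mmap_account(vma, user_extra, extra);
	return 0;
}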