Message-ID: <20200122190447.1920297-1-songliubraving@fb.com>
Date: Wed, 22 Jan 2020 11:04:47 -0800
From: Song Liu <songliubraving@...com>
To: <linux-kernel@...r.kernel.org>
CC: <kernel-team@...com>, Song Liu <songliubraving@...com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH v2] perf/core: fix mlock accounting in perf_mmap()

sysctl_perf_event_mlock and user->locked_vm can change value
independently, so we can't guarantee:
user->locked_vm <= user_lock_limit
When user->locked_vm is larger than user_lock_limit, we cannot simply
update extra and user_extra as:
extra = user_locked - user_lock_limit;
user_extra -= extra;
Otherwise, user_extra will be negative. In extreme cases, this may lead to a
negative user->locked_vm (until this perf mmap is closed), which breaks
locked_vm accounting badly.
Fix this by adjusting user_locked before calculating extra and user_extra.
Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Signed-off-by: Song Liu <songliubraving@...com>
Suggested-by: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: Jiri Olsa <jolsa@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
---
kernel/events/core.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2173c23c25b4..d25f2de45996 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5916,8 +5916,19 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
*/
user_lock_limit *= num_online_cpus();
- user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+ user_locked = atomic_long_read(&user->locked_vm);
+ /*
+ * sysctl_perf_event_mlock and user->locked_vm can change value
+ * independently, so we can't guarantee:
+ * user->locked_vm <= user_lock_limit
+ *
+ * Adjust user_locked to be <= user_lock_limit so we can calculate
+ * correct extra and user_extra.
+ */
+ user_locked = min_t(unsigned long, user_locked, user_lock_limit);
+
+ user_locked += user_extra;
if (user_locked > user_lock_limit) {
/*
* charge locked_vm until it hits user_lock_limit;
--
2.17.1