Message-ID: <79CAEE07-57BC-4915-A812-DD99AAF1B809@fb.com>
Date: Tue, 21 Jan 2020 18:55:57 +0000
From: Song Liu <songliubraving@...com>
To: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
CC: linux-kernel <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()
> On Jan 20, 2020, at 12:24 AM, Alexander Shishkin <alexander.shishkin@...ux.intel.com> wrote:
>
> Song Liu <songliubraving@...com> writes:
>
>> sysctl_perf_event_mlock and user->locked_vm can change value
>> independently, so we can't guarantee:
>>
>> user->locked_vm <= user_lock_limit
>
> This means: if the sysctl got sufficiently decreased, so that the
> existing locked_vm exceeds it, we need to deal with the overflow, right?
Reducing the sysctl is one way to generate the overflow. Another way is to
call setrlimit() from user space to allow a bigger user->locked_vm.
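To illustrate (a minimal user-space sketch with made-up page counts, not
the kernel code itself): once user->locked_vm already exceeds
user_lock_limit, the current calculation drives user_extra negative:

	#include <stdio.h>

	int main(void)
	{
		long user_lock_limit = 128;  /* sysctl was lowered to this */
		long locked_vm       = 200;  /* charged under the old limit */
		long user_extra      = 16;   /* pages this mmap() asks for */
		long user_locked     = locked_vm + user_extra;  /* 216 */

		if (user_locked > user_lock_limit) {
			/* old logic: extra ends up larger than user_extra */
			long extra = user_locked - user_lock_limit;  /* 88 */

			user_extra -= extra;  /* 16 - 88 = -72, underflow */
			printf("extra=%ld user_extra=%ld\n", extra, user_extra);
		}
		return 0;
	}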
>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a1f8bde19b56..89acdd1574ef 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>>
>> if (user_locked > user_lock_limit) {
>> /*
>> - * charge locked_vm until it hits user_lock_limit;
>> - * charge the rest from pinned_vm
>> + * sysctl_perf_event_mlock and user->locked_vm can change
>> + * value independently, so we can't guarantee:
>> + *
>> + * user->locked_vm <= user_lock_limit
>> + *
>> + * We need to be careful to make sure user_extra >= 0.
>> + *
>> + * Using "user_locked - user_extra" to avoid calling
>> + * atomic_long_read() again.
>> */
>> - extra = user_locked - user_lock_limit;
>> - user_extra -= extra;
>> + if (user_locked - user_extra >= user_lock_limit) {
>> + /*
>> + * already used all of user_lock_limit, charge all
>> + * to pinned_vm
>> + */
>> + extra = user_extra;
>> + user_extra = 0;
>> + } else {
>> + /*
>> + * charge locked_vm until it hits user_lock_limit;
>> + * charge the rest from pinned_vm
>> + */
>> + extra = user_locked - user_lock_limit;
>> + user_extra -= extra;
>> + }
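Running the same made-up numbers through the patched branches keeps
user_extra non-negative (again a user-space sketch, not the kernel
function itself):

	long user_lock_limit = 128, locked_vm = 200, user_extra = 16;
	long user_locked = locked_vm + user_extra;  /* 216 */
	long extra = 0;

	if (user_locked > user_lock_limit) {
		if (user_locked - user_extra >= user_lock_limit) {
			/* locked_vm (200) alone already >= limit (128),
			 * so charge this whole mmap() to pinned_vm */
			extra = user_extra;  /* 16 */
			user_extra = 0;
		} else {
			extra = user_locked - user_lock_limit;
			user_extra -= extra;
		}
	}
	/* here: extra == 16, user_extra == 0 */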
>
> How about the below for the sake of brevity?
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 763cf34b5a63..632505ce6c12 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5917,7 +5917,14 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> */
> user_lock_limit *= num_online_cpus();
>
> - user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> + user_locked = atomic_long_read(&user->locked_vm);
> + /*
> + * If perf_event_mlock has changed since earlier mmaps, so that
> + * it's smaller than user->locked_vm, discard the overflow.
> + */
Since a change in perf_event_mlock is not the only reason for the overflow,
we need to revise this comment.
> + if (user_locked > user_lock_limit)
> + user_locked = user_lock_limit;
> + user_locked += user_extra;
>
> if (user_locked > user_lock_limit) {
> /*
I think this is logically correct, and probably easier to follow. Let me
respin v2 based on this version.
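For what it's worth, a quick user-space check (hypothetical helpers
split_a()/split_b(), made-up limit and request sizes) that the clamped
version computes the same extra/user_extra split as the two-branch
version for any pre-existing locked_vm:

	#include <assert.h>

	/* two-branch split, as in my patch */
	static void split_a(long locked, long limit, long req,
			    long *extra, long *user_extra)
	{
		long user_locked = locked + req;

		*extra = 0;
		*user_extra = req;
		if (user_locked > limit) {
			if (user_locked - req >= limit) {
				*extra = req;
				*user_extra = 0;
			} else {
				*extra = user_locked - limit;
				*user_extra = req - *extra;
			}
		}
	}

	/* clamp-first split, as in your version */
	static void split_b(long locked, long limit, long req,
			    long *extra, long *user_extra)
	{
		long user_locked = (locked > limit ? limit : locked) + req;

		*extra = 0;
		*user_extra = req;
		if (user_locked > limit) {
			*extra = user_locked - limit;
			*user_extra = req - *extra;
		}
	}

	int main(void)
	{
		long ea, ua, eb, ub;

		for (long locked = 0; locked <= 300; locked++) {
			split_a(locked, 128, 16, &ea, &ua);
			split_b(locked, 128, 16, &eb, &ub);
			assert(ea == eb && ua == ub);
		}
		return 0;
	}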
Thanks,
Song