Date:   Tue, 21 Jan 2020 19:35:33 +0000
From:   Song Liu <songliubraving@...com>
To:     Alexander Shishkin <alexander.shishkin@...ux.intel.com>
CC:     linux-kernel <linux-kernel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>,
        Arnaldo Carvalho de Melo <acme@...hat.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()



> On Jan 20, 2020, at 12:24 AM, Alexander Shishkin <alexander.shishkin@...ux.intel.com> wrote:
> 
> Song Liu <songliubraving@...com> writes:
> 
>> sysctl_perf_event_mlock and user->locked_vm can change
>> independently of each other, so we can't guarantee:
>> 
>>    user->locked_vm <= user_lock_limit
> 
> This means: if the sysctl was decreased far enough that the existing
> locked_vm now exceeds it, we need to deal with the overflow, right?
> 
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a1f8bde19b56..89acdd1574ef 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>> 
>> 	if (user_locked > user_lock_limit) {
>> 		/*
>> -		 * charge locked_vm until it hits user_lock_limit;
>> -		 * charge the rest from pinned_vm
>> +		 * sysctl_perf_event_mlock and user->locked_vm can change
>> +		 * independently of each other, so we can't guarantee:
>> +		 *
>> +		 *    user->locked_vm <= user_lock_limit
>> +		 *
>> +		 * We need to be careful to make sure user_extra >= 0.
>> +		 *
>> +		 * Use "user_locked - user_extra" to avoid calling
>> +		 * atomic_long_read() again.
>> 		 */
>> -		extra = user_locked - user_lock_limit;
>> -		user_extra -= extra;
>> +		if (user_locked - user_extra >= user_lock_limit) {
>> +			/*
>> +			 * already used all of user_lock_limit; charge all
>> +			 * to pinned_vm
>> +			 */
>> +			extra = user_extra;
>> +			user_extra = 0;
>> +		} else {
>> +			/*
>> +			 * charge locked_vm until it hits user_lock_limit;
>> +			 * charge the rest from pinned_vm
>> +			 */
>> +			extra = user_locked - user_lock_limit;
>> +			user_extra -= extra;
>> +		}
> 
> How about the below for the sake of brevity?
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 763cf34b5a63..632505ce6c12 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5917,7 +5917,14 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> 	 */
> 	user_lock_limit *= num_online_cpus();
> 
> -	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> +	user_locked = atomic_long_read(&user->locked_vm);
> +	/*
> +	 * If perf_event_mlock has been lowered since earlier mmaps, so
> +	 * that it's now smaller than user->locked_vm, discard the overflow.
> +	 */
> +	if (user_locked > user_lock_limit)
> +		user_locked = user_lock_limit;
> +	user_locked += user_extra;
> 
> 	if (user_locked > user_lock_limit) {
> 		/*

Actually, I think this is cleaner. 

diff --git i/kernel/events/core.c w/kernel/events/core.c
index 2173c23c25b4..debd84fcf9cc 100644
--- i/kernel/events/core.c
+++ w/kernel/events/core.c
@@ -5916,14 +5916,18 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
         */
        user_lock_limit *= num_online_cpus();

-       user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+       user_locked = atomic_long_read(&user->locked_vm);

        if (user_locked > user_lock_limit) {
+               /* charge all to pinned_vm */
+               extra = user_extra;
+               user_extra = 0;
+       } else if (user_locked + user_extra > user_lock_limit) {
                /*
                 * charge locked_vm until it hits user_lock_limit;
                 * charge the rest from pinned_vm
                 */
-               extra = user_locked - user_lock_limit;
+               extra = user_locked + user_extra - user_lock_limit;
                user_extra -= extra;
        }
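
To spell out the three cases this handles, here is the same decision
pulled out as a stand-alone user-space sketch (the charge() helper and
the sample numbers below are made up for illustration; this is not the
kernel code):

#include <stdio.h>

/* Mirror of the charging decision in perf_mmap() above. */
static void charge(long locked_vm, long user_extra, long user_lock_limit)
{
	long extra = 0;

	if (locked_vm > user_lock_limit) {
		/* limit already exhausted: charge everything to pinned_vm */
		extra = user_extra;
		user_extra = 0;
	} else if (locked_vm + user_extra > user_lock_limit) {
		/* fill locked_vm up to the limit, the rest goes to pinned_vm */
		extra = locked_vm + user_extra - user_lock_limit;
		user_extra -= extra;
	}

	printf("locked_vm += %ld, pinned_vm += %ld\n", user_extra, extra);
}

int main(void)
{
	charge(10, 4, 8);	/* sysctl shrunk below locked_vm: 0 locked, 4 pinned */
	charge(6, 4, 8);	/* straddles the limit: 2 locked, 2 pinned */
	charge(2, 4, 8);	/* under the limit: 4 locked, 0 pinned */
	return 0;
}

In all three cases user_extra stays >= 0, which is the point of the
rework.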

Alexander, does this look good to you? 

Thanks,
Song
