Message-ID: <5733e508-f503-46cd-8874-d0c82355ae11@linux.ibm.com>
Date: Wed, 20 Nov 2024 23:40:03 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: mpe@...erman.id.au, linuxppc-dev@...ts.ozlabs.org, npiggin@...il.com,
        christophe.leroy@...roup.eu, maddy@...ux.ibm.com,
        ankur.a.arora@...cle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy
 preemption



On 11/20/24 13:30, Sebastian Andrzej Siewior wrote:
> On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
>> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
>> speed things up. Once the copy is done, it makes sense to try schedule
>> as soon as possible for preemptible kernels. So do this for
>> preempt=full/lazy and rt kernel.
>>
>> Not checking for lazy bit here, since it could lead to unnecessary
>> context switches.
>>
>> Suggested-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
>> Signed-off-by: Shrikanth Hegde <sshegde@...ux.ibm.com>
>> ---
>>   arch/powerpc/lib/vmx-helper.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
>> index d491da8d1838..58ed6bd613a6 100644
>> --- a/arch/powerpc/lib/vmx-helper.c
>> +++ b/arch/powerpc/lib/vmx-helper.c
>> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
>>   	 * set and we are preemptible. The hack here is to schedule a
>>   	 * decrementer to fire here and reschedule for us if necessary.
>>   	 */
>> -	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
>> +	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
>>   		set_dec(1);
> 
> Now looking at this again there is a comment why preempt_enable() is
> bad. An interrupt between preempt_enable_no_resched() and set_dec() is
> fine because irq-exit would preempt properly?

I think so. AFAIU the comment says the issue lies with the AMR register not being
saved across a context switch. interrupt_exit_kernel_prepare() saves it and restores
it using kuap_kernel_restore().
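
For context, the surrounding function (my rough paraphrase of
arch/powerpc/lib/vmx-helper.c with this patch applied, not a verbatim
quote) looks like:

int exit_vmx_usercopy(void)
{
	disable_kernel_altivec();
	pagefault_enable();
	/* Drop the preempt count without calling into schedule(). */
	preempt_enable_no_resched();
	/*
	 * Calling schedule() here (e.g. via preempt_enable()) would be
	 * wrong while kuap is unlocked, since the AMR register is not
	 * saved/restored across a context switch. Instead, arm the
	 * decrementer so the irq-exit path preempts us safely.
	 */
	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
		set_dec(1);
	return 0;
}

So the actual reschedule happens from the interrupt exit path, where
interrupt_exit_kernel_prepare() takes care of the AMR save/restore.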

> Regular preemption works
> again once copy_to_user() is done? So if you copy 1GiB, you are blocked
> for that 1GiB?


Yes, regular preemption would work again once copy_to_user() exits. Since
preempt_disable() was done before the copy started, I think yes, it would be
blocked until the copy completes.
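
The counterpart on entry, again roughly paraphrased and not verbatim,
is:

int enter_vmx_usercopy(void)
{
	if (in_interrupt())
		return 0;
	preempt_disable();
	/*
	 * Page faults must not schedule away while we hold the VMX
	 * context, so disable them; a fault falls back to the
	 * non-VMX copy path instead.
	 */
	pagefault_disable();
	enable_kernel_altivec();
	return 1;
}

So the whole copy, however large, runs with preemption disabled, and
the decrementer trick only helps once the copy itself has finished.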

> 
>>   	return 0;
>>   }
> 
> Sebastian

Nick, mpe: please correct me if I am wrong.

