Message-ID: <9f0dcdc6-121d-48a7-8abe-b2ce7acd0cdb@linux.ibm.com>
Date: Sat, 12 Oct 2024 00:05:19 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Ankur Arora <ankur.a.arora@...cle.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org, tglx@...utronix.de,
paulmck@...nel.org, mingo@...nel.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, frederic@...nel.org, efault@....de,
Michael Ellerman <mpe@...erman.id.au>
Subject: Re: [PATCH 7/7] powerpc: add support for PREEMPT_LAZY
On 10/10/24 23:40, Ankur Arora wrote:
>
> Sebastian Andrzej Siewior <bigeasy@...utronix.de> writes:
>
>> On 2024-10-09 09:54:11 [-0700], Ankur Arora wrote:
>>> From: Shrikanth Hegde <sshegde@...ux.ibm.com>
>>>
>>> Add PowerPC arch support for PREEMPT_LAZY by defining LAZY bits.
>>>
>>> Since PowerPC doesn't use generic exit to functions, check for
>>> NEED_RESCHED_LAZY when exiting to user or to the kernel from
>>> interrupt routines.
>>>
>>> Signed-off-by: Shrikanth Hegde <sshegde@...ux.ibm.com>
>>> [ Changed TIF_NEED_RESCHED_LAZY to now be defined unconditionally. ]
>>> Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
>>> ---
>>> arch/powerpc/Kconfig | 1 +
>>> arch/powerpc/include/asm/thread_info.h | 5 ++++-
>>> arch/powerpc/kernel/interrupt.c | 5 +++--
>>> 3 files changed, 8 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>>> index 8094a01974cc..593a1d60d443 100644
>>> --- a/arch/powerpc/Kconfig
>>> +++ b/arch/powerpc/Kconfig
>>> @@ -270,6 +270,7 @@ config PPC
>>> select HAVE_PERF_REGS
>>> select HAVE_PERF_USER_STACK_DUMP
>>> select HAVE_RETHOOK if KPROBES
>>> + select ARCH_HAS_PREEMPT_LAZY
>>> select HAVE_REGS_AND_STACK_ACCESS_API
>>> select HAVE_RELIABLE_STACKTRACE
>>> select HAVE_RSEQ
>>
>> I would move this up to the ARCH_HAS_ block.
>
> Makes sense.
>
>>> diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
>>> index 6ebca2996f18..ae7793dae763 100644
>>> --- a/arch/powerpc/include/asm/thread_info.h
>>> +++ b/arch/powerpc/include/asm/thread_info.h
>>> @@ -117,11 +117,14 @@ void arch_setup_new_exec(void);
>>> #endif
>>> #define TIF_POLLING_NRFLAG 19 /* true if poll_idle() is polling TIF_NEED_RESCHED */
>>> #define TIF_32BIT 20 /* 32 bit binary */
>>> +#define TIF_NEED_RESCHED_LAZY 21 /* Lazy rescheduling */
>>
>> I don't see any of the bits being used in assembly anymore.
>> If you group the _TIF_USER_WORK_MASK bits it a single 16 bit block then
>> the compiler could issue a single andi.
>
That's a good find. Since powerpc uses a fixed-width 4-byte ISA, the
compiler has to generate extra instructions to materialize
_TIF_USER_WORK_MASK once a flag sits above bit 15. Looked at the
objdump; it indeed does.
I see that value 9 isn't being used. It was last used for TIF_NOHZ,
which has since been removed. That value could be reused for
RESCHED_LAZY. With that value, the generated code is similar to what
we have now.
+mpe
Ankur, could you please change the value to 9?
---------------------------------------------------------------------
I see that with value 9, a single andi. is used again:
254: 80 00 dc eb ld r30,128(r28)
258: 4e 62 c9 73 andi. r9,r30,25166
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -103,6 +103,7 @@ void arch_setup_new_exec(void);
#define TIF_PATCH_PENDING 6 /* pending live patching update */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SINGLESTEP 8 /* singlestepping active */
+#define TIF_NEED_RESCHED_LAZY 9 /* Lazy rescheduling */
#define TIF_SECCOMP 10 /* secure computing */
#define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */
#define TIF_NOERROR 12 /* Force successful syscall return */
@@ -117,7 +118,6 @@ void arch_setup_new_exec(void);
#endif
#define TIF_POLLING_NRFLAG 19 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_32BIT 20 /* 32 bit binary */
-#define TIF_NEED_RESCHED_LAZY 21 /* Lazy rescheduling */