Message-ID: <1a973dda-c79e-4d95-935b-e4b93eb077b8@linux.ibm.com>
Date: Mon, 12 Aug 2024 23:02:19 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: Michael Ellerman <mpe@...erman.id.au>, tglx@...utronix.de,
peterz@...radead.org, torvalds@...ux-foundation.org,
paulmck@...nel.org, rostedt@...dmis.org, mark.rutland@....com,
juri.lelli@...hat.com, joel@...lfernandes.org, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
LKML <linux-kernel@...r.kernel.org>,
Nicholas Piggin <npiggin@...il.com>
Subject: Re: [PATCH v2 00/35] PREEMPT_AUTO: support lazy rescheduling
On 7/3/24 10:57, Ankur Arora wrote:
>
> Shrikanth Hegde <sshegde@...ux.ibm.com> writes:
>
Hi.
Sorry for the delayed response.
I could see this hackbench pipe regression with a preempt=full kernel on 6.10-rc as well, i.e. even without PREEMPT_AUTO.
There seem to be more wakeups in the read path, which implies the pipe was more often empty. Correspondingly there is
more contention on the pipe mutex with preempt=full. Why that is, I am not sure yet. One difference on powerpc is the
page size, but the pipe isn't getting full here; it is not the write side that is blocked.
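Something along these lines can be used to count the reader wakeups issued from pipe_write and the pipe mutex
slow-path entries (a minimal sketch for illustration, not necessarily what was run here; the __mutex_lock* wildcard
is meant to also catch the __mutex_lock.constprop.0 variant seen in the stacks below):

  bpftrace -e '
      // wakeups of sleeping readers; the stack shows how many are issued from pipe_write
      kprobe:__wake_up_sync_key /comm == "hackbench"/ { @wakeups[kstack(4)] = count(); }
      // pipe mutex acquisitions that fall into the mutex slow path
      kprobe:__mutex_lock* /comm == "hackbench"/ { @mutex_slow[kstack(4)] = count(); }
  '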
preempt=none: Time taken for 20 groups in seconds : 25.70
preempt=full: Time taken for 20 groups in seconds : 54.56
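For reference, the above corresponds to a pipe-mode hackbench run with 20 groups, roughly:

  hackbench -p -g 20 -l <loops>    # assumed invocation; the actual loop count is not shown here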
----------------
hackbench (pipe)
----------------
Top call stacks of __schedule, collected with bpftrace.
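A one-liner along these lines produces this kind of per-stack count map (a minimal sketch, not necessarily the
exact script used):

  bpftrace -e 'kprobe:__schedule { @[kstack, comm] = count(); }'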
preempt=none                            preempt=full
__schedule+12                         |@[
schedule+64                           | __schedule+12
interrupt_exit_user_prepare_main+600  | preempt_schedule+84
interrupt_exit_user_prepare+88        | _raw_spin_unlock_irqrestore+124
interrupt_return_srr_user+8           | __wake_up_sync_key+108
, hackbench]: 482228                  | pipe_write+1772
@[                                    | vfs_write+1052
__schedule+12                         | ksys_write+248
schedule+64                           | system_call_exception+296
pipe_write+1452                       | system_call_vectored_common+348
vfs_write+940                         |, hackbench]: 538591
ksys_write+248                        |@[
system_call_exception+292             | __schedule+12
system_call_vectored_common+348       | schedule+76
, hackbench]: 1427161                 | schedule_preempt_disabled+52
@[                                    | __mutex_lock.constprop.0+1748
__schedule+12                         | pipe_write+132
schedule+64                           | vfs_write+1052
interrupt_exit_user_prepare_main+600  | ksys_write+248
syscall_exit_prepare+336              | system_call_exception+296
system_call_vectored_common+360       | system_call_vectored_common+348
, hackbench]: 8151309                 |, hackbench]: 5388301
@[                                    |@[
__schedule+12                         | __schedule+12
schedule+64                           | schedule+76
pipe_read+1100                        | pipe_read+1100
vfs_read+716                          | vfs_read+716
ksys_read+252                         | ksys_read+252
system_call_exception+292             | system_call_exception+296
system_call_vectored_common+348       | system_call_vectored_common+348
, hackbench]: 18132753                |, hackbench]: 64424110
--------------------------------------------
hackbench (messaging) - one that uses sockets
--------------------------------------------
Here there is no regression with preempt=full.
preempt=none: Time taken for 20 groups in seconds : 55.51
preempt=full: Time taken for 20 groups in seconds : 55.10
Similar bpftrace data was collected for the socket-based hackbench; the highest callers of __schedule don't change much.
preempt=none                            preempt=full
                                      | __schedule+12
                                      | preempt_schedule+84
                                      | _raw_spin_unlock+108
@[                                    | unix_stream_sendmsg+660
__schedule+12                         | sock_write_iter+372
schedule+64                           | vfs_write+1052
schedule_timeout+412                  | ksys_write+248
sock_alloc_send_pskb+684              | system_call_exception+296
unix_stream_sendmsg+448               | system_call_vectored_common+348
sock_write_iter+372                   |, hackbench]: 819290
vfs_write+940                         |@[
ksys_write+248                        | __schedule+12
system_call_exception+292             | schedule+76
system_call_vectored_common+348       | schedule_timeout+476
, hackbench]: 3424197                 | sock_alloc_send_pskb+684
@[                                    | unix_stream_sendmsg+444
__schedule+12                         | sock_write_iter+372
schedule+64                           | vfs_write+1052
interrupt_exit_user_prepare_main+600  | ksys_write+248
syscall_exit_prepare+336              | system_call_exception+296
system_call_vectored_common+360       | system_call_vectored_common+348
, hackbench]: 9800144                 |, hackbench]: 3386594
@[                                    |@[
__schedule+12                         | __schedule+12
schedule+64                           | schedule+76
schedule_timeout+412                  | schedule_timeout+476
unix_stream_data_wait+528             | unix_stream_data_wait+468
unix_stream_read_generic+872          | unix_stream_read_generic+804
unix_stream_recvmsg+196               | unix_stream_recvmsg+196
sock_recvmsg+164                      | sock_recvmsg+156
sock_read_iter+200                    | sock_read_iter+200
vfs_read+716                          | vfs_read+716
ksys_read+252                         | ksys_read+252
system_call_exception+292             | system_call_exception+296
system_call_vectored_common+348       | system_call_vectored_common+348
, hackbench]: 25375142                |, hackbench]: 27275685