Message-ID: <aDqs0_c_vq96EWW6@gmail.com>
Date: Sat, 31 May 2025 09:16:35 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Bo Li <libo.gcs85@...edance.com>
Cc: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, luto@...nel.org,
kees@...nel.org, akpm@...ux-foundation.org, david@...hat.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
peterz@...radead.org, dietmar.eggemann@....com, hpa@...or.com,
acme@...nel.org, namhyung@...nel.org, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
irogers@...gle.com, adrian.hunter@...el.com,
kan.liang@...ux.intel.com, viro@...iv.linux.org.uk,
brauner@...nel.org, jack@...e.cz, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org,
surenb@...gle.com, mhocko@...e.com, rostedt@...dmis.org,
bsegall@...gle.com, mgorman@...e.de, vschneid@...hat.com,
jannh@...gle.com, pfalcato@...e.de, riel@...riel.com,
harry.yoo@...cle.com, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, duanxiongchun@...edance.com,
yinhongbo@...edance.com, dengliang.1214@...edance.com,
xieyongji@...edance.com, chaiwen.cc@...edance.com,
songmuchun@...edance.com, yuanzhu@...edance.com,
chengguozhu@...edance.com, sunjiadong.lff@...edance.com
Subject: Re: [RFC v2 00/35] optimize cost of inter-process communication

* Bo Li <libo.gcs85@...edance.com> wrote:

> # Performance
>
> To quantify the performance improvements driven by RPAL, we measured
> latency both before and after its deployment. Experiments were
> conducted on a server equipped with two Intel(R) Xeon(R) Platinum
> 8336C CPUs (2.30 GHz) and 1 TB of memory. Latency was defined as the
> duration from when the client thread initiates a message to when the
> server thread is invoked and receives it.
>
> During testing, the client transmitted 1 million 32-byte messages, and we
> computed the per-message average latency. The results are as follows:
>
> *****************
> Without RPAL: Message length: 32 bytes, Total TSC cycles: 19616222534,
> Message count: 1000000, Average latency: 19616 cycles
> With RPAL: Message length: 32 bytes, Total TSC cycles: 1703459326,
> Message count: 1000000, Average latency: 1703 cycles
> *****************
>
> These results confirm that RPAL delivers substantial latency
> improvements over the current epoll implementation—achieving a
> 17,913-cycle reduction (an ~91.3% improvement) for 32-byte messages.

No, these results do not necessarily confirm that.

19,616 cycles per message on a vanilla kernel on a 2.3 GHz CPU suggests
a messaging performance of 117k messages/second or 8.5 usecs/message,
which is *way* beyond typical kernel interprocess communication
latencies on comparable CPUs:

root@localhost:~# taskset 1 perf bench sched pipe
# Running 'sched/pipe' benchmark:
# Executed 1000000 pipe operations between two processes
Total time: 2.790 [sec]
2.790614 usecs/op
358344 ops/sec

And my 2.8 usecs result was from a kernel running inside a KVM sandbox
...

( I used 'taskset' to bind the benchmark to a single CPU, to remove any
inter-CPU migration noise from the measurement. )
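
To spell out the arithmetic I'm using above (a throwaway sketch; it
assumes the TSC ticks at the nominal 2.30 GHz, which may not be exact
on that box):

  /* cycles_to_latency.c: convert the quoted per-message cycle count
   * into usecs/message and messages/second.
   */
  #include <stdio.h>

  int main(void)
  {
          const double tsc_hz = 2.3e9;           /* 2.30 GHz Xeon 8336C */
          const double cycles_per_msg = 19616.0; /* the "Without RPAL" number */

          printf("%.1f usecs/message, %.0fk messages/second\n",
                 cycles_per_msg / tsc_hz * 1e6,
                 tsc_hz / cycles_per_msg / 1000.0);
          /* prints: 8.5 usecs/message, 117k messages/second */
          return 0;
  }
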
The scheduler parts of your series simply try to remove much of the
scheduler and context switching functionality to create a special
fast-path with no FPU context switching and TLB flushing (AFAICS),
essentially for the purposes of message latency benchmarking, and you
then compare that against the full scheduling and MM context switching
costs of full-blown Linux processes.
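
For reference, the full-process baseline I'm referring to is roughly
what a trivial pipe ping-pong exercises. A rough, illustrative sketch
(assuming x86-64 with a constant TSC; this is not the perf bench
source, just the same idea spelled out):

  /* pipe_pingpong.c: two processes bouncing 32-byte messages over
   * pipes, with full scheduler + MM context switches on every hop.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/wait.h>
  #include <x86intrin.h>                 /* __rdtsc() */

  #define LOOPS   1000000
  #define MSG_LEN 32

  int main(void)
  {
          int p1[2], p2[2];
          char buf[MSG_LEN] = { 0 };

          if (pipe(p1) || pipe(p2))
                  return 1;

          if (fork() == 0) {
                  /* child: echo every message straight back */
                  for (int i = 0; i < LOOPS; i++) {
                          if (read(p1[0], buf, MSG_LEN) != MSG_LEN ||
                              write(p2[1], buf, MSG_LEN) != MSG_LEN)
                                  exit(1);
                  }
                  exit(0);
          }

          unsigned long long start = __rdtsc();
          for (int i = 0; i < LOOPS; i++) {
                  if (write(p1[1], buf, MSG_LEN) != MSG_LEN ||
                      read(p2[0], buf, MSG_LEN) != MSG_LEN)
                          return 1;
          }
          unsigned long long cycles = __rdtsc() - start;

          wait(NULL);
          /* one loop iteration is a full round trip, i.e. two one-way
           * messages and two context-switch pairs */
          printf("avg: %llu cycles per round trip\n", cycles / LOOPS);
          return 0;
  }

( Error handling and CPU pinning are omitted; in practice one would run
  it under 'taskset' as above. )
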
I'm not convinced, at all, that this many changes are required to speed
up the usecase you are trying to optimize:

> 61 files changed, 9710 insertions(+), 4 deletions(-)

Nor am I convinced that 9,700 lines of *new* code of a parallel
facility are needed, crudely wrapped in 1970s technology (#ifdefs),
instead of optimizing/improving facilities we already have...

So NAK for the scheduler bits, until proven otherwise (and presented in
a clean fashion, which the current series is very far from).

I'll be the first one to acknowledge that our process and MM context
switching overhead is too high and could be improved, and I have no
objections against the general goal of improving Linux inter-process
messaging performance either; I only NAK this particular
implementation/approach.

Thanks,

	Ingo