Message-ID: <CAMJ=MEc=bzSroEnEyWnVGz9-XkQAs1Wa5pdBNDSE-=PknEw8Aw@mail.gmail.com>
Date:   Wed, 23 Sep 2020 20:39:35 +0200
From:   Ronny Meeus <ronny.meeus@...il.com>
To:     linux-kernel@...r.kernel.org
Cc:     Ronny Meeus <ronny.meeus@...il.com>
Subject: Kernel work in user context

Hello

I have a system running on a 2-core device that does packet
processing. The kernel version is 4.9.
Two applications communicate with each other via IPC: one receives
packets from an Ethernet interface and, after some preprocessing,
forwards them to the other application. Alongside these 2
applications a lot of other applications are running, so the system
is under high load.

Depending on the IPC mechanism I use between the 2 apps, I see a
completely different CPU load pattern.

The total load consumed by the 2 applications when using stream-based
Unix domain sockets (UDS) is 130% (200% = full load on the 2 cores).
The load consumed when using POSIX message queues is only 90%.
Apart from some IPC implementation details, the logic is identical in
the 2 tests, so I cannot explain the 40% difference in total load.
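
A simplified sketch of the two receive paths being compared
(illustrative names only; error handling, the socket/queue setup and
the real message layout are omitted; link with -lrt for the mqueue
calls):

#include <mqueue.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <time.h>
#include <unistd.h>

#define MSG_SIZE 2048   /* assumed to be >= the queue's mq_msgsize */

/* Variant 1: stream Unix domain socket; fd comes from
 * socket(AF_UNIX, SOCK_STREAM, 0) plus connect(). */
static ssize_t uds_receive(int fd, char *buf)
{
        return read(fd, buf, MSG_SIZE);
}

/* Variant 2: POSIX message queue with a 1 s timeout; mq comes
 * from mq_open(). */
static ssize_t mq_receive_timed(mqd_t mq, char *buf)
{
        struct timespec ts;

        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += 1;
        return mq_timedreceive(mq, buf, MSG_SIZE, NULL, &ts);
}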

I did some monitoring with top and in the /proc/<pid>/stat files of
the 2 applications, and I have the impression that in the UDS case a
lot more system processing is done in the application threads. I'm
referring to the "stime" field in the stat file.
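
For reference, utime and stime are fields 14 and 15 of
/proc/<pid>/stat, in clock ticks of sysconf(_SC_CLK_TCK). A rough
sketch of how to sample them (parsing starts after the last ')' so a
comm containing spaces does not shift the fields):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static int read_times(pid_t pid, unsigned long *utime,
                      unsigned long *stime)
{
        char path[64], buf[1024], *p;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
        f = fopen(path, "r");
        if (!f)
                return -1;
        p = fgets(buf, sizeof(buf), f) ? strrchr(buf, ')') : NULL;
        fclose(f);
        if (!p)                 /* skip "pid (comm" */
                return -1;
        /* state + 5 ints + 5 unsigned fields precede utime/stime */
        if (sscanf(p + 2,
                   "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
                   utime, stime) != 2)
                return -1;
        return 0;
}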

When I read "kernel-hacking/hacking.html" in the Linux documentation I see:
"Whenever a system call is about to return to userspace, or a hardware
interrupt handler exits, any ‘software interrupts’ which are marked
pending (usually by hardware interrupts) are run (kernel/softirq.c)."

Could it be that this happens for the read/write system calls used
by UDS, while it does not for the mqueue system calls
(mq_timedsend/mq_timedreceive)?
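
One way to check this (a sketch; it reuses the helper names from the
fragment above, and RUSAGE_THREAD is Linux-specific): wrap each
receive call with getrusage(RUSAGE_THREAD) and compare the per-thread
system-time delta between the two variants.

#define _GNU_SOURCE     /* for RUSAGE_THREAD */
#include <sys/resource.h>

static long long thread_stime_us(void)
{
        struct rusage ru;

        getrusage(RUSAGE_THREAD, &ru);
        return (long long)ru.ru_stime.tv_sec * 1000000LL +
               ru.ru_stime.tv_usec;
}

/* Around each receive call in the test loop:
 *
 *      long long before = thread_stime_us();
 *      n = uds_receive(fd, buf);     // or mq_receive_timed(mq, buf)
 *      stime_us += thread_stime_us() - before;
 *
 * If the UDS variant accumulates much more stime per message, that
 * would suggest work (e.g. softirqs) being run in the caller's
 * context on the syscall return path.
 */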

BTW, the system is running with "threadirqs" enabled, so that the
interrupt handling work is done in dedicated kernel threads.

Thanks

Best regards,
Ronny
