Message-ID: <60C4F187.3050808@gmail.com>
Date: Sat, 12 Jun 2021 20:40:23 +0300
From: Nikolai Zhubr <zhubr.2@...il.com>
To: Arnd Bergmann <arnd@...nel.org>
CC: netdev <netdev@...r.kernel.org>
Subject: Re: Realtek 8139 problem on 486.
Hi Arnd,
On 09.06.2021 10:09, Arnd Bergmann wrote:
[...]
> If it's only a bit slower, that is not surprising, I'd expect it to
> use fewer CPU
> cycles though, as it avoids the expensive polling.
>
> There are a couple of things you could do to make it faster without reducing
> reliability, but I wouldn't recommend major surgery on this driver, I was just
> going for the simplest change that would make it work right with broken
> IRQ settings.
>
> You could play around a little with the order in which you process events:
> doing RX first would help free up buffer space in the card earlier, possibly
> alternating between TX and RX one buffer at a time, or processing both
> in a loop until the budget runs out would also help.
I've modified your patch so that several approaches can be quickly tested
within a single file by switching a few conditional defines.
My diff against 4.14 is here:
https://pastebin.com/mgpLPciE
The tests were performed using a simple shell script:
https://pastebin.com/Vfr8JC3X
Each cell in the resulting table shows:
- TCP sender/receiver throughput (Mbit/s) as reported by iperf3 (total)
- UDP sender/receiver throughput (Mbit/s) as reported by iperf3 (total)
- accumulated CPU utilization during the tcp+udp test.
The first line in the table essentially corresponds to a standard
unmodified kernel. The second line corresponds to your initially
proposed approach.
All tests were run with the same physical 8139D card against the same
server.
(The table is best viewed in a monospace font.)
+-------------------+-------------+-----------+-----------+
| #Defines ; i486dx2/66 ; Pentium3/ ; PentiumE/ |
| ; (Edge IRQ) ; 1200 ; Dual 2600 |
+-------------------+-------------+-----------+-----------+
| TX_WORK_IN_IRQ 1 ; ; tcp 86/86 ; tcp 94/94 |
| TX_WORK_IN_POLL 0 ; (fails) ; udp 96/96 ; udp 96/96 |
| LOOP_IN_IRQ 0 ; ; cpu 59% ; cpu 15% |
| LOOP_IN_POLL 0 ; ; ; |
+-------------------+-------------+-----------+-----------+
| TX_WORK_IN_IRQ 0 ; tcp 9.4/9.1 ; tcp 88/88 ; tcp 95/94 |
| TX_WORK_IN_POLL 1 ; udp 5.5/5.5 ; udp 96/96 ; udp 96/96 |
| LOOP_IN_IRQ 0 ; cpu 98% ; cpu 55% ; cpu 19% |
| LOOP_IN_POLL 0 ; ; ; |
+-------------------+-------------+-----------+-----------+
| TX_WORK_IN_IRQ 0 ; tcp 9.0/8.7 ; tcp 87/87 ; tcp 95/94 |
| TX_WORK_IN_POLL 1 ; udp 5.8/5.8 ; udp 96/96 ; udp 96/96 |
| LOOP_IN_IRQ 0 ; cpu 98% ; cpu 58% ; cpu 20% |
| LOOP_IN_POLL 1 ; ; ; |
+-------------------+-------------+-----------+-----------+
| TX_WORK_IN_IRQ 1 ; tcp 7.3/7.3 ; tcp 87/86 ; tcp 94/94 |
| TX_WORK_IN_POLL 0 ; udp 6.2/6.2 ; udp 96/96 ; udp 96/96 |
| LOOP_IN_IRQ 1 ; cpu 99% ; cpu 57% ; cpu 17% |
| LOOP_IN_POLL 0 ; ; ; |
+-------------------+-------------+-----------+-----------+
| TX_WORK_IN_IRQ 1 ; tcp 6.5/6.5 ; tcp 88/88 ; tcp 94/94 |
| TX_WORK_IN_POLL 1 ; udp 6.1/6.1 ; udp 96/96 ; udp 96/96 |
| LOOP_IN_IRQ 1 ; cpu 99% ; cpu 55% ; cpu 16% |
| LOOP_IN_POLL 1 ; ; ; |
+-------------------+-------------+-----------+-----------+
| TX_WORK_IN_IRQ 1 ; tcp 5.7/5.7 ; tcp 87/87 ; tcp 95/94 |
| TX_WORK_IN_POLL 1 ; udp 6.1/6.1 ; udp 96/96 ; udp 96/96 |
| LOOP_IN_IRQ 1 ; cpu 98% ; cpu 56% ; cpu 15% |
| LOOP_IN_POLL 0 ; ; ; |
+-------------------+-------------+-----------+-----------+
Hopefully this helps to choose the most beneficial approach.
Thank you,
Regards,
Nikolai
>
> Arnd
>