Message-ID: <VI1PR0402MB3600C15E60CB9436DFB59FCFFFDA0@VI1PR0402MB3600.eurprd04.prod.outlook.com>
Date:   Tue, 14 Apr 2020 05:12:00 +0000
From:   Andy Duan <fugang.duan@....com>
To:     Andrew Lunn <andrew@...n.ch>
CC:     David Miller <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        Chris Healy <Chris.Healy@....aero>,
        Chris Heally <cphealy@...il.com>
Subject: RE: [EXT] [PATCH] net: ethernet: fec: Replace interrupt driven MDIO
 with polled IO

From: Andrew Lunn <andrew@...n.ch> Sent: Tuesday, April 14, 2020 11:49 AM
> On Tue, Apr 14, 2020 at 03:07:09AM +0000, Andy Duan wrote:
> > From: Andrew Lunn <andrew@...n.ch> Sent: Tuesday, April 14, 2020 8:46
> > AM
> > > Measurements of the MDIO bus have shown that driving the MDIO bus
> > > using interrupts is slow. Back-to-back MDIO transactions take about
> > > 90uS, with 25uS spent performing the transaction, and the remainder
> > > of the time the bus is idle.
> > >
> > > Replacing the completion interrupt with polled IO results in
> > > back-to-back transactions of 40uS. The polling loop waiting for the
> > > hardware to complete the transaction takes around 27uS, which
> > > suggests interrupt handling has an overhead of 50uS; polled IO
> > > nearly halves this overhead and doubles the MDIO performance.
> > >
> >
> > Although the MDIO performance is better, polling IO by reading
> > registers puts a heavier load on the system/bus.
> 
> Hi Andy
> 
> I actually think it reduces the system bus load. With interrupts we have
> 27uS waiting for the interrupt while the bus is idle, followed by 63uS in
> which the CPU is busy handling the interrupt and setting up the next
> transfer, which will cause the bus to be loaded. So the system bus is busy
> for 63uS per transaction. With polled IO, yes, the system bus is busy for
> 27uS polling while the transaction happens, and then another 13uS setting
> up the next transaction. But in total, that is only 40uS.
>
We cannot calculate the bus loading from these numbers alone. As you know,
the interrupt handler may spend many instructions on cacheable memory
accesses, but IO register accesses are non-cacheable, which puts a heavy
load on the AIPS bus when there is a flood of MDIO read/write operations.
But I don't deny your conclusion that the system bus is busier in interrupt
mode than in polling mode.

> So with interrupts we have 63uS of load per transaction, vs 40uS of load per
> transaction for polled IO. Polled IO is better for the bus.
If we switch to polling mode, it is better to add a usleep or cpu_relax
between IO polls.
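
For example, something along these lines (only a sketch: FEC_IEVENT and
FEC_ENET_MII are the event register/bit names the fec driver already uses,
but the helper name and the 5uS/100uS sleep/timeout values are just
illustrative), so the poll sleeps between register reads instead of
hammering the bus:

#include <linux/iopoll.h>

/* Sketch: wait for MDIO completion, sleeping a few uS between reads. */
static int fec_mdio_wait_complete(void __iomem *hwp)
{
	u32 ievent;

	/* readl_poll_timeout() uses usleep_range() between reads. */
	return readl_poll_timeout(hwp + FEC_IEVENT, ievent,
				  ievent & FEC_ENET_MII, 5, 100);
}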

> 
> I also have follow-up patches which allow the bus to be run at higher
> speeds. The Ethernet switch I have on the bus is happy to run at 5MHz
> rather than the default 2.5MHz.

Please keep your follow-up patches compatible with the default 2.5MHz.
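
The 2.5MHz default matches the IEEE 802.3 MDC limit. One way to keep it as
the default while still allowing a faster bus (purely a sketch, the helper
name is made up and this is not your actual follow-up patches) would be to
honour the generic MDIO "clock-frequency" device tree property and fall
back to 2.5MHz when it is absent:

#include <linux/of.h>

/*
 * Sketch only: default MDC to 2.5MHz and only go faster when the
 * device tree explicitly asks for it.
 */
static u32 fec_mdio_bus_freq(struct device_node *np)
{
	u32 bus_freq = 2500000;		/* IEEE 802.3 maximum MDC */

	/* Leaves bus_freq untouched if the property is missing. */
	of_property_read_u32(np, "clock-frequency", &bus_freq);

	return bus_freq;
}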

Thanks,
Andy 
