Date:	Wed, 12 Sep 2007 05:50:04 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	ossthema@...ibm.com
Cc:	shemminger@...ux-foundation.org, netdev@...r.kernel.org,
	themann@...ibm.com, raisch@...ibm.com
Subject: Re: new NAPI interface broken

From: Jan-Bernd Themann <ossthema@...ibm.com>
Date: Fri, 7 Sep 2007 11:37:02 +0200

> 2) On SMP systems: after netif_rx_complete has been called on CPU1
>    (+interrupts enabled), netif_rx_schedule could be called on CPU2
>    (irq handler) before net_rx_action on CPU1 has checked NAPI_STATE_SCHED. 
>    In that case the device would be added to poll lists of CPU1 and CPU2
>    as net_rx_action would see NAPI_STATE_SCHED set.
>    This must not happen. It will be caught when netif_rx_complete is
>    called the second time (BUG() called)
> 
> This would mean we have a problem on all SMP machines right now.

This is not a correct statement.

Only on your platform do network device interrupts get moved
between CPUs; no other platform does this.

Sparc64 doesn't: all interrupts stay in one location after
the cpu is initially chosen.

x86 and x86_64 specifically do not move network device
interrupts around, even though other device types do
get dynamic IRQ cpu distribution.

That's why you are the only person seeing this problem.

I agree that it should be fixed, but we should also fix the IRQ
distribution scheme used on powerpc platforms, which is totally
broken in these cases.