Message-Id: <20070828.132542.35016161.davem@davemloft.net>
Date:	Tue, 28 Aug 2007 13:25:42 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	ossthema@...ibm.com
Cc:	jchapman@...alix.com, shemminger@...ux-foundation.org,
	akepner@....com, netdev@...r.kernel.org, raisch@...ibm.com,
	themann@...ibm.com, linux-kernel@...r.kernel.org,
	linuxppc-dev@...abs.org, meder@...ibm.com, tklein@...ibm.com,
	stefan.roscher@...ibm.com
Subject: Re: RFC: issues concerning the next NAPI interface

From: Jan-Bernd Themann <ossthema@...ibm.com>
Date: Tue, 28 Aug 2007 13:21:09 +0200

> The problem for a multi-queue driver with the interrupt distribution scheme
> set to round robin, in this simple example:
> Assume we have 2 SLOW CPUs. CPU_1 is under heavy load (applications); CPU_2
> is not. Now we receive a lot of packets (a high-load situation).
> Receive queue 1 (RQ1) is scheduled on CPU_1. NAPI poll never manages to
> empty RQ1, so it stays on CPU_1. The second receive queue (RQ2) is scheduled
> on CPU_2. Since that CPU is not under heavy load, RQ2 can be emptied, and
> the next IRQ for RQ2 then goes (round robin) to CPU_1. Now both RQs are on
> CPU_1 and will stay there unless an IRQ is forced at some point, since
> neither RQ is ever emptied completely.
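
For context, the "never emptied" behaviour above falls out of the NAPI
poll/budget contract: a poll handler that exhausts its budget stays on the
poll list of the CPU it is already running on. A rough sketch against the
napi_struct-style interface (the my_queue / my_clean_rx / my_enable_rx_irq
names are made up for illustration, not taken from any real driver):

#include <linux/netdevice.h>

/* Hypothetical per-RX-queue state; only the embedded napi_struct
 * matters for this sketch. */
struct my_queue {
        struct napi_struct napi;
        /* ... RX ring state would live here ... */
};

/* Stub: pull up to 'budget' packets off the RX ring, return how many. */
static int my_clean_rx(struct my_queue *rq, int budget)
{
        return 0;
}

/* Stub: re-arm this queue's RX interrupt in hardware. */
static void my_enable_rx_irq(struct my_queue *rq)
{
}

static int my_poll(struct napi_struct *napi, int budget)
{
        struct my_queue *rq = container_of(napi, struct my_queue, napi);
        int work = my_clean_rx(rq, budget);

        if (work < budget) {
                /* Queue emptied: leave polling mode and re-enable the
                 * IRQ, so the next interrupt decides which CPU services
                 * this queue. */
                napi_complete(napi);
                my_enable_rx_irq(rq);
        }
        /* If work == budget the queue was NOT emptied: it stays on this
         * CPU's poll list and keeps getting polled here, which is why
         * RQ1 never leaves CPU_1 in the scenario above. */
        return work;
}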

Why would RQ2's IRQ move over to CPU_1?  That's stupid.

If you lock the IRQs to specific cpus, the load adjusts automatically,
because in high-load situations the packet work is processed by the
ksoftirqd daemons even when the packet is not destined for a user
application's socket.

In such a case the scheduler will move the user applications that can no
longer get their timeslices over to a less busy cpu.
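
For concreteness, "locking the IRQs to specific cpus" here just means writing
a fixed mask to /proc/irq/<N>/smp_affinity for each receive queue's interrupt
and then leaving it alone. A minimal userspace sketch; the IRQ numbers
(40, 41) and the masks are invented for the two-CPU example above:

#include <stdio.h>
#include <stdlib.h>

/* Pin one IRQ to the CPUs in 'mask' (hex bitmask, the format that
 * /proc/irq/<n>/smp_affinity expects).  Returns 0 on success. */
static int pin_irq(unsigned int irq, unsigned int mask)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%x\n", mask);
        return fclose(f);
}

int main(void)
{
        /* Invented IRQ numbers: 40 for RQ1, 41 for RQ2.  Pin RQ1 to
         * CPU_1 (mask 0x1) and RQ2 to CPU_2 (mask 0x2) and leave them
         * there; the scheduler then moves user tasks away from whichever
         * CPU is saturated with packet work. */
        if (pin_irq(40, 0x1) || pin_irq(41, 0x2)) {
                perror("smp_affinity");
                return EXIT_FAILURE;
        }
        return 0;
}

(This needs root, and the real IRQ numbers for a given NIC's queues come
from /proc/interrupts on the box in question.)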

If you keep moving the interrupts around, the scheduler cannot react
properly and it makes the situation worse.
