Date:	Sun, 01 Jul 2007 18:05:12 -0400
From:	Chris Snook <csnook@...hat.com>
To:	Krzysztof Oledzki <olel@....pl>
CC:	Arjan van de Ven <arjan@...radead.org>,
	linux-kernel@...r.kernel.org
Subject: Re: IRQ handling difference between i386 and x86_64

Krzysztof Oledzki wrote:
> 
> 
> On Sat, 30 Jun 2007, Arjan van de Ven wrote:
> 
>> On Sat, 2007-06-30 at 16:55 +0200, Krzysztof Oledzki wrote:
>>> Hello,
>>>
>>> It seems that IRQ handling is somehow different between i386 and x86_64.
>>>
>>> On my Dell PowerEdge 1950 it is possible to enable spreading
>>> interrupts over all CPUs. This is a single-CPU, four-core system
>>> (Quad-Core E5335 Xeon), so I think that interrupt migration may be
>>> useful. Unfortunately, it works only with a 32-bit kernel. Booting
>>> an x86_64 kernel leads to a situation where all interrupts go only
>>> to the first CPU matching the smp_affinity mask.
>>
>> arguably that is the most efficient behavior... round robin of
>> interrupts is the worst possible case in terms of performance
> 
> Even on dual/quad-core CPUs with a shared cache? So why is it possible 
> to enable such behaviour in the BIOS, which works only on i386 BTW? :(
> 
>> are you using irqbalance? (www.irqbalance.org)
> 
> Yes, I'm aware of this useful tool, but in some situations (routing, 
> for example) it cannot help much, as it keeps three CPUs idle. :(
> 
> Best regards,
> 
>                 Krzysztof Olędzki

Interleaving interrupt delivery will completely break TCP header 
prediction, and cost you far more CPU time than it will save.  In fact, 
because of the locking, it will probably scale negatively with the 
number of CPUs, if your workload is mostly TCP/IP processing.  The way 
around this is to ensure that the packets for any given TCP socket are 
all delivered to the same processor.  If you have multiple NICs and use 
802.3ad bonding with layer3+4 hashing, header prediction will work fine, 
and you don't have to disable irqbalance, because it will do the right 
thing.
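
Roughly, the layer3+4 policy hashes the source/destination addresses and 
ports of each flow to pick an output slave, so every packet of a given 
TCP connection keeps hitting the same NIC (and hence the same CPU).  The 
snippet below is only a simplified sketch of that idea, not the bonding 
driver's actual code; the function name and the example addresses, ports 
and slave count are made up for illustration:

	#include <stdio.h>
	#include <stdint.h>

	/* Simplified layer3+4 flow hash: same 4-tuple -> same slave. */
	static unsigned int l34_hash(uint32_t saddr, uint32_t daddr,
	                             uint16_t sport, uint16_t dport,
	                             unsigned int nslaves)
	{
		uint32_t h = ((uint32_t)sport ^ (uint32_t)dport) ^
		             ((saddr ^ daddr) & 0xffff);
		return h % nslaves;	/* stable per-flow slave selection */
	}

	int main(void)
	{
		/* Hypothetical flow (192.168.0.1:43210 -> 192.168.0.2:80)
		 * over a bond with four slaves. */
		printf("slave %u\n",
		       l34_hash(0xc0a80001, 0xc0a80002, 43210, 80, 4));
		return 0;
	}

With the stock bonding driver this corresponds to loading it with 
something like mode=802.3ad and xmit_hash_policy=layer3+4 in the module 
options.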

	-- Chris
