Date:	Fri, 30 Jan 2009 02:42:51 -0500
From:	Bill Fink <billfink@...dspring.com>
To:	Vladimir Kukushkin <v.kukushkin@...il.com>
Cc:	netdev@...r.kernel.org
Subject:	Re: How to use Multiqueue Rx on SMP?

On Thu, 29 Jan 2009, Vladimir Kukushkin wrote:

> I need advice on multiqueue Rx in an SMP environment.
> 
> We are building a high-performance analyzer on an SMP system. The system
> includes 1Gb multiqueue Ethernet cards, say the Intel PRO/1000 series,
> and we use a multiqueue-capable Ethernet driver such as igb. The system
> also has an additional Ethernet card for control traffic (ssh, etc.).
> The most important requirements for the system are:
> - high Rx performance
> - reliable responsiveness under any load
> 
> Could you critique the approach described below?
> Would Rx throughput be improved by using a multiqueue Rx card/driver on SMP?
> Would the system stay responsive if we separate Rx traffic handling
> from other processes by using dedicated CPUs?
> 
> Principles:
> 
> 1) Separate the CPUs into two groups: dedicated CPUs and the rest.
> Critical processes (in user and kernel space) that receive and analyze
> the data are executed on the dedicated CPUs, while system services and
> other applications run on the remaining CPUs.
> To start user-space tasks on selected CPUs I use the "taskset" command.
> To set the default process affinity I use the "default_affinity" kernel
> boot parameter.
> 
> 2) Rx interrupts are processed on the dedicated CPUs using the
> smp_affinity feature. I configure this by writing the "smp_affinity"
> mask in /proc/irq/XX/smp_affinity
> 
> 3) The irqbalance daemon has to be configured to ignore the dedicated
> CPUs. I set the IRQ_AFFINITY_MASK mask in the config file
> /etc/sysconfig/irqbalance.
> Note: irqbalance may also be switched off entirely (or should it be?)
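
If the data NICs are the only heavy interrupt sources, switching
irqbalance off entirely may be simpler than banning CPUs; on a
sysconfig-style distro with SysV init, something like:

$ service irqbalance stop
$ chkconfig irqbalance off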
> 
> Example:
> -------------
> 
> Assume we have 4 CPUs - cpu0..cpu3 - and 2 NICs for data:
>   eth1 on irq=18, and
>   eth2 on irq=223.
> We also use 1 more NIC for system control:
>   eth0 on irq=44
> 
> We want to use the CPUs in the following manner:
> cpu0 - handles eth1
> cpu1 - handles eth2
> cpu0 and cpu1 also run our data analysis application.
> 
> cpu2 and cpu3 - handle eth0, system services and other applications
> 
>  Setup (NB: bit masks)
> =================
> 
> 1) Boot the kernel with a default affinity mask so that all processes
> are executed on cpu2 and cpu3 by default,
> i.e. add the "default_affinity" mask to the kernel line in grub.conf
> (or the lilo config, as you prefer):
> 
> kernel /boot/linux... default_affinity=0xC
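
One way to sanity-check this after boot is to read back the affinity of
an already-running process, e.g. init; a quick check, assuming the
"default_affinity" parameter behaves as intended:

$ taskset -p 1
pid 1's current affinity mask: c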
> 
> 2) Set SMP affinity for NICs
> 
> $ echo 0x1 > /proc/irq/18/smp_affinity   # eth1 (irq=18) on cpu0
> $ echo 0x2 > /proc/irq/223/smp_affinity  # eth2 (irq=223) on cpu1
> $ echo 0xC > /proc/irq/44/smp_affinity   # eth0 (irq=44) on cpu2 and cpu3
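
The new masks only take effect when the next interrupt arrives, so it is
worth reading them back and watching the per-CPU counters; for example:

$ cat /proc/irq/18/smp_affinity        # only cpu0's bit should be set
$ grep 'eth1\|eth2' /proc/interrupts   # counters should now grow on cpu0/cpu1 only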
> 
> 3) Hide cpu0 and cpu1 from irqbalance (mask 0xC = 1100)
> 
> i.e. set IRQ_AFFINITY_MASK=0xC (1100) in the file /etc/sysconfig/irqbalance
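
If you keep irqbalance running, restart it so that it picks up the new
mask; on a sysconfig-style distro something like:

$ service irqbalance restart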
> 
> 4) Run the analyzer application on cpu0 and cpu1 (mask 0x3 = 0011)
> 
> $ taskset 0x3 <analyzer app>
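
taskset also accepts a CPU list and can retarget already-running
processes, which is handy if the analyzer forks workers; equivalent
forms:

$ taskset -c 0,1 <analyzer app>   # same as mask 0x3
$ taskset -p 0x3 <pid>            # move a running process by pid

Note that children inherit their parent's affinity, so workers spawned
by the analyzer stay on cpu0 and cpu1 automatically.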

You might also want to force disk interrupts to cpus 2 and 3.
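
A sketch of how that might look, assuming the disk controller shows up
in /proc/interrupts (the exact name - libata, ahci, ide, etc. - depends
on the hardware):

$ grep -i 'ata\|scsi\|ide' /proc/interrupts   # find the disk controller's IRQ
$ echo 0xC > /proc/irq/<N>/smp_affinity       # pin it to cpu2 and cpu3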

						-Bill