Date:	Fri, 16 Dec 2011 12:30:23 -0600 (CST)
From:	Christoph Lameter <cl@...ux.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
cc:	igorm@....rs, netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH 00/10 net-next] Introduce per interface ipv4 statistics

On Fri, 16 Dec 2011, Eric Dumazet wrote:

> On Friday, 16 December 2011 at 11:29 -0600, Christoph Lameter wrote:
>
> > I have some latency critical processes here, and I wish I could get
> > networking in general off the processor where the latency sensitive work
> > is running, in case that process decides to make calls that cause network
> > I/O.
> >
> > Traditional networking is a slow process these days.
>
>
> Most of the slowness is actually in the process scheduler and in cache
> line misses on large TCP structures, but also in the many layers (Qdisc...),
> not counting the icache footprint.
>
> We slowly improve things, but it's always a tradeoff (did I say code
> bloat?)
>
> If you have dedicated network thread(s) in your application, bound to
> the right cpu(s), then the whole network stack can run on the cpus you
> choose [this also needs the correct irq affinities for the NIC interrupts].
>
> This means your latency critical threads should delegate their network
> IO (like disk IO) to other threads, _and_ avoid being blocked in
> scheduler land.
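
For concreteness, the userspace half of that is roughly the following
(untested sketch; the cpu number and the worker body are made up, and the
NIC irq affinity would be set separately through /proc/irq/<irq>/smp_affinity):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* all socket read()/write() calls for the app would live here */
static void *net_io_worker(void *arg)
{
        (void)arg;
        return NULL;
}

int main(void)
{
        pthread_t tid;
        cpu_set_t set;
        int err;

        pthread_create(&tid, NULL, net_io_worker, NULL);

        CPU_ZERO(&set);
        CPU_SET(3, &set);       /* hypothetical dedicated "network" cpu */
        err = pthread_setaffinity_np(tid, sizeof(set), &set);
        if (err)
                fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));

        pthread_join(tid, NULL);
        return 0;
}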

Right. So why can the OS not do this for us? If I do a read/write via a
socket, do the usual processing as much as possible on the current cpu, but
never schedule any follow-up work on this cpu later. Run that deferred work
only on the cpus that are allowed to be used for network stuff.
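
As far as I can tell the closest existing knob is RPS: writing a cpu mask to
the per-queue rps_cpus file steers receive protocol processing for that queue
onto the cpus in the mask. A rough sketch of doing that from C (the device
name, queue index and mask are assumptions for illustration, and it needs
root):

#include <stdio.h>

int main(void)
{
        const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        /* hex cpumask: steer rx protocol processing to cpus 2-3 only */
        fprintf(f, "c\n");
        return fclose(f) ? 1 : 0;
}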

> Given the nature of socket api, I am not sure adding a layer to
> transparently delegate network IO to a pool of dedicated cpus would be a
> win.

The problem is that the socket layer is a drag for low latency apps. They
only use the socket api in non latency critical sections. If an action in
a non latency critical section causes the processor to be interrupted
later, during a latency critical section, then that is not good.
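
The delegation Eric suggests would look roughly like this (minimal sketch:
a pipe stands in for whatever wait-free queue a real app would use to stay
out of the scheduler, stdout stands in for the connected socket, and the
worker would be pinned to the "network" cpu as above):

#include <pthread.h>
#include <unistd.h>

static int relay[2];                    /* [0] = read end, [1] = write end */
static int net_fd = STDOUT_FILENO;      /* stand-in for the connected socket */

/* runs on the dedicated network cpu; all socket I/O happens here */
static void *net_io_worker(void *arg)
{
        char buf[2048];
        ssize_t n;

        (void)arg;
        while ((n = read(relay[0], buf, sizeof(buf))) > 0)
                write(net_fd, buf, n);
        return NULL;
}

/* called from the latency critical thread; no socket work on this cpu */
static void send_async(const void *msg, size_t len)
{
        write(relay[1], msg, len);
}

int main(void)
{
        pthread_t tid;

        pipe(relay);
        pthread_create(&tid, NULL, net_io_worker, NULL);
        send_async("ping\n", 5);
        close(relay[1]);                /* done; let the worker drain and exit */
        pthread_join(tid, NULL);
        return 0;
}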

