Message-ID: <459A9CE5.8020403@hp.com>
Date: Tue, 02 Jan 2007 09:56:53 -0800
From: Rick Jones <rick.jones2@...com>
To: hadi@...erus.ca
Cc: Robert Iakobashvili <coroberti@...il.com>,
Arjan van de Ven <arjan@...radead.org>,
netdev@...r.kernel.org
Subject: Re: Network card IRQ balancing with Intel 5000 series chipsets
> The best way to achieve such balancing is to have the network card help
> and essentially be able to select the CPU to notify while at the same
> time considering:
> a) avoiding any packet reordering - which restricts a flow to being
> processed on a single CPU, at least within a timeframe
> b) being per-CPU-load-aware - which means busying out only CPUs that
> are less utilized
>
> Various such schemes have been discussed here but no vendor is making
> such NICs today (search Dave's blog - he did discuss this at one point
> or another).
I thought that Neterion were doing something along those lines with
their Xframe II NICs - perhaps not CPU-load-aware, but spreading the
work of different connections across the CPUs.
I would add a:
c) some knowledge of the CPU on which the thread accessing the socket
for that "connection" will run. This could be as simple as the CPU on
which the socket was last accessed. Having a _NIC_ know this sort of
thing is somewhat difficult and expensive (perhaps too much so). If a
NIC simply hashes the connection identifiers you then have the issue of
different connections, each "owned/accessed" by one thread, taking
different paths through the system. No issues about reordering, but
perhaps some on cache lines going hither and yon.
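A minimal sketch of what (c) could look like, assuming we simply remember,
per socket, the CPU on which the consuming thread last touched it. The
recv() wrapper and per-fd table are invented scaffolding for illustration;
in a real stack that field would live in the kernel's socket structure,
and the hard part is getting it down to the NIC cheaply.

/*
 * Sketch of point (c): remember, per socket, the CPU on which the
 * consuming thread last accessed it, so RX processing could be steered
 * there.  The per-fd array and the recv() wrapper are hypothetical.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MAX_FDS 1024

static int sock_last_cpu[MAX_FDS];       /* -1 == unknown */

static void init_sock_cpu_table(void)
{
        for (int i = 0; i < MAX_FDS; i++)
                sock_last_cpu[i] = -1;
}

/* Wrap recv() so every access records the caller's current CPU. */
static ssize_t recv_and_note_cpu(int fd, void *buf, size_t len, int flags)
{
        if (fd >= 0 && fd < MAX_FDS)
                sock_last_cpu[fd] = sched_getcpu();
        return recv(fd, buf, len, flags);
}

/* Steering side: prefer the CPU that last accessed the socket. */
static int preferred_rx_cpu(int fd, int hash_fallback)
{
        if (fd >= 0 && fd < MAX_FDS && sock_last_cpu[fd] >= 0)
                return sock_last_cpu[fd];
        return hash_fallback;
}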
The question boils down to - Should the application (via the scheduler)
dictate where its connections are processed, or should the connections
dictate where the application runs?
rick jones