Message-Id: <20091124.095703.107687163.davem@davemloft.net>
Date: Tue, 24 Nov 2009 09:57:03 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: tglx@...utronix.de
Cc: peter.p.waskiewicz.jr@...el.com, linux-kernel@...r.kernel.org,
arjan@...ux.jf.intel.com, mingo@...e.hu, yong.zhang0@...il.com,
netdev@...r.kernel.org
Subject: Re: [PATCH v2] irq: Add node_affinity CPU masks for smarter
irqbalance hints
From: Thomas Gleixner <tglx@...utronix.de>
Date: Tue, 24 Nov 2009 12:07:35 +0100 (CET)
> And what does the kernel do with this information and why are we not
> using the existing device/numa_node information?
It's a different problem space, Thomas.
If the device lives on NUMA node X, we still end up wanting to
allocate memory resources (RX ring buffers) on other NUMA nodes on a
per-queue basis.
Otherwise a network card's forwarding performance is limited by the
memory bandwidth of a single NUMA node, and on multiqueue cards we
therefore fare much better by allocating each device RX queue's memory
resources on a different NUMA node.
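
To make that concrete, here is a minimal sketch of the idea, not code
from PJ's patch or any real driver: the structure and function names
are made up, and kzalloc_node() stands in for however a real driver
allocates its descriptor rings on a chosen node.  It just round-robins
each RX queue's ring over the online NUMA nodes:

	#include <linux/slab.h>
	#include <linux/nodemask.h>

	struct my_rx_ring {
		void *desc;	/* descriptor ring memory */
		int   node;	/* NUMA node the ring lives on */
	};

	static int my_setup_rx_rings(struct my_rx_ring *rings, int nr_queues,
				     size_t ring_bytes)
	{
		int q, node = first_online_node;

		for (q = 0; q < nr_queues; q++) {
			/* Spread the rings round-robin over the online nodes. */
			rings[q].desc = kzalloc_node(ring_bytes, GFP_KERNEL, node);
			if (!rings[q].desc)
				return -ENOMEM;
			rings[q].node = node;

			node = next_online_node(node);
			if (node == MAX_NUMNODES)
				node = first_online_node;
		}
		return 0;
	}

With something like this, each queue's packet processing hits a
different node's memory controller instead of all of them piling onto
the device's home node.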
It is this NUMA usage that PJ is trying to export somehow to userspace
so that irqbalanced and friends can choose the IRQ cpu masks more
intelligently.
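
On the userspace side, once that per-queue node information is
exported, an irqbalance-style daemon can act on it through the
existing /proc/irq/<irq>/smp_affinity interface.  A rough sketch of
that half follows; how the daemon learns the preferred mask in the
first place is exactly the interface being discussed here, so that
part is left out:

	#include <stdio.h>

	static int set_irq_affinity(int irq, unsigned long long cpumask)
	{
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
		f = fopen(path, "w");
		if (!f)
			return -1;

		/* smp_affinity takes a hex CPU bitmask, e.g. "f" for CPUs 0-3. */
		fprintf(f, "%llx\n", cpumask);
		return fclose(f);
	}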