Message-ID: <87eivnpqde.fsf@basil.nowhere.org>
Date: Mon, 20 Apr 2009 12:32:29 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Tom Herbert <therbert@...gle.com>
Cc: netdev@...r.kernel.org, David Miller <davem@...emloft.net>
Subject: Re: [PATCH] Software receive packet steering
Tom Herbert <therbert@...gle.com> writes:
> +static int netif_cpu_for_rps(struct net_device *dev, struct sk_buff *skb)
> +{
> + cpumask_t mask;
> + unsigned int hash;
> + int cpu, count = 0;
> +
> + cpus_and(mask, dev->soft_rps_cpus, cpu_online_map);
> + if (cpus_empty(mask))
> + return smp_processor_id();
There's a race here with CPU hotunplug I think. When a CPU is hotunplugged
in parallel you can still push packets to it even though they are not
drained. You probably need some kind of drain callback in a CPU hotunplug
notifier that eats all packets left over.
> +got_hash:
> + hash %= cpus_weight_nr(mask);
That looks rather heavyweight even on modern CPUs. I bet it's 40-50+ cycles
alone for the hweight and the division. Surely that can be done better?

Also I suspect some kind of runtime switch for this would be useful.
Also the manual setup of the receive mask seems really clumsy. Couldn't
you set that up dynamically based on where the processes executing recvmsg()
are running?
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.