Message-ID: <65634d661001071607k63c33a0q4f402942d64ea77e@mail.gmail.com>
Date: Thu, 7 Jan 2010 16:07:35 -0800
From: Tom Herbert <therbert@...gle.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: rps: some comments
On Thu, Jan 7, 2010 at 9:42 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Hi Tom
>
> 1) netif_receive_skb() might enqueue skbs instead of handling them directly.
>
> int netif_receive_skb(struct sk_buff *skb)
> {
>         int cpu = get_rps_cpu(skb->dev, skb);
>
>         if (cpu < 0)
>                 return __netif_receive_skb(skb);
>         else
>                 return enqueue_to_backlog(skb, cpu);
> }
>
> If get_rps_cpu() happens to select the current cpu, we enqueue the skb to the
> backlog instead of directly calling __netif_receive_skb().
>
> One way to solve this is to make get_rps_cpu() return -1 in this case:
>
> ...
>         if (cpu == smp_processor_id() || !cpu_online(cpu))
>                 cpu = -1;
>
> done:
>         rcu_read_unlock();
>         return cpu;
> } /* get_rps_cpu() */
>
I kind of like the way it is right now, which is to enqueue even on
the local CPU. The advantage is that all the packets received in the
napi run are dispatched to all the participating CPUs before any are
processed, which is good for average and worst-case latency on a
packet burst.
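To make that concrete, the dispatch loop behaves roughly like this
(napi_next_skb() is a made-up name standing in for the napi dequeue;
the other helpers are the ones quoted above):

        while ((skb = napi_next_skb(napi)) != NULL) {
                int cpu = get_rps_cpu(skb->dev, skb);

                if (cpu < 0)
                        __netif_receive_skb(skb);     /* no rps map configured */
                else
                        enqueue_to_backlog(skb, cpu); /* even if cpu is the local one */
        }
        /* only now are the backlogs, the local one included, drained */

With your change, the cpu == smp_processor_id() case would instead run
the whole stack via __netif_receive_skb() in the middle of the burst,
before the remaining packets have been steered.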
>
> 2) RPS_MAP_SIZE might be too expensive to use in the fast path on big (NR_CPUS=4096) machines:
> num_possible_cpus() has to count all bits set in a bitmap.
>
> #define MAX_RPS_CPUS (num_possible_cpus() < 256 ? num_possible_cpus() : 256)
> #define RPS_MAP_SIZE (sizeof(struct rps_map) + (MAX_RPS_CPUS * sizeof(u16)))
>
> You can use nr_cpu_ids or, even better, a new variable that you init _once_ to:
>
> unsigned int rps_map_size = sizeof(struct rps_map) + (min(256, nr_cpu_ids) * sizeof(u16));
>
Yes, that's probably better.
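For instance (rps_map_size_init() and where it gets called from are
made up for illustration; min_t() just keeps the comparison safe
whatever the type of nr_cpu_ids):

        static unsigned int rps_map_size __read_mostly;

        /* computed once at init instead of per-packet in the fast path */
        static int __init rps_map_size_init(void)
        {
                rps_map_size = sizeof(struct rps_map) +
                               min_t(unsigned int, 256, nr_cpu_ids) * sizeof(u16);
                return 0;
        }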
>
> 3) cpu hotplug.
>
> I cannot see how cpu unplug can be safe here.
>
> if (!cpu_online(cpu))
>         cpu = -1;
> rcu_read_unlock();
> ...
> cpu_set(cpu, __get_cpu_var(rps_remote_softirq_cpus));
> ...
> __smp_call_function_single(cpu, &queue->csd, 0);
I believe we're dealing with remote CPUs with preemption disabled (from
softirqs), so maybe that is sufficient to prevent a CPU from going away
beneath us. We probably need an "if (cpu_online(cpu))" check before
__smp_call_function_single() though.
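Roughly (the call site follows your quoted snippet; how to drain skbs
already sitting in an offlined CPU's backlog is a separate question,
maybe one for a hotplug notifier):

        /* re-check under softirq: the CPU may have been cleared from
         * cpu_online_mask after get_rps_cpu() ran */
        if (cpu_online(cpu))
                __smp_call_function_single(cpu, &queue->csd, 0);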
Thanks,
Tom