Message-ID: <1298984669.3284.99.camel@edumazet-laptop>
Date: Tue, 01 Mar 2011 14:04:29 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Thomas Graf <tgraf@...radead.org>,
David Miller <davem@...emloft.net>, rick.jones2@...com,
therbert@...gle.com, wsommerfeld@...gle.com,
daniel.baluta@...il.com, netdev@...r.kernel.org
Subject: Re: SO_REUSEPORT - can it be done in kernel?
On Tuesday, March 1, 2011 at 20:32 +0800, Herbert Xu wrote:
> On Tue, Mar 01, 2011 at 07:53:05PM +0800, Herbert Xu wrote:
> > On Tue, Mar 01, 2011 at 12:45:09PM +0100, Eric Dumazet wrote:
> > >
> > > CPU 11 handles all TX completions: it's a potential bottleneck.
> > >
> > > I might resurrect XPS patch ;)
> >
> > Actually this has been my gripe all along with our TX multiqueue
> > support. We should not decide the queue based on the socket, but
> > on the current CPU.
> >
> > We already do the right thing for forwarded packets because there
> > is no socket to latch onto, we just need to fix it for locally
> > generated traffic.
> >
> > The odd packet reordering each time your scheduler decides to
> > migrate the process isn't a big deal IMHO. If your scheduler
> > is constantly moving things you've got bigger problems to worry
> > about.
>
> If anybody wants to play here is a patch to do exactly that:
>
> net: Determine TX queue purely by current CPU
>
> Distributing packets generated on one CPU to multiple queues
> makes no sense. Nor does putting packets from multiple CPUs
> into a single queue.
>
> While this may introduce packet reordering should the scheduler
> decide to migrate a thread, it isn't a big deal because migration
> is meant to be a rare event, and nothing will die as long as the
> reordering doesn't occur all the time.
>
> Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 8ae6631..87bd20a 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -2164,22 +2164,12 @@ static u32 hashrnd __read_mostly;
> u16 __skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb,
> unsigned int num_tx_queues)
> {
> - u32 hash;
> + u32 hash = raw_smp_processor_id();
>
> - if (skb_rx_queue_recorded(skb)) {
> - hash = skb_get_rx_queue(skb);
> - while (unlikely(hash >= num_tx_queues))
> - hash -= num_tx_queues;
> - return hash;
> - }
> + while (unlikely(hash >= num_tx_queues))
> + hash -= num_tx_queues;
>
> - if (skb->sk && skb->sk->sk_hash)
> - hash = skb->sk->sk_hash;
> - else
> - hash = (__force u16) skb->protocol ^ skb->rxhash;
> - hash = jhash_1word(hash, hashrnd);
> -
> - return (u16) (((u64) hash * num_tx_queues) >> 32);
> + return hash;
> }
> EXPORT_SYMBOL(__skb_tx_hash);
>
> Cheers,
Well, some machines have 4096 cpus ;)
--