Message-ID: <1305667662.2691.7.camel@edumazet-laptop>
Date: Tue, 17 May 2011 23:27:42 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Ben Hutchings <bhutchings@...arflare.com>
Cc: David Miller <davem@...emloft.net>, therbert@...gle.com,
netdev@...r.kernel.org
Subject: Re: small RPS cache for fragments?
On Tuesday, 17 May 2011 at 22:13 +0100, Ben Hutchings wrote:
> On Tue, 2011-05-17 at 17:10 -0400, David Miller wrote:
> > From: Eric Dumazet <eric.dumazet@...il.com>
> > Date: Tue, 17 May 2011 23:00:50 +0200
> >
> > > On Tuesday, 17 May 2011 at 16:49 -0400, David Miller wrote:
> > >> From: Tom Herbert <therbert@...gle.com>
> > >> Date: Tue, 17 May 2011 13:02:25 -0700
> > >>
> > >> > I like it! And this sounds like the sort of algorithm that NICs might
> > >> > be able to implement to solve the UDP/RSS unpleasantness, so even
> > >> > better.
> > >>
> > >> Actually, I think it won't work. Even Linux emits fragments last to
> > >> first, so we won't see the UDP header until the last packet, where it's
> > >> no longer useful.
> > >>
> > >> Back to the drawing board. :-/
> > >
> > > Well, we could just use the iph->id in the rxhash computation for frags.
> > >
> > > At least all frags of a given datagram should be reassembled on the
> > > same cpu, so we get RPS (but not RFS).
> >
> > That's true, but one could also argue that in the existing code at least
> > one of the packets (the one with the UDP header) would make it to the
> > proper flow cpu.
>
> No, we ignore the layer-4 header when either MF or OFFSET is non-zero.
Exactly.

As is, RPS (based on our software rxhash computation) should be working
fine with frags, unless we receive different flows with the same
(src_addr, dst_addr) pair.

This is why I asked David whether real workloads could end up hitting one
cpu instead of many.
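
For readers following along, here is a minimal userspace sketch of the
fallback described above (not the kernel code; the function names and the
mixing routine are made up for illustration): when an IPv4 packet has MF set
or a non-zero fragment offset, the layer-4 ports are skipped and the hash
degenerates to the (src_addr, dst_addr) pair, so every fragment of every flow
sharing that address pair lands on the same cpu.

/*
 * Userspace sketch only, compiled stand-alone; names and the mixer are
 * illustrative, not the kernel's implementation.
 */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/ip.h>

/* Toy 32-bit mixer standing in for the kernel's jhash. */
static uint32_t mix3(uint32_t a, uint32_t b, uint32_t c)
{
	a ^= b; a -= (b << 14) | (b >> 18);
	c ^= a; c -= (a << 11) | (a >> 21);
	b ^= c; b -= (c << 25) | (c >> 7);
	return a ^ b ^ c;
}

static uint32_t frag_aware_rxhash(const struct iphdr *iph,
				  uint16_t sport, uint16_t dport)
{
	if (iph->frag_off & htons(IP_MF | IP_OFFMASK)) {
		/*
		 * Fragment: for all but the first fragment the UDP header is
		 * absent, so hash only (saddr, daddr).  Every fragment of
		 * every flow sharing this address pair gets the same hash.
		 */
		return mix3(ntohl(iph->saddr), ntohl(iph->daddr), 0);
	}

	/* Non-fragment: hash the full 4-tuple. */
	return mix3(ntohl(iph->saddr), ntohl(iph->daddr),
		    ((uint32_t)sport << 16) | dport);
}

int main(void)
{
	struct iphdr iph = {
		.saddr    = htonl(0xc0a80001),	/* 192.168.0.1 */
		.daddr    = htonl(0xc0a80002),	/* 192.168.0.2 */
		.frag_off = htons(IP_MF),	/* first fragment */
	};

	printf("fragment hash:     %08x\n", frag_aware_rxhash(&iph, 1234, 80));
	iph.frag_off = 0;
	printf("full 4-tuple hash: %08x\n", frag_aware_rxhash(&iph, 1234, 80));
	return 0;
}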
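
And a sketch of the iph->id variant floated earlier in the thread, reusing
the headers and the mix3() helper from the snippet above (again an
illustration, not a proposed patch): mixing the IP identification field into
the fragment hash keeps all fragments of one datagram on the same cpu, but
two datagrams of the same UDP flow can carry different ids and hash
differently, which is why it buys RPS but not RFS.

/*
 * Illustrative variant: also mix iph->id into the fragment hash.  Fragments
 * of one datagram share an id and therefore a cpu (RPS), but successive
 * datagrams of the same flow may spread across cpus (no stable RFS
 * steering).
 */
static uint32_t frag_rxhash_with_id(const struct iphdr *iph)
{
	return mix3(ntohl(iph->saddr), ntohl(iph->daddr), ntohs(iph->id));
}

Whether spreading datagrams of one flow across cpus helps or hurts is exactly
the open question about real workloads raised above.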