Message-ID: <CALx6S34ctcycnjTR4gon7oSRMdh3Xod2yi_B_TtTvrxt6MW4gg@mail.gmail.com>
Date: Thu, 8 Oct 2015 09:44:38 -0700
From: Tom Herbert <tom@...bertland.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 1/4] net: SO_INCOMING_CPU setsockopt() support
On Thu, Oct 8, 2015 at 9:29 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Thu, 2015-10-08 at 09:03 -0700, Tom Herbert wrote:
>> On Thu, Oct 8, 2015 at 8:37 AM, Eric Dumazet <edumazet@...gle.com> wrote:
>> > SO_INCOMING_CPU, as added in commit 2c8c56e15df3, was a getsockopt() command
>> > to fetch the incoming cpu handling a particular TCP flow after accept().
>> >
>> > This commit adds setsockopt() support and extends the SO_REUSEPORT selection
>> > logic: if a TCP listener or UDP socket has this option set, a packet is
>> > delivered to this socket only if the CPU handling the packet matches the specified one.
>> >
>> > This allows building very efficient TCP servers, using one thread per cpu,
>> > as the associated TCP listener should only accept flows handled in softirq
>> > by the same cpu. This provides optimal NUMA/SMP behavior and keeps cpu caches hot.
>> >
>> Please look again at my SO_INCOMING_CPU_MASK patches to see if they
>> will work here. I believe the SO_INCOMING_CPU setsockopt is probably a subset
>> of that functionality. A single CPU assigned to a socket forces an
>> application design with one thread per CPU -- this may be overkill.
>> It's probably sufficient in many cases to have just one listener
>> thread per NUMA node.
>
>
> I think you misunderstood my patch.
>
> For optimal behavior against DDOS, you need one TCP _listener_ per RX
> queue on the NIC.
>
I see. We are not using SO_INCOMING_CPU_MASK as a defense against
DDOS. It's used to ensure affinity of application connection processing
across CPUs. For instance, if we have two NUMA nodes we can start one
instance of the application bound to each node and then use
SO_REUSEPORT and SO_INCOMING_CPU_MASK to ensure connections are
processed on the same NUMA node. Having packets cross NUMA boundaries,
even with RFS, is painful.
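
For concreteness, a minimal userspace sketch of the per-cpu listener model
Eric describes (not taken from the patch itself; the port, backlog, helper
name and the SO_INCOMING_CPU define fallback are illustrative assumptions)
could look like:

	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <arpa/inet.h>
	#include <netinet/in.h>
	#include <sys/socket.h>

	#ifndef SO_INCOMING_CPU
	#define SO_INCOMING_CPU 49	/* asm-generic value; libc headers may lack it */
	#endif

	/* One listener per cpu, all bound to the same port via SO_REUSEPORT.
	 * With the setsockopt() support added here, each listener should only
	 * receive flows whose softirq processing runs on "cpu". */
	static int create_percpu_listener(int cpu, uint16_t port)
	{
		struct sockaddr_in addr;
		int fd, one = 1;

		fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0)
			return -1;

		setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
		setsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, sizeof(cpu));

		memset(&addr, 0, sizeof(addr));
		addr.sin_family = AF_INET;
		addr.sin_addr.s_addr = htonl(INADDR_ANY);
		addr.sin_port = htons(port);

		if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    listen(fd, 128) < 0) {
			close(fd);
			return -1;
		}
		return fd;
	}

Each worker thread would presumably also pin itself to its cpu (e.g. with
pthread_setaffinity_np()) so accept() and request processing stay on the
cpu that handled the flow in softirq.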
Tom