Message-ID: <c46e43d1-ba7d-39d9-688f-0141931df1b0@gmail.com>
Date: Fri, 29 Nov 2019 17:07:20 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Laight <David.Laight@...LAB.COM>,
'Paolo Abeni' <pabeni@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
Cc: 'Marek Majkowski' <marek@...udflare.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
network dev <netdev@...r.kernel.org>,
kernel-team <kernel-team@...udflare.com>
Subject: Re: epoll_wait() performance
On 11/28/19 2:17 AM, David Laight wrote:
> From: Eric Dumazet
>> Sent: 27 November 2019 17:47
> ...
>> A QUIC server handles hundreds of thousands of 'UDP flows' all using only one UDP socket
>> per cpu.
>>
>> This is really the only way to scale, and it does not need kernel changes to efficiently
>> organize millions of UDP sockets (a huge memory footprint even if we get their
>> management right)
>>
>> Given that UDP has no state, there is really no point trying to have one UDP
>> socket per flow, and having to deal with epoll()/poll() overhead.
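
For reference, the per-cpu setup is just a couple of setsockopt()/bind() calls per
socket. A rough, untested sketch; the port number (4433) and the SO_INCOMING_CPU
hint are only illustrative:

/* Open one UDP socket per CPU, all sharing the same port via SO_REUSEPORT. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int open_reuseport_socket(int cpu, uint16_t port)
{
	struct sockaddr_in addr;
	int one = 1;
	int fd;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	/* Allow several sockets (one per CPU) to bind the same port. */
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

	/* Prefer delivering packets to the socket owned by the receiving CPU. */
	setsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, sizeof(cpu));

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(port);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	int cpu;

	for (cpu = 0; cpu < ncpus; cpu++) {
		int fd = open_reuseport_socket(cpu, 4433);

		if (fd < 0) {
			perror("socket/bind");
			return 1;
		}
		/* hand fd to the worker thread pinned to 'cpu' ... */
	}
	return 0;
}

Each worker then does recvmsg()/sendmsg() on its own socket, with no epoll() fan-out
over per-flow sockets.
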
>
> How can you do that when all the UDP flows have different destination port numbers?
> These are message flows, not idempotent requests.
> I don't really want to collect the packets before they've been processed by IP.
>
> I could write a driver that uses kernel udp sockets to generate a single message queue
> that can be efficiently processed from userspace - but it is a faff compiling it for
> the system's kernel version.
Well, if the destination ports are not under your control,
you could also use AF_PACKET sockets; there is no need for 'UDP sockets' to receive UDP traffic,
especially if the rate is small.
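
Roughly something like this (untested sketch; the interface name, the cooked
SOCK_DGRAM mode and the absence of any BPF filter are only for illustration):

/* Receive all UDP/IPv4 traffic on one interface with a single AF_PACKET socket. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_ll sll;
	unsigned char buf[2048];
	int fd;

	/* SOCK_DGRAM: the link-layer header is stripped, buf starts at the IP header. */
	fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IP));
	if (fd < 0)
		return 1;

	memset(&sll, 0, sizeof(sll));
	sll.sll_family = AF_PACKET;
	sll.sll_protocol = htons(ETH_P_IP);
	sll.sll_ifindex = if_nametoindex("eth0");	/* assumed interface */
	bind(fd, (struct sockaddr *)&sll, sizeof(sll));

	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf), 0);
		struct iphdr *ip = (struct iphdr *)buf;
		struct udphdr *uh;

		if (len < (ssize_t)sizeof(*ip) || ip->protocol != IPPROTO_UDP)
			continue;
		if (len < (ssize_t)(ip->ihl * 4 + sizeof(*uh)))
			continue;
		uh = (struct udphdr *)(buf + ip->ihl * 4);
		printf("UDP %u -> %u, %zd bytes\n",
		       ntohs(uh->source), ntohs(uh->dest), len);
	}
	return 0;
}

In practice you would attach a classic BPF filter to keep non-interesting traffic
out, and use PACKET_FANOUT to spread the load over several such sockets/CPUs.
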