Date:   Tue, 8 Sep 2020 11:15:06 +0800
From:   "" <>
To:     Paolo Abeni <>,
        "David S . Miller" <>,
        Eric Dumazet <>
Subject: Re: [PATCH] net/sock: don't drop udp packets if udp_mem[2] not

On Mon, Sep 07, 2020 at 07:18:48PM +0200, Paolo Abeni wrote:
>On Mon, 2020-09-07 at 22:44 +0800, Dust Li wrote:
>> We encountered UDP packet drops under pretty low pressure
>> with net.ipv4.udp_mem[0] set to a small value (4096).
>> After some tracing and debugging, we found that for the UDP
>> protocol, __sk_mem_raise_allocated() will possibly drop
>> packets if:
>>   udp_mem[0] < udp_prot.memory_allocated < udp_mem[2]
>> That's because __sk_mem_raise_allocated() didn't handle
>> the above condition for protocols like UDP which don't
>> have sk_has_memory_pressure().
>> We can reproduce this with the following conditions:
>> 1. udp_mem[0] is relatively small,
>> 2. net.core.rmem_default/max > udp_mem[0] * 4K
>This looks like something that could/should be addressed at
>configuration level ?!?
Thanks a lot for the review!

Sorry, maybe I haven't made it clear enough.

The real problem is scalability with the number of sockets.
Since udp_mem limits all UDP sockets combined, as the number
of UDP sockets grows, sooner or later udp_prot.memory_allocated
will exceed udp_mem[0], and __sk_mem_raise_allocated() will
start dropping packets here, even though the total UDP memory
allocated may still be far under udp_mem[1] or udp_mem[2].
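To make the failure mode concrete, here is a loose userspace model of the accounting decision (my simplification, not the actual kernel code; the real __sk_mem_raise_allocated() has more cases around per-socket buffer sizes):

```c
/* Loose userspace model of the accounting decision in
 * __sk_mem_raise_allocated(). Simplified and illustrative only.
 */
#include <assert.h>

enum verdict { ALLOW, DROP };

/* allocated:    pages charged to the protocol after this allocation
 * mem[3]:       the protocol's min/pressure/max limits (e.g. udp_mem)
 * has_pressure: non-zero if the protocol tracks memory pressure
 *               (TCP does; UDP does not)
 */
enum verdict mem_raise(long allocated, const long mem[3], int has_pressure)
{
	if (allocated <= mem[0])
		return ALLOW;	/* under the minimum: always accept */
	if (allocated > mem[2])
		return DROP;	/* over the hard limit: always drop */
	/* Between mem[0] and mem[2], protocols with memory-pressure
	 * handling can still accept while buffers are moderate; UDP
	 * lacks that hook, so the unpatched kernel falls through to
	 * the drop path in exactly this range.
	 */
	return has_pressure ? ALLOW : DROP;
}
```

This is what the patch targets: the middle range should not unconditionally drop for pressure-less protocols.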

>udp_mem[0] should comfortably accommodate at least a socket.

Yeah, I agree udp_mem[0] should be large enough for at least a
socket.

Here I used 4096 just to keep things simple and reproduce what we
hit before.

I changed my test program a bit:
 - with 16 server sockets
 - with 1 client sending 3000 messages (size: 4096 bytes) to each
   of those 8 server sockets
 - set net.core.rmem_default/max to (2*4096*4096)
 - and keep udp_mem unset, which by default on my 4GB VM is
   'net.ipv4.udp_mem = 91944        122592  183888'
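Rough arithmetic for that setup (using the 16-socket figure; skbs are charged by truesize rather than payload size, and the 2x overhead factor below is my assumption, purely illustrative):

```c
/* Back-of-the-envelope page accounting for the test above.
 * For the 4GB VM the defaults were (in pages):
 *   net.ipv4.udp_mem = 91944 122592 183888
 */

/* pages charged for n sockets each receiving m messages of
 * msg_size bytes, with an assumed truesize overhead factor */
long charged_pages(long sockets, long msgs, long msg_size, long factor)
{
	const long page = 4096;

	return sockets * msgs * msg_size * factor / page;
}

/* charged_pages(16, 3000, 4096, 2) -> 96000 pages: above
 * udp_mem[0] (91944) but well below udp_mem[2] (183888),
 * exactly the range where the unpatched kernel drops */
```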

Actually, with more UDP sockets, I can always push the total
allocated memory past udp_mem[0] and trigger packet drops well
before udp_mem[1] or udp_mem[2] is reached.

