Message-ID: <b6fcb866-f24e-4508-97f5-eb2b63f6eec7@gmail.com>
Date: Mon, 8 Jul 2024 10:45:46 +0200
From: Leone Fernando <leone4fernando@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
 dsahern@...nel.org, willemb@...gle.com, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v2 2/4] net: dst_cache: add input_dst_cache API

Hi Eric,

> It targets IPv4 only, which some of us no longer use.

The same optimization can be applied to IPv6 as well, and I'm planning
to do so.

> Also, what is the behavior (loss of performance) of this cache under
> flood, when 16 slots need to be looked up for each packet ?

I did some more measurements under flood as you asked; these are the results:
Total PPS:
number of IPs (x cache size)    mainline    patched    delta
                                  Kpps        Kpps        %
        20000 (x40)               6498        6123      -5.7
        35000 (x70)               6502        5261     -19.0
        50000 (x100)              6503        4986     -23.3
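
For reference, the per-packet cost you're pointing at comes from
scanning the whole bucket on a miss. A minimal sketch of that lookup,
assuming a 16-slot array keyed by the IPv4 saddr/daddr pair (the names
below are mine, not the patch code):

#include <linux/types.h>
#include <net/dst.h>

#define INPUT_DST_CACHE_SLOTS	16

struct input_dst_cache_slot {
	__be32			saddr;
	__be32			daddr;
	struct dst_entry	*dst;
};

static struct dst_entry *
input_dst_cache_lookup(const struct input_dst_cache_slot *slots,
		       __be32 saddr, __be32 daddr)
{
	int i;

	/* Under flood most packets miss, so each of them still pays
	 * for scanning all 16 slots before falling back to the full
	 * route lookup.
	 */
	for (i = 0; i < INPUT_DST_CACHE_SLOTS; i++) {
		if (slots[i].saddr == saddr && slots[i].daddr == daddr)
			return slots[i].dst;
	}
	return NULL;
}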

I found out that most of the negative effect under extreme flood is
a result of the atomic operations.
I have some ideas on how to solve this problem; it will demand a bit
more per-CPU memory. If that's reasonable, I can send a v3.
IMO the boost in the average case is worth the flood-case penalty.
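
To make that concrete, here is a rough sketch of the direction I have
in mind, reusing the hypothetical types from the snippet above; none
of this is v3 code:

#include <linux/percpu.h>

struct input_dst_cache_pcpu {
	struct input_dst_cache_slot	slots[INPUT_DST_CACHE_SLOTS];
};

static struct dst_entry *
input_dst_cache_get(struct input_dst_cache_pcpu __percpu *cache,
		    __be32 saddr, __be32 daddr)
{
	/* Keeping the hot entries in CPU-local memory means a hit
	 * touches no shared cachelines and takes no atomic refcount,
	 * at the cost of one copy of the slots per CPU -- the extra
	 * per-CPU memory mentioned above. RX runs in softirq context,
	 * so this_cpu_ptr() is safe here.
	 */
	struct input_dst_cache_pcpu *pcpu = this_cpu_ptr(cache);

	return input_dst_cache_lookup(pcpu->slots, saddr, daddr);
}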

> This patch adds 10MB of memory per netns on host with 512 cpus,
> and adds netns creation and dismantle costs (45% on a host with 256 cpus)

Is such a memory penalty a deal breaker considering the performance gain?
Also, I think adding a sysctl to control the size of the cache might be
a good solution for those who want to keep their memory usage lower;
a rough sketch of what I mean is below. What do you think?
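
Something along these lines, assuming a per-netns knob under net.ipv4
(the name and the bounds are made up):

#include <linux/sysctl.h>

static int sysctl_input_dst_cache_slots = 16;
static int input_dst_cache_slots_max = 256;

static struct ctl_table input_dst_cache_table[] = {
	{
		.procname	= "input_dst_cache_slots",
		.data		= &sysctl_input_dst_cache_slots,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,	/* 0 would disable the cache */
		.extra2		= &input_dst_cache_slots_max,
	},
	{ }
};

/* registered with register_net_sysctl(net, "net/ipv4", ...) */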

Thanks,
Leone
