Date:   Wed, 29 Sep 2021 21:42:06 -0600
From:   David Ahern <dsahern@...il.com>
To:     Nikolay Aleksandrov <razor@...ckwall.org>, netdev@...r.kernel.org
Cc:     roopa@...dia.com, donaldsharp72@...il.com, idosch@...sch.org,
        Nikolay Aleksandrov <nikolay@...dia.com>
Subject: Re: [RFC iproute2-next 00/11] ip: nexthop: cache nexthops and print
 routes' nh info

On 9/29/21 9:28 AM, Nikolay Aleksandrov wrote:
> From: Nikolay Aleksandrov <nikolay@...dia.com>
> 
> Hi,
> This set addresses an old ask that we've had for some time, which is
> to print nexthop information while monitoring or dumping routes.
> The core problem is that people cannot follow nexthop changes while
> monitoring route changes; by the time they check the nexthop, it may
> already have been deleted or updated to something else. To help with
> that, I've added a nexthop cache (used only if -d / show_details is
> specified) which is populated while decoding routes and kept up to
> date while monitoring. The nexthop information is printed on its own
> line starting with the "nh_info" attribute, and it is embedded
> inside that attribute when printing JSON. To cache the nexthop
> entries I parse them into structures; to reuse most of the existing
> code, the print helpers have been altered to rely on such prepared
> structures. Nexthops are now always parsed into a structure, even if
> they won't be cached; the structure is used to print the nexthop and
> is destroyed afterwards if it isn't going to be cached. New nexthops
> (not found in the cache) are retrieved from the kernel using a
> private netlink socket so the requests don't disrupt an ongoing
> dump, similar to how interfaces are retrieved and cached.
> 
> I have tested the set with the kernel forwarding selftests and also by
> stressing it with nexthop create/update/delete in loops while monitoring.
> 
> Comments are very welcome as usual. :)
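
A rough sketch of the caching scheme described above (all names are
hypothetical, not the actual code from this set): a small hash table
keyed by nexthop id, where an entry parsed while monitoring replaces
any stale entry with the same id.

#include <stdlib.h>

#define NH_CACHE_SIZE	256	/* power of two for cheap masking */

struct nh_entry {
	unsigned int	id;	/* nexthop id (NHA_ID) */
	/* ... parsed attributes: family, gateway, device, group ... */
	struct nh_entry	*next;	/* hash bucket chain */
};

static struct nh_entry *nh_cache[NH_CACHE_SIZE];

static struct nh_entry **nh_bucket(unsigned int id)
{
	return &nh_cache[id & (NH_CACHE_SIZE - 1)];
}

static struct nh_entry *nh_cache_get(unsigned int id)
{
	struct nh_entry *e;

	for (e = *nh_bucket(id); e; e = e->next)
		if (e->id == id)
			return e;
	return NULL;	/* miss: fetch from the kernel, see below */
}

/* Monitoring keeps the cache current: a newly parsed entry replaces
 * any stale entry with the same id. */
static void nh_cache_store(struct nh_entry *new)
{
	struct nh_entry **pp = nh_bucket(new->id);

	while (*pp) {
		if ((*pp)->id == new->id) {
			struct nh_entry *old = *pp;

			*pp = old->next;
			free(old);
			break;
		}
		pp = &(*pp)->next;
	}
	new->next = *nh_bucket(new->id);
	*nh_bucket(new->id) = new;
}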

Overall it looks fine, and I'm not surprised a cache is needed.

My big comment is to re-order the patches: do all of the refactoring
first to get the code where you need it, and then add what is needed
for the cache.
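
On the cache-miss path, the cover letter describes fetching the
nexthop over a private netlink socket so the request cannot disturb a
dump in flight on the main socket. A bare-bones sketch of that idea,
using raw netlink rather than iproute2's libnetlink helpers (so again
an illustration under stated assumptions, not the code from these
patches):

#include <linux/nexthop.h>	/* struct nhmsg, NHA_ID */
#include <linux/rtnetlink.h>	/* RTM_GETNEXTHOP, rtattr helpers */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Fetch one nexthop by id on a throwaway socket; the caller parses
 * the RTM_NEWNEXTHOP reply left in buf. Returns bytes received or
 * -1 on error. */
static int nh_fetch(unsigned int id, void *buf, size_t len)
{
	struct {
		struct nlmsghdr	n;
		struct nhmsg	nhm;
		char		attrs[64];
	} req = {
		.n.nlmsg_len	= NLMSG_LENGTH(sizeof(struct nhmsg)),
		.n.nlmsg_type	= RTM_GETNEXTHOP,
		.n.nlmsg_flags	= NLM_F_REQUEST,
	};
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	struct rtattr *rta;
	int fd, ret = -1;

	/* NHA_ID selects a single nexthop instead of a full dump */
	rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.n.nlmsg_len));
	rta->rta_type = NHA_ID;
	rta->rta_len = RTA_LENGTH(sizeof(id));
	memcpy(RTA_DATA(rta), &id, sizeof(id));
	req.n.nlmsg_len = NLMSG_ALIGN(req.n.nlmsg_len) + rta->rta_len;

	/* private socket: the main one may be in the middle of a dump */
	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0)
		return -1;

	if (sendto(fd, &req, req.n.nlmsg_len, 0,
		   (struct sockaddr *)&sa, sizeof(sa)) >= 0)
		ret = recv(fd, buf, len, 0);

	close(fd);
	return ret;
}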
