Message-ID: <ZyyN2bSgrpbhbkpp@shredder>
Date: Thu, 7 Nov 2024 11:52:25 +0200
From: Ido Schimmel <idosch@...sch.org>
To: Matt Muggeridge <Matt.Muggeridge@....com>
Cc: davem@...emloft.net, dsahern@...nel.org, edumazet@...gle.com,
	horms@...nel.org, kuba@...nel.org, linux-api@...r.kernel.org,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	pabeni@...hat.com, stable@...r.kernel.org
Subject: Re: [PATCH net 1/1] net/ipv6: Netlink flag for new IPv6 Default
 Routes

On Wed, Nov 06, 2024 at 10:53:03PM -0500, Matt Muggeridge wrote:
> Hi Ido,
> 
> > >>> Is the problem that fib6_table_lookup() chooses a reachable
> > >>> nexthop and then fib6_select_path() overrides it with an unreachable
> > >>> one?
> > 
> > >> I'm afraid I don't know.
> > >>
> > > We need to understand the current behavior before adding a new interface
> > > that we will never be able to remove. It is possible we can improve /
> > > fix the current code. I won't have time to look into it myself until
> > > next week.
> 
> I am grateful that you want to look into it. Thank you! And I look forward to
> learning what you discover.
> 
> You probably already know how to reproduce it, but in case it helps, I still
> have the packet captures and can share them with you. Let me know if you'd
> like me to share them (and how to share them).

It would be best if you could provide a reproducer using iproute2:
Configure a dummy device using ip-link, install the multipath route
using ip-route, configure the neighbour table using ip-neigh and then
perform route queries using "ip route get ..." showing the problem. We
can then use it as the basis for a new test case in
tools/testing/selftests/net/fib_tests.sh.
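
For example, something along these lines (the dummy device, lladdr and
the exact neighbour states needed to trigger the problem are just my
guess, adjust as needed):

# ip link add name dummy1 type dummy
# ip link set dev dummy1 up
# ip -6 route add default proto ra \
	nexthop via fe80::200:10ff:fe10:1060 dev dummy1 \
	nexthop via fe80::200:10ff:fe10:1061 dev dummy1
# ip -6 neigh add fe80::200:10ff:fe10:1060 lladdr 00:00:10:10:10:60 \
	nud permanent dev dummy1
# ip -6 neigh add fe80::200:10ff:fe10:1061 nud failed dev dummy1
# ip -6 route get 2001:db8:1::1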

BTW, do you have CONFIG_IPV6_ROUTER_PREF=y in your config?
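
One way to check, depending on how your kernel config is shipped:

$ grep CONFIG_IPV6_ROUTER_PREF /boot/config-$(uname -r)
$ zgrep CONFIG_IPV6_ROUTER_PREF /proc/config.gz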

> 
> > > 
> > > The objective is to allow IPv6 Netlink clients to be able to create default
> > > routes from RAs in the same way the kernel creates default routes from RAs.
> > > Essentially, I'm trying to have Netlink and Kernel behaviors match.
> > 
> > I understand, but it's essentially an extension for the legacy IPv6
> > multipath API which we are trying to move away from towards the nexthop
> > API (see more below).
> 
> Very interesting, I wasn't aware of this movement.
> 
> While this change is an extension of the legacy IPv6 multipath API, won't it
> still need to support Netlink clients that have been designed around it? I
> imagine that transitioning Netlink clients to the NH API will take many years?

FRR already supports it and I saw that there is some support for nexthop
objects in systemd:

https://github.com/systemd/systemd/pull/13735

> 
> As such, it still seems appropriate (to me) that this be implemented in the
> legacy API as well as ensuring it works with the NH API.

As I understand it, you currently get different results because the
kernel installs two default routes whereas user space can only create
one default multipath route. Before adding a new uAPI I want to
understand the source of the difference and see if we can improve / fix
the current multipath code so that the two behave the same. If we can
get them to behave the same, then I don't think user space will care
about two default routes versus one default multipath route.
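
For reference (addresses and device are just placeholders), the kernel's
RA processing ends up with two separate default routes:

# ip -6 route add default via fe80::1 dev dummy1 proto ra
# ip -6 route append default via fe80::2 dev dummy1 proto ra

whereas the legacy multipath API gives user space a single route with
two nexthops:

# ip -6 route add default proto ra \
	nexthop via fe80::1 dev dummy1 \
	nexthop via fe80::2 dev dummy1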

> 
> Another consideration...
> 
> Will the kernel RA processing go through the same nh pathway? The reason I
> ask is because I faced several challenges with IPv6 Logo certification due to
> Netlink clients being unable to achieve the same as the kernel's behavior.

If you are asking if the kernel can install RA routes using nexthop
objects, then the answer is no. Only user space can create nexthop
objects and I don't think we want to allow the kernel to do that.

> 
> As long as the kernel is creating RA routes in a way that meets RFC4861, then
> I'd hope that Netlink clients would be able to leverage that for 'free'.
> 
> > > 
> > > My analysis led me to the need for Netlink clients to set the kernel's
> > > fib6_config flags RTF_RA_ROUTER, where:
> > > 
> > >     #define RTF_RA_ROUTER		(RTF_ADDRCONF | RTF_DEFAULT)
> > > 
> > >>> +	if (rtm->rtm_flags & RTM_F_RA_ROUTER)
> > >>> +		cfg->fc_flags |= RTF_RA_ROUTER;
> > >>> +
> > >> 
> > >> It is possible there are user space programs out there that set this bit
> > >> (knowingly or not) when sending requests to the kernel and this change
> > >> will result in a behavior change for them. So, if we were to continue in
> > >> this path, this would need to be converted to a new netlink attribute to
> > >> avoid such potential problems.
> > >> 
> > > 
> > > Is this a mandated approach to implementing unspecified bits in a flag?
> > > 
> > > I'm a little surprised by this consideration. If we account for poorly
> > > written buggy user-programs, doesn't this open any API to an explosion
> > > of new attributes or other odd extensions? I'd imagine the same argument
> > > would be applicable to ioctl flags, socket flags, and so on. Why would we
> > > treat implementing unspecified Netlink bits differently to implementing
> > > unspecified ioctl bits, etc.?
> > > 
> > > Naturally, if this is the mandated approach, then I'll reimplement it with
> > > a new Netlink attribute. I'm just trying to understand what is the
> > > Linux-lore, here?
> > 
> > Using this bit could have been valid if previously the kernel rejected
> > requests with this bit set, but as evident by your patch the kernel does
> > not do it. It is therefore possible that there are user space programs
> > out there that are working perfectly fine right now and they will break
> > / misbehave after this change.
> > 
> 
> Understood and I agree.
> 
> > > 
> > >> BTW, you can avoid the coalescing problem by using the nexthop API (man
> > >> ip-nexthop).
> > > 
> > > I'm not sure how that would help in this case. We need the nexthop to be
> > > determined according to its REACHABILITY and other considerations described
> > > in RFC4861.
> > 
> > Using your example:
> > 
> > # ip nexthop add id 1 via fe80::200:10ff:fe10:1060 dev enp0s9
> > # ip -6 route add default nhid 1 expires 600 proto ra
> > # ip nexthop add id 2 via fe80::200:10ff:fe10:1061 dev enp0s9
> > # ip -6 route append default nhid 2 expires 600 proto ra
> > # ip -6 route
> > fe80::/64 dev enp0s9 proto kernel metric 256 pref medium
> > default nhid 1 via fe80::200:10ff:fe10:1060 dev enp0s9 proto ra metric 1024 expires 563sec pref medium
> > default nhid 2 via fe80::200:10ff:fe10:1061 dev enp0s9 proto ra metric 1024 expires 594sec pref medium
> 
> Thanks! That looks like it should work. I'll raise this with the developers
> of systemd-networkd.
> 
> Just to confirm; are these two nhid routes equivalent to having two separate
> default routes that are created when the kernel processes IPv6 RAs?
> 
> Specifically, if one of these nhid routes becomes UNREACHABLE, will that be
> taken into consideration during the routing decision? (I'm guessing so?)

I didn't test it, but I don't see a reason for these two routes to
behave differently than two default routes installed with legacy
nexthops.
