Date:   Sat, 18 Jan 2020 20:13:36 +0100
From:   Guillaume Nault <gnault@...hat.com>
To:     Tom Parkin <tparkin@...alix.com>
Cc:     Ridge Kennedy <ridgek@...iedtelesis.co.nz>, netdev@...r.kernel.org
Subject: Re: [PATCH net] l2tp: Allow duplicate session creation with UDP

On Fri, Jan 17, 2020 at 07:19:39PM +0000, Tom Parkin wrote:
> On Fri, Jan 17, 2020 at 15:25:58 +0100, Guillaume Nault wrote:
> > On Fri, Jan 17, 2020 at 01:18:49PM +0000, Tom Parkin wrote:
> > > More generally, for v3 having the session ID be unique to the LCCE is
> > > required to make IP-encap work at all.  We can't reliably obtain the
> > > tunnel context from the socket because we've only got a 3-tuple
> > > address to direct an incoming frame to a given socket; and the L2TPv3
> > > IP-encap data packet header only contains the session ID, so that's
> > > literally all there is to work with.
> > > 
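For reference, the L2TPv3 IP-encap data header (RFC 3931, sec.
4.1.1.2) really does start with the session ID and nothing else; a
rough sketch (the struct name is ours, not the kernel's):

#include <linux/types.h>

/* L2TPv3 data header with IP encap: the packet begins directly with
 * the 32-bit session ID, so it is the only demultiplexing key
 * available on receive.  Session ID 0 is reserved for control
 * messages.
 */
struct l2tpv3_ip_data_hdr {
	__be32	session_id;	/* the whole fixed header */
	__u8	cookie[8];	/* optional; 0, 4 or 8 bytes on the wire */
};
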
> > I don't see how that differs from the UDP case. We should still be able
> > to get the corresponding socket and look up the session ID in that
> > context. Or did I miss something? Sure, that means that the socket is
> > the tunnel, but is there anything wrong with that?
> 
> It doesn't fundamentally differ from the UDP case.
> 
> The issue is that if you're stashing tunnel context with the socket
> (as UDP currently does), then you're relying on the kernel's ability
> to deliver packets for a given tunnel on that tunnel's socket.
> 
> In the UDP case this is normally easily done, assuming each UDP tunnel
> socket has a unique 5-tuple address.  So if peers allow the use of
> ports other than port 1701, it's normally not an issue.
> 
> However, if you do get a 5-tuple clash, then packets may start
> arriving on the "wrong" socket.  In general this is a corner case
> assuming peers allow ports other than 1701 to be used, and so we don't
> see it terribly often.
> 
> Contrast this with IP-encap.  Because we don't have ports, the 5-tuple
> address now becomes a 3-tuple address.  Suddenly it's quite easy to
> get a clash: two IP-encap tunnels between the same two peers would do
> it.
> 
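To make the clash concrete, here are the two receive-side lookup keys
side by side (hypothetical structs for illustration only, not the
kernel's):

#include <stdint.h>
#include <netinet/in.h>

/* With UDP encap a tunnel socket is matched on a 5-tuple; with IP
 * encap the ports vanish, so any two IP-encap tunnels between the
 * same pair of addresses collapse onto the same 3-tuple.
 */
struct udp_encap_key {			/* UDP encap: 5-tuple */
	struct in_addr	saddr, daddr;
	uint16_t	sport, dport;	/* distinguish parallel tunnels */
	uint8_t		proto;		/* IPPROTO_UDP */
};

struct ip_encap_key {			/* IP encap: 3-tuple */
	struct in_addr	saddr, daddr;
	uint8_t		proto;		/* IPPROTO_L2TP, always 115 */
};
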
Well, the situation is the same with UDP when the peer always uses
source port 1701, which is a pretty common case, as you noted
previously.
I've never seen that as a problem in practice, since establishing more
than one tunnel between two LCCEs, or between a LAC and an LNS,
doesn't bring any advantage.

> Since we don't want to arbitrarily limit IP-encap tunnels to one per
> pair of peers, it's not practical to stash tunnel context with the
> socket in the IP-encap data path.
> 
Even though l2tp_ip doesn't look up the session in the context of the
socket, it is limited to one tunnel per pair of peers, because it
supports neither SO_REUSEADDR nor SO_REUSEPORT.
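
A minimal user-space sketch of that limitation (addresses are examples
and an L2TP-enabled kernel is assumed): the second bind() below should
fail with EADDRINUSE even though SO_REUSEADDR was requested, because
l2tp_ip ignores it.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <linux/l2tp.h>

int main(void)
{
	struct sockaddr_l2tpip addr;
	int one = 1;
	int fd1, fd2;

	memset(&addr, 0, sizeof(addr));
	addr.l2tp_family = AF_INET;
	addr.l2tp_conn_id = 1;			/* tunnel ID */
	inet_pton(AF_INET, "192.0.2.1", &addr.l2tp_addr);

	fd1 = socket(AF_INET, SOCK_DGRAM, IPPROTO_L2TP);
	setsockopt(fd1, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	if (bind(fd1, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		perror("bind fd1");

	fd2 = socket(AF_INET, SOCK_DGRAM, IPPROTO_L2TP);
	setsockopt(fd2, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	if (bind(fd2, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		perror("bind fd2");	/* expected: EADDRINUSE */

	return 0;
}
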

> > > If we relax the restriction for UDP-encap then it fixes your (Ridge's)
> > > use case; but it does impose some restrictions:
> > > 
> > >  1. The l2tp subsystem has an existing bug for UDP encap where
> > >  SO_REUSEADDR is used, as I've mentioned.  Where the 5-tuple address of
> > >  two sockets clashes, frames may be directed to either socket.  So
> > >  determining the tunnel context from the socket isn't valid in this
> > >  situation.
> > > 
> > >  For L2TPv2 we could fix this by looking the tunnel context up using
> > >  the tunnel ID in the header.
> > > 
> > >  For L2TPv3 there is no tunnel ID in the header.  If we allow
> > >  duplicated session IDs for L2TPv3/UDP, there's no way to fix the
> > >  problem.
> > > 
> > >  This sounds like a bit of a corner case, although it's surprising how
> > >  many implementations expect all traffic over port 1701, making
> > >  5-tuple clashes more likely.
> > > 
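The header difference at stake, roughly (optional fields omitted,
struct names ours; see RFC 2661 and RFC 3931, sec. 4.1.2.1):

#include <linux/types.h>

/* L2TPv2 data header over UDP: a 16-bit tunnel ID is on the wire, so
 * a 5-tuple clash can still be resolved from the header itself.
 */
struct l2tpv2_udp_data_hdr {
	__be16	flags_ver;	/* T=0 (data), version 2 */
	__be16	tunnel_id;
	__be16	session_id;
	/* optional Length, Ns/Nr and Offset fields omitted */
};

/* L2TPv3 data header over UDP: no tunnel ID at all, only the session
 * ID, hence the unfixable case described above once session IDs may
 * be duplicated.
 */
struct l2tpv3_udp_data_hdr {
	__be16	flags_ver;	/* T=0 (data), version 3 */
	__be16	reserved;
	__be32	session_id;
	/* optional cookie (0, 4 or 8 bytes) follows */
};
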
> > Hum, I think I understand your scenario better. I just wonder why one
> > would establish several tunnels over the same UDP or IP connection (and
> > I've also been surprised by all those implementations forcing 1701 as
> > source port).
> >
> 
> Indeed, it's not ideal :-(
> 
> > >  2. Part of the rationale for L2TPv3's approach to IDs is that it
> > >  allows the data plane to potentially be more efficient since a
> > >  session can be identified by session ID alone.
> > >  
> > >  The kernel hasn't really exploited that fact fully (UDP encap
> > >  still uses the socket to get the tunnel context), but if we make
> > >  this change we'll be restricting the optimisations we might make
> > >  in the future.
> > > 
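In other words, the v3 design permits a receive path that demuxes on
the session ID alone, with no socket or tunnel lookup first. A toy,
direct-mapped illustration (everything here is invented for the
example):

#include <stdint.h>

struct toy_session {
	uint32_t id;
	/* ... session state would live here ... */
};

/* Direct-mapped table, collision handling omitted: just enough to
 * show a lookup keyed on nothing but the 32-bit session ID.
 */
static struct toy_session *session_table[1u << 8];

static struct toy_session *session_lookup(uint32_t session_id)
{
	struct toy_session *s = session_table[session_id & 0xff];

	return (s && s->id == session_id) ? s : NULL;
}
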
> > > Ultimately it comes down to a judgement call.  Being unable to fix
> > > the SO_REUSEADDR bug would be the biggest practical headache I
> > > think.
> > And it would be good to have a consistent behaviour between IP and UDP
> > encapsulation. If one does a global session lookup, the other should
> > too.
> 
> That would also be my preference.
> 
Thinking more about the original issue, I think we could restrict the
scope of session IDs to the 3-tuple (for IP encap) or 5-tuple (for UDP
encap) of their parent tunnel. We could do that by adding the IP
addresses, protocol and ports to the hash key of the per-netns session
hash table (sketched after the list below).
This way:
 * Sessions would only be accessible from the peer with whom we
   established the tunnel.
 * We could use multiple sockets bound and connected to the same
   address pair, and look up the right session no matter which
   socket L2TP messages arrive on.
 * We would solve Ridge's problem, because we could reuse session IDs
   as long as the 3- or 5-tuple of the parent tunnel is different.
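
Roughly, the widened key could look like this (field names and the
hash are illustrative, not a patch; the kernel would run its own jhash
over the same fields):

#include <stdint.h>
#include <string.h>
#include <netinet/in.h>

/* Illustration only: scope the session ID by its parent tunnel's
 * addresses, protocol and (for UDP encap) ports.  The struct must be
 * zeroed before filling so padding bytes don't perturb the hash.
 */
struct l2tp_session_key {
	struct in_addr	saddr, daddr;	/* tunnel endpoints */
	uint16_t	sport, dport;	/* 0 for IP encap */
	uint8_t		proto;		/* IPPROTO_UDP or IPPROTO_L2TP */
	uint32_t	session_id;
};

static uint32_t session_key_hash(const struct l2tp_session_key *key)
{
	const uint8_t *p = (const uint8_t *)key;
	uint32_t h = 2166136261u;	/* FNV-1a, a stand-in for jhash */
	size_t i;

	for (i = 0; i < sizeof(*key); i++) {
		h ^= p[i];
		h *= 16777619u;
	}
	return h;
}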

That would be something for net-next though. For -net, we could go
with something like Ridge's patch, which is simpler, since a session
has never been able to span multiple tunnels anyway.
