Message-ID: <4ed5aff3-43e8-0138-1848-22a3a1176e46@gssi.it>
Date:   Fri, 6 Mar 2020 17:45:26 +0100
From:   Ahmed Abdelsalam <ahmed.abdelsalam@...i.it>
To:     David Ahern <dsahern@...il.com>,
        Carmine Scarpitta <carmine.scarpitta@...roma2.it>
Cc:     davem@...emloft.net, kuznet@....inr.ac.ru, yoshfuji@...ux-ipv6.org,
        kuba@...nel.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, dav.lebrun@...il.com,
        andrea.mayer@...roma2.it, paolo.lungaroni@...t.it,
        hiroki.shirokura@...ecorp.com
Subject: Re: [net-next 1/2] Perform IPv4 FIB lookup in a predefined FIB table

Hi David,

Thanks for the pointers for the VRF with MPLS.

We have been looking at this for the last few weeks, and have also watched 
your videos on the VRF and l3mdev implementation from the various netdev 
conferences.

However, in SRv6 we don't really need a VRF device. The SRv6 functions 
(the ones already supported as well as the End.DT4 submitted here) reside 
in the IPv6 FIB table.

The way it works is as follows:
1) Create a table for the tenant:
$ echo 100 tenant1 >> /etc/iproute2/rt_tables
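
For illustration, the tenant's prefixes would then be installed in this 
table; the prefix, next hop and tenant-facing interface below are 
hypothetical:

# Hypothetical tenant prefix, next hop and tenant-facing interface.
$ ip route add 10.1.0.0/24 via 192.168.1.2 dev enp0s10 table tenant1
$ ip route show table tenant1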

You instantiate an SRv6 End.DT4 function at the Egress PE to decapsulate 
the SRv6 encapsulation and look up the inner packet in the tenant1 table. 
The iproute2 command to do so is shown below.

$ ip -6 route add A::B encap seg6local action End.DT4 table tenant1 dev enp0s8

This installs an IPv6 FIB entry as shown below.
$ ip -6 r
a::b  encap seg6local action End.DT4 table 100 dev enp0s8 metric 1024 pref medium
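
As on any router, forwarding would normally be enabled on the PEs; a 
minimal sketch (whether specific SRv6 paths are gated by these sysctls 
may depend on the kernel):

# Standard router configuration on the PE nodes.
$ sysctl -w net.ipv6.conf.all.forwarding=1
$ sysctl -w net.ipv4.ip_forward=1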

Then the BGP routing daemon at the Egress PE is used to advertise this 
VPN service. The BGP sub-TLV to support SRv6 IPv4 L3VPN is defined in [2].

The SRv6 BGP extensions to support IPv4/IPv6 L3VPN are now merged in 
FRRouting/frr [3][4][5][6].

There is also a pull request for the CLI to configure the SRv6 locator in 
zebra [7].

The BGP daemon at the Ingress PE receives the BGP update and installs a 
FIB entry that is bound to SRv6 encapsulation.

$ ip r
30.0.0.0/24  encap seg6 mode encap segs 1 [ a::b ] dev enp0s9
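
For reference, the same entry can be installed by hand with iproute2 
(this is effectively what the BGP daemon does via netlink); the prefix, 
SID and device are the ones from the example above:

$ ip route add 30.0.0.0/24 encap seg6 mode encap segs a::b dev enp0s9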

Traffic destined to that tenant gets encapsulated at the ingress node and 
forwarded to the egress node over the IPv6 fabric.

The encapsulation is an outer IPv6 header whose destination address is 
the address of the VPN service (A::B) instantiated at the Egress PE.
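
The outer IPv6 source address used for the encapsulation can be set 
explicitly on the ingress PE; a minimal sketch, with a hypothetical 
address for the ingress node:

# Hypothetical ingress PE address used as the outer IPv6 source.
$ ip sr tunsrc set fc00::1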

When the packet arrives at the Egress PE, the destination address matches 
the FIB entry associated with the End.DT4 function, which performs the 
decapsulation and the lookup in the tenant table associated with it 
(tenant1).
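
This is easy to observe on the wire; a hedged example, capturing on the 
egress PE's fabric-facing interface (enp0s8 from the example above):

# Outer IPv6 packets addressed to the End.DT4 SID a::b.
$ tcpdump -ni enp0s8 ip6 and dst host a::b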

Everything I explained has been in the Linux kernel for a while. End.DT4 
was missing, and that is the reason we submitted this patch.

In this multi-tenant DC fabric we leverage plain IPv6 forwarding; there 
is no need for an MPLS dataplane in the fabric.
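
Transit nodes in the fabric need nothing SRv6-specific, only an ordinary 
IPv6 route covering the egress PE's SIDs; a sketch with a hypothetical 
locator prefix, next hop and interface:

# Hypothetical /64 locator covering A::B, hypothetical next hop/interface.
$ ip -6 route add a::/64 via fe80::1 dev eth0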

We can submit a v2 of the patch addressing your comments on the 
"tbl_known" flag.

Thanks,
Ahmed

[1] https://segment-routing.org/index.php/Implementation/AdvancedConf
[2] https://tools.ietf.org/html/draft-ietf-bess-srv6-services-02
[3] https://github.com/FRRouting/frr/commit/7f1ace03c78ca57c7f8b5df5796c66fddb47e5fe
[4] https://github.com/FRRouting/frr/commit/e496b4203055c50806dc7193b9762304261c4bbd
[5] https://github.com/FRRouting/frr/commit/63d02478b557011b8606668f1e3c2edbf263794d
[6] https://github.com/FRRouting/frr/commit/c6ca155d73585b1ca383facd74e9973c281f1f93
[7] https://github.com/FRRouting/frr/pull/5865


On 19/02/2020 05:29, David Ahern wrote:
> On 2/18/20 7:49 PM, Carmine Scarpitta wrote:
>> Hi David,
>> Thanks for the reply.
>>
>> The problem is not related to the table lookup. Calling fib_table_lookup and then rt_dst_alloc from seg6_local.c is good.
>>
> 
> you did not answer my question. Why do all of the existing policy
> options (mark, L3 domains, uid) to direct the lookup to the table of
> interest not work for this use case?
> 
> What you want is not unique. There are many ways to make it happen.
> Bleeding policy details to route.c and adding a flag that is always
> present and checked even when not needed (e.g.,
> CONFIG_IP_MULTIPLE_TABLES is disabled) is not the right way to do it.
> 
