Message-ID: <73b535a0-1f0c-14a9-95ab-faef66ae758b@gmail.com>
Date: Fri, 13 Sep 2019 11:41:09 -0600
From: David Ahern <dsahern@...il.com>
To: Gowen <gowen@...atocomputing.co.uk>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: VRF Issue Since kernel 5
[ FYI: you should not 'top post' in responses to netdev; rather comment
inline with the previous message ]
On 9/12/19 7:50 AM, Gowen wrote:
>
> Hi David - thanks for getting back to me
>
> The DNS servers are 10.24.65.203 or 10.24.64.203 which you want to go
> out mgmt-vrf. correct?
>
> No - 10.24.65.203 and 10.25.65.203, so they should hit the route leak
> rule as below (if I've put the 10.24.64.0/24 subnet anywhere, it is a typo)
>
> vmAdmin@...M06:~$ ip ro get 10.24.65.203 fibmatch
> 10.24.65.0/24 via 10.24.12.1 dev eth0
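
(For anyone following along: that leak is just a route in the default table
whose output device is enslaved to mgmt-vrf. A minimal sketch, assuming eth0
is the mgmt interface and reusing the prefix and gateway from the output
above:

    ip route add 10.24.65.0/24 via 10.24.12.1 dev eth0
    ip route get 10.24.65.203 fibmatch

which is what the fibmatch output shows.)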
>
> I've added the 127/8 route - no difference.
You mean the address on the mgmt-vrf device, right?
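i.e. something like this (sketch; substitute your actual VRF device name):

    ip address add 127.0.0.1/8 dev mgmt-vrf

an address on the VRF device, not a 127/8 route in a table.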
>
> The reason for what might look like an odd design is that I wanted any
> non-VRF-aware users to be able to log in and run all commands in the
> default context without issue, while production and mgmt traffic were
> still kept separate.
>
> DNS is now working as long as /etc/resolv.conf is populated with my DNS
> servers. A lot of people will be using this on Azure, which uses netplan,
> so they'll hit the same issue. Is there documentation I could update, or
> should I raise a bug, so that the systemd-resolve servers are checked as
> well?
That is going to be the fundamental system problem: handing DNS queries
off to systemd-resolved loses the VRF context of the process doing the
DNS query.
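
One way to see it (illustrative only; use whichever resolver tools are on
your image):

    # handled by the systemd-resolved stub on 127.0.0.53, which lives in
    # the default VRF
    resolvectl query www.example.com

    # run the lookup inside the mgmt VRF, asking your DNS server directly
    ip vrf exec mgmt-vrf host www.example.com 10.24.65.203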