Message-ID: <20240229063425.5ccbd06b@kernel.org>
Date: Thu, 29 Feb 2024 06:34:25 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Jiri Pirko <jiri@...nulli.us>
Cc: "Samudrala, Sridhar" <sridhar.samudrala@...el.com>, Greg Kroah-Hartman
<gregkh@...uxfoundation.org>, Tariq Toukan <ttoukan.linux@...il.com>, Saeed
Mahameed <saeed@...nel.org>, "David S. Miller" <davem@...emloft.net>, Paolo
Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>, Saeed
Mahameed <saeedm@...dia.com>, netdev@...r.kernel.org, Tariq Toukan
<tariqt@...dia.com>, Gal Pressman <gal@...dia.com>, Leon Romanovsky
<leonro@...dia.com>, jay.vosburgh@...onical.com
Subject: Re: [net-next V3 15/15] Documentation: networking: Add description
for multi-pf netdev
On Thu, 29 Feb 2024 09:21:26 +0100 Jiri Pirko wrote:
> >> Correct? Does the orchestration set up a bond on top of them or some other
> >> master device, or let the container use them independently?
> >
> >Just multi-nexthop routing and binding sockets to the netdev (with
> >some BPF magic, I think).
>
> Yeah, so basically 2 independent ports, 2 netdevices working
> independently. Not sure I see the parallel to the subject we discuss
> here :/
From the user's perspective it's almost exactly the same.
The user wants each NUMA node to have a way to reach the network without
crossing the interconnect. Whether you do that with two 200G NICs
or one 400G NIC connected to both nodes is an implementation detail.
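For reference, the "two independent ports" setup discussed above can be
sketched with plain routing tools. The interface names and addresses below
are hypothetical, purely for illustration:

```shell
# Hypothetical: eth0 is local to NUMA node 0, eth1 to NUMA node 1.
# One multipath default route spreads flows across both ports:
ip route add default \
    nexthop via 192.0.2.1 dev eth0 weight 1 \
    nexthop via 203.0.113.1 dev eth1 weight 1

# A workload pinned to node 1 can keep its traffic on the local port by
# binding its sockets to that device (SO_BINDTODEVICE, needs CAP_NET_RAW),
# e.g. in C: setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, "eth1", 5);
```

With a multi-PF netdev the same locality falls out of one netdevice instead
of the per-socket plumbing above, which is the parallel being drawn.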