Message-ID: <CAKgT0UcyyaWOw1BnH8XSW2Endpm+1EqHGtrwj1kjtkTDpNUprw@mail.gmail.com>
Date: Fri, 18 Dec 2020 07:54:02 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: David Ahern <dsahern@...il.com>
Cc: Jason Gunthorpe <jgg@...dia.com>,
Saeed Mahameed <saeed@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Leon Romanovsky <leonro@...dia.com>,
Netdev <netdev@...r.kernel.org>, linux-rdma@...r.kernel.org,
David Ahern <dsahern@...nel.org>,
Jacob Keller <jacob.e.keller@...el.com>,
Sridhar Samudrala <sridhar.samudrala@...el.com>,
"Ertman, David M" <david.m.ertman@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Kiran Patil <kiran.patil@...el.com>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [net-next v4 00/15] Add mlx5 subfunction support
On Thu, Dec 17, 2020 at 7:55 PM David Ahern <dsahern@...il.com> wrote:
>
> On 12/17/20 8:11 PM, Alexander Duyck wrote:
> > On Thu, Dec 17, 2020 at 5:30 PM David Ahern <dsahern@...il.com> wrote:
> >>
> >> On 12/16/20 3:53 PM, Alexander Duyck wrote:
> >>> The problem in my case was based on past experience where east-west
> >>> traffic became an issue and it was easily shown that bypassing the
> >>> NIC for that traffic was significantly faster.
> >>
> >> If a deployment expects a lot of east-west traffic *within a host*, why
> >> is it using hardware-based isolation like a VF? That is a side effect of
> >> a design choice that is remedied by other options.
> >
> > I am mostly talking about this from past experience, as I saw a few
> > instances while I was at Intel where it became an issue. Sales and
> > marketing people aren't exactly happy when you tell them "don't sell
> > that" in response to them trying to sell a feature into an area where
>
> that's a problem engineers can never solve...
>
> > it doesn't belong. Generally they want a solution. The macvlan offload
> > addressed these issues as the replication and local switching can be
> > handled in software.
>
> well, I guess almost never. :-)
>
> >
> > The problem is that PCIe DMA wasn't designed to function as a network
> > switch fabric. When we start talking about a 400Gb NIC trying to
> > handle over 256 subfunctions, hardware multicast/broadcast
> > replication will quickly reduce the receive/transmit throughput to
> > gigabit or less speeds. With 256 subfunctions a simple 60B ARP could
> > consume more than 19KB of PCIe bandwidth due to the packet having to
> > be duplicated so many times. In my mind it would be simpler to clone
> > a single skb 256 times, forward the clones to the switchdev ports,
> > and have them perform a bypass (if available) to deliver them to the
> > subfunctions. That's why I was thinking it might be a good time to
> > look at addressing it.
> >
>
> East-west traffic within a host is more than likely within the same
> tenant, in which case a proper VPC is a better solution than the s/w
> stack trying to detect and guess that a bypass is needed. Guesses cost
> cycles in the fast path, which is a net loss - and even more so as
> speeds increase.
Yes, but then the hardware limitations end up deciding the layout of
the network. I lean towards more flexibility and more configuration
options being a good thing, rather than us having to dictate how a
network is constructed based on the limitations of the hardware and
software.
For broadcast/multicast it isn't so much a guess. It would be a single
bit test (the group bit in the destination MAC). My understanding is
that the switchdev setup already special-cases things like
broadcast/multicast due to the extra overhead incurred. I mentioned ARP
because in many cases it has to be offloaded specifically due to these
sorts of issues.
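
To make the idea concrete, here is a rough, untested sketch of what that
check and the software replication could look like. The representor
array and the sf_deliver_bypass() hook are hypothetical; also note that
256 copies of a 60B frame is already ~15KB of payload before any PCIe
TLP/descriptor overhead, which is where the >19KB figure comes from:

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical hook that hands a clone straight to the subfunction
 * netdev paired with a representor, skipping the PCIe round trip.
 */
void sf_deliver_bypass(struct sk_buff *skb);

static void sf_flood_locally(struct sk_buff *skb,
			     struct net_device **reps, int nreps)
{
	const struct ethhdr *eth = eth_hdr(skb);
	int i;

	/* Single bit test: the group bit in the destination MAC covers
	 * both multicast and broadcast.
	 */
	if (!is_multicast_ether_addr(eth->h_dest))
		return;

	for (i = 0; i < nreps; i++) {
		struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

		if (!clone)
			continue;

		clone->dev = reps[i];
		sf_deliver_bypass(clone);
	}
}

The point isn't this exact code; it is that the decision is a cheap test
in the flood path and the replication happens in host memory rather than
as 256 separate DMA transfers across PCIe.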