Message-ID: <YcJJh9e2QCJOoEB/@lunn.ch>
Date: Tue, 21 Dec 2021 22:39:19 +0100
From: Andrew Lunn <andrew@...n.ch>
To: "Chen, Mike Ximing" <mike.ximing.chen@...el.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"arnd@...db.de" <arnd@...db.de>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"pierre-louis.bossart@...ux.intel.com"
<pierre-louis.bossart@...ux.intel.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>
Subject: Re: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
On Tue, Dec 21, 2021 at 08:56:42PM +0000, Chen, Mike Ximing wrote:
>
>
> > -----Original Message-----
> > From: Andrew Lunn <andrew@...n.ch>
> > Sent: Tuesday, December 21, 2021 4:54 AM
> > To: Chen, Mike Ximing <mike.ximing.chen@...el.com>
> > Cc: linux-kernel@...r.kernel.org; arnd@...db.de; gregkh@...uxfoundation.org; Williams, Dan J
> > <dan.j.williams@...el.com>; pierre-louis.bossart@...ux.intel.com; netdev@...r.kernel.org;
> > davem@...emloft.net; kuba@...nel.org
> > Subject: Re: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
> >
> > > +The following diagram shows a typical packet processing pipeline with the Intel DLB.
> > > +
> > > + WC1 WC4
> > > + +-----+ +----+ +---+ / \ +---+ / \ +---+ +----+ +-----+
> > > + |NIC | |Rx | |DLB| / \ |DLB| / \ |DLB| |Tx | |NIC |
> > > + |Ports|---|Core|---| |-----WC2----| |-----WC5----| |---|Core|---|Ports|
> > > + +-----+ -----+ +---+ \ / +---+ \ / +---+ +----+ ------+
> > > + \ / \ /
> > > + WC3 WC6
> >
> > This is the only mention of NIC here. Does the application interface to the network stack in the usual way
> > to receive packets from the TCP/IP stack up into user space and then copy it back down into the MMIO
> > block for it to enter the DLB for the first time? And at the end of the path, does the application copy it
> > from the MMIO into a standard socket for TCP/IP processing to be send out the NIC?
> >
> For load balancing and distribution purposes, we do not handle packets directly in DLB. Instead, we only
> send QEs (queue events) to MMIO for DLB to process. In a network application, QEs (64 bytes each) can
> contain pointers to the actual packets. The worker cores can use these pointers to process packets and
> forward them to the next stage. At the end of the path, the last worker core can send the packets out to the NIC.
Sorry for asking so many questions, but I'm trying to understand the
architecture. As a network maintainer, and somebody who reviews
network drivers, I was trying to be sure there is not an actual
network MAC and PHY driver hiding in this code.
So you talk about packets. Do you actually mean frames? As in Ethernet
frames? TCP/IP processing has not occurred? Or does this plug into the
network stack at some level? After TCP reassembly has occurred? Are
these pointers to skbs?
> > Do you even needs NICs here? Could the data be coming of a video camera and you are distributing image
> > processing over a number of cores?
> No, the diagram is just an example for packet processing applications. The data can come from other sources,
> such as video cameras. The DLB can schedule up to 100 million packets/events per second. The frame rate from
> a single camera is normally much, much lower than that.
So I'm trying to understand the scope of this accelerator. Is it just
a network accelerator? If so, are you pointing to skbs? How are the
lifetimes of the skbs managed? How do you get skbs out of the NIC? Are
you using XDP?
Andrew