Message-ID: <CO1PR11MB5170C1925DFB4BFE4B7819F5D97C9@CO1PR11MB5170.namprd11.prod.outlook.com>
Date:   Tue, 21 Dec 2021 23:05:41 +0000
From:   "Chen, Mike Ximing" <mike.ximing.chen@...el.com>
To:     Andrew Lunn <andrew@...n.ch>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "arnd@...db.de" <arnd@...db.de>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "pierre-louis.bossart@...ux.intel.com" 
        <pierre-louis.bossart@...ux.intel.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "kuba@...nel.org" <kuba@...nel.org>
Subject: RE: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver



> -----Original Message-----
> From: Andrew Lunn <andrew@...n.ch>
> Sent: Tuesday, December 21, 2021 4:39 PM
> To: Chen, Mike Ximing <mike.ximing.chen@...el.com>
> Cc: linux-kernel@...r.kernel.org; arnd@...db.de; gregkh@...uxfoundation.org; Williams, Dan J
> <dan.j.williams@...el.com>; pierre-louis.bossart@...ux.intel.com; netdev@...r.kernel.org;
> davem@...emloft.net; kuba@...nel.org
> Subject: Re: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
> 
> On Tue, Dec 21, 2021 at 08:56:42PM +0000, Chen, Mike Ximing wrote:
> >
> >
> > > -----Original Message-----
> > > From: Andrew Lunn <andrew@...n.ch>
> > > Sent: Tuesday, December 21, 2021 4:54 AM
> > > To: Chen, Mike Ximing <mike.ximing.chen@...el.com>
> > > Cc: linux-kernel@...r.kernel.org; arnd@...db.de;
> > > gregkh@...uxfoundation.org; Williams, Dan J
> > > <dan.j.williams@...el.com>; pierre-louis.bossart@...ux.intel.com;
> > > netdev@...r.kernel.org; davem@...emloft.net; kuba@...nel.org
> > > Subject: Re: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
> > >
> > > > +The following diagram shows a typical packet processing pipeline with the Intel DLB.
> > > > +
> > > > +                              WC1              WC4
> > > > + +-----+   +----+   +---+  /      \  +---+  /      \  +---+   +----+   +-----+
> > > > + |NIC  |   |Rx  |   |DLB| /        \ |DLB| /        \ |DLB|   |Tx  |   |NIC  |
> > > > + |Ports|---|Core|---|   |-----WC2----|   |-----WC5----|   |---|Core|---|Ports|
> > > > + +-----+   +----+   +---+ \        / +---+ \        / +---+   +----+   +-----+
> > > > +                           \      /         \      /
> > > > +                              WC3              WC6
> > >
> > > This is the only mention of NIC here. Does the application interface
> > > to the network stack in the usual way to receive packets from the
> > > TCP/IP stack up into user space and then copy it back down into the
> > > MMIO block for it to enter the DLB for the first time? And at the end of the path, does the application
> > > copy it from the MMIO into a standard socket for TCP/IP processing to be sent out the NIC?
> > >
> > For load balancing and distribution purposes, we do not handle packets
> > directly in DLB. Instead, we only send QEs (queue events) to MMIO for
> > DLB to process. In a network application, QEs (64 bytes each) can
> > contain pointers to the actual packets. The worker cores can use these pointers to process packets and
> > forward them to the next stage. At the end of the path, the last worker core can send the packets out to the NIC.
> 
> Sorry for asking so many questions, but I'm trying to understand the architecture. As a network maintainer,
> and somebody who reviews network drivers, I was trying to be sure there is not an actual network MAC
> and PHY driver hiding in this code.
> 
> So you talk about packets. Do you actually mean frames? As in Ethernet frames? TCP/IP processing has not
> occurred? Or does this plug into the network stack at some level? After TCP reassembly has occurred? Are
> these pointers to skbufs?
> 
There is no network MAC or PHY driver in the code. In fact, neither the DLB nor the driver has any direct access to
the network ports/sockets. In the above diagram, the Rx/Tx CPU cores receive/transmit packets (or frames)
from/to the NIC. These can be either L2 or L3 packets/frames. The Rx CPU core sends the corresponding QEs, with
proper metadata (such as pointers to the packets/frames), to the DLB, which distributes the QEs to a set of worker cores.
The worker cores receive the QEs, process the corresponding packets/frames, and send QEs back to the DLB for
next-stage processing. After several stages of processing, the worker cores in the last stage send the QEs
to the Tx core, which then transmits the packets/frames out the NIC ports. So the span between the Rx core and the
Tx core is where the DLB and the driver operate; the DLB operation itself does not involve any network access.
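
For concreteness, a QE in this model can be pictured as a small fixed-size descriptor. The layout below is a
purely illustrative sketch; the struct and field names are hypothetical, not the actual hardware format or
driver API:

#include <stdint.h>

/*
 * Hypothetical 64-byte queue event (QE). Only this descriptor travels
 * through the DLB; the packet/frame itself stays in memory owned by
 * the application, and the QE merely carries a pointer to it.
 */
struct qe {
	void     *pkt;        /* pointer to the packet/frame buffer */
	uint64_t  opaque;     /* application-defined metadata       */
	uint16_t  queue_id;   /* destination DLB queue              */
	uint8_t   priority;   /* scheduling priority                */
	uint8_t   sched_type; /* e.g. ordered/atomic/unordered      */
	uint8_t   pad[44];    /* pad the descriptor out to 64 bytes */
};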

I am not very familiar with skbufs, but they sound like queue buffers in the kernel. Most DLB applications run
in user space, so these pointers can refer to any buffers that an application uses. The DLB does not process any
packets/frames itself; it distributes QEs to the worker cores, which process the corresponding packets/frames.
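
As a user-space illustration of what those pointers might be, the Rx core could wrap each received frame in a
QE and hand it to the DLB. This is again a hypothetical sketch: rx_burst() and dlb_send() stand in for whatever
packet I/O and enqueue primitives the application actually uses:

/* Hypothetical Rx-side loop: the DLB only ever sees the 64-byte QEs. */
extern int rx_burst(void **frames, int max);  /* user-space packet I/O (hypothetical) */
extern int dlb_send(struct qe *ev);           /* enqueue a QE to the DLB (hypothetical) */

static void rx_loop(int first_stage_queue)
{
	void *frames[32];
	int i, n;

	for (;;) {
		n = rx_burst(frames, 32);        /* frames from the NIC  */
		for (i = 0; i < n; i++) {
			struct qe ev = {
				.pkt      = frames[i],
				.queue_id = first_stage_queue,
			};
			dlb_send(&ev);           /* metadata only to DLB */
		}
	}
}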

> > > Do you even needs NICs here? Could the data be coming of a video
> > > camera and you are distributing image processing over a number of cores?
> > No, the diagram is just an example for packet processing applications.
> > The data can come from other sources, such as video cameras. The DLB can
> > schedule up to 100 million packets/events per second. The frame rate from a single camera is normally
> > much, much lower than that.
> 
> So I'm trying to understand the scope of this accelerator. Is it just a network accelerator? If so, are you
> pointing to skbufs? How are the lifetimes of skbufs managed? How do you get skbufs out of the NIC? Are
> you using XDP?

This is not a network accelerator in the sense that it does not have direct access to the network sockets/ports,
and we do not use XDP. What it does is distribute workloads (such as packet processing) efficiently among CPU
cores, thereby increasing the total packet/frame processing throughput of the processor (such as Intel's Xeon
processors). Imagine, for example, that the Rx core receives a burst of 1000 packets/frames with random payloads;
how to distribute the processing of those packets across (say) 16 CPU cores is the job of the DLB hardware. The
driver is responsible for resource management, system configuration and reset, multiple user/application support,
and virtualization enablement.
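
To make that distribution model concrete, each of those 16 worker cores would run a loop along these lines.
As before, dlb_recv()/dlb_send() are hypothetical stand-ins; the DLB decides which worker receives each QE,
so the workers need no knowledge of one another:

/* Hypothetical per-worker loop using the struct qe sketched earlier. */
extern int dlb_recv(struct qe *ev);     /* dequeue a QE (hypothetical)       */
extern void process_packet(void *pkt);  /* application-specific packet work  */

static void worker_loop(int next_stage_queue)
{
	struct qe ev;

	for (;;) {
		if (dlb_recv(&ev) != 0)      /* wait for the DLB to hand   */
			continue;            /* this core a QE             */
		process_packet(ev.pkt);      /* touch the actual frame     */
		ev.queue_id = next_stage_queue;
		dlb_send(&ev);               /* QE returns to the DLB for  */
		                             /* the next stage's workers   */
	}
}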

Thanks
Mike
