[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <55C9AA08.4000000@suse.de>
Date: Tue, 11 Aug 2015 09:53:44 +0200
From: Alexander Graf <agraf@...e.de>
To: Stuart Yoder <stuart.yoder@...escale.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
Jose Rivera <German.Rivera@...escale.com>,
Katz Itai <itai.katz@...escale.com>
Cc: "devel@...verdev.osuosl.org" <devel@...verdev.osuosl.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"arnd@...db.de" <arnd@...db.de>
Subject: Re: [PATCH v2] staging: fsl-mc: add DPAA2 overview readme
On 11.08.15 04:38, Stuart Yoder wrote:
>
>
>> -----Original Message-----
>> From: Alexander Graf [mailto:agraf@...e.de]
>> Sent: Sunday, August 09, 2015 9:25 AM
>> To: Yoder Stuart-B08248; gregkh@...uxfoundation.org; Rivera Jose-B46482; katz Itai-RM05202
>> Cc: devel@...verdev.osuosl.org; linux-kernel@...r.kernel.org; arnd@...db.de
>> Subject: Re: [PATCH v2] staging: fsl-mc: add DPAA2 overview readme
>>
>>
>>
>> On 07.08.15 03:09, Stuart Yoder wrote:
>>> add README file providing an overview of the DPAA2 architecture
>>> and how it is integrated in Linux
>>>
>>> Signed-off-by: Stuart Yoder <stuart.yoder@...escale.com>
>>> ---
>>> -v2: added changelog text
>>>
>>> drivers/staging/fsl-mc/README.txt | 364 ++++++++++++++++++++++++++++++++++++++
>>> drivers/staging/fsl-mc/TODO | 4 -
>>> 2 files changed, 364 insertions(+), 4 deletions(-)
>>> create mode 100644 drivers/staging/fsl-mc/README.txt
>>>
>>> diff --git a/drivers/staging/fsl-mc/README.txt b/drivers/staging/fsl-mc/README.txt
>>> new file mode 100644
>>> index 0000000..8214102
>>> --- /dev/null
>>> +++ b/drivers/staging/fsl-mc/README.txt
>>> @@ -0,0 +1,364 @@
>>> +Copyright (C) 2015 Freescale Semiconductor Inc.
>>> +
>>> +DPAA2 (Data Path Acceleration Architecture Gen2)
>>> +------------------------------------------------
>>> +
>>> +This document provides an overview of the Freescale DPAA2 architecture
>>> +and how it is integrated into the Linux kernel.
>>> +
>>> +Contents summary
>>> + -DPAA2 overview
>>> + -Overview of DPAA2 objects
>>> + -DPAA2 Linux driver architecture overview
>>> + -bus driver
>>> + -dprc driver
>>> + -allocator
>>> + -dpio driver
>>> + -Ethernet
>>> + -mac
>>> +
>>> +DPAA2 Overview
>>> +--------------
>>> +
>>> +DPAA2 is a hardware architecture designed for high-speeed network
>>> +packet processing. DPAA2 consists of sophisticated mechanisms for
>>> +processing Ethernet packets, queue management, buffer management,
>>> +autonomous L2 switching, virtual Ethernet bridging, and accelerator
>>> +(e.g. crypto) sharing.
>>> +
>>> +A DPAA2 hardware component called the Management Complex (or MC) manages the
>>> +DPAA2 hardware resources. The MC provides an object-based abstraction for
>>> +software drivers to use the DPAA2 hardware.
>>> +
>>> +The MC uses DPAA2 hardware resources such as queues, buffer pools, and
>>> +network ports to create functional objects/devices such as network
>>> +interfaces, an L2 switch, or accelerator instances.
>>> +
>>> +The MC provides memory-mapped I/O command interfaces (MC portals)
>>> +which DPAA2 software drivers use to operate on DPAA2 objects:
>>> +
>>> + +--------------------------------------+
>>> + | OS |
>>> + | DPAA2 drivers |
>>> + | | |
>>> + +-----------------------------|--------+
>>> + |
>>> + | (create,discover,connect
>>> + | config,use,destroy)
>>> + |
>>> + DPAA2 |
>>> + +------------------------| mc portal |-+
>>> + | | |
>>> + | +- - - - - - - - - - - - -V- - -+ |
>>> + | | | |
>>> + | | Management Complex (MC) | |
>>> + | | | |
>>> + | +- - - - - - - - - - - - - - - -+ |
>>> + | |
>>> + | Hardware Hardware |
>>> + | Resources Objects |
>>> + | --------- ------- |
>>> + | -queues -DPRC |
>>> + | -buffer pools -DPMCP |
>>> + | -Eth MACs/ports -DPIO |
>>> + | -network interface -DPNI |
>>> + | profiles -DPMAC |
>>> + | -queue portals -DPBP |
>>> + | -MC portals ... |
>>> + | ... |
>>> + | |
>>> + +--------------------------------------+
>>> +
>>> +The MC mediates operations such as create, discover,
>>> +connect, configuration, and destroy. Fast-path operations
>>> +on data, such as packet transmit/receive, are not mediated by
>>> +the MC and are done directly using memory mapped regions in
>>> +DPIO objects.
>>> +
>>> +Overview of DPAA2 Objects
>>> +-------------------------
>>> +The section provides a brief overview of some key objects
>>> +in the DPAA2 hardware. A simple scenario is described illustrating
>>> +the objects involved in creating a network interfaces.
>>> +
>>> +-DPRC (Datapath Resource Container)
>>> +
>>> + A DPRC is an container object that holds all the other
>>> + types of DPAA2 objects. In the example diagram below there
>>> + are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
>>> + in the container.
>>> +
>>> + +---------------------------------------------------------+
>>> + | DPRC |
>>> + | |
>>> + | +-------+ +-------+ +-------+ +-------+ +-------+ |
>>> + | | DPMCP | | DPIO | | DPBP | | DPNI | | DPMAC | |
>>> + | +-------+ +-------+ +-------+ +---+---+ +---+---+ |
>>> + | | DPMCP | | DPIO | |
>>> + | +-------+ +-------+ |
>>> + | | DPMCP | |
>>> + | +-------+ |
>>> + | |
>>> + +---------------------------------------------------------+
>>> +
>>> + From the point of view of an OS, a DPRC is bus-like. Like
>>> + a plug-and-play bus, such as PCI, DPRC commands can be used to
>>> + enumerate the contents of the DPRC, discover the hardware
>>> + objects present (including mappable regions and interrupts).
>>> +
>>> + dprc.1 (bus)
>>> + |
>>> + +--+--------+-------+-------+-------+
>>> + | | | | |
>>> + dpmcp.1 dpio.1 dpbp.1 dpni.1 dpmac.1
>>> + dpmcp.2 dpio.2
>>> + dpmcp.3
>>> +
>>> + Hardware objects can be created and destroyed dynamically, providing
>>> + the ability to hot plug/unplug objects in and out of the DPRC.
>>> +
>>> + A DPRC has a mappable mmio region (an MC portal) that can be used
>>> + to send MC commands. It has an interrupt for status events (like
>>> + hotplug).
>>> +
>>> + All objects in a container share the same hardware "isolation context".
>>> + This means that with respect to an IOMMU the isolation granularity
>>> + is at the DPRC (container) level, not at the individual object
>>> + level.
>>> +
>>> + DPRCs can be defined statically and populated with objects
>>> + via a config file passed to the MC when firmware starts
>>> + it. There is also a Linux user space tool called "restool"
>>> + that can be used to create/destroy containers and objects
>>> + dynamically.
>>
>> Is this tool publicly available yet?
>
> Not yet.
>
>> Also, I find the naming unfortunate
>> for a tool that potentially will get included in general purpose
>> distributions. Naming it "dpaa2-restool" for example would make it much
>> clearer what its purpose is and would give you a nice namespace to add
>> more tools to later.
>
> Probably a good idea.
>
>>> +
>>> +-DPAA2 Objects for an Ethernet Network Interface
>>> +
>>> + A typical Ethernet NIC is monolithic-- the NIC device contains TX/RX
>>> + queuing mechanisms, configuration mechanisms, buffer management,
>>> + physical ports, and interrupts. DPAA2 uses a more granular approach
>>> + utilizing multiple hardware objects. Each object has specialized
>>> + functions, and are used together by software to provide Ethernet network
>>> + interface functionality. This approach provides efficient use of finite
>>> + hardware resources, flexibility, and performance advantages.
>>> +
>>> + The diagram below shows the objects needed for a simple
>>> + network interface configuration on a system with 2 CPUs.
>>> +
>>> + +---+---+ +---+---+
>>> + CPU0 CPU1
>>> + +---+---+ +---+---+
>>> + | |
>>> + +---+---+ +---+---+
>>> + DPIO DPIO
>>> + +---+---+ +---+---+
>>> + \ /
>>> + \ /
>>> + \ /
>>> + +---+---+
>>> + DPNI --- DPBP,DPMCP
>>> + +---+---+
>>> + |
>>> + |
>>> + +---+---+
>>> + DPMAC
>>> + +---+---+
>>> + |
>>> + port/PHY
>>> +
>>> + Below the objects are described. For each object a brief description
>>> + is provided along with a summary of the kinds of operations the object
>>> + supports and a summary of key resources of the object (mmio regions
>>> + and irqs).
>>> +
>>> + -DPMAC (Datapath Ethernet MAC): represents an Ethernet MAC, a
>>> + hardware device that connects to an Ethernet PHY and allows
>>> + physical transmission and reception of Ethernet frames.
>>> + -mmio regions: none
>>> + -irqs: dpni link change
>>> + -commands: set link up/down, link config, get stats,
>>> + irq config, enable, reset
>>> +
>>> + -DPNI (Datapath Network Interface): contains TX/RX queues,
>>> + network interface configuration, and rx buffer pool configuration
>>> + mechanisms.
>>> + -mmio regions: none
>>> + -irqs: link state
>>> + -commands: port config, offload config, queue config,
>>> + parse/classify config, irq config, enable, reset
>>> +
>>> + -DPIO (Datapath I/O): provides interfaces to enqueue and dequeue
>>> + packets and do hardware buffer pool management operations. For
>>
>> I think you may want to explain the difference between TX/RX queues and
>> "interfaces to enqueue and dequeue packets" ;).
>
> So, the queues are literally just the queues themselves (identified by
> queue #). They are accessible via the DPIO mmio region. So to enqueue
> something you write a descriptor to the DPIO mmio region, which includes
> the target queue #.
>
> So the architecture separates the interface to access the queues from
> the queues themselves. DPIOs are intended to be shared among all
> DPAA2 drivers in the kernel that interact with queues.
>
> Since a CPU can only do one thing at a time, you just need 1 DPIO
> queuing interface per CPU for optimal performance. But, there are
> thousands of queues that could exists in the system.
>
> Will expand the description to clarify this.
Awesome :). With that clarification the diagrams make much more sense.
>
>>> + optimum performance there is typically DPIO per CPU. This allows
>>
>> typically [one] DPIO?
>
> Yes, as mentioned above, one DPIO per CPU is optimal. But, you could
> have just 1 DPIO period and 8 cpus sharing it.
So I did guess right, just wanted to make sure you add the "one" :).
>
>>> + each CPU to perform simultaneous enqueue/dequeue operations.
>>> + -mmio regions: queue operations, buffer mgmt
>>> + -irqs: data availability, congestion notification, buffer
>>> + pool depletion
>>> + -commands: irq config, enable, reset
>>> +
>>> + -DPBP (Datapath Buffer Pool): represents a hardware buffer
>>> + pool.
>>> + -mmio regions: none
>>> + -irqs: none
>>> + -commands: enable, reset
>>> +
>>> + -DPMCP (Datapath MC Portal): provides an MC command portal.
>>> + Used by drivers to send commands to the MC to manage
>>> + objects.
>>> + -mmio regions: MC command portal
>>> + -irqs: command completion
>>> + -commands: irq config, enable, reset
>>> +
>>> + Object Connections
>>> + ------------------
>>> + Some objects have explicit relationships that must
>>> + be configured:
>>> +
>>> + -DPNI <--> DPMAC
>>> + -DPNI <--> DPNI
>>> + -DPNI <--> L2-switch-port
>>> + A DPNI must be connected to something such as a DPMAC,
>>> + another DPNI, or L2 switch port. The DPNI connection
>>> + is made via a DPRC command.
>>> +
>>> + +-------+ +-------+
>>> + | DPNI | | DPMAC |
>>> + +---+---+ +---+---+
>>> + | |
>>> + +==========+
>>> +
>>> + -DPNI <--> DPBP
>>> + A network interface requires a 'buffer pool' (DPBP
>>> + object) which provides a list of pointers to memory
>>> + where received Ethernet data is to be copied. The
>>> + Ethernet driver configures the DPBPs associated with
>>> + the network interface.
>>> +
>>> + Interrupts
>>> + ----------
>>> + All interrupts generated by DPAA2 objects are message
>>> + interrupts. At the hardware level message interrupts
>>> + generated by devices will normally have 3 components--
>>> + 1) a non-spoofable 'device-id' expressed on the hardware
>>> + bus, 2) an address, 3) a data value.
>>> +
>>> + In the case of DPAA2 devices/objects, all objects in the
>>> + same container/DPRC share the same 'device-id'.
>>> + For ARM-based SoC this is the same as the stream ID.
>>> +
>>> +
>>> +DPAA2 Linux Driver Overview
>>> +---------------------------
>>> +
>>> +This section provides an overview of the Linux kernel drivers for
>>> +DPAA2-- 1) the bus driver and associated "DPAA2 infrastructure"
>>> +drivers and 2) functional object drivers (such as Ethernet).
>>> +
>>> +As described previously, a DPRC is a container that holds the other
>>> +types of DPAA2 objects. It is functionally similar to a plug-and-play
>>> +bus controller.
>>> +
>>> +Each object in the DPRC is a Linux "device" and is bound to a driver.
>>> +The diagram below shows the Linux drivers involved in a networking
>>> +scenario and the objects bound to each driver. A brief description
>>> +of each driver follows.
>>> +
>>> + +------------+
>>> + | OS Network |
>>> + | Stack |
>>> + +------------+ +------------+
>>> + | Allocator |. . . . . . . | Ethernet |
>>> + |(dpmcp,dpbp)| | (dpni) |
>>> + +-.----------+ +---+---+----+
>>> + . . ^ |
>>> + . . <data avail, | |<enqueue,
>>> + . . tx confirm> | | dequeue>
>>> + +-------------+ . | |
>>> + | DPRC driver | . +---+---V----+ +---------+
>>> + | (dprc) | . . . . . .| DPIO driver| | MAC |
>>> + +----------+--+ | (dpio) | | (dpmac) |
>>> + | +------+-----+ +-----+---+
>>> + |<dev add/remove> | |
>>> + | | |
>>> + +----+--------------+ | +--+---+
>>> + | mc-bus driver | | | PHY |
>>> + | | | |driver|
>>> + | /fsl-mc@...000000 | | +--+---+
>>> + +-------------------+ | |
>>> + | |
>>> + ================================ HARDWARE =========|=================|======
>>> + DPIO |
>>> + | |
>>> + DPNI---DPBP |
>>> + | |
>>> + DPMAC |
>>> + | |
>>> + PHY ---------------+
>>> + ===================================================|========================
>>> +
>>> +A brief description of each driver is provided below.
>>> +
>>> + mc-bus driver
>>> + -------------
>>> + The mc-bus driver is a platform driver and is probed from an
>>> + "/fsl-mc@...x" node in the device tree passed in by boot firmware.
>>
>> Probably better to describe the actual binding here which is based on
>> compatible.
>
> Ok
>
>>> + It is responsible for bootstrapping the DPAA2 kernel infrastructure.
>>> + Key functions include:
>>> + -registering a new bus type named "fsl-mc" with the kernel,
>>> + and implementing bus call-backs (e.g. match/uevent/dev_groups)
>>> + -implemeting APIs for DPAA2 driver registration and for device
>>
>> implemeting? ;)
>
> Thanks...
>
>>> + add/remove
>>> + -creates an MSI irq domain
>>> + -do a device add of the 'root' DPRC device, which is needed
>>> + to bootstrap things
>>
>> I think you can find a better way to describe exposure of the root
>> container than "to bootstrap things".
>
> Runtime management of the DPRC/container is by the "DPRC driver" (see
> below). That driver scans the bus, does device_add() operations,
> handles hotplug events.
>
> But, how did that DPRC itself driver start? It _is_ a DPAA2 driver as
> well... how did the device_add() happen for the root DPRC object? That is the
> bootstrapping role of the mc-bus driver. It manufactures the device info
> needed to add a device to the bus, calls device_add, which in turn causes
> the DPRC driver to get probed, and things start from there.
>
> After that initial boostrapping, the mc-bus platform driver never does
> anything again.
>
> Did that help? If so, I'll try to clarify the text. If it still is unclear
> I'll try to explain more.
I do understand what the code does, I just found the wording clumsy.
Basically in a nutshell, the mc-bus driver exposes the root DPRC.
>
>>> +
>>> + DPRC driver
>>> + -----------
>>> + The dprc-driver is bound DPRC objects and does runtime management
>>
>> bound [to] DPRC
>
> Thanks...
>
>>> + of a bus instance. It performs the initial bus scan of the DPRC
>>> + and handles interrupts for container events such as hot plug.
>>> +
>>> + Allocator
>>> + ----------
>>> + Certain objects such as DPMCP and DPBP are generic and fungible,
>>> + and are intended to be used by other drivers. For example,
>>> + the DPAA2 Ethernet driver needs:
>>> + -DPMCPs to send MC commands, to configure network interfaces
>>> + -DPBPs for network buffer pools
>>> +
>>> + The allocator driver registers for these allocatable object types
>>> + and those objects are bound to the allocator when the bus is probed.
>>> + The allocator maintains a pool of objects that are available for
>>> + allocation by other DPAA2 drivers.
>>> +
>>> + DPIO driver
>>> + -----------
>>> + The DPIO driver is bound to DPIO objects and provides services that allow
>>> + other drivers such as the Ethernet driver to receive and transmit data.
>>> + Key services include:
>>> + -data availability notifications
>>> + -hardware queuing operations (enqueue and dequeue of data)
>>> + -hardware buffer pool management
>>> +
>>> + There is typically one DPIO object per physical CPU for optimum
>>> + performance, allowing each CPU to simultaneously enqueue
>>> + and dequeue data.
>>> +
>>> + The DPIO driver operates on behalf of all DPAA2 drivers
>>> + active in the kernel-- Ethernet, crypto, compression,
>>> + etc.
>>
>> I'm not quite sure what this means? Where do I MMIO into when I want to
>> transmit a packet?
>
> The MMIO region is in the DPIO, but you (e.g. Ethernet) don't touch it
> directly. You call a DPIO driver API (lightweight). The DPIO driver provides
> APIs to put/get data on/from queues for all drivers that access the queuing
> system-- Ethernet, crypto, compression, pattern matcher, etc.
That's an interesting design, thanks for the clarification.
Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists