Message-ID: <015EB9CD-ADB9-4C12-BD3F-78268E849884@hpe.com>
Date:   Mon, 31 Jan 2022 18:52:25 +0000
From:   "Verdun, Jean-Marie" <verdun@....com>
To:     Krzysztof Kozlowski <krzysztof.kozlowski@...onical.com>,
        Arnd Bergmann <arnd@...db.de>,
        "Hawkins, Nick" <nick.hawkins@....com>
CC:     Rob Herring <robh+dt@...nel.org>,
        Russell King <linux@...linux.org.uk>,
        Shawn Guo <shawnguo@...nel.org>,
        Stanislav Jakubek <stano.jakubek@...il.com>,
        Sam Ravnborg <sam@...nborg.org>,
        Linus Walleij <linus.walleij@...aro.org>,
        Hao Fang <fanghao11@...wei.com>,
        "Russell King (Oracle)" <rmk+kernel@...linux.org.uk>,
        Geert Uytterhoeven <geert+renesas@...der.be>,
        Mark Rutland <mark.rutland@....com>,
        Ard Biesheuvel <ardb@...nel.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Lukas Bulwahn <lukas.bulwahn@...il.com>,
        Masahiro Yamada <masahiroy@...nel.org>,
        DTML <devicetree@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH] Adding architectural support for HPE's GXP BMC. This is
 the first of a series of patches to support HPE's BMC with Linux Kernel.

Hi Krzysztof

We made some progress over the weekend and decided to break down the dts as you recommended (one dtsi for the SoC, and one dts per system board; we will start with the DL360 Gen10 server). We will send you some updates during the week, as I need to validate a few things with some of my colleagues regarding the partition table definition, which we have kept (for the moment) in the ASIC definition, since all our implementations currently use the same partition table.
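
A hypothetical sketch of that split (the file names, node contents, and compatible strings below are illustrative placeholders, not the final GXP bindings):

```dts
/* hpe-gxp-dl360gen10.dts -- board-level file (illustrative sketch) */
/dts-v1/;
#include "gxp.dtsi"	/* SoC-level nodes shared by all GXP boards */

/ {
	model = "HPE ProLiant DL360 Gen10";
	/* board compatible first, SoC compatible as fallback */
	compatible = "hpe,gxp-dl360gen10", "hpe,gxp";
};
```

Board-only details (such as the machine-dependent GPIO mapping) would then live in the board dts, while everything common to the ASIC stays in gxp.dtsi.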

We also removed many of the warnings generated by the dtc compiler.

We will probably send the driver code at the same time as the dts update (or the next day). There will be a few drivers, including:

- gpio
- hwmon
- udc / usb gadget
- umac
- i2c
- watchdog
- fbdev
- kcs
- vuart
- spifi
- clock

To simplify your understanding:

- GXP is the name of the SoC. It has multiple implementations, which are currently compatible. I don't think we need to distinguish them for the moment. We might have a GXP v2 coming, but not for quite some time.
- This SoC is used to implement the BMC features of HPE servers (all ProLiant, many Apollo, and Superdome machines).

It supports many features, including:
- ARMv7 architecture, based on a Cortex-A9 core
- An AXI bus to which are attached:
	- a memory controller, multiple SPI interfaces to connect the boot flash and ROM flash, and a 10/100/1000 MAC engine which supports SGMII (2 ports) and RMII
	- multiple I2C engines to drive connectivity with a host infrastructure
	- a video engine which supports VGA and DP, as well as a hardware video encoder
	- multiple PCIe ports
	- a PECI interface, and LPC eSPI
	- multiple UARTs for debug purposes, and a Virtual UART for host connectivity
	- a GPIO engine
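
In dts terms, a skeleton of such an SoC dtsi might look like the following (purely a hypothetical sketch; every label and address below is a placeholder, not the real GXP binding):

```dts
/* gxp.dtsi -- hypothetical skeleton, placeholder values throughout */
/ {
	compatible = "hpe,gxp";
	#address-cells = <1>;
	#size-cells = <1>;

	cpus {
		#address-cells = <1>;
		#size-cells = <0>;
		cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a9";
			reg = <0>;
		};
	};

	axi {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		/* placeholder address; one node per peripheral listed above */
		uart0: serial@c0000000 {
			reg = <0xc0000000 0x100>;
		};
	};
};
```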

Hope this helps,

vejmarie

On 1/26/22, 12:41 AM, "Krzysztof Kozlowski" <krzysztof.kozlowski@...onical.com> wrote:

    On 26/01/2022 02:49, Verdun, Jean-Marie wrote:
    > Hello Arnd,
    > 
    > I work with Nick on upstreaming the initial code for our GXP ASIC. Many thanks for your feedback.
    > 
    > We will update accordingly. I must admit that I am a little bit lost regarding the process we should follow to introduce a new SoC. We took the path of sending the DT side first, and then the drivers as a set of patches, one per driver. Andrew seems to be guiding us toward a very small DT initially, expanded step by step as drivers get approved, which might make the process very sequential. What is the best recommendation to follow? Either way is OK on our side; I am just looking for the easiest solution for the code maintainers.

    The current DTS patch won't pass checkpatch because you have around 30
    undocumented compatibles. The process does not have to be sequential -
    quite the contrary - rather parallel, with several submissions happening
    at the same time. The point is that we need to see the bindings and check
    whether your DTS complies with them. Actually the check should be done
    by you with dtbs_check, but let's say we also look at it.
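
    For reference, dtbs_check is run from the kernel tree roughly like this (the cross-compiler prefix is an assumption about the build setup, and the hpe,gxp-wdt.yaml path is a hypothetical example of restricting the check to one schema):

    ```sh
    # validate all enabled DTS files against the YAML bindings
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- dtbs_check

    # or check against a single binding schema only
    make ARCH=arm dtbs_check \
         DT_SCHEMA_FILES=Documentation/devicetree/bindings/watchdog/hpe,gxp-wdt.yaml
    ```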

    Your patch with a full-blown DTS and drivers is also a good approach,
    except that no drivers were sent. For example:
    https://lore.kernel.org/?q=hpe%2Cgxp-i2c
    https://lore.kernel.org/?q=hpe%2Cgxp-wdt
    If you want to avoid building DTS sequentially, no problem, just send
    the bindings and DTS.

    Andrew's approach is much more flexible because it allows you to discuss
    the bindings while not postponing the core part of the DTS.


    > 
    > Most of this code is intended to be used with OpenBMC and u-boot. We haven't upstreamed anything into the bootloader yet, and wanted to follow a step-by-step approach by initially publishing into the kernel (that explains why some init values are still hardcoded in case the bootloader doesn't provide the data; that is still work in progress, but end users can already test the infrastructure). We have a very small user-space environment, based on u-root, to validate that the kernel boots properly before getting OpenBMC fully loaded. Last but not least, as this is BMC code, which is new to our end users, it would be great to have a default fallback if the u-boot environment is not properly set up (roughly, we could code the MAC address into the umac driver, or into the DT, to address such cases). We plan to update u-boot in the next couple of days, by the way.
    > 
    > We do not use a dtsi at all for the moment, as we generate a dtb out of the dts file and load it into our SPI image. Probably not the best approach, but this is the way it is currently implemented. The dtb is compiled outside the kernel tree for the moment, using the dtc compiler. We will add that step into the dts boot Makefile; it makes sense. Is a dtsi mandatory for every SoC? I can build one if needed. But as this SoC is a BMC, the current dts is an example of what should be configured. Much of the other data related to the hardware target platform is defined in OpenBMC layers when we build for the various ProLiant servers. We wanted our kernel code to be readily testable; that is why we have that generic dts. (The GPIO mapping is machine dependent.)

    The commit is missing a description, so I actually don't know what the
    architecture looks like. For most SoCs, there is a DTSI because the
    SoC is put on different boards/products. It allows a clear
    separation between the SoC (which can be reused) and the board. If you have
    only a DTS, then:
    1. Where is the SoC here? How can it be re-used by a different board?
    2. Is there only one DTS for the entire sub-architecture? No more boards? Only
    one product? Not even revisions or improved versions?

    Best regards,
    Krzysztof
