Date:   Tue, 14 Jan 2020 15:36:19 -0800
From:   David Dai <daidavid1@...eaurora.org>
To:     Evan Green <evgreen@...gle.com>
Cc:     Georgi Djakov <georgi.djakov@...aro.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Rob Herring <robh+dt@...nel.org>, sboyd@...nel.org,
        Lina Iyer <ilina@...eaurora.org>,
        Sean Sweeney <seansw@....qualcomm.com>,
        Alex Elder <elder@...aro.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS" 
        <devicetree@...r.kernel.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        linux-pm@...r.kernel.org
Subject: Re: [PATCH v1 0/4] Split SDM845 interconnect nodes and consolidate
 RPMh support

Hi Evan,

On 1/7/2020 3:45 PM, Evan Green wrote:
> On Sun, Dec 15, 2019 at 9:59 PM David Dai <daidavid1@...eaurora.org> wrote:
>> While there are no current consumers of the SDM845 interconnect device in
>> devicetree, take this opportunity to redefine the interconnect device nodes,
>> as the previous definition of a single child node under the apps_rsc device
>> did not accurately describe the hardware.
>> The Network-On-Chip (NoC) interconnect devices should be represented in a
>> manner akin to the QCS404 platform[1], where there is a separation between
>> the NoC devices and their RPM/RPMh counterparts.
>>
>> The bcm-voter devices represent the RPMh devices that the interconnect
>> providers need to communicate with, and there can be more than one instance
>> of the Bus Clock Manager (BCM), each of which can live under a different
>> Resource State Coordinator (RSC) instance. There are display use cases where
>> consumers may need to target a different bcm-voter (some display-specific
>> RSC) than the default, and there needs to be a way to represent this
>> connection in devicetree.
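
To make the split described above concrete, here is a rough sketch (node
and property names are illustrative only, and reg/addresses are omitted):
the NoC providers become standalone nodes that simply point at a bcm-voter
child of the RSC they vote through.

	apps_rsc: rsc {
		compatible = "qcom,rpmh-rsc";

		/* RPMh proxy that the interconnect providers vote through */
		apps_bcm_voter: bcm-voter {
			compatible = "qcom,bcm-voter";
		};
	};

	/* NoC provider as its own node rather than a child of apps_rsc */
	mem_noc: interconnect {
		compatible = "qcom,sdm845-mem-noc";
		#interconnect-cells = <1>;
		qcom,bcm-voters = <&apps_bcm_voter>;
	};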
> So for my own understanding, the problem here is that things want to
> vote for interconnect bandwidth within a specific RSC context? Where
> normally the RSC context is simply "Apps@EL1", we might also have
> "Apps@EL3" for trustzone, or in the case we're coding for,
> "display-specific RSC context". I guess this context might stay on
> even if Apps@EL1 votes are entirely discounted or off?
That's correct, the state of those votes is tied to the state of that 
execution environment. So even if the Apps CPU goes into a low power 
mode, other context-specific votes will still stick.
>   So then would
> there be an additional interconnect provider for "display context RSC"
> next to apps_bcm_voter? Would that expose all the same nodes as
> apps_bcm_voter, or a different set of nodes?

We trim down the topology to what each execution environment needs, so 
each EE really only "sees" a subset of the entire SoC's topology. In 
this specific case, the display context RSC would only expose a small 
subset of the topology that Apps@EL1 would see.
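
Roughly, in devicetree terms, it would look something like this (again
just a sketch with illustrative names, not a final binding):

	disp_rsc: rsc {
		compatible = "qcom,rpmh-rsc";

		/* Voter for the display-owned RSC context; its votes are
		   independent of the Apps@EL1 power state */
		disp_bcm_voter: bcm-voter {
			compatible = "qcom,bcm-voter";
		};
	};

	/* The provider can point at the display voter in addition to the
	   default apps one; only the display-relevant subset of its nodes
	   would vote through the display RSC */
	mmss_noc: interconnect {
		compatible = "qcom,sdm845-mmss-noc";
		#interconnect-cells = <1>;
		qcom,bcm-voters = <&apps_bcm_voter>, <&disp_bcm_voter>;
	};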

>
> Assuming it's exposing some of the same nodes as apps_bcm_voter, the
> other way to do this would be increasing #interconnect-cells, and
> putting the RSC context there. Did you choose not to go that way
> because nearly all the clients would end up specifying the same thing
> of "Apps@EL1"?
That's correct, the majority of the consumers will stay with the default 
Apps@EL1 context.
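
On the consumer side that means most users keep the usual one-cell form
and implicitly get the Apps@EL1 voter, along the lines of (endpoint names
purely illustrative):

	gpu {
		...
		interconnects = <&mem_noc MASTER_GFX3D &mem_noc SLAVE_EBI1>;
		interconnect-names = "gfx-mem";
	};

With the extra-cell approach, every one of those consumers would instead
have to carry an explicit context specifier, e.g. something like
<&mem_noc MASTER_GFX3D RSC_APPS ...> (RSC_APPS being a made-up name here),
just to say "use the default".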

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
