Date:   Mon, 1 Oct 2018 13:56:32 -0700
From:   Saravana Kannan <skannan@...eaurora.org>
To:     Rob Herring <robh@...nel.org>,
        Georgi Djakov <georgi.djakov@...aro.org>,
        linux-pm@...r.kernel.org, gregkh@...uxfoundation.org,
        rjw@...ysocki.net, mturquette@...libre.com, khilman@...libre.com,
        vincent.guittot@...aro.org, bjorn.andersson@...aro.org,
        amit.kucheria@...aro.org, seansw@....qualcomm.com,
        daidavid1@...eaurora.org, evgreen@...omium.org,
        mark.rutland@....com, lorenzo.pieralisi@....com,
        abailon@...libre.com, maxime.ripard@...tlin.com, arnd@...db.de,
        devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org,
        linux-arm-msm@...r.kernel.org, robdclark@...il.com
Subject: Re: [PATCH v9 2/8] dt-bindings: Introduce interconnect binding



On 09/26/2018 07:34 AM, Jordan Crouse wrote:
> On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>> This binding is intended to represent the relations between the interconnect
>>> controllers (providers) and consumer device nodes. It will allow creating links
>>> between consumers and interconnect paths (exposed by interconnect providers).
>> As I mentioned in person, I want to see other SoC families using this
>> before accepting. They don't have to be ready for upstream, but WIP
>> patches or even just a "yes, this works for us and we're going to use
>> this binding on X".
>>
>> Also, I think the QCom GPU use of this should be fully sorted out. Or,
>> more generically, how this fits into the OPP binding, which seems to be
>> endlessly extended...
> This is a discussion I wouldn't mind having now.  To jog memories, this is what
> I posted a few weeks ago:
>
> https://patchwork.freedesktop.org/patch/246117/
>
> This seems like the easiest way to me to tie the frequency and the bandwidth
> quota together for GPU devfreq scaling but I'm not married to the format and
> I'll happily go a few rounds on the bikeshed if we can get something we can
> be happy with.
>
> Jordan

I've been meaning to send this out for a while, but got caught up with 
other stuff.

That GPU BW patch is very specific to device-to-device mapping and 
doesn't work well for other use cases (e.g., those that can calculate 
the bandwidth they need based on the use case, etc.).

Interconnect paths have different BW (bandwidth) operating points that 
they can support. For example: 1 GB/s, 1.7 GB/s, 5 GB/s, etc. Having a 
mapping from GPU or CPU frequency to those points is fine/necessary, but 
we still need a separate BW OPP table for interconnect paths to list 
what they can actually support.
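
Something like this is roughly what I have in mind for a BW OPP table 
(a sketch only: the "opp-bw-MBps" property name is made up here just to 
illustrate the idea, and the values match the examples above, in MB/s):

    /* Hypothetical BW OPP table for, say, a GPU-to-DDR path */
    gpu_ddr_bw_opp_table: bw-opp-table {
            compatible = "operating-points-v2";

            bw_opp_1000: opp-1000 { opp-bw-MBps = <1000>; };
            bw_opp_1700: opp-1700 { opp-bw-MBps = <1700>; };
            bw_opp_5000: opp-5000 { opp-bw-MBps = <5000>; };
    };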

Two different ways we could represent BW OPP tables for interconnect paths:
1. Represent interconnect paths (CPU to DDR, GPU to DDR, etc.) as 
devices and have OPPs for those devices.

2. Have an "interconnect-opp-tables" DT binding, similar to 
"interconnects" and "interconnect-names". So if a device (a GPU, video 
decoder, I2C controller, etc.) needs to vote on an interconnect path, it 
can also list the OPP tables that those paths can support (see the 
sketch right after this list).
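
For example, something along these lines (strawman only: the 
"interconnect-opp-tables" property and the master/slave IDs are 
placeholders, not an existing binding):

    gpu@5000000 {
            ...
            interconnects = <&mmnoc MASTER_GPU &bimc SLAVE_DDR>;
            interconnect-names = "gpu-ddr";
            /* Strawman: one BW OPP table per path listed in
             * "interconnects", in the same order */
            interconnect-opp-tables = <&gpu_ddr_bw_opp_table>;
    };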

I know Rob doesn't like (1). But I'm hoping at least (2) is acceptable. 
I'm open to other suggestions too.

Both (1) and (2) need BW OPP tables similar to frequency OPP tables. 
That should be easy to add and Viresh is open to that. I'm open to other 
options too, but the fundamental missing part is how to tie a list of BW 
OPPs to interconnect paths in DT.

Once we have one of the above two options, we can use the required-opps 
field (already present in the kernel) to map from a device OPP (e.g., a 
GPU frequency) to a particular BW need (suggested by Viresh during an 
in-person conversation).
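
E.g., something like this (again just a sketch; the GPU frequency is 
arbitrary, and &bw_opp_5000 refers to the 5 GB/s entry in the BW OPP 
table sketched earlier):

    gpu_opp_table: opp-table {
            compatible = "operating-points-v2";

            opp-710000000 {
                    opp-hz = /bits/ 64 <710000000>;
                    /* Tie this GPU frequency to a specific BW OPP
                     * on the GPU-to-DDR path */
                    required-opps = <&bw_opp_5000>;
            };
    };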

Thanks,
Saravana

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
