Message-ID: <26225878-8fed-b990-d02f-5253190de77a@linaro.org>
Date:   Thu, 2 Nov 2017 18:00:42 +0200
From:   Georgi Djakov <georgi.djakov@...aro.org>
To:     Amit Kucheria <amit.kucheria@...durent.com>
Cc:     Linux PM list <linux-pm@...r.kernel.org>,
        gregkh@...uxfoundation.org,
        "Rafael J. Wysocki" <rjw@...ysocki.net>, robh+dt@...nel.org,
        khilman@...libre.com, Michael Turquette <mturquette@...libre.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Saravana Kannan <skannan@...eaurora.org>,
        Stephen Boyd <sboyd@...eaurora.org>,
        Andy Gross <andy.gross@...aro.org>, seansw@....qualcomm.com,
        davidai@...cinc.com, LKML <linux-kernel@...r.kernel.org>,
        lakml <linux-arm-kernel@...ts.infradead.org>,
        linux-arm-msm@...r.kernel.org, mark.rutland@....com,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>
Subject: Re: [PATCH v3 1/3] interconnect: Add generic on-chip interconnect API

Hi Amit,

On 11/02/2017 09:28 AM, Amit Kucheria wrote:
[..]
>> +Interconnect node is the software definition of the interconnect hardware
>> +port. Each interconnect provider consists of multiple interconnect nodes,
>> +which are connected to other SoC components including other interconnect
>> +providers. The point on the diagram where the CPUs connects to the memory is
>> +called an interconnect node, which belongs to the Mem NoC interconnect provider.
>> +
>> +Interconnect endpoits are the first or the last element of the path. Every
> 
> s/endpoits/endpoints
> 

Ok!

>> +endpoint is a node, but not every node is an endpoint.
>> +
>> +Interconnect path is everything between two endpoints including all the nodes
>> +that have to be traversed to reach from a source to destination node. It may
>> +include multiple master-slave pairs across several interconnect providers.
>> +
>> +Interconnect consumers are the entities which make use of the data paths exposed
>> +by the providers. The consumers send requests to providers requesting various
>> +throughput, latency and priority. Usually the consumers are device drivers, that
>> +send request based on their needs. An example for a consumer is a a video
> 
> typo: is a video

Ok!
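
By the way, to make the terminology above a bit more concrete, here is a
toy illustration (plain userspace C, not the proposed API) of a path whose
first and last nodes are the endpoints and whose inner nodes are only
traversed:

#include <stdio.h>

/* Toy model only: a path is an ordered list of nodes, each node belongs
 * to a provider, and only the first and last nodes are endpoints. */
struct toy_node {
	const char *name;
	const char *provider;
};

int main(void)
{
	const struct toy_node path[] = {
		{ "cpu_master",  "cpu_noc" },	/* endpoint (source) */
		{ "cpu_noc_out", "cpu_noc" },	/* traversed, not an endpoint */
		{ "mem_noc_in",  "mem_noc" },	/* traversed, not an endpoint */
		{ "ddr_slave",   "mem_noc" },	/* endpoint (destination) */
	};
	size_t i;

	for (i = 0; i < sizeof(path) / sizeof(path[0]); i++)
		printf("%s (provider: %s)\n", path[i].name, path[i].provider);

	return 0;
}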

[..]
>> +/**
>> + * interconnect_set() - set constraints on a path between two endpoints
>> + * @path: reference to the path returned by interconnect_get()
>> + * @creq: request from the consumer, containing its requirements
>> + *
>> + * This function is used by an interconnect consumer to express its own needs
>> + * in term of bandwidth and QoS for a previously requested path between two
>> + * endpoints. The requests are aggregated and each node is updated accordingly.
>> + *
>> + * Returns 0 on success, or an approproate error code otherwise.
>> + */
>> +int interconnect_set(struct interconnect_path *path,
>> +                    struct interconnect_creq *creq)
>> +{
>> +       struct interconnect_node *next, *prev = NULL;
>> +       size_t i;
>> +       int ret = 0;
>> +
>> +       for (i = 0; i < path->num_nodes; i++, prev = next) {
>> +               next = path->reqs[i].node;
>> +
>> +               /*
>> +                * Both endpoints should be valid and master-slave pairs of
> 
> Losing the 'and' improves readability.
> 

Ok!

>> +                * the same interconnect provider that will be configured.
>> +                */
>> +               if (!next || !prev)
>> +                       continue;
>> +               if (next->icp != prev->icp)
>> +                       continue;
>> +
>> +               mutex_lock(&next->icp->lock);
>> +
>> +               /* update the consumer request for this path */
>> +               path->reqs[i].avg_bw = creq->avg_bw;
>> +               path->reqs[i].peak_bw = creq->peak_bw;
>> +
>> +               /* aggregate all requests */
>> +               interconnect_aggregate_icn(next);
>> +               interconnect_aggregate_icp(next->icp);
>> +
>> +               if (next->icp->ops->set) {
>> +                       /* commit the aggregated constraints */
>> +                       ret = next->icp->ops->set(prev, next, &next->icp->creq);
>> +               }
> 
> A comment here on how the constraints are propagated along the path
> would be good. At the moment, this seems to go to each provider along
> the path, take the provider lock and set the new constraints, then
> move on to the next provider. Is there any need to make the changes
> along the entire path "atomic"?

Yes, the above is correct. There is no such need, at least for the
hardware I am currently playing with. We can add support for that later
if needed.
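
To make the propagation a bit more concrete, a provider's set() callback
would look roughly like this (rough sketch; the foo_* names are just
placeholders and not part of this series):

static int foo_icp_set(struct interconnect_node *src,
		       struct interconnect_node *dst,
		       struct interconnect_creq *creq)
{
	/*
	 * Called with the provider lock held, once per master-slave pair
	 * along the path, with the provider-wide aggregated constraints.
	 * A real driver would translate avg_bw/peak_bw into its own bus
	 * clock or QoS votes here; foo_apply_vote() is a placeholder.
	 */
	return foo_apply_vote(src, dst, creq->avg_bw, creq->peak_bw);
}

static const struct icp_ops foo_icp_ops = {
	.set = foo_icp_set,
};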

> 
> I'm trying to understand what happens in the case where a new request
> comes along while the previous path is still being traversed.

It just gets aggregated and set, and this does not seem to be an issue,
as the paths are valid. For now I am trying to keep it simple; if anything
needs serialization, we can add locking later.

> 
>> +               mutex_unlock(&next->icp->lock);
>> +               if (ret)
>> +                       goto out;
>> +       }
>> +
>> +out:
>> +       return ret;
>> +}
>> +
>> +/**
>> + * interconnect_get() - return a handle for path between two endpoints
> 
> This is not used currently by the msm8916 platform driver? Is this
> expected to be called by leaf device drivers or by higher layer bus
> drivers? I suspect a mix of the two, but an example would be nice.

This is called by a consumer driver to express, for example, its bandwidth
needs between various endpoints. I will add some examples.
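
For instance, a leaf driver (say a video decoder) could do something along
these lines. The interconnect_get() arguments and the bandwidth units are
still to be settled, so please treat the values and endpoint names below
as placeholders:

static int vdec_request_bus_bandwidth(void)
{
	struct interconnect_path *path;
	struct interconnect_creq creq = {
		.avg_bw  = 1000,	/* average bandwidth the decoder needs */
		.peak_bw = 2000,	/* peak bandwidth it may burst to */
	};

	/* get a handle for the path between the two endpoints */
	path = interconnect_get("vdec", "ebi");
	if (IS_ERR(path))	/* assuming ERR_PTR() style error returns */
		return PTR_ERR(path);

	/* express this consumer's needs; they get aggregated with others */
	return interconnect_set(path, &creq);
}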

[..]
>> +/**
>> + * struct icp - interconnect provider (controller) entity that might
>> + * provide multiple interconnect controls
>> + *
>> + * @icp_list: list of the registered interconnect providers
>> + * @nodes: internal list of the interconnect provider nodes
>> + * @ops: pointer to device specific struct icp_ops
>> + * @dev: the device this interconnect provider belongs to
>> + * @lock: a lock to protect creq and users
>> + * @creq: the actual state of constraints for this interconnect provider
>> + * @users: count of active users
>> + * @data: pointer to private data
>> + */
>> +struct icp {
>> +       struct list_head        icp_list;
>> +       struct list_head        nodes;
>> +       const struct icp_ops    *ops;
>> +       struct device           *dev;
>> +       struct mutex            lock;
>> +       struct interconnect_creq creq;
>> +       int                     users;
>> +       void                    *data;
>> +};
> 
> Use interconnect_provider here instead of icp for the sake of
> consistency. Same for icp_ops. Or replace everything with any of the
> other suggested alternatives. My suggestion to the name pool is 'xcon'
> where x == inter.

Yes, I am working on better naming, thanks!

BR,
Georgi
