Message-ID: <553E9B4A.1050606@codeaurora.org>
Date: Mon, 27 Apr 2015 14:25:46 -0600
From: Jeffrey Hugo <jhugo@...eaurora.org>
To: Bjorn Andersson <bjorn.andersson@...ymobile.com>
CC: Kumar Gala <galak@...eaurora.org>,
Andy Gross <agross@...eaurora.org>,
David Brown <davidb@...eaurora.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-msm@...r.kernel.org" <linux-arm-msm@...r.kernel.org>,
"linux-soc@...r.kernel.org" <linux-soc@...r.kernel.org>
Subject: Re: [PATCH v2 2/2] soc: qcom: Add Shared Memory Manager driver
[..]
>>> +struct smem_header {
>>> + struct smem_proc_comm proc_comm[4];
>>> + u32 version[32];
>>> + u32 initialized;
>>> + u32 free_offset;
>>> + u32 available;
>>> + u32 reserved;
>>> + struct smem_global_entry toc[];
>>
>> Was it intentional to not have "toc[512]"?
>>
>
> Not really, I can add it to make it clear that it's a fixed amount.
My personal preference would be to have it toc[512], since whenever I
see an array with an empty size like this, my first thought is that it
is a dynamic array, which is not the case here.
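
For illustration, a minimal sketch of the explicit sizing being
discussed; the SMEM_ITEM_COUNT name is an assumption for this example,
not taken from the patch:

	/* Hypothetical: size the table explicitly so it cannot be
	 * mistaken for a C99 flexible array member. */
	#define SMEM_ITEM_COUNT 512

	struct smem_header {
		struct smem_proc_comm proc_comm[4];
		u32 version[32];
		u32 initialized;
		u32 free_offset;
		u32 available;
		u32 reserved;
		struct smem_global_entry toc[SMEM_ITEM_COUNT];
	};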
[..]
>>> +/* Timeout (ms) for the trylock of remote spinlocks */
>>> +#define HWSPINLOCK_TIMEOUT 1000
>>
>> I'm curious what made you pick 1 second as a timeout value?
>>
>
> Sorry, I don't have even a tiny bit of science behind this number. I
> figured it's long enough to not have any false negatives and it's short
> enough to not be intrusive if some remote processor actually dies with
> the lock held.
Darn. I've been pondering what value would be appropriate since
reviewing the Hardware Spinlock framework, and had hoped you had already
figured it out when I saw this. My gut feeling agrees with your assessment.
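
For reference, a quick sketch of how a timeout like this is typically
consumed through the hwspinlock API; the smem->hwlock field and the
surrounding error handling are assumptions for the example, not taken
from the patch:

	unsigned long flags;
	int ret;

	/* Spin for up to HWSPINLOCK_TIMEOUT ms; a failure here most
	 * likely means a remote processor died holding the lock. */
	ret = hwspin_lock_timeout_irqsave(smem->hwlock,
					  HWSPINLOCK_TIMEOUT, &flags);
	if (ret)
		return ret;

	/* ... access shared memory ... */

	hwspin_unlock_irqrestore(smem->hwlock, &flags);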
[..]
>>> + *
>>> + * To be used by smem clients as a quick way to determine if any new
>>> + * allocations have been made.
>>> + */
>>> +int qcom_smem_get_free_space(unsigned host)
>>> +{
>>> + struct smem_partition_header *phdr;
>>> + struct smem_header *header;
>>> + unsigned ret;
>>> +
>>> + if (!__smem)
>>> + return -EPROBE_DEFER;
>>> +
>>> + if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
>>> + phdr = __smem->partitions[host];
>>> + ret = phdr->offset_free_uncached;
>>
>> Hmm. This will work for the use case that wants it, but it's not
>> really correct based on how this function is described. Could we fix
>> it up so that it actually returns the free space remaining?
>>
>
> Right, this is wrong.
>
> A potential issue with this API is that if a remote processor has a
> partition but even so allocates SMD channels from the global space,
> then checking for free space related to said host would not detect
> any updates.
>
> What is your allocation strategy related to this, would this cause an
> issue for us?
SMEM would allow that, but SMD wouldn't expect it. SMD was one of the
major reasons why the partitions came about - SMD is a point-to-point
communication mechanism, so it doesn't make sense to allow a third
processor C access to an SMD channel between processors A and B.
From the SMD perspective, if the partition exists, it should be used.
I would consider the scenario you propose to be an error and unsupported.
The one possible exception to this is what the remote processors used
to do for backwards compatibility for a time. A remote processor would
allocate the SMD channel from the partition and also allocate it from
the global space, but the global space entry would actually point to
the allocation in the partition. I only mention this scenario for
completeness; since Linux is able to support the partitions, this
scenario is no longer valid.
> If so, a better implementation would be to drop the argument from
> this function and just sum the free space from all the partitions,
> at the cost of a few extra runs through the channel scanner. What do
> you think?
I think it's unnecessary, and considering that such a calculation would
run for every interrupt, I'd like to avoid the extra cost.
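
For completeness, a sketch of what the agreed-upon per-host fix might
look like, returning the remaining free space instead of the raw
offset. The offset_free_cached field and the regions[0].virt_base
fallback for the global heap are assumptions about parts of the driver
not visible in this excerpt:

	int qcom_smem_get_free_space(unsigned host)
	{
		struct smem_partition_header *phdr;
		struct smem_header *header;
		unsigned ret;

		if (!__smem)
			return -EPROBE_DEFER;

		if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
			phdr = __smem->partitions[host];
			/* Uncached allocations grow up from the bottom
			 * and cached allocations grow down from the top,
			 * so the gap between the two offsets is the
			 * free space in the partition. */
			ret = phdr->offset_free_cached -
			      phdr->offset_free_uncached;
		} else {
			header = __smem->regions[0].virt_base;
			ret = header->available;
		}

		return ret;
	}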
--
Jeffrey Hugo
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project