Message-ID: <20180510151515.GH4698@codeaurora.org>
Date:   Thu, 10 May 2018 09:15:15 -0600
From:   Lina Iyer <ilina@...eaurora.org>
To:     Matthias Kaehlcke <mka@...omium.org>
Cc:     andy.gross@...aro.org, david.brown@...aro.org,
        linux-arm-msm@...r.kernel.org, linux-soc@...r.kernel.org,
        rnayak@...eaurora.org, bjorn.andersson@...aro.org,
        linux-kernel@...r.kernel.org, sboyd@...nel.org,
        evgreen@...omium.org, dianders@...omium.org, rplsssn@...eaurora.org
Subject: Re: [PATCH v8 05/10] drivers: qcom: rpmh-rsc: write sleep/wake
 requests to TCS

On Wed, May 09 2018 at 17:25 -0600, Matthias Kaehlcke wrote:
>Hi Lina,
>
>On Wed, May 09, 2018 at 11:01:54AM -0600, Lina Iyer wrote:
>> Sleep and wake requests are sent when the application processor
>> subsystem of the SoC enters deep sleep states, such as suspend.
>> These requests help lower system power requirements when the
>> resources are not in use.
>>
>> Sleep and wake requests are written to the TCS slots but are not
>> triggered at the time of writing. The TCSes are triggered by the
>> firmware after the last of the CPUs has executed its WFI. Since these
>> requests may arrive in different batches, it is the job of this
>> controller driver to find and arrange them into the available
>> TCSes.
>>
>> Signed-off-by: Lina Iyer <ilina@...eaurora.org>
>> Reviewed-by: Evan Green <evgreen@...omium.org>
>>
>> ---
>>
>> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
>> index c0edf3850147..b5894b001ae1 100644
>> --- a/drivers/soc/qcom/rpmh-rsc.c
>> +++ b/drivers/soc/qcom/rpmh-rsc.c
>>
>> <snip>
>>
>> +static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
>> +		      int len)
>> +{
>> +	int i, j;
>> +
>> +	/* Check for already cached commands */
>> +	for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {
>> +		if (tcs->cmd_cache[i] != cmd[0].addr)
>> +			continue;
>> +		if (i + len >= MAX_TCS_SLOTS)
>> +			goto seq_err;
>
>The command cache can have fewer than MAX_TCS_SLOTS slots:
>

That's true. I forgot that I had optimized the cache slots. Thanks for
pointing that out.

>static int rpmh_probe_tcs_config(struct platform_device *pdev,
>				 struct rsc_drv *drv)
>{
>	...
>	tcs->cmd_cache = devm_kcalloc(&pdev->dev,
>				      tcs->num_tcs * ncpt, sizeof(u32),
>				      GFP_KERNEL);
>	...
>}
>
>So the condition needs to be:
>
>if (i + len >= tcs->num_tcs * tcs->ncpt)
>
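Folded in, find_match() would then look something like this (untested
sketch; the function tail below the quoted hunk is reconstructed from
context, and 'max' is just a local added here for readability):

static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
		      int len)
{
	int max = tcs->num_tcs * tcs->ncpt;
	int i, j;

	/* Check for already cached commands */
	for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {
		if (tcs->cmd_cache[i] != cmd[0].addr)
			continue;
		/* The whole request must fit within the allocated cache */
		if (i + len >= max)
			goto seq_err;
		/* Every command must continue the cached sequence */
		for (j = 0; j < len; j++) {
			if (tcs->cmd_cache[i + j] != cmd[j].addr)
				goto seq_err;
		}
		return i;
	}

	return -ENODATA;

seq_err:
	WARN(1, "Message does not match previous sequence.\n");
	return -EINVAL;
}
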
>> +static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
>> +		      int *tcs_id, int *cmd_id)
>> +{
>> +	int slot, offset;
>> +	int i = 0;
>> +
>> +	/* Find if we already have the msg in our TCS */
>> +	slot = find_match(tcs, msg->cmds, msg->num_cmds);
>> +	if (slot >= 0)
>> +		goto copy_data;
>> +
>> +	/* Do over, until we can fit the full payload in a TCS */
>> +	do {
>> +		slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
>> +						  i, msg->num_cmds, 0);
>> +		if (slot == MAX_TCS_SLOTS)
>> +			return -ENOMEM;
>
>Like above, use 'tcs->num_tcs * tcs->ncpt' as the maximum instead of
>MAX_TCS_SLOTS.
>
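And correspondingly in find_slots() (again an untested sketch; 'max' is
a local added here, and everything past the quoted hunk is elided):

static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
		      int *tcs_id, int *cmd_id)
{
	int max = tcs->num_tcs * tcs->ncpt;
	int slot, offset;
	int i = 0;

	/* Find if we already have the msg in our TCS */
	slot = find_match(tcs, msg->cmds, msg->num_cmds);
	if (slot >= 0)
		goto copy_data;

	/* Do over, until we can fit the full payload in a TCS */
	do {
		slot = bitmap_find_next_zero_area(tcs->slots, max,
						  i, msg->num_cmds, 0);
		/*
		 * bitmap_find_next_zero_area() can return a value past
		 * the size when no suitable area is found, so compare
		 * with >= rather than ==.
		 */
		if (slot >= max)
			return -ENOMEM;
		/* ... remainder as in the original patch ... */
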
>> +static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
>> +{
>> +	struct tcs_group *tcs;
>> +	int tcs_id = 0, cmd_id = 0;
>> +	unsigned long flags;
>> +	int ret;
>> +
>> +	tcs = get_tcs_for_msg(drv, msg);
>> +	if (IS_ERR(tcs))
>> +		return PTR_ERR(tcs);
>> +
>> +	spin_lock_irqsave(&tcs->lock, flags);
>> +	/* find the m-th TCS and the n-th position in the TCS to write to */
>
>The comment still refers to the old names 'm' and 'n'.
>
Really? :)
Will fix.
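For the archive, the updated comment could read something like:

	/* Find the TCS id and the command position in the TCS to write to */

so it matches the tcs_id/cmd_id parameter names.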

Thanks for your review,
Lina
