Message-ID: <bda377bf-10f8-49d9-8e58-ec957a40e4d7@gmail.com>
Date: Fri, 20 Jun 2025 18:19:43 +0100
From: "Orlov, Ivan" <ivan.orlov0322@...il.com>
To: Jarkko Sakkinen <jarkko@...nel.org>, "Orlov, Ivan" <iorlov@...zon.co.uk>
Cc: "peterhuewe@....de" <peterhuewe@....de>, "jgg@...pe.ca" <jgg@...pe.ca>,
"linux-integrity@...r.kernel.org" <linux-integrity@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Woodhouse, David" <dwmw@...zon.co.uk>
Subject: Re: [PATCH] tpm: Fix the timeout & use ktime
On 11/06/2025 18:02, Jarkko Sakkinen wrote:
>> Instead, perform the check in the following way:
>>
>> 1. Read the current timestamp
>> 2. Read the completion status. If completed, return the result
>> 3. Sleep
>> 4. Check if the timestamp read at step 1 exceeds the timeout. Return
>> an error if it does
>> 5. Goto 1
>>
>> Also, use ktime instead of jiffies as a more reliable and precise timing
>> source.
>
> "also", i.e. a logically separate change which should be split up to
> a separate patch.
>
Got it, will send this enhancement as a separate patch.
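For reference, the polling order from the quoted steps 1-5 can be sketched as a
self-contained mock (the clock, sleep and completion helpers below are
stand-ins for illustration only; the real code uses ktime_get(), tpm_msleep()
and tpm_chip_status()):

```c
#include <assert.h>
#include <stdbool.h>

static long now_ms;                 /* mock monotonic clock, advanced by sleep */
static long monotonic_ms(void) { return now_ms; }
static void sleep_ms(long ms) { now_ms += ms; }

static int polls_until_done;        /* mock device: completes after N polls */
static bool poll_done(void) { return --polls_until_done <= 0; }

/* Returns 0 on completion, -1 on timeout. */
static int wait_for_completion(long timeout_ms)
{
	long deadline = monotonic_ms() + timeout_ms;

	for (;;) {
		long t = monotonic_ms();	/* 1. read timestamp first  */

		if (poll_done())		/* 2. then check completion */
			return 0;

		sleep_ms(5);			/* 3. sleep                 */

		if (t >= deadline)		/* 4. timestamp from step 1 */
			return -1;		/*    past deadline: error  */
	}					/* 5. goto 1                */
}
```

The point of the ordering is that the timeout decision is made on a timestamp
read *before* the completion check, so an oversleep can never turn an already
completed request into a spurious timeout.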
>> + curr_time = ktime_get();
>> u8 status = tpm_chip_status(chip);
>> if ((status & chip->ops->req_complete_mask) ==
>> chip->ops->req_complete_val)
>> @@ -140,7 +149,7 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
>>
>> tpm_msleep(TPM_TIMEOUT_POLL);
>> rmb();
>> - } while (time_before(jiffies, stop));
>> + } while (ktime_before(curr_time, timeout));
>
>
> Wouldn't it be simpler fix to just check completion after dropping out
> of the loop?
>
Yes, this should also solve the problem without taking up additional stack
space with new variables. Will update in V2, thanks!
> And declare this before tpm_try_transmit():
>
> static bool tpm_transmit_completed(struct tpm_chip *chip)
> {
> u8 status = tpm_chip_status(chip);
>
> return (status & chip->ops->req_complete_mask) == chip->ops->req_complete_val;
> }
>
Cool, will do.
Thanks for the review and sorry for the late reply.
--
Kind regards,
Ivan Orlov