Message-ID: <744ad6e1-c4ad-1e6d-f94d-98aa5b105dc6@gmx.de>
Date: Wed, 27 Jul 2022 14:16:56 +0200
From: Lino Sanfilippo <LinoSanfilippo@....de>
To: Jarkko Sakkinen <jarkko@...nel.org>
Cc: peterhuewe@....de, jgg@...pe.ca, stefanb@...ux.vnet.ibm.com,
linux@...ewoehner.de, linux-integrity@...r.kernel.org,
linux-kernel@...r.kernel.org, l.sanfilippo@...bus.com,
lukas@...ner.de, p.rosenberger@...bus.com
Subject: Re: [PATCH v7 07/10] tmp, tmp_tis: Implement usage counter for
locality
On 11.07.22 04:50, Jarkko Sakkinen wrote:
> On Mon, Jul 04, 2022 at 07:45:12PM +0200, Lino Sanfilippo wrote:
>>
>>
>> On 01.07.22 01:29, Jarkko Sakkinen wrote:
>>
>>>
>>> I'm kind of thinking that should tpm_tis_data have a lock for its
>>> contents?
>>
>> Most of the tpm_tis_data structure elements are set once during init and
>> then never changed but only read. So no need for locking for these. The
>> exceptions I see are
>>
>> - flags
>> - locality_count
>> - locality
>
> I'd still go for a single data struct lock, since this lock would
> be taken in every transmit flow. It makes the whole thing easier
> to maintain over time, and does not really affect scalability.
>
This means switching to a completely new locking scheme which affects many
parts of the TIS core code. It is also not directly related to what this patch series
is about, namely activating the interrupts for TPM TIS.
I suggest first finishing the polishing of this series, especially since there have
only been minor issues in the last versions. Once the interrupts work, we can
still consider implementing another locking scheme in a follow-up series.
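For illustration, the usage-counter pattern under a single per-struct lock as
discussed above could look roughly like the userspace sketch below. A pthread
mutex stands in for the kernel lock, and the helper names and the locality
request/relinquish stand-ins are hypothetical, not the actual driver code:

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of tpm_tis_data with one lock guarding its mutable fields
 * (flags, locality_count, locality); the other fields set once at
 * init time would need no protection. */
struct tpm_tis_data {
	pthread_mutex_t lock;        /* guards the fields below */
	unsigned int flags;
	unsigned int locality_count; /* usage counter for the locality */
	int locality;                /* -1 means no locality held */
};

/* Take a reference on the locality; claim it on the 0 -> 1 transition. */
static void tpm_tis_locality_get(struct tpm_tis_data *d, int l)
{
	pthread_mutex_lock(&d->lock);
	if (d->locality_count++ == 0)
		d->locality = l; /* stand-in for requesting the locality */
	pthread_mutex_unlock(&d->lock);
}

/* Drop a reference; release the locality when the last user is gone. */
static void tpm_tis_locality_put(struct tpm_tis_data *d)
{
	pthread_mutex_lock(&d->lock);
	assert(d->locality_count > 0);
	if (--d->locality_count == 0)
		d->locality = -1; /* stand-in for relinquishing the locality */
	pthread_mutex_unlock(&d->lock);
}
```

The point of the counter is that nested users (e.g. the transmit path and an
interrupt-enable sequence) can each take and drop a reference, and the
locality is only actually relinquished when the count drops back to zero.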
> This brings me to another question: what does this lock protect
> against given that tpm_try_get_ops() already takes tpm_mutex?
> It's not clear and that should be somehow reasoned in the commit
> message.
>
> Anyway, *if* a lock is needed the granularity should be the whole
> struct.
>
> BR, Jarkko
Regards,
Lino