Message-ID: <57b193a6-8340-c883-04bc-6cfbc3c638cd@kunbus.com>
Date: Fri, 4 Nov 2022 17:18:21 +0100
From: Lino Sanfilippo <l.sanfilippo@...bus.com>
To: Jarkko Sakkinen <jarkko@...nel.org>,
Lino Sanfilippo <LinoSanfilippo@....de>
Cc: peterhuewe@....de, jgg@...pe.ca, stefanb@...ux.vnet.ibm.com,
linux@...ewoehner.de, linux-integrity@...r.kernel.org,
linux-kernel@...r.kernel.org, jandryuk@...il.com,
pmenzel@...gen.mpg.de, lukas@...ner.de, p.rosenberger@...bus.com
Subject: Re: [PATCH v8 08/11] tpm, tpm_tis: Implement usage counter for locality

On 01.11.22 02:06, Jarkko Sakkinen wrote:
> On Tue, Oct 25, 2022 at 02:15:51AM +0200, Lino Sanfilippo wrote:
>> Actually that's on me, since it took me much too long to send v8 after the v7 review.
>>
>> However, the reason we need a mutex here is that we not only increment and decrement
>> the locality counter under the lock, but also request and release the locality by
>> writing to the ACCESS register. In the SPI case each transfer over the SPI bus is
>> protected by the SPI device's bus_lock_mutex, which may sleep, so we must not hold a
>> spinlock while doing these register accesses.
>>
>> Concerning covering the whole tpm_tis_data struct:
>> Most structure members are set once at driver startup and never changed at
>> runtime, so they need no locking. The only exceptions are "flags" and
>> "locality_count": "flags" is accessed via atomic bit-manipulation functions and
>> thus needs no extra locking, so AFAICS "locality_count" is the only member that
>> must be protected by the mutex.
>
> OK, but you should still address this in the commit message, e.g.
> by mentioning that in the SPI case a mutex is required because
> the bus itself is locked with a mutex.
>
> I.e., this is a claim, definitely not an argument: "Ensure thread-safety by
> protecting the counter with a mutex."
>
Ok, I will rephrase the commit message accordingly.
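
To make the point above concrete, here is a minimal sketch of the scheme under
discussion, not the actual patch code: the counter and the ACCESS register
writes are serialized by one mutex, because the register access goes through an
SPI transfer that takes the SPI core's bus_lock_mutex and may sleep. The helper
names __tpm_tis_request_locality(), __tpm_tis_release_locality() and the field
name locality_count_mutex are illustrative assumptions.

/*
 * Sketch only: a usage counter for the locality, serialized by a mutex.
 * A spinlock would not work here because the ACCESS register accesses
 * end up in SPI transfers, which take the SPI core's bus_lock_mutex
 * and may sleep.
 */
struct tpm_tis_data {
	unsigned long flags;		/* atomic bitops, no extra lock needed */
	unsigned int locality_count;	/* users of the locality */
	struct mutex locality_count_mutex;
	/* remaining members are set once at startup, no locking needed */
};

static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
{
	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
	int ret = 0;

	mutex_lock(&priv->locality_count_mutex);
	if (priv->locality_count == 0)
		/* hypothetical helper that writes the ACCESS register */
		ret = __tpm_tis_request_locality(chip, l);
	if (!ret)
		priv->locality_count++;
	mutex_unlock(&priv->locality_count_mutex);

	return ret;
}

static void tpm_tis_release_locality(struct tpm_chip *chip, int l)
{
	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);

	mutex_lock(&priv->locality_count_mutex);
	if (priv->locality_count > 0 && --priv->locality_count == 0)
		/* hypothetical helper that writes the ACCESS register */
		__tpm_tis_release_locality(chip, l);
	mutex_unlock(&priv->locality_count_mutex);
}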
Thanks for the review!
Regards,
Lino