Date:   Fri, 15 Jul 2022 16:41:54 +0300
From:   Jarkko Sakkinen <jarkko@...nel.org>
To:     Lino Sanfilippo <LinoSanfilippo@....de>
Cc:     peterhuewe@....de, jgg@...pe.ca, stefanb@...ux.vnet.ibm.com,
        linux@...ewoehner.de, linux-integrity@...r.kernel.org,
        linux-kernel@...r.kernel.org, l.sanfilippo@...bus.com,
        lukas@...ner.de, p.rosenberger@...bus.com
Subject: Re: [PATCH v7 07/10] tpm, tpm_tis: Implement usage counter for
 locality

On Mon, Jul 11, 2022 at 11:03:05PM +0200, Lino Sanfilippo wrote:
> 
> On 11.07.22 04:50, Jarkko Sakkinen wrote:
> > On Mon, Jul 04, 2022 at 07:45:12PM +0200, Lino Sanfilippo wrote:
> >>
> >>
> >> On 01.07.22 01:29, Jarkko Sakkinen wrote:
> >>
> >>>
> >>> I'm kind of thinking: should tpm_tis_data have a lock for its
> >>> contents?
> >>
> >> Most of the tpm_tis_data structure elements are set once during init and
> >> then only read, never changed, so there is no need to lock them. The
> >> exceptions I see are:
> >>
> >> - flags
> >> - locality_count
> >> - locality
> >
> > I'd still go for a single data struct lock, since this lock would
> > be taken in every transmit flow.
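
For illustration, a rough sketch of what such a single-lock approach could
look like (the struct below is a reduced, made-up stand-in for tpm_tis_data,
not the actual patch):

#include <linux/mutex.h>

/* Reduced, illustrative stand-in for the mutable part of tpm_tis_data. */
struct tis_mutable_state {
        struct mutex lock;              /* one lock for all mutable fields */
        unsigned long flags;
        int locality;
        unsigned int locality_count;
};

/* Every transmit/receive flow would take the same lock before touching
 * flags, locality or locality_count. */
static void tis_locality_user_inc(struct tis_mutable_state *s)
{
        mutex_lock(&s->lock);
        s->locality_count++;
        mutex_unlock(&s->lock);
}

The trade-off pointed out in the reply below is that this same lock is then
the one held while waiting for the interrupt.
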
> 
> Well, in both cases, transmit and receive, we end up in wait_for_tpm_stat().
> Whatever lock we hold at that point cannot also be taken in the interrupt
> handler, since that would deadlock: wait_for_tpm_stat() waits for the
> interrupt handler to complete while holding the very lock the interrupt
> handler needs to proceed.
> 
> So in the interrupt handler we need a lock that is not held during the
> whole transmit/receive flow.
> 
> This is why locality_count_mutex protects only the one thing the interrupt
> handler has to touch, namely the locality counter.
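
Purely as a sketch of the counting scheme described above (names are made up,
and this assumes the interrupt handler runs in a context that may sleep, e.g.
a threaded handler; otherwise a spinlock rather than a mutex would be needed):

#include <linux/mutex.h>
#include <linux/types.h>

/* The narrow lock guards nothing but the counter, so it is never held
 * across wait_for_tpm_stat() and can also be taken from the interrupt
 * handler without risking the deadlock described above. */
struct locality_counter {
        struct mutex lock;
        unsigned int count;
};

static void locality_counter_get(struct locality_counter *lc)
{
        mutex_lock(&lc->lock);
        lc->count++;
        mutex_unlock(&lc->lock);
}

/* Returns true when the caller was the last user and the locality can be
 * relinquished on the hardware. */
static bool locality_counter_put(struct locality_counter *lc)
{
        bool release;

        mutex_lock(&lc->lock);
        release = (--lc->count == 0);
        mutex_unlock(&lc->lock);
        return release;
}
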
> 
> 
> > It makes the whole thing easier
> > to maintain over time, and does not really affect scalability.
> >
> > This brings me to another question: what does this lock protect
> > against, given that tpm_try_get_ops() already takes tpm_mutex?
> > That is not clear, and it should be explained in the commit
> > message.
> 
> See above, we cannot take the tpm mutex in the interrupt handler for the same
> reason.

Then you should squash this with the following patch.

Also, I'm not sure why you don't use kref for this.
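
For comparison, a kref-based counter could look roughly like the sketch below
(made-up names, not actual driver code). One thing to keep in mind is that a
kref that has dropped to zero would have to be re-initialized before the
locality is requested again:

#include <linux/kref.h>
#include <linux/container_of.h>

struct locality_ref {
        struct kref kref;
};

/* Called by kref_put() once the last user drops its reference; this is
 * where the locality would actually be relinquished on the hardware. */
static void locality_ref_release(struct kref *kref)
{
        struct locality_ref *lr = container_of(kref, struct locality_ref, kref);

        /* relinquish the locality associated with lr here */
        (void)lr;
}

/* First user:      kref_init(&lr->kref);
 * further users:   kref_get(&lr->kref);
 * every user then: kref_put(&lr->kref, locality_ref_release); */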

> > Anyway, *if* a lock is needed, the granularity should be the whole
> > struct.
> >
> > BR, Jarkko
> 
> Regards,
> Lino

BR, Jarkko
