Message-ID: <ZiFBBh+kQy6Pu3BM@hu-bjorande-lv.qualcomm.com>
Date: Thu, 18 Apr 2024 08:49:26 -0700
From: Bjorn Andersson <quic_bjorande@...cinc.com>
To: Sibi Sankar <quic_sibis@...cinc.com>
CC: <sudeep.holla@....com>, <cristian.marussi@....com>, <andersson@...nel.org>,
<konrad.dybcio@...aro.org>, <jassisinghbrar@...il.com>,
<robh+dt@...nel.org>, <krzysztof.kozlowski+dt@...aro.org>,
<dmitry.baryshkov@...aro.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-msm@...r.kernel.org>, <devicetree@...r.kernel.org>,
<quic_rgottimu@...cinc.com>, <quic_kshivnan@...cinc.com>,
<conor+dt@...nel.org>, <quic_gkohli@...cinc.com>,
<quic_nkela@...cinc.com>, <quic_psodagud@...cinc.com>
Subject: Re: [PATCH V3 2/5] mailbox: Add support for QTI CPUCP mailbox
controller
On Thu, Apr 18, 2024 at 01:01:50PM +0530, Sibi Sankar wrote:
> On 4/18/24 02:56, Bjorn Andersson wrote:
> > On Wed, Apr 17, 2024 at 06:58:53PM +0530, Sibi Sankar wrote:
> > > diff --git a/drivers/mailbox/qcom-cpucp-mbox.c b/drivers/mailbox/qcom-cpucp-mbox.c
[..]
> >
> > > + if (status & BIT(i)) {
> >
> > Can't you combine the for loop and this conditional into a
> > for_each_set_bit()?
>
> The only drawback I see here is if the number of channels increases to
> its full capacity of 64, since for_each_set_bit() expects an unsigned long.
>
It takes an unsigned long * and it can take a size > BITS_PER_LONG. But
I've not convinced myself that the bit order across two of those matches
the u64 bits.
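Something along these lines is what I had in mind (untested sketch; the
NUM_CHAN placeholder stands in for whatever channel-count define the
driver ends up with, and copying the status into a local unsigned long
like this only holds while the channel count fits in BITS_PER_LONG):

	unsigned long bits = status;	/* u64 -> long, ok while chans <= BITS_PER_LONG */
	unsigned long i;

	for_each_set_bit(i, &bits, NUM_CHAN) {
		val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) +
			    APSS_CPUCP_MBOX_CMD_OFF);
		chan = &cpucp->chans[i];
		...
	}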
> >
> > > + val = readl(cpucp->rx_base + APSS_CPUCP_RX_MBOX_CMD(i) + APSS_CPUCP_MBOX_CMD_OFF);
> > > + chan = &cpucp->chans[i];
> > > + spin_lock_irqsave(&chan->lock, flags);
> >
> > Can you please add a comment here to document that the lock is taken
> > here to deal with races against client registration? (It wasn't obvious
> > to me...)
>
> This was put in to handle irqs after channel closure. Meaning we
> don't want to send data on a closed/empty channel.
>
You're dealing with that through the chan->cl check below, not the lock
itself. So the lock here would be for synchronizing this code with
potentially concurrent execution of __mbox_bind_client() or e.g.
mbox_free_channel().
But if this is indeed the problem, then we're locking here to ensure
that mbox_chan_received_data() will not dereference chan->cl while it's
being modified elsewhere in the mailbox core.
If that's the case, I think this needs to be strongly documented in the
API, or perhaps better yet the lock being moved into
mbox_chan_received_data().
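To illustrate the latter, something in this direction is what I'm
thinking of (rough sketch only, I haven't checked whether holding
chan->lock across the rx_callback is acceptable for existing clients):

	void mbox_chan_received_data(struct mbox_chan *chan, void *mssg)
	{
		unsigned long flags;

		/* Serialize against __mbox_bind_client()/mbox_free_channel() */
		spin_lock_irqsave(&chan->lock, flags);
		if (chan->cl && chan->cl->rx_callback)
			chan->cl->rx_callback(chan->cl, mssg);
		spin_unlock_irqrestore(&chan->lock, flags);
	}

That would let controller drivers drop the open-coded locking and
chan->cl checks in their irq handlers entirely.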
Regards,
Bjorn