Date:   Tue, 15 Feb 2022 16:39:30 -0600
From:   Alex Elder <elder@...aro.org>
To:     Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
        mhi@...ts.linux.dev
Cc:     quic_hemantk@...cinc.com, quic_bbhatt@...cinc.com,
        quic_jhugo@...cinc.com, vinod.koul@...aro.org,
        bjorn.andersson@...aro.org, dmitry.baryshkov@...aro.org,
        quic_vbadigan@...cinc.com, quic_cang@...cinc.com,
        quic_skananth@...cinc.com, linux-arm-msm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI
 endpoint interrupts

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for processing MHI endpoint interrupts such as the control,
> command, and channel interrupts from the host.
> 
> The interrupts will be generated in the endpoint device whenever the host
> writes to the corresponding doorbell registers. The doorbell logic is
> handled internally by the hardware.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>

Unless I'm mistaken, you have some bugs here.

Beyond that, I question whether you should be using workqueues
for handling all interrupts.  For now, it's fine, but there
might be room for improvement after this is accepted upstream
(using threaded interrupt handlers, for example).
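If/when you do explore that, a threaded handler might look
roughly like the sketch below.  To be clear, this is only a
sketch; the names and the IRQ plumbing are illustrative, not
taken from this patch.  The top half just acks the hardware and
wakes the thread, and the bottom half does what the workqueues
do today.

	static irqreturn_t mhi_ep_irq(int irq, void *data)
	{
		/* Read/ack the doorbell status in hard IRQ context */
		return IRQ_WAKE_THREAD;
	}

	static irqreturn_t mhi_ep_irq_thread(int irq, void *data)
	{
		struct mhi_ep_cntrl *mhi_cntrl = data;

		/* Process ctrl/cmd/channel doorbells in thread context */
		return IRQ_HANDLED;
	}

	/* Wherever the IRQ is currently requested */
	ret = request_threaded_irq(irq, mhi_ep_irq, mhi_ep_irq_thread,
				   IRQF_ONESHOT, "mhi-ep", mhi_cntrl);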

					-Alex

> ---
>   drivers/bus/mhi/ep/main.c | 113 +++++++++++++++++++++++++++++++++++++-
>   include/linux/mhi_ep.h    |   2 +
>   2 files changed, 113 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index ccb3c2795041..072b872e735b 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -185,6 +185,56 @@ static void mhi_ep_ring_worker(struct work_struct *work)
>   	}
>   }
>   
> +static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl,
> +				    unsigned long ch_int, u32 ch_idx)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_ring_item *item;
> +	struct mhi_ep_ring *ring;
> +	unsigned int i;

Why not u32 i?  And why is the ch_int argument unsigned long?  The value
passed in is a u32.

> +
> +	for_each_set_bit(i, &ch_int, 32) {
> +		/* Channel index varies for each register: 0, 32, 64, 96 */
> +		i += ch_idx;

This is a bug.  You should not be modifying the iterator
variable inside the loop.  Maybe do this instead:

	u32 ch_id = ch_idx + i;

	ring = &mhi_cntrl->mhi_chan[ch_id].ring;

> +		ring = &mhi_cntrl->mhi_chan[i].ring;
> +

You are initializing all fields here, so kmalloc() is fine
(rather than kzalloc()).  But if you ever add another field
to the mhi_ep_ring_item structure, that's not guaranteed.
I think at least a comment here explaining why you're not
using kzalloc() would be helpful.

> +		item = kmalloc(sizeof(*item), GFP_ATOMIC);

Even an ATOMIC allocation can fail.  Check the return
pointer.
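Something like:

	item = kmalloc(sizeof(*item), GFP_ATOMIC);
	if (!item)
		continue;	/* or log it and bail out; your call */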

> +		item->ring = ring;
> +
> +		dev_dbg(dev, "Queuing doorbell interrupt for channel (%d)\n", i);

Use ch_id (or whatever you call it) here too.

> +		spin_lock(&mhi_cntrl->list_lock);
> +		list_add_tail(&item->node, &mhi_cntrl->ch_db_list);
> +		spin_unlock(&mhi_cntrl->list_lock);

Instead, create a list head on the stack and build up
this list without using the spinlock.  Then splice
everything you added into the ch_db_list at the end.
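Roughly (just a sketch):

	LIST_HEAD(head);

	for_each_set_bit(i, &ch_int, 32) {
		. . .
		list_add_tail(&item->node, &head);
	}

	spin_lock(&mhi_cntrl->list_lock);
	list_splice_tail(&head, &mhi_cntrl->ch_db_list);
	spin_unlock(&mhi_cntrl->list_lock);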

> +
> +		queue_work(mhi_cntrl->ring_wq, &mhi_cntrl->ring_work);

Maybe there's a small amount of latency saved by
doing this repeatedly, but you're queueing work
with the same work structure over and over again.

Instead, you could set a Boolean at the top:
	work = !!ch_int;

	for_each_set_bit() {
		. . .
	}

	if (work)
		queue_work(...);


> +	}
> +}
> +
> +/*
> + * Channel interrupt statuses are contained in 4 registers, each 32 bits wide.
> + * To check all interrupts, we need to loop through each register and then
> + * check for set bits.
> + */
> +static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	u32 ch_int, ch_idx;
> +	int i;
> +
> +	mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl);

You could have the above function return a summary Boolean
value, which would indicate whether *any* channel interrupts
had occurred (skipping the loop below when we get just a control
or command doorbell interrupt).
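That is, something like this (assuming the helper were changed
to return a bool summarizing the status registers it just read):

	if (!mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl))
		return;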

> +
> +	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> +		ch_idx = i * MHI_MASK_CH_EV_LEN;
> +
> +		/* Only process channel interrupt if the mask is enabled */
> +		ch_int = (mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask);

Parentheses not needed.

> +		if (ch_int) {
> +			mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
> +			mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_A7_n(i),
> +							mhi_cntrl->chdb[i].status);
> +		}
> +	}
> +}
> +
>   static void mhi_ep_state_worker(struct work_struct *work)
>   {
>   	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
> @@ -222,6 +272,53 @@ static void mhi_ep_state_worker(struct work_struct *work)
>   	}
>   }
>   
> +static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
> +					 enum mhi_state state)
> +{
> +	struct mhi_ep_state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
> +

Do not assume ATOMIC allocations succeed.
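Same pattern as before, something like:

	item = kmalloc(sizeof(*item), GFP_ATOMIC);
	if (!item)
		return;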

I don't have any further comments on the rest.

> +	item->state = state;
> +	spin_lock(&mhi_cntrl->list_lock);
> +	list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
> +	spin_unlock(&mhi_cntrl->list_lock);
> +
> +	queue_work(mhi_cntrl->state_wq, &mhi_cntrl->state_work);
> +}
> +

. . .
