Date:   Fri, 9 Oct 2020 21:19:02 +0530
From:   Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
To:     Bhaumik Bhatt <bbhatt@...eaurora.org>
Cc:     linux-arm-msm@...r.kernel.org, hemantk@...eaurora.org,
        jhugo@...eaurora.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 02/10] bus: mhi: core: Move to using high priority
 workqueue

On Fri, Sep 18, 2020 at 07:02:27PM -0700, Bhaumik Bhatt wrote:
> MHI work is currently scheduled on the global/system workqueue and can
> encounter delays on a stressed system. To avoid those unforeseen
> delays which can hamper bootup or shutdown times, use a dedicated high
> priority workqueue instead of the global/system workqueue.
> 
> Signed-off-by: Bhaumik Bhatt <bbhatt@...eaurora.org>
> ---
>  drivers/bus/mhi/core/init.c | 7 +++++++
>  drivers/bus/mhi/core/pm.c   | 2 +-
>  include/linux/mhi.h         | 2 ++
>  3 files changed, 10 insertions(+), 1 deletion(-)
> 
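For anyone skimming along, the change boils down to the usual dedicated-workqueue pattern
(generic, untested sketch with placeholder names, not the driver code itself):
schedule_work() always targets the shared system_wq, whereas queue_work() lets the
caller pick the queue.

	static struct workqueue_struct *example_wq;	/* placeholder */
	static struct work_struct example_work;		/* placeholder */

	/* setup: one ordered, high priority queue instead of system_wq */
	example_wq = alloc_ordered_workqueue("example_wq",
					     WQ_MEM_RECLAIM | WQ_HIGHPRI);
	if (!example_wq)
		return -ENOMEM;

	/* runtime: queue on the dedicated queue, not the system one */
	queue_work(example_wq, &example_work);

	/* teardown: drains pending work and frees the queue */
	destroy_workqueue(example_wq);
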
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> index 1b4161e..ca32563 100644
> --- a/drivers/bus/mhi/core/init.c
> +++ b/drivers/bus/mhi/core/init.c
> @@ -890,6 +890,11 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>  	INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
>  	init_waitqueue_head(&mhi_cntrl->state_event);
>  
> +	mhi_cntrl->hiprio_wq = alloc_ordered_workqueue
> +				("mhi_hiprio_wq", WQ_MEM_RECLAIM | WQ_HIGHPRI);
> +	if (!mhi_cntrl->hiprio_wq)

Printing an error here would be helpful.
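Something along these lines would do (untested; assumes mhi_cntrl->cntrl_dev is
usable for logging this early in mhi_register_controller()):

	mhi_cntrl->hiprio_wq = alloc_ordered_workqueue("mhi_hiprio_wq",
						       WQ_MEM_RECLAIM | WQ_HIGHPRI);
	if (!mhi_cntrl->hiprio_wq) {
		/* report the failure before bailing out */
		dev_err(mhi_cntrl->cntrl_dev,
			"Failed to allocate the high priority workqueue\n");
		goto error_alloc_cmd;
	}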

> +		goto error_alloc_cmd;
> +
>  	mhi_cmd = mhi_cntrl->mhi_cmd;
>  	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
>  		spin_lock_init(&mhi_cmd->lock);
> @@ -977,10 +982,12 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>  
>  error_alloc_dev:
>  	kfree(mhi_cntrl->mhi_cmd);
> +	destroy_workqueue(mhi_cntrl->hiprio_wq);

So you're destroying the workqueue twice? Since error_alloc_dev falls through
into error_alloc_cmd below, which already destroys it, you don't need this one here.
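Keeping a single call in the shared tail would look roughly like this (untested
sketch; the NULL check is only there because the !hiprio_wq failure above also
jumps to error_alloc_cmd, and unlike kfree(), destroy_workqueue() doesn't accept
a NULL pointer):

 error_alloc_dev:
 	kfree(mhi_cntrl->mhi_cmd);

 error_alloc_cmd:
 	vfree(mhi_cntrl->mhi_chan);
 	kfree(mhi_cntrl->mhi_event);
 	if (mhi_cntrl->hiprio_wq)
 		destroy_workqueue(mhi_cntrl->hiprio_wq);

 	return ret;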

>  
>  error_alloc_cmd:
>  	vfree(mhi_cntrl->mhi_chan);
>  	kfree(mhi_cntrl->mhi_event);
> +	destroy_workqueue(mhi_cntrl->hiprio_wq);
>  
>  	return ret;
>  }
> diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
> index ce4d969..9d4789d 100644
> --- a/drivers/bus/mhi/core/pm.c
> +++ b/drivers/bus/mhi/core/pm.c
> @@ -597,7 +597,7 @@ int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
>  	list_add_tail(&item->node, &mhi_cntrl->transition_list);
>  	spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
>  
> -	schedule_work(&mhi_cntrl->st_worker);
> +	queue_work(mhi_cntrl->hiprio_wq, &mhi_cntrl->st_worker);
>  
>  	return 0;
>  }
> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> index fb45a0f..7677676 100644
> --- a/include/linux/mhi.h
> +++ b/include/linux/mhi.h
> @@ -338,6 +338,7 @@ struct mhi_controller_config {
>   * @wlock: Lock for protecting device wakeup
>   * @mhi_link_info: Device bandwidth info
>   * @st_worker: State transition worker
> + * @hiprio_wq: High priority workqueue

For what? Please state the purpose.
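Even a one-liner such as:

 * @hiprio_wq: High priority workqueue for handling MHI state transitions

would do (just a suggestion based on how it is used in this patch; adjust the
wording if later patches in the series queue more than the state transition
work on it).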

Thanks,
Mani

>   * @state_event: State change event
>   * @status_cb: CB function to notify power states of the device (required)
>   * @wake_get: CB function to assert device wake (optional)
> @@ -421,6 +422,7 @@ struct mhi_controller {
>  	spinlock_t wlock;
>  	struct mhi_link_info mhi_link_info;
>  	struct work_struct st_worker;
> +	struct workqueue_struct *hiprio_wq;
>  	wait_queue_head_t state_event;
>  
>  	void (*status_cb)(struct mhi_controller *mhi_cntrl,
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 
