Message-ID: <f68657d6-027c-c842-ce35-5524cd782c7e@opensource.wdc.com>
Date: Wed, 19 Jan 2022 17:45:33 +0900
From: Damien Le Moal <damien.lemoal@...nsource.wdc.com>
To: cgel.zte@...il.com, jejb@...ux.ibm.com
Cc: martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
linux-kernel@...r.kernel.org, Minghao Chi <chi.minghao@....com.cn>,
Zeal Robot <zealci@....com.cn>
Subject: Re: [PATCH] drivers/scsi/csiostor: do not sleep with a spin lock held
On 1/19/22 14:59, cgel.zte@...il.com wrote:
> From: Minghao Chi <chi.minghao@....com.cn>
>
> The might_sleep_if() check in mempool_alloc() may sleep. We can't call
> mempool_alloc() with a spin lock held, so move the allocation before
> taking the lock.
But the allocation uses GFP_ATOMIC, which does not include
__GFP_DIRECT_RECLAIM, so how would mempool_alloc() trigger the
might_sleep() warning?
>
> Reported-by: Zeal Robot <zealci@....com.cn>
> Signed-off-by: Minghao Chi <chi.minghao@....com.cn>
> Signed-off-by: CGEL ZTE <cgel.zte@...il.com>
> ---
> drivers/scsi/csiostor/csio_attr.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/csiostor/csio_attr.c b/drivers/scsi/csiostor/csio_attr.c
> index 200e50089711..3d4ab439c756 100644
> --- a/drivers/scsi/csiostor/csio_attr.c
> +++ b/drivers/scsi/csiostor/csio_attr.c
> @@ -424,8 +424,8 @@ csio_fcoe_alloc_vnp(struct csio_hw *hw, struct csio_lnode *ln)
>
> /* Issue VNP cmd to alloc vport */
> /* Allocate Mbox request */
> - spin_lock_irq(&hw->lock);
> mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
> + spin_lock_irq(&hw->lock);
> if (!mbp) {
> CSIO_INC_STATS(hw, n_err_nomem);
> ret = -ENOMEM;
> @@ -505,8 +505,8 @@ csio_fcoe_free_vnp(struct csio_hw *hw, struct csio_lnode *ln)
> /* Issue VNP cmd to free vport */
> /* Allocate Mbox request */
>
> - spin_lock_irq(&hw->lock);
> mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
> + spin_lock_irq(&hw->lock);
> if (!mbp) {
> CSIO_INC_STATS(hw, n_err_nomem);
> ret = -ENOMEM;
--
Damien Le Moal
Western Digital Research