Message-ID: <20180406083016.weuoemmcgvgxlcxy@pathway.suse.cz>
Date: Fri, 6 Apr 2018 10:30:16 +0200
From: Petr Mladek <pmladek@...e.com>
To: wen.yang99@....com.cn
Cc: sergey.senozhatsky.work@...il.com, jejb@...ux.vnet.ibm.com,
martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
linux-kernel@...r.kernel.org, Bart.VanAssche@....com,
tj@...nel.org, jiang.biao2@....com.cn, zhong.weidong@....com.cn,
tan.hu@....com.cn
Subject: Re: Reply: Re: [PATCH v2] scsi: Introduce sdev_printk_ratelimited to
 throttle frequent printk
On Tue 2018-04-03 14:19:43, wen.yang99@....com.cn wrote:
> On the other hand, queue_lock is big; looping and doing work under the
> spinlock may hold up many things for a long time, which may cause some
> problems.
>
> So this code needs to be optimized later:
>
> scsi_request_fn()
> {
> 	for (;;) {
> 		int rtn;
> 		/*
> 		 * get next queueable request. We do this early to make sure
> 		 * that the request is fully prepared even if we cannot
> 		 * accept it.
> 		 */
> 		req = blk_peek_request(q);
>
> 		if (!req)
> 			break;
>
> 		if (unlikely(!scsi_device_online(sdev))) {
> 			sdev_printk(KERN_ERR, sdev,
> 				    "rejected I/O to offline device\n");
> 			scsi_kill_request(req, q);
> 			continue;
>
> 			^^^^^^^^^ still under spinlock
> 		}
I wonder if the following might be the best solution after all:
	if (unlikely(!scsi_device_online(sdev))) {
		scsi_kill_request(req, q);
		/*
		 * printk() might take a while on slow consoles.
		 * Prevent softlockups by releasing the lock.
		 */
		spin_unlock_irq(q->queue_lock);
		sdev_printk(KERN_ERR, sdev,
			    "rejecting I/O to offline device\n");
		spin_lock_irq(q->queue_lock);
		continue;
	}
I see that the lock is also released in several other situations, so
this looks safe. Also, handling too many requests without releasing
the lock seems like a bad idea in general. I think that this solution
was already suggested earlier.
Please note that I moved scsi_kill_request() up. It looks natural to
remove the request from the queue before we release the queue lock.
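
For completeness, the rate-limited approach named in the subject line
would address the same problem differently, by dropping excess
messages instead of releasing the lock. A minimal sketch of what such
a wrapper could look like, modelled on the existing
printk_ratelimited() pattern (illustrative only; the macro in the
actual v2 patch may be defined differently):

#include <linux/ratelimit.h>
#include <scsi/scsi_device.h>

/*
 * Illustrative sketch only: a rate-limited variant of sdev_printk(),
 * built the same way as printk_ratelimited().  Messages beyond the
 * default burst within the default interval are silently dropped.
 */
#define sdev_printk_ratelimited(l, sdev, fmt, a...)			\
({									\
	static DEFINE_RATELIMIT_STATE(_rs,				\
				      DEFAULT_RATELIMIT_INTERVAL,	\
				      DEFAULT_RATELIMIT_BURST);		\
									\
	if (__ratelimit(&_rs))						\
		sdev_printk(l, sdev, fmt, ##a);				\
})

With something like this, the offline-device message could stay under
the queue lock, at the cost of losing some of the messages.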
Best Regards,
Petr
BTW: Your mail had strange formatting. Please try to avoid using
HTML.