Message-ID: <ecb74c6c-8de6-4774-8159-2ec118437c57@kernel.dk>
Date: Mon, 6 Oct 2025 08:03:23 -0600
From: Jens Axboe <axboe@...nel.dk>
To: John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
 linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: Andreas Larsson <andreas@...sler.com>,
 Anthony Yznaga <anthony.yznaga@...cle.com>, Sam James <sam@...too.org>,
 "David S . Miller" <davem@...emloft.net>,
 Michael Karcher <kernel@...rcher.dialup.fu-berlin.de>,
 sparclinux@...r.kernel.org
Subject: Re: [PATCH v2] Revert "sunvdc: Do not spin in an infinite loop when
 vio_ldc_send() returns EAGAIN"

On 10/6/25 7:56 AM, John Paul Adrian Glaubitz wrote:
> On Mon, 2025-10-06 at 07:46 -0600, Jens Axboe wrote:
>>
>>>> The nicer approach would be to have sunvdc punt retries back up to the
>>>> block stack, which would at least avoid a busy spin for the condition,
>>>> rather than returning BLK_STS_IOERR, which terminates the request and
>>>> bubbles back the -EIO as per your log. IOW, if we've already spent
>>>> 10.5ms in that busy loop as per the above rough calculation, perhaps
>>>> we'd be better off restarting the queue, and hence this operation, after
>>>> a small sleeping delay instead. That would seem a lot saner than
>>>> hammering on it.
>>>
>>> I generally agree with this remark. I just wonder whether this
>>> behavior should apply to a logical domain as well. I guess if a
>>> request doesn't succeed immediately, it's only an urgent problem if
>>> the logical domain locks up, isn't it?
>>
>> It's just bad behavior. Honestly, most of this just looks like either a
>> bad implementation of the protocol, as it's all based on busy looping, or
>> a badly designed protocol. And then the sunvdc usage of it just
>> perpetuates that problem, rather than utilizing the tools that exist in
>> the block stack to take a breather instead of repeatedly hammering on
>> the hardware for conditions like this.
> 
> To be fair, the sunvdc driver is fairly old and I'm not sure whether these
> tools already existed back then. FWIW, Oracle engineers did actually work
> on the Linux for SPARC code for a while, and it's possible that their
> UEK kernel tree [1] contains some improvements in this regard.

Requeueing and retry have always been available on the block side. It's
not an uncommon thing for a driver to need in case of resource
starvation, and sometimes those resources can be unrelated to the IO
itself, e.g. IOMMU shortages, or this busy condition.
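
Purely as an illustration (this is not sunvdc code, and the foo_* names
below are made-up placeholders), the usual shape of that in a blk-mq
->queue_rq() handler is roughly:

#include <linux/blk-mq.h>

struct foo_dev;		/* hypothetical per-device state */
int foo_submit(struct foo_dev *dev, struct request *rq);

static blk_status_t foo_queue_rq(struct blk_mq_hw_ctx *hctx,
				 const struct blk_mq_queue_data *bd)
{
	struct foo_dev *dev = hctx->queue->queuedata;
	int ret;

	/*
	 * Hypothetical submit helper that returns -EAGAIN on a
	 * transient shortage instead of spinning on it.
	 */
	ret = foo_submit(dev, bd->rq);
	if (ret == -EAGAIN) {
		/*
		 * Hand the request back to blk-mq and have it re-run the
		 * queue a little later instead of busy looping here.
		 */
		blk_mq_delay_kick_requeue_list(hctx->queue, 10);
		return BLK_STS_DEV_RESOURCE;
	}
	if (ret < 0)
		return BLK_STS_IOERR;	/* hard error, completes with -EIO */

	return BLK_STS_OK;
}

Which is essentially what the untested diff quoted further down does,
just with the vio locking around it.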

But that's fine, it's not uncommon for drivers to miss things like that,
and then we fix them up when noticed. The driver was probably written by
someone not super familiar with the IO stack.

>>>>> And unlike the change in adddc32d6fde ("sunvnet: Do not spin in an infinite
>>>>> loop when vio_ldc_send() returns EAGAIN"), we can't just drop data as this
>>>>> driver concerns a block device while the other driver concerns a network
>>>>> device. Dropping network packets is expected, dropping bytes from a block
>>>>> device driver is not.
>>>>
>>>> Right, but we can sanely retry it rather than sit in a tight loop.
>>>> Something like the entirely untested diff below.
>>>>
>>>> diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
>>>> index db1fe9772a4d..aa49dffb1b53 100644
>>>> --- a/drivers/block/sunvdc.c
>>>> +++ b/drivers/block/sunvdc.c
>>>> @@ -539,6 +539,7 @@ static blk_status_t vdc_queue_rq(struct blk_mq_hw_ctx *hctx,
>>>>  	struct vdc_port *port = hctx->queue->queuedata;
>>>>  	struct vio_dring_state *dr;
>>>>  	unsigned long flags;
>>>> +	int ret;
>>>>  
>>>>  	dr = &port->vio.drings[VIO_DRIVER_TX_RING];
>>>>  
>>>> @@ -560,7 +561,13 @@ static blk_status_t vdc_queue_rq(struct blk_mq_hw_ctx *hctx,
>>>>  		return BLK_STS_DEV_RESOURCE;
>>>>  	}
>>>>  
>>>> -	if (__send_request(bd->rq) < 0) {
>>>> +	ret = __send_request(bd->rq);
>>>> +	if (ret == -EAGAIN) {
>>>> +		spin_unlock_irqrestore(&port->vio.lock, flags);
>>>> +		/* already spun for 10msec, defer 10msec and retry */
>>>> +		blk_mq_delay_kick_requeue_list(hctx->queue, 10);
>>>> +		return BLK_STS_DEV_RESOURCE;
>>>> +	} else if (ret < 0) {
>>>>  		spin_unlock_irqrestore(&port->vio.lock, flags);
>>>>  		return BLK_STS_IOERR;
>>>>  	}
>>>
>>> We could add this particular change on top of mine after we have
>>> extensively tested it.
>>
>> It doesn't really make sense on top of yours, as that removes the
>> limited looping that sunvdc would do...
> 
> Why not? From what I understood, you're moving the limited looping to a
> different part of the driver where it can delegate the request back up
> the stack, meaning that the current place to implement the limitation is
> not the correct one anyway, is it?

Because your change never gives up, it would never trigger the softer
retry condition. It'll just keep busy looping.

>>> For now, I would propose to pick up my patch to revert the previous
>>> change. I can then pick up your proposed change and deploy it for
>>> extensive testing and see if it has any side effects.
>>
>> Why not just test this one and see if it works? As far as I can tell,
>> it's been 6.5 years since this change went in; I can't imagine there's a
>> sense of urgency so huge that fixing it up can't wait for testing a
>> proper patch rather than a work-around.
> 
> Well, the thing is that a lot of people have been running older kernels
> on SPARC because of issues like these, and I have now started working on
> tracking down all of these issues [2] so that users can run a current
> kernel. So the fact that this change has existed for 6.5 years shouldn't
> be an argument, I think.

While I agree that the bug is unfortunate, it's also a chance to
properly fix it rather than just go back to busy looping. How difficult
is it to test an iteration of the patch? It'd be annoying to queue a
band-aid only to have to revert it again for a real fix. If this were a
regression from the last release or two, then that'd be a different
story, but the fact that this has persisted for 6.5 years and is only
bubbling back up to mainstream now would seem to indicate that we should
spend a bit of extra time to just get it right the first time.

-- 
Jens Axboe
