Message-ID: <20230609115840-mutt-send-email-mst@kernel.org>
Date: Fri, 9 Jun 2023 12:02:36 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: Xianting Tian <xianting.tian@...ux.alibaba.com>,
arei.gonglei@...wei.com, jasowang@...hat.com,
xuanzhuo@...ux.alibaba.com, herbert@...dor.apana.org.au,
davem@...emloft.net, amit@...nel.org, arnd@...db.de,
marcel@...tmann.org, johan.hedberg@...il.com, luiz.dentz@...il.com,
linux-bluetooth@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
Xianting Tian <tianxianting.txt@...baba-inc.com>
Subject: Re: [PATCH 1/3] virtio-crypto: fixup potential cpu stall when free
unused bufs
On Fri, Jun 09, 2023 at 04:05:57PM +0200, Greg KH wrote:
> On Fri, Jun 09, 2023 at 09:49:39PM +0800, Xianting Tian wrote:
> >
> > On 2023/6/9 9:41 PM, Greg KH wrote:
> > > On Fri, Jun 09, 2023 at 03:39:24PM +0200, Greg KH wrote:
> > > > On Fri, Jun 09, 2023 at 09:18:15PM +0800, Xianting Tian wrote:
> > > > > From: Xianting Tian <tianxianting.txt@...baba-inc.com>
> > > > >
> > > > > A CPU stall issue may happen if the device is configured with multiple
> > > > > queues and a large queue depth, so fix it.
> > > > >
> > > > > Signed-off-by: Xianting Tian <xianting.tian@...ux.alibaba.com>
> > > > > ---
> > > > > drivers/crypto/virtio/virtio_crypto_core.c | 1 +
> > > > > 1 file changed, 1 insertion(+)
> > > > >
> > > > > diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
> > > > > index 1198bd306365..94849fa3bd74 100644
> > > > > --- a/drivers/crypto/virtio/virtio_crypto_core.c
> > > > > +++ b/drivers/crypto/virtio/virtio_crypto_core.c
> > > > > @@ -480,6 +480,7 @@ static void virtcrypto_free_unused_reqs(struct virtio_crypto *vcrypto)
> > > > > kfree(vc_req->req_data);
> > > > > kfree(vc_req->sgs);
> > > > > }
> > > > > + cond_resched();
> > > > that's not "fixing a stall", it is "call the scheduler because we are
> > > > taking too long". The CPU isn't stalled at all, just busy.
> > > >
> > > > Are you sure this isn't just a bug in the code? Why is this code taking
> > > > so long that you have to force the scheduler to run? This is almost
> > > > always a sign that something else needs to be fixed instead.
> > > And same comment on the other 2 patches, please fix this properly.
> > >
> > > > Also, this is a tight loop that is just freeing memory, so why is it
> > > > taking so long? Why do you want it to take longer (which is what you
> > > > are doing here)? Ideally it would be faster, not slower, so you are
> > > > now slowing down the system overall with this patchset, right?
> >
> > Yes, it is a similar fix to the one for virtio-net:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/net/virtio_net.c?h=v6.4-rc5&id=f8bb5104394560e29017c25bcade4c6b7aabd108
Well that one actually at least describes the configuration:
For multi-queue and large ring-size use case, the following error
occurred when free_unused_bufs:
rcu: INFO: rcu_sched self-detected stall on CPU.
So a similar fix, but not a similar commit log: this one lacks a Fixes tag
and a description of what the problem is and when it triggers.
> I would argue that this too is incorrect, because why does freeing
> memory take so long?
You are correct that even that one lacks a detailed explanation of
why the patch helps.
And the explanation for why it takes so long is exactly that
we have very deep queues and a very large number of queues.
What the patch does is give the scheduler a chance
to do some work between the queues.
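
To make that concrete, here is a rough sketch of what the loop looks like
with the change applied (reconstructed around the quoted hunk; it mirrors
virtcrypto_free_unused_reqs, but the vcrypto->max_data_queues and
data_vq[] names used here are assumptions for illustration, not copied
from the driver):

/*
 * Sketch only: drain each data queue in turn, calling cond_resched()
 * once per queue so a device with many deep queues does not keep the
 * CPU busy for the entire teardown.
 */
static void free_unused_reqs_sketch(struct virtio_crypto *vcrypto)
{
	struct virtio_crypto_request *vc_req;
	struct virtqueue *vq;
	int i;

	for (i = 0; i < vcrypto->max_data_queues; i++) {
		vq = vcrypto->data_vq[i].vq;
		/* detach and free every request still sitting in the ring */
		while ((vc_req = virtqueue_detach_unused_buf(vq)) != NULL) {
			kfree(vc_req->req_data);
			kfree(vc_req->sgs);
		}
		/* give the scheduler a chance before the next queue */
		cond_resched();
	}
}

The idea, presumably, is that rescheduling once per queue is enough to
avoid the stall warnings without adding overhead to every single freed
request.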
> And again, you are making it take longer, is that
> ok?
>
> thanks,
>
> greg k-h