Message-ID: <94321592.37021268.1608104830197.JavaMail.zimbra@redhat.com>
Date: Wed, 16 Dec 2020 02:47:10 -0500 (EST)
From: Jason Wang <jasowang@...hat.com>
To: wangyunjian <wangyunjian@...wei.com>
Cc: netdev@...r.kernel.org, mst@...hat.com,
willemdebruijn kernel <willemdebruijn.kernel@...il.com>,
virtualization@...ts.linux-foundation.org,
"Lilijun (Jerry)" <jerry.lilijun@...wei.com>,
chenchanghu <chenchanghu@...wei.com>,
xudingke <xudingke@...wei.com>,
"huangbin (J)" <brian.huangbin@...wei.com>
Subject: Re: [PATCH net 2/2] vhost_net: fix high cpu load when sendmsg fails
----- Original Message -----
> > -----Original Message-----
> > From: Jason Wang [mailto:jasowang@...hat.com]
> > Sent: Wednesday, December 16, 2020 1:57 PM
> > To: wangyunjian <wangyunjian@...wei.com>
> > Cc: netdev@...r.kernel.org; mst@...hat.com; willemdebruijn kernel
> > <willemdebruijn.kernel@...il.com>;
> > virtualization@...ts.linux-foundation.org;
> > Lilijun (Jerry) <jerry.lilijun@...wei.com>; chenchanghu
> > <chenchanghu@...wei.com>; xudingke <xudingke@...wei.com>; huangbin (J)
> > <brian.huangbin@...wei.com>
> > Subject: Re: [PATCH net 2/2] vhost_net: fix high cpu load when sendmsg
> > fails
> >
> >
> >
> > ----- Original Message -----
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jason Wang [mailto:jasowang@...hat.com]
> > > > Sent: Tuesday, December 15, 2020 12:10 PM
> > > > To: wangyunjian <wangyunjian@...wei.com>; netdev@...r.kernel.org;
> > > > mst@...hat.com; willemdebruijn.kernel@...il.com
> > > > Cc: virtualization@...ts.linux-foundation.org; Lilijun (Jerry)
> > > > <jerry.lilijun@...wei.com>; chenchanghu <chenchanghu@...wei.com>;
> > > > xudingke <xudingke@...wei.com>; huangbin (J)
> > > > <brian.huangbin@...wei.com>
> > > > Subject: Re: [PATCH net 2/2] vhost_net: fix high cpu load when sendmsg
> > > > fails
> > > >
> > > >
> > > > On 2020/12/15 上午9:48, wangyunjian wrote:
> > > > > From: Yunjian Wang <wangyunjian@...wei.com>
> > > > >
> > > > > Currently we break the loop and wake up the vhost_worker when sendmsg
> > > > > fails. When the worker wakes up again, we'll meet the same error. This
> > > > > will cause high CPU load. To fix this issue, we can skip this
> > > > > descriptor by ignoring the error. When we exceed sndbuf, the return
> > > > > value of sendmsg is -EAGAIN. In that case we don't skip the descriptor
> > > > > and don't drop the packet.
> > > > >
> > > > > Signed-off-by: Yunjian Wang <wangyunjian@...wei.com>
> > > > > ---
> > > > > drivers/vhost/net.c | 21 +++++++++------------
> > > > > 1 file changed, 9 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > > > > index c8784dfafdd7..f966592d8900 100644
> > > > > --- a/drivers/vhost/net.c
> > > > > +++ b/drivers/vhost/net.c
> > > > > @@ -827,16 +827,13 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
> > > > > msg.msg_flags &= ~MSG_MORE;
> > > > > }
> > > > >
> > > > > - /* TODO: Check specific error and bomb out unless ENOBUFS? */
> > > > > err = sock->ops->sendmsg(sock, &msg, len);
> > > > > - if (unlikely(err < 0)) {
> > > > > + if (unlikely(err == -EAGAIN)) {
> > > > > vhost_discard_vq_desc(vq, 1);
> > > > > vhost_net_enable_vq(net, vq);
> > > > > break;
> > > > > - }
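A minimal, stand-alone sketch (hypothetical names, not the actual kernel code) of the decision the hunk above introduces in handle_tx_copy(): only -EAGAIN re-queues the descriptor and stops the loop, while any other failure is logged and then treated like a completed send, so the descriptor still reaches the used ring.

#include <errno.h>
#include <stdio.h>

enum tx_action {
	TX_RETRY_LATER,	/* discard desc, re-enable vq, break out of the loop */
	TX_COMPLETE,	/* fall through to "done:" and add the head to the used ring */
};

static enum tx_action classify_sendmsg_result(int err, size_t len)
{
	if (err == -EAGAIN)
		return TX_RETRY_LATER;	/* sndbuf exceeded: retry later */
	if (err < 0 || (size_t)err != len)
		fprintf(stderr, "send failed or truncated: err %d, len %zu\n",
			err, len);
	return TX_COMPLETE;		/* drop the packet, consume the descriptor */
}

int main(void)
{
	printf("%d\n", classify_sendmsg_result(-EAGAIN, 100));	/* 0: retry */
	printf("%d\n", classify_sendmsg_result(-EFAULT, 100));	/* 1: complete */
	printf("%d\n", classify_sendmsg_result(100, 100));	/* 1: complete */
	return 0;
}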
> > > >
> > > >
> > > > As I've pointed out in the last version, if you don't discard the
> > > > descriptor, you probably need to add the head to the used ring.
> > > > Otherwise this descriptor will always be inflight, which may confuse
> > > > drivers.
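To illustrate that point, here is a toy model (hypothetical names, not vhost code) of why a fetched descriptor must either be discarded or completed: anything in between stays inflight forever and the guest driver can never reclaim it.

#include <stdio.h>

struct vq_model {
	unsigned last_avail_idx;	/* next descriptor to fetch */
	unsigned used_idx;		/* descriptors returned to the guest */
};

static unsigned fetch_desc(struct vq_model *vq)	{ return vq->last_avail_idx++; }
static void discard_desc(struct vq_model *vq)	{ vq->last_avail_idx--; }
static void add_to_used(struct vq_model *vq)	{ vq->used_idx++; }

static unsigned inflight(const struct vq_model *vq)
{
	return vq->last_avail_idx - vq->used_idx;
}

int main(void)
{
	struct vq_model vq = { 0, 0 };

	fetch_desc(&vq);
	/* sendmsg() failed: just breaking out leaves the descriptor stuck */
	printf("inflight if we only skip:  %u\n", inflight(&vq));	/* 1, forever */

	/* either give it back so it is fetched and retried again ... */
	discard_desc(&vq);
	printf("inflight after discard:    %u\n", inflight(&vq));	/* 0 */

	/* ... or complete it so the guest sees the buffer consumed */
	fetch_desc(&vq);
	add_to_used(&vq);
	printf("inflight after completion: %u\n", inflight(&vq));	/* 0 */
	return 0;
}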
> > >
> > > Sorry for missing the comment.
> > >
> > > After removing the discard-descriptor-and-break path, the subsequent
> > > processing is the same as for a successful sendmsg():
> > > vhost_zerocopy_signal_used() or vhost_add_used_and_signal() will be
> > > called to add the head to the used ring.
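For reference, a small stand-alone model (hypothetical names, not the kernel code) of that fall-through: once a non -EAGAIN error no longer breaks out of the loop, the failing head takes the same "done:" path as a successful send and is batched into the used-ring update like any other completed buffer.

#include <errno.h>
#include <stdio.h>

#define BATCH 4

struct tx_state {
	unsigned heads[BATCH];
	unsigned done_idx;	/* completions queued, not yet signalled */
	unsigned used;		/* completions made visible to the guest */
};

static void complete_head(struct tx_state *s, unsigned head)
{
	s->heads[s->done_idx++] = head;		/* the "done:" label in the patch */
	if (s->done_idx == BATCH) {		/* batched used-ring flush */
		s->used += s->done_idx;
		s->done_idx = 0;
	}
}

static void handle_one(struct tx_state *s, unsigned head, int sendmsg_err)
{
	if (sendmsg_err == -EAGAIN)
		return;				/* re-queued, retried later */
	/* success and any other error both fall through to completion */
	complete_head(s, head);
}

int main(void)
{
	struct tx_state s = { {0}, 0, 0 };

	handle_one(&s, 1, 100);		/* success          -> completed */
	handle_one(&s, 2, -EFAULT);	/* dropped on error -> completed */
	handle_one(&s, 3, -EAGAIN);	/* retried later    -> not completed */
	printf("queued completions: %u\n", s.done_idx);	/* 2 */
	return 0;
}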
> >
> > It's the next head, not the one that contains the buggy packet?
>
> In the modified code logic, the head added to the used ring is exactly the
> one that contains the buggy packet.
-ENOTEA :( You're right, I misread the code.
Thanks
>
> Thanks
>
> >
> > Thanks
> >
> > >
> > > Thanks
> > > >
> > > >
> > > > > - if (err != len)
> > > > > - pr_debug("Truncated TX packet: len %d != %zd\n",
> > > > > - err, len);
> > > > > + } else if (unlikely(err < 0 || err != len))
> > > >
> > > >
> > > > It looks to me that err != len already covers err < 0 (a negative err
> > > > can never equal a positive len).
> > >
> > > OK
> > >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > > + vq_err(vq, "Fail to sending packets err : %d, len : %zd\n", err, len);
> > > > > done:
> > > > > vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
> > > > > vq->heads[nvq->done_idx].len = 0;
> > > > > @@ -922,7 +919,6 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
> > > > > msg.msg_flags &= ~MSG_MORE;
> > > > > }
> > > > >
> > > > > - /* TODO: Check specific error and bomb out unless ENOBUFS? */
> > > > > err = sock->ops->sendmsg(sock, &msg, len);
> > > > > if (unlikely(err < 0)) {
> > > > > if (zcopy_used) {
> > > > > @@ -931,13 +927,14 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
> > > > > nvq->upend_idx = ((unsigned)nvq->upend_idx - 1)
> > > > > % UIO_MAXIOV;
> > > > > }
> > > > > - vhost_discard_vq_desc(vq, 1);
> > > > > - vhost_net_enable_vq(net, vq);
> > > > > - break;
> > > > > + if (err == -EAGAIN) {
> > > > > + vhost_discard_vq_desc(vq, 1);
> > > > > + vhost_net_enable_vq(net, vq);
> > > > > + break;
> > > > > + }
> > > > > }
> > > > > if (err != len)
> > > > > - pr_debug("Truncated TX packet: "
> > > > > - " len %d != %zd\n", err, len);
> > > > > + vq_err(vq, "Fail to sending packets err : %d, len : %zd\n", err, len);
> > > > > if (!zcopy_used)
> > > > > vhost_add_used_and_signal(&net->dev, vq, head, 0);
> > > > > else
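As a side note on the zerocopy hunk above, a tiny stand-alone check (hypothetical helper, not kernel code; UIO_MAXIOV defined locally for the example) of the upend_idx rollback arithmetic the patch keeps: the cast to unsigned makes the "- 1" wrap correctly when the index is 0.

#include <stdio.h>

#define UIO_MAXIOV 1024

/* Roll one reserved zerocopy slot back off the ring index, as in the hunk above. */
static unsigned rollback_upend(unsigned upend_idx)
{
	return ((unsigned)upend_idx - 1) % UIO_MAXIOV;
}

int main(void)
{
	printf("%u\n", rollback_upend(5));	/* 4 */
	printf("%u\n", rollback_upend(0));	/* wraps to UIO_MAXIOV - 1 = 1023 */
	return 0;
}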
> > >
> > >
>
>