Message-ID: <251f4d24-f8e2-7549-064e-94b4d6c6f4bd@lab.ntt.co.jp>
Date: Wed, 4 Jul 2018 10:21:03 +0900
From: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
To: Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"Michael S. Tsirkin" <mst@...hat.com>
Cc: netdev@...r.kernel.org, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
Tonghao Zhang <xiangxia.m.yue@...il.com>
Subject: Re: [PATCH v2 net-next 4/4] vhost_net: Avoid rx vring kicks during busyloop
On 2018/07/03 18:05, Jason Wang wrote:
> On 2018/07/03 15:31, Toshiaki Makita wrote:
>> We may run out of avail rx ring descriptors under heavy load, but busypoll
>> did not detect it, so busypoll may have exited prematurely. Avoid this by
>> checking for a full rx ring during busypoll.
>
> Actually, we're checking whether it was empty in fact?
My understanding is that, for rx, an empty avail ring means there are no free
descriptors from the host's perspective, so the ring is conceptually full.
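
To make the exit condition concrete, here is a simplified sketch of what the
hunk below implements (rvq/tvq as named in the patch): busypoll should only
stop for rx traffic when the pending socket data can actually be placed into
the guest, i.e. the rx avail ring is not empty; otherwise it keeps spinning,
or stops for pending tx work.

	/* Sketch only; see the actual hunk below.
	 * Break out of the busy loop only when progress is possible:
	 * - rx: the socket has data AND the rx avail ring has descriptors, or
	 * - tx: the tx avail ring has new buffers to transmit.
	 */
	if ((sk_has_rx_data(sk) && !vhost_vq_avail_empty(&net->dev, rvq)) ||
	    !vhost_vq_avail_empty(&net->dev, tvq))
		break;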
>> Signed-off-by: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
>> ---
>> drivers/vhost/net.c | 10 +++++++---
>> 1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index 791bc8b..b224036 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -658,6 +658,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
>> {
>> struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
>> struct vhost_net_virtqueue *tnvq = &net->vqs[VHOST_NET_VQ_TX];
>> + struct vhost_virtqueue *rvq = &rnvq->vq;
>> struct vhost_virtqueue *tvq = &tnvq->vq;
>> unsigned long uninitialized_var(endtime);
>> int len = peek_head_len(rnvq, sk);
>> @@ -677,7 +678,8 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
>> *busyloop_intr = true;
>> break;
>> }
>> - if (sk_has_rx_data(sk) ||
>> + if ((sk_has_rx_data(sk) &&
>> + !vhost_vq_avail_empty(&net->dev, rvq)) ||
>> !vhost_vq_avail_empty(&net->dev, tvq))
>> break;
>> cpu_relax();
>> @@ -827,7 +829,6 @@ static void handle_rx(struct vhost_net *net)
>>
>
> I thought the code below should belong to patch 3. Or I may be missing something.
Those lines are added for the case where busypoll is waiting for rx vq
avail but is interrupted by another work. As of patch 3 busypoll does not
yet wait for rx vq avail, so I don't think we should move them to patch 3.
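
To spell out the intent, here is a simplified sketch of the !headcount path
after this patch (not a drop-in replacement for the hunk below): if the busy
loop was interrupted by another work while waiting for rx avail, requeue the
rx handler instead of re-enabling guest notification, and only clear
busyloop_intr once buffers were actually obtained.

	if (!headcount) {
		if (unlikely(busyloop_intr)) {
			/* Interrupted while waiting for rx avail: let the
			 * rx work run again instead of re-enabling the
			 * guest kick. */
			vhost_poll_queue(&vq->poll);
		} else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
			/* The guest slipped buffers in while we were
			 * enabling notification: check again. */
			vhost_disable_notify(&net->dev, vq);
			continue;
		}
		goto out;
	}
	/* We got buffers, so the earlier interruption no longer matters. */
	busyloop_intr = false;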
>
> Thanks
>
>> while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
>> &busyloop_intr))) {
>> - busyloop_intr = false;
>> sock_len += sock_hlen;
>> vhost_len = sock_len + vhost_hlen;
>> headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
>> @@ -838,7 +839,9 @@ static void handle_rx(struct vhost_net *net)
>> goto out;
>> /* OK, now we need to know about added descriptors. */
>> if (!headcount) {
>> - if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>> + if (unlikely(busyloop_intr)) {
>> + vhost_poll_queue(&vq->poll);
>> + } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>> /* They have slipped one in as we were
>> * doing that: check again. */
>> vhost_disable_notify(&net->dev, vq);
>> @@ -848,6 +851,7 @@ static void handle_rx(struct vhost_net *net)
>> * they refilled. */
>> goto out;
>> }
>> + busyloop_intr = false;
>> if (nvq->rx_ring)
>> msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
>> /* On overrun, truncate and discard */
--
Toshiaki Makita