Message-ID: <CAJaqyWetutMj=GrR+ieS265_aRr7OhoP+7O5rWgPnP+ZAyxbPg@mail.gmail.com>
Date:   Thu, 22 Dec 2022 10:19:00 +0100
From:   Eugenio Perez Martin <eperezma@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     mst@...hat.com, davem@...emloft.net, edumazet@...gle.com,
        kuba@...nel.org, pabeni@...hat.com,
        virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, maxime.coquelin@...hat.com,
        alvaro.karsz@...id-run.com
Subject: Re: [RFC PATCH 4/4] virtio-net: sleep instead of busy waiting for cvq command

On Thu, Dec 22, 2022 at 7:05 AM Jason Wang <jasowang@...hat.com> wrote:
>
> We used to busy wait on the cvq command; this tends to be
> problematic since:
>
> 1) the CPU could wait forever on a buggy/malicious device
> 2) there's no way to terminate the process that triggered the cvq
>    command
>
> So this patch switches to sleeping with a timeout (1s) instead of
> busy polling for the cvq command forever. This gives the scheduler
> some breathing room and lets the process respond to a signal.
>
> Signed-off-by: Jason Wang <jasowang@...hat.com>
> ---
>  drivers/net/virtio_net.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 8225496ccb1e..69173049371f 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -405,6 +405,7 @@ static void disable_rx_mode_work(struct virtnet_info *vi)
>         vi->rx_mode_work_enabled = false;
>         spin_unlock_bh(&vi->rx_mode_lock);
>
> +       virtqueue_wake_up(vi->cvq);
>         flush_work(&vi->rx_mode_work);
>  }
>
> @@ -1497,6 +1498,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
>         return !oom;
>  }
>
> +static void virtnet_cvq_done(struct virtqueue *cvq)
> +{
> +       virtqueue_wake_up(cvq);
> +}
> +
>  static void skb_recv_done(struct virtqueue *rvq)
>  {
>         struct virtnet_info *vi = rvq->vdev->priv;
> @@ -2024,12 +2030,7 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
>         if (unlikely(!virtqueue_kick(vi->cvq)))
>                 return vi->ctrl->status == VIRTIO_NET_OK;
>
> -       /* Spin for a response, the kick causes an ioport write, trapping
> -        * into the hypervisor, so the request should be handled immediately.
> -        */
> -       while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> -              !virtqueue_is_broken(vi->cvq))
> -               cpu_relax();
> +       virtqueue_wait_for_used(vi->cvq, &tmp);
>
>         return vi->ctrl->status == VIRTIO_NET_OK;
>  }
> @@ -3524,7 +3525,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>
>         /* Parameters for control virtqueue, if any */
>         if (vi->has_cvq) {
> -               callbacks[total_vqs - 1] = NULL;
> +               callbacks[total_vqs - 1] = virtnet_cvq_done;

If we're using the CVQ callback, what is the actual use of the timeout?
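
For context, this is roughly how I read the virtqueue_wake_up() /
virtqueue_wait_for_used() pair this series adds. A minimal sketch
only: the 'done' completion embedded in struct virtqueue is my
assumption, not the actual implementation.

#include <linux/completion.h>
#include <linux/virtio.h>

void virtqueue_wake_up(struct virtqueue *vq)
{
	/* Called from the vq callback (virtnet_cvq_done above). */
	complete(&vq->done);	/* 'done' is a hypothetical member */
}

void *virtqueue_wait_for_used(struct virtqueue *vq, unsigned int *len)
{
	/*
	 * Sleep instead of spinning with cpu_relax(); give up after 1s
	 * so a buggy/malicious device cannot block the task forever.
	 */
	if (!wait_for_completion_timeout(&vq->done, HZ))
		return NULL;

	return virtqueue_get_buf(vq, len);
}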

I'd say there is no right choice here, either for the timeout value or
for the action to take when it expires. Why not simply trigger the cmd
and apply all the changes at command return?
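
Something like the sketch below is what I mean; purely illustrative,
with struct virtnet_ctrl_cmd and its ->finish() hook being made-up
names, not anything in this series.

static void virtnet_cvq_done(struct virtqueue *cvq)
{
	struct virtnet_info *vi = cvq->vdev->priv;
	struct virtnet_ctrl_cmd *cmd;	/* made-up per-command state */
	unsigned int len;

	/*
	 * One used buffer per in-flight command: apply its effect here,
	 * at command return, instead of sleeping in the submitter.
	 */
	while ((cmd = virtqueue_get_buf(cvq, &len)))
		cmd->finish(vi, cmd->status == VIRTIO_NET_OK);
}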

I suspect the reason is that it complicates the code: for example, the
possibility of many in-flight commands, races between their
completions, etc. The virtio standard does not even cover out-of-order
used buffers for cvq commands, if I'm not wrong.

Is there any other fundamental reason?

Thanks!

>                 names[total_vqs - 1] = "control";
>         }
>
> --
> 2.25.1
>
