Message-ID: <CAPM=9tyRYFJkPr3DcjfVM4soB-ErTZWXWmXxsFKaSq4MqqjPNQ@mail.gmail.com>
Date:	Thu, 28 May 2015 13:38:11 +1000
From:	Dave Airlie <airlied@...il.com>
To:	Frediano Ziglio <fziglio@...hat.com>
Cc:	spice-devel <spice-devel@...ts.freedesktop.org>,
	David Airlie <airlied@...ux.ie>,
	dri-devel <dri-devel@...ts.freedesktop.org>,
	Dave Airlie <airlied@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [Spice-devel] [PATCH] Do not loop on ERESTARTSYS using
 interruptible waits

On 19 May 2015 at 19:54, Frediano Ziglio <fziglio@...hat.com> wrote:
> This problem happens when using KMS surfaces with the QXL driver.
> To reproduce it easily, use KDE Plasma (which uses surfaces heavily) and
> make sure KMS surfaces are enabled (the QXL driver on Fedora/RedHat
> carries a patch that disables them). Open a complex application such as
> LibreOffice and after a while the machine gets stuck with Xorg at 100% CPU.
> The problem occurs because surface creation uses interruptible waits;
> however, instead of returning ERESTARTSYS to userspace the code loops,
> and the wait routines keep returning ERESTARTSYS as long as the signal
> is pending.
> Under out-of-memory conditions the TTM module tries to move objects to
> system memory, and QXL makes sure the surface is updated before the
> move. The fix handles this case differently by using a non-interruptible
> wait, so the wait functions actually wait instead of returning
> ERESTARTSYS.
> Note that while the loop is running the driver sends a lot of update
> requests, causing extra CPU usage on the QEMU side as well.

I actually don't think we should be enabling surfaces upstream. I don't
mind fixing the kernel driver to not be crap, but I really don't think
surfaces help the SPICE protocol.

I should have pushed the disable-surfaces change in all cases upstream;
feel free to do so. They were a bad experiment, and nobody ever showed
they were faster, or at least that they didn't plummet down the side of
a massive cliff when they hit eviction paths.

The reason this loops on -ERESTARTSYS is that the hw craps itself if you
try to redo an operation, so you can't go back out to userspace and
re-enter the kernel. Just one of many bad design points in the QXL hw.

You should probably drop wait_for_io_cmd() completely.

Dave.

>
> Signed-off-by: Frediano Ziglio <fziglio@...hat.com>
> ---
>  drivers/gpu/drm/qxl/qxl_cmd.c   | 12 +++---------
>  drivers/gpu/drm/qxl/qxl_drv.h   |  2 +-
>  drivers/gpu/drm/qxl/qxl_ioctl.c |  2 +-
>  3 files changed, 5 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
> index 9782364..bd5404e 100644
> --- a/drivers/gpu/drm/qxl/qxl_cmd.c
> +++ b/drivers/gpu/drm/qxl/qxl_cmd.c
> @@ -317,14 +317,11 @@ static void wait_for_io_cmd(struct qxl_device *qdev, uint8_t val, long port)
>  {
>         int ret;
>
> -restart:
>         ret = wait_for_io_cmd_user(qdev, val, port, false);
> -       if (ret == -ERESTARTSYS)
> -               goto restart;
>  }
>
>  int qxl_io_update_area(struct qxl_device *qdev, struct qxl_bo *surf,
> -                       const struct qxl_rect *area)
> +                       const struct qxl_rect *area, bool intr)
>  {
>         int surface_id;
>         uint32_t surface_width, surface_height;
> @@ -350,7 +347,7 @@ int qxl_io_update_area(struct qxl_device *qdev, struct qxl_bo *surf,
>         mutex_lock(&qdev->update_area_mutex);
>         qdev->ram_header->update_area = *area;
>         qdev->ram_header->update_surface = surface_id;
> -       ret = wait_for_io_cmd_user(qdev, 0, QXL_IO_UPDATE_AREA_ASYNC, true);
> +       ret = wait_for_io_cmd_user(qdev, 0, QXL_IO_UPDATE_AREA_ASYNC, intr);
>         mutex_unlock(&qdev->update_area_mutex);
>         return ret;
>  }
> @@ -588,10 +585,7 @@ int qxl_update_surface(struct qxl_device *qdev, struct qxl_bo *surf)
>         rect.right = surf->surf.width;
>         rect.top = 0;
>         rect.bottom = surf->surf.height;
> -retry:
> -       ret = qxl_io_update_area(qdev, surf, &rect);
> -       if (ret == -ERESTARTSYS)
> -               goto retry;
> +       ret = qxl_io_update_area(qdev, surf, &rect, false);
>         return ret;
>  }
>
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 7c6cafe..6745c44 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -462,7 +462,7 @@ void qxl_io_memslot_add(struct qxl_device *qdev, uint8_t id);
>  void qxl_io_notify_oom(struct qxl_device *qdev);
>
>  int qxl_io_update_area(struct qxl_device *qdev, struct qxl_bo *surf,
> -                      const struct qxl_rect *area);
> +                      const struct qxl_rect *area, bool intr);
>
>  void qxl_io_reset(struct qxl_device *qdev);
>  void qxl_io_monitors_config(struct qxl_device *qdev);
> diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
> index b110883..afd7297 100644
> --- a/drivers/gpu/drm/qxl/qxl_ioctl.c
> +++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
> @@ -348,7 +348,7 @@ static int qxl_update_area_ioctl(struct drm_device *dev, void *data,
>                 goto out2;
>         if (!qobj->surface_id)
>                 DRM_ERROR("got update area for surface with no id %d\n", update_area->handle);
> -       ret = qxl_io_update_area(qdev, qobj, &area);
> +       ret = qxl_io_update_area(qdev, qobj, &area, true);
>
>  out2:
>         qxl_bo_unreserve(qobj);
> --
> 2.1.0
> _______________________________________________
> Spice-devel mailing list
> Spice-devel@...ts.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/spice-devel
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
