Message-ID: <561C9427.8020001@vmware.com>
Date: Tue, 13 Oct 2015 07:18:31 +0200
From: Thomas Hellstrom <thellstrom@...are.com>
To: Dan Williams <dan.j.williams@...el.com>
CC: David Airlie <airlied@...ux.ie>, <linux-kernel@...r.kernel.org>,
<dri-devel@...ts.freedesktop.org>, Sinclair Yeh <syeh@...are.com>
Subject: Re: [PATCH] drm/vmwgfx: switch from ioremap_cache to memremap
Hi!
On 10/13/2015 12:35 AM, Dan Williams wrote:
> Per commit 2e586a7e017a "drm/vmwgfx: Map the fifo as cached" the driver
> expects the fifo registers to be cacheable. In preparation for
> deprecating ioremap_cache() convert its usage in vmwgfx to memremap().
>
> Cc: David Airlie <airlied@...ux.ie>
> Cc: Thomas Hellstrom <thellstrom@...are.com>
> Cc: Sinclair Yeh <syeh@...are.com>
> Cc: dri-devel@...ts.freedesktop.org
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
While I have nothing against the conversion as such, what's stopping the
compiler from reordering writes on a generic architecture, and from
caching and reordering reads on x86 in particular? At the very least it
looks to me like the accesses to the memremap'd memory need to be
encapsulated within READ_ONCE and WRITE_ONCE.
How is this handled in the other conversions?
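For example, I'd imagine wrapping the accesses in small helpers along
these lines (just a sketch, names made up, not from the patch;
READ_ONCE()/WRITE_ONCE() as provided by <linux/compiler.h>):

	/*
	 * Sketch only: hypothetical accessors that make the compiler
	 * ordering explicit for the memremap'd FIFO. Assumes the plain
	 * u32 *mmio_virt mapping introduced by the patch below.
	 */
	static inline u32 vmw_mmio_read(u32 *addr)
	{
		return READ_ONCE(*addr);
	}

	static inline void vmw_mmio_write(u32 value, u32 *addr)
	{
		WRITE_ONCE(*addr, value);
	}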
Thanks,
Thomas
> ---
>
> This is part of the tree-wide conversion of ioremap_cache() to
> memremap() targeted for v4.4 [1]
>
> As noted in that cover letter feel free to apply or ack. These
> individual conversions can go in independently given the base memremap()
> implementation is already upstream in v4.3-rc1.
>
> This passes a clean run of sparse, but I have not tried to run the
> result.
>
> [1]: https://lkml.org/lkml/2015/10/9/699
>
> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 8 +--
> drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 2 -
> drivers/gpu/drm/vmwgfx/vmwgfx_fence.c | 23 ++++---
> drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c | 102 ++++++++++++++++-----------------
> drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 9 +--
> drivers/gpu/drm/vmwgfx/vmwgfx_irq.c | 8 +--
> drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 24 ++++----
> 7 files changed, 84 insertions(+), 92 deletions(-)
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> index 2c7a25c71af2..d6152cd7c634 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> @@ -752,8 +752,8 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
> ttm_lock_set_kill(&dev_priv->fbdev_master.lock, false, SIGTERM);
> dev_priv->active_master = &dev_priv->fbdev_master;
>
> - dev_priv->mmio_virt = ioremap_cache(dev_priv->mmio_start,
> - dev_priv->mmio_size);
> + dev_priv->mmio_virt = memremap(dev_priv->mmio_start,
> + dev_priv->mmio_size, MEMREMAP_WB);
>
> if (unlikely(dev_priv->mmio_virt == NULL)) {
> ret = -ENOMEM;
> @@ -907,7 +907,7 @@ out_no_irq:
> out_no_device:
> ttm_object_device_release(&dev_priv->tdev);
> out_err4:
> - iounmap(dev_priv->mmio_virt);
> + memunmap(dev_priv->mmio_virt);
> out_err3:
> vmw_ttm_global_release(dev_priv);
> out_err0:
> @@ -958,7 +958,7 @@ static int vmw_driver_unload(struct drm_device *dev)
> pci_release_regions(dev->pdev);
>
> ttm_object_device_release(&dev_priv->tdev);
> - iounmap(dev_priv->mmio_virt);
> + memunmap(dev_priv->mmio_virt);
> if (dev_priv->ctx.staged_bindings)
> vmw_binding_state_free(dev_priv->ctx.staged_bindings);
> vmw_ttm_global_release(dev_priv);
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
> index f19fd39b43e1..7ff1db23de80 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
> @@ -375,7 +375,7 @@ struct vmw_private {
> uint32_t stdu_max_height;
> uint32_t initial_width;
> uint32_t initial_height;
> - u32 __iomem *mmio_virt;
> + u32 *mmio_virt;
> uint32_t capabilities;
> uint32_t max_gmr_ids;
> uint32_t max_gmr_pages;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
> index 567ddede51d1..97f75adc080d 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
> @@ -141,9 +141,9 @@ static bool vmw_fence_enable_signaling(struct fence *f)
>
> struct vmw_fence_manager *fman = fman_from_fence(fence);
> struct vmw_private *dev_priv = fman->dev_priv;
> + u32 *fifo_mem = dev_priv->mmio_virt;
> + u32 seqno = *(fifo_mem + SVGA_FIFO_FENCE);
>
> - u32 __iomem *fifo_mem = dev_priv->mmio_virt;
> - u32 seqno = ioread32(fifo_mem + SVGA_FIFO_FENCE);
> if (seqno - fence->base.seqno < VMW_FENCE_WRAP)
> return false;
>
>
Here, for example.
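I.e., for this hunk I would have expected something along the lines of
the below (sketch only, untested):

	u32 *fifo_mem = dev_priv->mmio_virt;
	u32 seqno = READ_ONCE(*(fifo_mem + SVGA_FIFO_FENCE));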
/Thomas