Message-ID: <20190806155044.GC25050@lst.de>
Date: Tue, 6 Aug 2019 17:50:44 +0200
From: Christoph Hellwig <hch@....de>
To: Rob Clark <robdclark@...omium.org>
Cc: Christoph Hellwig <hch@....de>, Rob Clark <robdclark@...il.com>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <maxime.ripard@...tlin.com>,
Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
Allison Randal <allison@...utok.net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-arm-kernel@...ts.infradead.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] drm: add cache support for arm64
On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> Agreed that drm_cflush_* isn't a great API. In this particular case
> (IIUC), I need wb+inv so that there aren't dirty cache lines that drop
> out to memory later, and so that I don't get a cache hit on
> uncached/wc mmap'ing.
So what is the use case here? Allocate pages using the page allocator
(or CMA for that matter), and then mmapping them to userspace and never
touching them again from the kernel?
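
If so, I'd expect roughly the pattern below.  Completely untested
sketch, and all the my_* names are made up; it also assumes the
sg_table was previously dma_map_sg()'d with DMA_BIDIRECTIONAL so
that the streaming sync calls are legal:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

struct my_gem_object {
	struct page **pages;		/* from the page allocator */
	struct sg_table *sgt;		/* already dma_map_sg()'d */
};

static int my_gem_mmap(struct my_gem_object *obj, struct device *dev,
		       struct vm_area_struct *vma)
{
	/*
	 * wb+inv via the streaming DMA API: on arm64 the for_device
	 * sync cleans dirty lines out to memory, and the for_cpu
	 * sync invalidates them so a later access through the
	 * uncached/wc mapping can't hit a stale cache line.
	 */
	dma_sync_sg_for_device(dev, obj->sgt->sgl, obj->sgt->nents,
			       DMA_BIDIRECTIONAL);
	dma_sync_sg_for_cpu(dev, obj->sgt->sgl, obj->sgt->nents,
			    DMA_BIDIRECTIONAL);

	/* hand the pages to userspace as write-combine; a single
	 * page only here, error handling elided */
	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
	return vm_insert_page(vma, vma->vm_start, obj->pages[0]);
}
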
> Tying it in w/ iommu seems a bit weird to me.. but maybe that is just
> me, I'm certainly willing to consider proposals or to try things and
> see how they work out.
This was just my thought, as the fit seems easy.  But maybe you'll
need to explain your use case(s) a bit more so that we can figure out
what a good high level API is.
> Exposing the arch_sync_* API and using that directly (bypassing
> drm_cflush_*) actually seems pretty reasonable and pragmatic. I did
> have one doubt, as phys_to_virt() is only valid for kernel direct
> mapped memory (AFAIU), what happens for pages that are not in kernel
> linear map? Maybe it is ok to ignore those pages, since they won't
> have an aliased mapping?
They could have an aliased mapping in vmalloc/vmap space for example,
if you created one. We have the flush_kernel_vmap_range /
invalidate_kernel_vmap_range APIs for those, which are implemented
on architectures with VIVT caches.
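
The usage is symmetric around the DMA, something like this (untested
sketch, buf_vaddr/len are just illustrative; both calls compile away
on architectures without the aliasing problem, including arm64):

#include <linux/highmem.h>

static void my_dma_via_vmap_alias(void *buf_vaddr, int len)
{
	/* write dirty lines in the vmap alias back to memory
	 * before the device reads the underlying pages */
	flush_kernel_vmap_range(buf_vaddr, len);

	/* ... start the DMA and wait for completion (omitted) ... */

	/* toss now-stale lines in the alias before the CPU reads
	 * what the device wrote */
	invalidate_kernel_vmap_range(buf_vaddr, len);
}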