Message-ID: <CAK8P3a2CvvBMZ8UkkgyrczgUccSGZ35RpG8V2dGUXbuOh9AZ5A@mail.gmail.com>
Date: Fri, 26 Apr 2019 20:42:42 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Guo Ren <guoren@...nel.org>
Cc: Christoph Hellwig <hch@....de>, Gary Guo <gary@...yguo.net>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
Palmer Dabbelt <palmer@...ive.com>,
Andrew Waterman <andrew@...ive.com>,
Anup Patel <anup.patel@....com>,
Xiang Xiaoyan <xiaoyan_xiang@...ky.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Vincent Chen <vincentc@...estech.com>,
Greentime Hu <green.hu@...il.com>,
"ren_guo@...ky.com" <ren_guo@...ky.com>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Scott Wood <swood@...hat.com>,
"tech-privileged@...ts.riscv.org" <tech-privileged@...ts.riscv.org>
Subject: Re: [PATCH] riscv: Support non-coherency memory model
On Fri, Apr 26, 2019 at 6:06 PM Guo Ren <guoren@...nel.org> wrote:
> On Thu, Apr 25, 2019 at 11:50:11AM +0200, Arnd Bergmann wrote:
> > On Wed, Apr 24, 2019 at 4:23 PM Christoph Hellwig <hch@....de> wrote:
> >
> > You could probably get away with allowing uncached mappings only
> > for huge pages, and using one or two of the bits in the PMD for it.
> > This should cover most use cases, since in practice coherent allocations
> > tend to be either small and rare (device descriptors) or very big
> > (frame buffer etc), and both cases can be handled with hugepages
> > and gen_pool_alloc, possibly CMA added in since there will likely
> > not be an IOMMU either on the systems that lack cache coherent DMA.
>
> Generally, attributes in a huge-tlb-entry and a leaf-tlb-entry should be
> the same. Putting _PAGE_CACHE and _PAGE_BUF bits only in the
> huge-tlb-entry sounds a bit strange.
Well, the point is that we can't really change the meaning of the existing
low bits, but because of the alignment constraints on hugepages, the extra
bits are currently unused for hugepage TLBs.
There are other architectures that reuse the bits in clever ways, e.g.
allowing larger physical address ranges to be used with hugepages than
normal pages.
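To make that concrete, here is a minimal sketch of the idea, assuming a
layout where the alignment requirement forces the low PPN bits of a huge
mapping to zero; the bit positions, names and helper below are purely
illustrative, not existing riscv definitions:

#include <linux/const.h>

/*
 * Hypothetical: because a huge mapping must be naturally aligned, the
 * low PPN bits of its PMD are always zero and could carry cacheability
 * attributes that exist only at the PMD level.
 */
#define _PAGE_HUGE_NONCACHE	(_AC(1, UL) << 10)	/* assumed spare bit */
#define _PAGE_HUGE_NONBUF	(_AC(1, UL) << 11)	/* assumed spare bit */

static inline pmd_t pmd_mknoncoherent(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) | _PAGE_HUGE_NONCACHE | _PAGE_HUGE_NONBUF);
}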
> The gen_pool_alloc pool is only 256KB by default, but a huge tlb entry is 4MB.
> Hardware couldn't set up a virtual-4MB to phys-256KB range mapping in the TLB.
I expect the size could easily be changed, as long as there is sufficient
physical memory. If the entire system has 32MB or less, setting 2MB aside
would of course have a fairly significant impact.
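As a rough sketch of what that could look like (the 4MB size, the
arch_map_uncached() helper and the init function are all made up for
illustration), the reserved region would just be handed to a larger
genpool at boot:

#include <linux/genalloc.h>
#include <linux/sizes.h>

static struct gen_pool *uncached_pool;

static int __init uncached_pool_init(void)
{
	phys_addr_t phys;
	void *va;

	/* one page minimum allocation granularity, any NUMA node */
	uncached_pool = gen_pool_create(PAGE_SHIFT, -1);
	if (!uncached_pool)
		return -ENOMEM;

	/*
	 * arch_map_uncached() stands in for whatever reserves a physical
	 * region and maps it uncached through a hugepage PMD.
	 */
	va = arch_map_uncached(&phys, SZ_4M);
	if (!va)
		return -ENOMEM;

	return gen_pool_add_virt(uncached_pool, (unsigned long)va, phys,
				 SZ_4M, -1);
}

dma_alloc_coherent() would then take its memory from
gen_pool_alloc(uncached_pool, size) instead of remapping pages
individually.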
> > - you need to decide what is supposed to happen when there are
> > multiple conflicting mappings for the same physical address.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> What are the multiple conflicting mappings?
I mean when you have the linear mapping as cacheable and another
mapping for the same physical page as uncacheable, and then access the
page through both virtual addresses. This is usually a bad idea, but
architectures go to different lengths to prevent it.
The safest way would be for the CPU to produce a checkstop as soon
as there are TLB entries for the same physical address but different
caching settings. You can also do that if you have a cache-bypassing
load/store that hits a live cache line.
The other extreme would be to not do anything special and try to come
up with sane behavior, e.g. allow accesses in both ways but ensure that
a cache-bypassing load/store always flushes and invalidates cache
lines for the same physical address before its access.
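In software terms, and assuming the hardware enforces nothing by itself,
that rule would look roughly like the sketch below; flush_dcache_range()
is only a stand-in name for whatever writeback+invalidate primitive the
architecture provides:

/*
 * Hypothetical illustration of the "flush before bypassing" rule:
 * before touching an uncached alias, write back and drop any lines
 * the cached linear mapping may still hold for that physical range.
 */
static void uncached_write32(u32 *uncached_va, phys_addr_t phys, u32 val)
{
	flush_dcache_range(phys, phys + sizeof(val));
	WRITE_ONCE(*uncached_va, val);
}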
Arnd