Message-ID: <20190313160529.GB15134@infradead.org>
Date: Wed, 13 Mar 2019 09:05:29 -0700
From: Christoph Hellwig <hch@...radead.org>
To: James Bottomley <James.Bottomley@...senPartnership.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
David Miller <davem@...emloft.net>, hch@...radead.org,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
peterx@...hat.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-parisc@...r.kernel.org
Subject: Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through
vmap()

On Tue, Mar 12, 2019 at 01:53:37PM -0700, James Bottomley wrote:
> I've got to say: optimize what? What code do we ever have in the
> kernel that kmaps a page and then doesn't do anything with it? You can
> guarantee that on kunmap the page is either referenced (needs
> invalidating) or updated (needs flushing). The in-kernel use of kmap is
> always
>
> kmap
> do something with the mapped page
> kunmap
>
> In a very short interval. It seems just a simplification to make
> kunmap do the flush if needed rather than try to have the users
> remember. The thing which makes this really simple is that on most
> architectures flush and invalidate is the same operation. If you
> really want to optimize you can use the referenced and dirty bits on
> the kmapped pte to tell you what operation to do, but if your flush is
> your invalidate, you simply assume the data needs flushing on kunmap
> without checking anything.
I agree that this would be a good way to simplify the API. Now
we'd just need volunteers to implement this for all architectures
that need cache flushing and then remove the explicit flushing in
the callers..
> > Which means after we fix vhost to add the flush_dcache_page after
> > kunmap, Parisc will get a double hit (but it also means Parisc was
> > the only one of those archs that needed explicit cache flushes where
> > vhost worked correctly so far.. so it kind of proves your point that
> > giving up is the safe choice).
>
> What double hit? If there's no cache to flush then a cache flush is a
> no-op. It's also a highly pipelineable no-op because the CPU has the L1
> cache within easy reach. The only time a flush takes a large
> amount of time is if we actually have dirty data to write back to main
> memory.
I've heard people complaining that on some microarchitectures even
no-op cache flushes are relatively expensive. Don't ask me why,
but if we can easily avoid double flushes we should do that.