Message-ID: <20180515162003.GA26489@bombadil.infradead.org>
Date: Tue, 15 May 2018 09:20:03 -0700
From: Matthew Wilcox <willy@...radead.org>
To: Huaisheng HS1 Ye <yehs1@...ovo.com>
Cc: Jeff Moyer <jmoyer@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Michal Hocko <mhocko@...e.com>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
NingTing Cheng <chengnt@...ovo.com>,
Dave Hansen <dave.hansen@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"pasha.tatashin@...cle.com" <pasha.tatashin@...cle.com>,
Linux MM <linux-mm@...ck.org>,
"colyli@...e.de" <colyli@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Sasha Levin <alexander.levin@...izon.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
Ocean HY1 He <hehy1@...ovo.com>
Subject: Re: [External] Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM
(pmem) zone
On Tue, May 15, 2018 at 04:07:28PM +0000, Huaisheng HS1 Ye wrote:
> > From: owner-linux-mm@...ck.org [mailto:owner-linux-mm@...ck.org] On Behalf Of Matthew
> > Wilcox
> > No. In the current situation, the user knows that either the entire
> > page was written back from the pagecache or none of it was (at least
> > with a journalling filesystem). With your proposal, we may have pages
> > splintered along cacheline boundaries, with a mix of old and new data.
> > This is completely unacceptable to most customers.
>
> Dear Matthew,
>
> Thanks for your great help; I really hadn't considered this case.
> I want to make it a little clearer to me, so please correct me if anything is wrong.
>
> Is it correct to say that this mix of old and new data within one page can only happen when the CPU fails to flush all dirty data from the LLC to the NVDIMM?
> But if an interrupt can be delivered to the CPU, and the CPU successfully flushes all dirty cache lines to the NVDIMM within the interrupt handler, this mix of old and new data can be avoided.
If you can keep the CPU and the memory (and all the busses between them)
alive for long enough after the power signal has been tripped, yes.
Talk to your hardware designers about what it will take to achieve this
:-) Be sure to ask about the number of retries which may be necessary
on the CPU interconnect to flush all data to an NV-DIMM attached to a
remote CPU.
> Current x86_64 CPUs use N-way set-associative caches, and every cache line holds 64 bytes.
> So a 4096-byte page would be splintered across 64 (4096/64) cache lines. Is that right?
That's correct.
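
To make that concrete, here is a minimal user-space sketch (not from
this patch set) of what a flush-every-line loop over one page looks
like, assuming a CPU with the clwb instruction and a compiler invoked
with -mclwb; in-kernel pmem code would use a helper such as
arch_wb_cache_pmem() rather than open-coding this:

#include <immintrin.h>
#include <stdint.h>

#define PAGE_SIZE	4096
#define CACHE_LINE_SIZE	64

/* Write back every cache line covering one page, then fence.
 * That is 4096 / 64 = 64 clwb operations per page. */
static void flush_page_to_pmem(void *page)
{
	uintptr_t p = (uintptr_t)page;
	uintptr_t end = p + PAGE_SIZE;

	for (; p < end; p += CACHE_LINE_SIZE)
		_mm_clwb((void *)p);	/* write back; line may stay cached */

	_mm_sfence();			/* order the write-backs */
}

Whether the platform can guarantee power for long enough to run this
loop over every dirty page (including pages mapping an NV-DIMM on a
remote socket) is exactly the hardware question above.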
> > > > Then there's the problem of reconnecting the page cache (which is
> > > > pointed to by ephemeral data structures like inodes and dentries) to
> > > > the new inodes.
> > > Yes, it is not easy.
> >
> > Right ... and until we have that ability, there's no point in this patch.
> We are focusing to realize this ability.
But is it the right approach? So far we have (I think) two parallel
activities. The first is for local storage, using DAX to store files
directly on the pmem. The second is a physical block cache for network
filesystems (both NAS and SAN). You seem to want to supplant the
second effort, but I think it's much harder to reconnect the logical cache
(ie the page cache) than it is the physical cache (ie the block cache).
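
For reference, the difference shows up in how the two caches are keyed.
This is an illustrative sketch only (simplified structures, not the
verbatim kernel definitions): the logical cache is keyed by a pointer
into a live in-memory inode, while the physical cache is keyed by
stable on-media coordinates:

/* Illustrative only -- simplified, not the real kernel structures. */

/* Logical (page) cache key: tied to an in-memory object. */
struct pgcache_key {
	struct address_space *mapping;	/* points into a live inode; gone after reboot */
	unsigned long index;		/* page offset within that file */
};

/* Physical (block) cache key: stable on-media coordinates. */
struct blkcache_key {
	dev_t dev;			/* device number: survives reboot */
	sector_t sector;		/* LBA on that device: survives reboot */
};

After a reboot, (dev, sector) still means the same thing, but the
mapping pointer refers to an inode that no longer exists, so every
surviving page would have to be re-matched to a freshly created inode.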