Message-ID: <HK2PR03MB1684B34F9D1DF18A8CDE18F292930@HK2PR03MB1684.apcprd03.prod.outlook.com>
Date: Tue, 15 May 2018 16:07:28 +0000
From: Huaisheng HS1 Ye <yehs1@...ovo.com>
To: Matthew Wilcox <willy@...radead.org>
CC: Jeff Moyer <jmoyer@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Michal Hocko <mhocko@...e.com>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
NingTing Cheng <chengnt@...ovo.com>,
Dave Hansen <dave.hansen@...el.com>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
"pasha.tatashin@...cle.com" <pasha.tatashin@...cle.com>,
Linux MM <linux-mm@...ck.org>,
"colyli@...e.de" <colyli@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Sasha Levin <alexander.levin@...izon.com>,
"Mel Gorman" <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
"Ocean HY1 He" <hehy1@...ovo.com>
Subject: RE: [External] Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone
> From: owner-linux-mm@...ck.org [mailto:owner-linux-mm@...ck.org] On Behalf Of Matthew Wilcox
> Sent: Friday, May 11, 2018 12:28 AM
> On Wed, May 09, 2018 at 04:47:54AM +0000, Huaisheng HS1 Ye wrote:
> > > On Tue, May 08, 2018 at 02:59:40AM +0000, Huaisheng HS1 Ye wrote:
> > > > Currently, the ideal use scenario in our mind is to put all page caches into
> > > > zone_nvm. Without any doubt, the page cache is an efficient and common cache
> > > > implementation, but it has the disadvantage that all dirty data within it is at
> > > > risk of being lost on a power failure or system crash. If we put all page caches
> > > > into NVDIMMs, all dirty data will be safe.
> > >
> > > That's a common misconception. Some dirty data will still be in the
> > > CPU caches. Are you planning on building servers which have enough
> > > capacitance to allow the CPU to flush all dirty data from LLC to NV-DIMM?
> > >
> > Sorry for not being clear.
> > As for CPU caches: if there is a power failure, NVDIMM has ADR to guarantee that
> > an interrupt is reported to the CPU, and an interrupt handler should be responsible
> > for flushing all dirty data to the NVDIMM.
> > If there is a system crash, the CPU might not have a chance to execute this handler.
> >
> > It is hard to make sure everything is safe; what we can do is save the dirty data
> > that has already been stored to the page cache, but not what is still in the CPU cache.
> > Is this an improvement over the current situation?
>
> No. In the current situation, the user knows that either the entire
> page was written back from the pagecache or none of it was (at least
> with a journalling filesystem). With your proposal, we may have pages
> splintered along cacheline boundaries, with a mix of old and new data.
> This is completely unacceptable to most customers.
Dear Matthew,
Thanks for your great help; I really hadn't considered this case.
I want to make this a little clearer to myself, so please correct me if anything is wrong.
Is this to say that a mix of old and new data within one page can only happen when the CPU fails to flush all dirty data from the LLC to the NVDIMM?
But if an interrupt can be reported to the CPU, and the CPU successfully flushes all dirty cache lines to the NVDIMM within the interrupt handler, this mix of old and new data can be avoided.
Current x86_64 CPUs use N-way set-associative caches, and every cache line holds 64 bytes.
A 4096-byte page therefore spans 64 (4096/64) cache lines. Is that right?
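To make the arithmetic concrete, here is a minimal userspace sketch (my own illustration, not code from any patch in this thread) of flushing one page toward an NVDIMM cache line by cache line. It assumes an x86_64 CPU with CLWB support and a pmem mapping obtained elsewhere; compile with -mclwb.

#include <immintrin.h>

#define PAGE_SIZE      4096
#define CACHELINE_SIZE 64     /* 4096 / 64 = 64 cache lines per page */

/* Write one 4 KiB page back toward the NVDIMM, line by line. */
static void flush_page_to_pmem(void *page)
{
        char *line = (char *)page;
        char *end  = line + PAGE_SIZE;

        /* CLWB writes each dirty line back without invalidating it. */
        for (; line < end; line += CACHELINE_SIZE)
                _mm_clwb(line);

        /* Order the write-backs against later stores. */
        _mm_sfence();
}

If the flush is interrupted partway through the loop, some 64-byte lines of the page are new while the rest are still old: exactly the torn-page case you describe.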
> > > Then there's the problem of reconnecting the page cache (which is
> > > pointed to by ephemeral data structures like inodes and dentries) to
> > > the new inodes.
> > Yes, it is not easy.
>
> Right ... and until we have that ability, there's no point in this patch.
We are focusing on realizing this ability.
Sincerely,
Huaisheng Ye