Date:   Thu, 10 May 2018 09:57:59 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Huaisheng Ye <yehs1@...ovo.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org, willy@...radead.org,
        vbabka@...e.cz, mgorman@...hsingularity.net,
        pasha.tatashin@...cle.com, alexander.levin@...izon.com,
        hannes@...xchg.org, penguin-kernel@...ove.SAKURA.ne.jp,
        colyli@...e.de, chengnt@...ovo.com, hehy1@...ovo.com,
        linux-kernel@...r.kernel.org, linux-nvdimm@...ts.01.org
Subject: Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone

On Tue 08-05-18 10:30:22, Huaisheng Ye wrote:
> Traditionally, NVDIMMs are treated by the mm (memory management)
> subsystem as the DEVICE zone, which is a virtual zone whose start and
> end pfn are both equal to 0, so mm does not manage NVDIMM directly the
> way it manages DRAM. Instead, the kernel uses the corresponding
> drivers, which live in drivers/nvdimm/ and drivers/acpi/nfit/, together
> with filesystem support, to implement NVDIMM memory allocation and
> freeing on top of the memory hotplug implementation.
> 
> With the current kernel, many of mm's classical features, such as the
> buddy system, the swap mechanism and the page cache, cannot be used
> with NVDIMM. What we are doing is expanding the kernel mm's
> capabilities so that it can handle NVDIMM like DRAM. Furthermore, we
> make mm treat DRAM and NVDIMM separately, so that mm can place only the
> critical pages in the NVDIMM zone; for this we created a new zone type,
> the NVM zone.

How do you define critical pages? Who is allowed to allocate from them?
You do not seem to add _any_ user of GFP_NVM.
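
For illustration, a hypothetical caller would look something like the
sketch below. GFP_NVM is the flag introduced by this series; neither
this function nor any other user of it exists in the tree or in the
patches:

#include <linux/gfp.h>
#include <linux/mm.h>

static void *alloc_critical_buffer(unsigned int order)
{
        struct page *page;

        /* Ask the buddy allocator for pages from the proposed NVM zone. */
        page = alloc_pages(GFP_NVM | __GFP_ZERO, order);
        if (!page)
                return NULL;

        return page_address(page);
}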

> That is to say, traditional (or normal) pages are stored within the
> DRAM scope, i.e. in the Normal, DMA32 and DMA zones. But the critical
> pages, which we hope can be recovered after a power failure or system
> crash, are made persistent by storing them in the NVM zone.

This brings more questions than it answers. First of all, is this going
to provide any guarantee? Let's say I want GFP_NVM; can I get memory
from other zones? In other words, is such a request allowed to fall back
to other zones in order to succeed? Are we allowed to reclaim memory
from the new zone? What should happen on OOM? How is the user expected
to restore the previous content after a reboot/crash?
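
For context, gfp_zone() maps a GFP mask to the highest allowed zone, and
the per-node zonelist is then walked downwards from there (e.g.
ZONE_NORMAL -> ZONE_DMA32 -> ZONE_DMA for GFP_KERNEL). The sketch below
assumes the ZONE_NVM name from this series; if GFP_NVM requests may fall
back the same way, any caller relying on persistence would need a check
along these lines:

#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Hypothetical helper: did an allocation really land in the proposed
 * NVM zone, or did it fall back to a volatile zone?  ZONE_NVM is the
 * zone type name used by this series.
 */
static bool page_is_in_nvm_zone(struct page *page)
{
        return page_zonenum(page) == ZONE_NVM;
}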

I am sorry if these questions are answered in the respective patches,
but it would be great to have this in the cover letter to give a good
overview of the whole design. From my quick glance over the patches,
though, my previous concerns about an additional zone still hold.
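
For completeness, the existing driver-managed path avoids a new zone
altogether: the pmem driver backs its physical range with ZONE_DEVICE
struct pages via devm_memremap_pages(). The sketch below is illustrative
only; the function's signature and the dev_pagemap layout have changed
across kernel versions:

#include <linux/memremap.h>

static void *pmem_attach_example(struct device *dev,
                                 struct dev_pagemap *pgmap)
{
        /*
         * Back the NVDIMM range described by pgmap with ZONE_DEVICE
         * struct pages.  These pages never sit on a buddy free list,
         * which is why reclaim, swap and the page cache do not apply
         * to them today.
         */
        return devm_memremap_pages(dev, pgmap);
}
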
-- 
Michal Hocko
SUSE Labs
