Message-ID: <20180507184622.GB12361@bombadil.infradead.org>
Date:   Mon, 7 May 2018 11:46:22 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Huaisheng Ye <yehs1@...ovo.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org, mhocko@...e.com,
        vbabka@...e.cz, mgorman@...hsingularity.net,
        pasha.tatashin@...cle.com, alexander.levin@...izon.com,
        hannes@...xchg.org, penguin-kernel@...ove.SAKURA.ne.jp,
        colyli@...e.de, chengnt@...ovo.com, linux-kernel@...r.kernel.org,
        linux-nvdimm@...ts.01.org
Subject: Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone

On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
> Traditionally, NVDIMMs are treated by the mm (memory management)
> subsystem as the DEVICE zone, a virtual zone whose start and end pfn
> are both 0. mm does not manage NVDIMM directly the way it manages
> DRAM; instead, the kernel relies on the corresponding drivers, under
> drivers/nvdimm/ and drivers/acpi/nfit/, together with filesystems, to
> implement NVDIMM memory allocation and freeing via the memory
> hot-plug mechanism.

You probably want to let linux-nvdimm know about this patch set.
Adding to the cc.  Also, I only received patch 0 and 4.  What happened
to 1-3,5 and 6?

> With the current kernel, many of mm's classical features, such as
> the buddy system, the swap mechanism and the page cache, are not
> available for NVDIMM. What we are doing is to expand the kernel mm's
> capability so it can handle NVDIMM like DRAM, while still treating
> DRAM and NVDIMM separately: mm places only the critical pages in the
> NVDIMM zone, for which we created a new zone type, the NVM zone.
> That is to say, traditional (normal) pages are stored within the
> DRAM scope, in the Normal, DMA32 and DMA zones, while the critical
> pages, which we hope can be recovered after a power failure or
> system crash, are made persistent by storing them in the NVM zone.
> 
> We installed two NVDIMMs, each with 125GB of storage capacity, in a
> Lenovo ThinkSystem machine as the development platform. With the
> patches below, mm can create NVM zones for the NVDIMMs.
> 
> Here is the dmesg output:
>  Initmem setup node 0 [mem 0x0000000000001000-0x000000237fffffff]
>  On node 0 totalpages: 36879666
>    DMA zone: 64 pages used for memmap
>    DMA zone: 23 pages reserved
>    DMA zone: 3999 pages, LIFO batch:0
>  mminit::memmap_init Initialising map node 0 zone 0 pfns 1 -> 4096 
>    DMA32 zone: 10935 pages used for memmap
>    DMA32 zone: 699795 pages, LIFO batch:31
>  mminit::memmap_init Initialising map node 0 zone 1 pfns 4096 -> 1048576
>    Normal zone: 53248 pages used for memmap
>    Normal zone: 3407872 pages, LIFO batch:31
>  mminit::memmap_init Initialising map node 0 zone 2 pfns 1048576 -> 4456448
>    NVM zone: 512000 pages used for memmap
>    NVM zone: 32768000 pages, LIFO batch:31
>  mminit::memmap_init Initialising map node 0 zone 3 pfns 4456448 -> 37224448
>  Initmem setup node 1 [mem 0x0000002380000000-0x00000046bfffffff]
>  On node 1 totalpages: 36962304
>    Normal zone: 65536 pages used for memmap
>    Normal zone: 4194304 pages, LIFO batch:31
>  mminit::memmap_init Initialising map node 1 zone 2 pfns 37224448 -> 41418752
>    NVM zone: 512000 pages used for memmap
>    NVM zone: 32768000 pages, LIFO batch:31
>  mminit::memmap_init Initialising map node 1 zone 3 pfns 41418752 -> 74186752
> 
> This is from /proc/zoneinfo:
> Node 0, zone      NVM
>   pages free     32768000
>         min      15244
>         low      48012
>         high     80780
>         spanned  32768000
>         present  32768000
>         managed  32768000
>         protection: (0, 0, 0, 0, 0, 0)
>         nr_free_pages 32768000
> Node 1, zone      NVM
>   pages free     32768000
>         min      15244
>         low      48012
>         high     80780
>         spanned  32768000
>         present  32768000
>         managed  32768000
> 
> Huaisheng Ye (6):
>   mm/memblock: Expand definition of flags to support NVDIMM
>   mm/page_alloc.c: get pfn range with flags of memblock
>   mm, zone_type: create ZONE_NVM and fill into GFP_ZONE_TABLE
>   arch/x86/kernel: mark NVDIMM regions from e820_table
>   mm: get zone spanned pages separately for DRAM and NVDIMM
>   arch/x86/mm: create page table mapping for DRAM and NVDIMM both
> 
>  arch/x86/include/asm/e820/api.h |  3 +++
>  arch/x86/kernel/e820.c          | 20 +++++++++++++-
>  arch/x86/kernel/setup.c         |  8 ++++++
>  arch/x86/mm/init_64.c           | 16 +++++++++++
>  include/linux/gfp.h             | 57 ++++++++++++++++++++++++++++++++++++---
>  include/linux/memblock.h        | 19 +++++++++++++
>  include/linux/mm.h              |  4 +++
>  include/linux/mmzone.h          |  3 +++
>  mm/Kconfig                      | 16 +++++++++++
>  mm/memblock.c                   | 46 +++++++++++++++++++++++++++----
>  mm/nobootmem.c                  |  5 ++--
>  mm/page_alloc.c                 | 60 ++++++++++++++++++++++++++++++++++++++++-
>  12 files changed, 245 insertions(+), 12 deletions(-)
> 
> -- 
> 1.8.3.1
> 
