Date:   Thu, 25 Apr 2019 08:05:38 +0000
From:   "Du, Fan" <fan.du@...el.com>
To:     Michal Hocko <mhocko@...nel.org>
CC:     "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "Wu, Fengguang" <fengguang.wu@...el.com>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "Hansen, Dave" <dave.hansen@...el.com>,
        "xishi.qiuxishi@...baba-inc.com" <xishi.qiuxishi@...baba-inc.com>,
        "Huang, Ying" <ying.huang@...el.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Du, Fan" <fan.du@...el.com>
Subject: RE: [RFC PATCH 0/5] New fallback workflow for heterogeneous memory
 system



>-----Original Message-----
>From: owner-linux-mm@...ck.org [mailto:owner-linux-mm@...ck.org] On
>Behalf Of Michal Hocko
>Sent: Thursday, April 25, 2019 3:54 PM
>To: Du, Fan <fan.du@...el.com>
>Cc: akpm@...ux-foundation.org; Wu, Fengguang <fengguang.wu@...el.com>;
>Williams, Dan J <dan.j.williams@...el.com>; Hansen, Dave
><dave.hansen@...el.com>; xishi.qiuxishi@...baba-inc.com; Huang, Ying
><ying.huang@...el.com>; linux-mm@...ck.org; linux-kernel@...r.kernel.org
>Subject: Re: [RFC PATCH 0/5] New fallback workflow for heterogeneous
>memory system
>
>On Thu 25-04-19 07:41:40, Du, Fan wrote:
>>
>>
>> >-----Original Message-----
>> >From: Michal Hocko [mailto:mhocko@...nel.org]
>> >Sent: Thursday, April 25, 2019 2:37 PM
>> >To: Du, Fan <fan.du@...el.com>
>> >Cc: akpm@...ux-foundation.org; Wu, Fengguang
><fengguang.wu@...el.com>;
>> >Williams, Dan J <dan.j.williams@...el.com>; Hansen, Dave
>> ><dave.hansen@...el.com>; xishi.qiuxishi@...baba-inc.com; Huang, Ying
>> ><ying.huang@...el.com>; linux-mm@...ck.org;
>linux-kernel@...r.kernel.org
>> >Subject: Re: [RFC PATCH 0/5] New fallback workflow for heterogeneous
>> >memory system
>> >
>> >On Thu 25-04-19 09:21:30, Fan Du wrote:
>> >[...]
>> >> However, PMEM has different characteristics from DRAM, so a more
>> >> reasonable or desirable fallback order would be:
>> >> DRAM node 0 -> DRAM node 1 -> PMEM node 2 -> PMEM node 3.
>> >> When DRAM is exhausted, then try PMEM.
>> >
>> >Why, and who cares? NUMA is fundamentally about memory nodes with
>> >different access characteristics, so why is PMEM special?
>>
>> Michal, thanks for your comments!
>>
>> The "different" lies in the local or remote access, usually the underlying
>> memory is the same type, i.e. DRAM.
>>
>> By "special", PMEM is usually in gigantic capacity than DRAM per dimm,
>> while with different read/write access latency than DRAM.
>
>You are describing NUMA in general here. Yes, access to different NUMA
>nodes has different read/write latency, but that doesn't make PMEM
>really special compared to regular DRAM.

It is not the NUMA distance between the CPU and the PMEM node that makes
PMEM different from DRAM. The difference lies in the physical layer: the
access latency characteristics come from the media itself.
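
To make the fallback order under discussion concrete, below is a toy,
userspace-only model (not the kernel zonelist code) of the ordering the
cover letter asks for: all DRAM nodes before all PMEM nodes, nearest first
within each class. The node IDs, distances and the is_pmem flag are made-up
inputs for illustration only:

#include <stdio.h>
#include <stdlib.h>

struct node {
    int id;
    int distance;   /* distance from the allocating node */
    int is_pmem;    /* 1 if the node is backed by persistent memory */
};

static int cmp_fallback(const void *a, const void *b)
{
    const struct node *na = a, *nb = b;

    /* DRAM class strictly before PMEM class... */
    if (na->is_pmem != nb->is_pmem)
        return na->is_pmem - nb->is_pmem;
    /* ...and nearest first within each class. */
    return na->distance - nb->distance;
}

int main(void)
{
    /* Assumed topology: two DRAM nodes (0, 1) and two PMEM nodes (2, 3). */
    struct node nodes[] = {
        { 0, 10, 0 }, { 1, 21, 0 }, { 2, 17, 1 }, { 3, 28, 1 },
    };
    size_t i, n = sizeof(nodes) / sizeof(nodes[0]);

    qsort(nodes, n, sizeof(nodes[0]), cmp_fallback);

    printf("fallback order:");
    for (i = 0; i < n; i++)
        printf(" node %d", nodes[i].id);
    printf("\n");
    return 0;
}

With these inputs it prints "fallback order: node 0 node 1 node 2 node 3",
i.e. DRAM node 0 -> DRAM node 1 -> PMEM node 2 -> PMEM node 3, even though
PMEM node 2 is nearer to the allocating node than DRAM node 1.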

>There are a few other people trying to
>work with PMEM as NUMA nodes, and these kinds of arguments keep repeating
>again and again. So far I haven't really heard much beyond hand waving.
>Please go and read through those discussions so that we do not have to go
>through the same set of arguments again.
>
>I absolutely do see and understand that people want to find a way to use
>their shiny NVDIMMs, but please step back and try to think in more
>general terms than "PMEM is special and we have to treat it that way."
>We currently have ways to use it as a DAX device and as a NUMA node, so
>let's focus on how to improve our NUMA handling so that we can get the
>maximum out of the HW rather than make a PMEM NUMA node a special
>snowflake.
>
>Thank you.
>
>--
>Michal Hocko
>SUSE Labs
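
For reference on the DAX-device path mentioned above, here is a minimal
userspace sketch of mapping a PMEM namespace exposed as a device-dax
character device and storing to it directly. The device path /dev/dax0.0
and the 2 MiB mapping size/alignment are assumptions for illustration; the
real values depend on how the namespace was configured (e.g. with
ndctl/daxctl):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 2UL << 20;  /* 2 MiB, a typical device-dax alignment */
    int fd = open("/dev/dax0.0", O_RDWR);

    if (fd < 0) {
        perror("open /dev/dax0.0");
        return 1;
    }

    /* device-dax mappings must be MAP_SHARED and sized/aligned to the
     * namespace alignment; offset 0 keeps this sketch simple. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    memset(p, 0, len);  /* loads/stores go directly to the PMEM media */

    munmap(p, len);
    close(fd);
    return 0;
}

Loads and stores through such a mapping bypass the page cache entirely;
this is the alternative to exposing the same capacity as a PMEM NUMA node.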
