Message-ID:
<PH0PR08MB7955E9F08CCB64F23963B5C3A860A@PH0PR08MB7955.namprd08.prod.outlook.com>
Date: Wed, 3 Jan 2024 05:26:32 +0000
From: Srinivasulu Thanneeru <sthanneeru@...ron.com>
To: "Huang, Ying" <ying.huang@...el.com>, gregory.price
<gregory.price@...verge.com>
CC: Srinivasulu Opensrc <sthanneeru.opensrc@...ron.com>,
"linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>, "linux-mm@...ck.org"
<linux-mm@...ck.org>, "aneesh.kumar@...ux.ibm.com"
<aneesh.kumar@...ux.ibm.com>, "dan.j.williams@...el.com"
<dan.j.williams@...el.com>, "mhocko@...e.com" <mhocko@...e.com>,
"tj@...nel.org" <tj@...nel.org>, "john@...alactic.com" <john@...alactic.com>,
Eishan Mirakhur <emirakhur@...ron.com>, Vinicius Tavares Petrucci
<vtavarespetr@...ron.com>, Ravis OpenSrc <Ravis.OpenSrc@...ron.com>,
"Jonathan.Cameron@...wei.com" <Jonathan.Cameron@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Johannes
Weiner <hannes@...xchg.org>, Wei Xu <weixugc@...gle.com>
Subject: RE: [EXT] Re: [RFC PATCH v2 0/2] Node migration between memory tiers
Micron Confidential
Hi Huang, Ying,
My apologies for the wrong mail reply format; my mail client settings got changed on my PC.
Please find my comments inline below.
Regards,
Srini
> -----Original Message-----
> From: Huang, Ying <ying.huang@...el.com>
> Sent: Monday, December 18, 2023 11:26 AM
> To: gregory.price <gregory.price@...verge.com>
> Cc: Srinivasulu Opensrc <sthanneeru.opensrc@...ron.com>; linux-
> cxl@...r.kernel.org; linux-mm@...ck.org; Srinivasulu Thanneeru
> <sthanneeru@...ron.com>; aneesh.kumar@...ux.ibm.com;
> dan.j.williams@...el.com; mhocko@...e.com; tj@...nel.org;
> john@...alactic.com; Eishan Mirakhur <emirakhur@...ron.com>; Vinicius
> Tavares Petrucci <vtavarespetr@...ron.com>; Ravis OpenSrc
> <Ravis.OpenSrc@...ron.com>; Jonathan.Cameron@...wei.com; linux-
> kernel@...r.kernel.org; Johannes Weiner <hannes@...xchg.org>; Wei Xu
> <weixugc@...gle.com>
> Subject: [EXT] Re: [RFC PATCH v2 0/2] Node migration between memory tiers
>
>
>
> Gregory Price <gregory.price@...verge.com> writes:
>
> > On Fri, Dec 15, 2023 at 01:02:59PM +0800, Huang, Ying wrote:
> >> <sthanneeru.opensrc@...ron.com> writes:
> >>
> >> > =============
> >> > Version Notes:
> >> >
> >> > V2 : Changed interface to memtier_override from adistance_offset.
> >> > memtier_override was recommended by
> >> > 1. John Groves <john@...alactic.com>
> >> > 2. Ravi Shankar <ravis.opensrc@...ron.com>
> >> > 3. Brice Goglin <Brice.Goglin@...ia.fr>
> >>
> >> It appears that you ignored my comments for V1 as follows ...
> >>
> >>
> https://lore.kernel.org/lkml/87o7f62vur.fsf@yhuang6-desk2.ccr.corp.intel.com/
Thank you, Huang, Ying, for pointing to this.
https://lpc.events/event/16/contributions/1209/attachments/1042/1995/Live%20In%20a%20World%20With%20Multiple%20Memory%20Types.pdf
In the presentation above, the adistance_offsets are per memtype.
We believe an adistance_offset per node is more suitable and flexible,
since it can be changed for each node individually. If we keep the
adistance_offset per memtype, we cannot change it for a specific node
of a given memtype.
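To make the difference concrete, here is a small Python sketch (illustrative only, not kernel code: the 128-wide abstract-distance chunking follows MEMTIER_CHUNK_BITS in mm/memory-tiers.c, while the node IDs and offset values are invented for the example):

```python
# Illustrative model of how memory tiers group nodes by abstract distance.
# The 128-wide chunking (MEMTIER_CHUNK_BITS = 7) mirrors mm/memory-tiers.c;
# node IDs and offset values below are made up for the example.

MEMTIER_CHUNK_BITS = 7
MEMTIER_ADISTANCE_DRAM = 576   # default DRAM adistance -> memory_tier4

def tier_of(adist):
    """A tier spans one 128-wide abstract-distance chunk."""
    return adist >> MEMTIER_CHUNK_BITS

# Node 0 is DRAM; nodes 1 and 2 are CXL devices of the same memtype
# that inherited the DRAM default adistance.
node_adist = {0: MEMTIER_ADISTANCE_DRAM,
              1: MEMTIER_ADISTANCE_DRAM,
              2: MEMTIER_ADISTANCE_DRAM}

# Per-memtype offset: every node of the memtype moves together.
memtype_offset = 128
per_type = {n: tier_of(a + (memtype_offset if n in (1, 2) else 0))
            for n, a in node_adist.items()}
print(per_type)   # {0: 4, 1: 5, 2: 5} -- both CXL nodes demoted together

# Per-node offset: node 2 alone can be demoted while node 1 stays put.
node_offset = {0: 0, 1: 0, 2: 128}
per_node = {n: tier_of(a + node_offset[n]) for n, a in node_adist.items()}
print(per_node)   # {0: 4, 1: 4, 2: 5} -- only node 2 demoted
```

With a per-memtype offset, both CXL nodes can only move together; a per-node offset lets us demote just the one node that needs it.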
> >>
> https://lore.kernel.org/lkml/87jzpt2ft5.fsf@yhuang6-desk2.ccr.corp.intel.com/
Yes, memory_type groups related memories together into a single tier.
We should also have the flexibility to move nodes between tiers, to
address the issues described in the use cases above.
> >>
> https://lore.kernel.org/lkml/87a5qp2et0.fsf@yhuang6-desk2.ccr.corp.intel.com/
This patch provides a way to move a node to the correct tier.
We observed test setups where DRAM and CXL nodes were put under the
same tier (memory_tier4). With this patch, we can move the CXL node
away from the DRAM-linked memory_tier4 and put it in the desired tier.
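To sketch the intended effect (plain Python, not the patch itself; the chunking again mirrors mm/memory-tiers.c, while the override value is invented), memtier_override can be modeled as a per-node replacement abstract distance:

```python
# Illustrative model of the memtier_override idea, not the patch itself.
# Chunking mirrors mm/memory-tiers.c (MEMTIER_CHUNK_BITS = 7); the
# override value is invented for the example.

MEMTIER_CHUNK_BITS = 7
MEMTIER_ADISTANCE_DRAM = 576   # default DRAM adistance -> memory_tier4

def tier_of(adist):
    return adist >> MEMTIER_CHUNK_BITS

# Before: the CXL node (node 1) inherited the DRAM adistance, so both
# nodes land in memory_tier4.
adist = {0: MEMTIER_ADISTANCE_DRAM, 1: MEMTIER_ADISTANCE_DRAM}
assert tier_of(adist[0]) == tier_of(adist[1]) == 4

# An override acts per node: replacing node 1's adistance moves it to a
# slower tier without disturbing node 0 or other nodes of the memtype.
override = {1: MEMTIER_ADISTANCE_DRAM + 128}
effective = {n: override.get(n, a) for n, a in adist.items()}
print({n: tier_of(a) for n, a in effective.items()})  # {0: 4, 1: 5}
```

Only the overridden node changes tier; everything else keeps its default placement.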
> >>
> >
> > Not speaking for the group, just chiming in because i'd discussed it
> > with them.
> >
> > "Memory Type" is a bit nebulous. Is a Micron Type-3 with performance X
> > and an SK Hynix Type-3 with performance Y a "Different type", or are
> > they the "Same Type" given that they're both Type 3 backed by some form
> > of DDR? Is socket placement of those devices relevant for determining
> > "Type"? Is whether they are behind a switch relevant for determining
> > "Type"? "Type" is frustrating when everything we're talking about
> > managing is "Type-3" with different performance.
> >
> > A concrete example:
> > To the system, a Multi-Headed Single Logical Device (MH-SLD) looks
> > exactly the same as a standard SLD. I may want to have some
> > combination of local memory expansion devices on the majority of my
> > expansion slots, but reserve 1 slot on each socket for a connection to
> > the MH-SLD. As of right now: There is no good way to differentiate the
> > devices in terms of "Type" - and even if you had that, the tiering
> > system would still lump them together.
> >
> > Similarly, an initial run of switches may or may not allow enumeration
> > of devices behind it (depends on the configuration), so you may end up
> > with a static numa node that "looks like" another SLD - despite it being
> > some definition of "GFAM". Do number of hops matter in determining
> > "Type"?
>
> In the original design, the memory devices of same memory type are
> managed by the same device driver, linked with system in same way
> (including switches), built with same media. So, the performance is
> same too. And, same as memory tiers, memory types are orthogonal to
> sockets. Do you think the definition itself is clear enough?
>
> I admit "memory type" is a confusing name. Do you have some better
> suggestion?
>
> > So I really don't think "Type" is useful for determining tier placement.
> >
> > As of right now, the system lumps DRAM nodes as one tier, and pretty
> > much everything else as "the other tier". To me, this patch set is an
> > initial pass meant to allow user-control over tier composition while
> > the internal mechanism is sussed out and the environment develops.
>
> The patchset to identify the performance of memory devices and put them
> in proper "memory types" and memory tiers via HMAT has been merged by
> v6.7-rc1.
>
> 07a8bdd4120c (memory tiering: add abstract distance calculation algorithms management, 2023-09-26)
> d0376aac59a1 (acpi, hmat: refactor hmat_register_target_initiators(), 2023-09-26)
> 3718c02dbd4c (acpi, hmat: calculate abstract distance with HMAT, 2023-09-26)
> 6bc2cfdf82d5 (dax, kmem: calculate abstract distance with general interface, 2023-09-26)
>
> > In general, a release valve that lets you redefine tiers is very welcome
> > for testing and validation of different setups while the industry evolves.
> >
> > Just my two cents.
>
> --
> Best Regards,
> Huang, Ying