Message-ID: <87edjwc6vi.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 22 Aug 2023 07:28:49 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Alistair Popple <apopple@...dia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <linux-cxl@...r.kernel.org>,
<nvdimm@...ts.linux.dev>, <linux-acpi@...r.kernel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Wei Xu <weixugc@...gle.com>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
"Davidlohr Bueso" <dave@...olabs.net>,
Johannes Weiner <hannes@...xchg.org>,
"Jonathan Cameron" <Jonathan.Cameron@...wei.com>,
Michal Hocko <mhocko@...nel.org>,
Yang Shi <shy828301@...il.com>,
Rafael J Wysocki <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH RESEND 3/4] acpi, hmat: calculate abstract distance with
HMAT
Alistair Popple <apopple@...dia.com> writes:
> "Huang, Ying" <ying.huang@...el.com> writes:
>
>> Alistair Popple <apopple@...dia.com> writes:
>>
>>> Huang Ying <ying.huang@...el.com> writes:
>>>
>>>> A memory tiering abstract distance calculation algorithm based on ACPI
>>>> HMAT is implemented. The basic idea is as follows.
>>>>
>>>> The performance attributes of the system default DRAM nodes are
>>>> recorded as the baseline, whose abstract distance is
>>>> MEMTIER_ADISTANCE_DRAM. The abstract distance of a memory node
>>>> (target) is then derived by scaling MEMTIER_ADISTANCE_DRAM with the
>>>> ratio of the performance attributes of that node to those of the
>>>> default DRAM nodes.
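
To make the scaling concrete, it amounts to roughly the following
simplified sketch; this is not the exact patch code, and it omits the
sanity/zero checks:

static int example_perf_to_adistance(struct node_hmem_attrs *perf,
				     struct node_hmem_attrs *dram)
{
	/*
	 * The abstract distance grows with (read + write) latency and
	 * shrinks with (read + write) bandwidth, relative to the default
	 * DRAM nodes used as the baseline.
	 */
	return MEMTIER_ADISTANCE_DRAM *
		(perf->read_latency + perf->write_latency) /
		(dram->read_latency + dram->write_latency) *
		(dram->read_bandwidth + dram->write_bandwidth) /
		(perf->read_bandwidth + perf->write_bandwidth);
}

A smaller abstract distance means a higher (faster) memory tier.
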
>>>
>>> The problem I encountered here with the calculations is that HBM memory
>>> ended up in a lower tier, which isn't what I wanted (at least when
>>> that HBM is attached to a GPU, say).
>>
>> I have tested the series on a server machine with HBM (pure HBM, not
>> attached to a GPU). There, HBM is placed in a higher tier than DRAM.
>
> Good to know.
>
>>> I suspect this is because the calculations are based on the CPU
>>> point-of-view (access1) which still sees lower bandwidth to remote HBM
>>> than local DRAM, even though the remote GPU has higher bandwidth access
>>> to that memory. Perhaps we need to be considering access0 as well?
>>> Ie. HBM directly attached to a generic initiator should be in a higher
>>> tier regardless of CPU access characteristics?
>>
>> What are your requirements for memory tiers on the machine? I guess you
>> want to put GPU-attached HBM in a higher tier and DRAM in a lower tier,
>> so that cold HBM pages can be demoted to DRAM when there is memory
>> pressure on HBM? This sounds reasonable from the GPU's point of view.
>
> Yes, that is what I would like to implement.
>
>> The above requirements may be satisfied by calculating the abstract
>> distance based on access0 (or combining it with access1). But I doubt
>> this will be a general solution. I guess that any memory device that is
>> used mainly by memory initiators other than CPUs wants to put itself in
>> a higher memory tier than DRAM, regardless of its access0 attributes.
>
> Right. I'm still figuring out how ACPI HMAT fits together, but that
> sounds reasonable.
>
>> One solution is to always put GPU HBM in the highest memory tier (with
>> the smallest abstract distance) in the GPU device driver, regardless of
>> its HMAT performance attributes. Is that possible?
>
> It's certainly possible and easy enough to do, although I think it would
> be good to provide upper and lower bounds for HMAT-derived adistances to
> make that easier. It does make me wonder what the point of HMAT is if we
> have to ignore it in some scenarios, though. But perhaps I need to dig
> deeper into the GPU values to figure out how it can be applied correctly
> there.
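
For the driver-side option, a minimal sketch could look like the below,
assuming the existing alloc_memory_type()/init_node_memory_type()
helpers; the adistance value is only illustrative:

#include <linux/err.h>
#include <linux/memory-tiers.h>

static struct memory_dev_type *gpu_hbm_type;

static int example_register_gpu_hbm_node(int nid)
{
	/* A smaller abstract distance than DRAM puts HBM in a higher tier. */
	gpu_hbm_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM -
					 MEMTIER_CHUNK_SIZE);
	if (IS_ERR(gpu_hbm_type))
		return PTR_ERR(gpu_hbm_type);

	init_node_memory_type(nid, gpu_hbm_type);
	return 0;
}

On teardown the driver would call clear_node_memory_type() and drop the
memory type reference again.
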
In the original design (page 11 of [1]), the default memory tier
hierarchy is based on performance from the CPU's point of view. The
abstract distance of a memory type (e.g., GPU HBM) can then be adjusted
via a sysfs knob (<memory_type>/abstract_distance_offset) based on the
requirements of the GPU.

[1] https://lpc.events/event/16/contributions/1209/attachments/1042/1995/Live%20In%20a%20World%20With%20Multiple%20Memory%20Types.pdf

That's another possible solution.
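
If such a knob were in place, adjusting it from user space would look
something like the sketch below; the sysfs path, the memory type name,
and the offset value are all hypothetical, since the knob is only
proposed in [1]:

#include <stdio.h>

int main(void)
{
	/* Hypothetical path; abstract_distance_offset is only proposed. */
	FILE *f = fopen("/sys/devices/virtual/memory_tiering/"
			"memory_type_hbm/abstract_distance_offset", "w");

	if (!f)
		return 1;
	/* A negative offset lowers the abstract distance, i.e. a higher tier. */
	fprintf(f, "%d\n", -32);
	return fclose(f) ? 1 : 0;
}
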
--
Best Regards,
Huang, Ying