Date: Fri, 12 Apr 2024 15:28:16 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Bharata B Rao <bharata@....com>
Cc: <linux-mm@...ck.org>,  <linux-kernel@...r.kernel.org>,
  <akpm@...ux-foundation.org>,  <mingo@...hat.com>,
  <peterz@...radead.org>,  <mgorman@...hsingularity.net>,
  <raghavendra.kt@....com>,  <dave.hansen@...ux.intel.com>,
  <hannes@...xchg.org>
Subject: Re: [RFC PATCH 0/2] Hot page promotion optimization for large
 address space

Bharata B Rao <bharata@....com> writes:

> On 03-Apr-24 2:10 PM, Huang, Ying wrote:
>> Bharata B Rao <bharata@....com> writes:
>> 
>>> On 02-Apr-24 7:33 AM, Huang, Ying wrote:
>>>> Bharata B Rao <bharata@....com> writes:
>>>>
>>>>> On 29-Mar-24 6:44 AM, Huang, Ying wrote:
>>>>>> Bharata B Rao <bharata@....com> writes:
>>>>> <snip>
>>>>>>> I don't think the pages are cold; rather, the existing mechanism fails
>>>>>>> to categorize them as hot. This is because the pages were scanned well
>>>>>>> before the accesses started happening. When repeated accesses are made
>>>>>>> to a chunk of memory that was scanned a while back, none of those
>>>>>>> accesses get classified as hot, because the scan time is far behind
>>>>>>> the current access time. That is why we are seeing latency values
>>>>>>> ranging from 20s to 630s, as shown above.
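[A simplified sketch of the check being described, for reference. This
paraphrases the idea behind should_numa_migrate_memory(); it is not
verbatim kernel code, and page_is_hot() is a made-up helper name. The
default hot threshold comes from kernel.numa_balancing_hot_threshold_ms,
1000 ms by default, if I'm not mistaken.]

/*
 * The hint fault "latency" is the gap between the time a pte was
 * scanned (made inaccessible by the NUMA balancing scanner) and the
 * time the task actually faulted on it.  A page counts as hot only
 * if that gap is below the hot threshold.
 */
static bool page_is_hot(unsigned int access_time_ms,
                        unsigned int scan_time_ms,
                        unsigned int threshold_ms)
{
        unsigned int latency_ms = access_time_ms - scan_time_ms;

        /*
         * With latencies of 20s..630s against a ~1s threshold, this
         * always fails, which is the behaviour described above.
         */
        return latency_ms < threshold_ms;
}
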
>>>>>>
>>>>>> If the repeated accesses continue, the page will be identified as hot
>>>>>> when it is scanned the next time, even if we don't expand the threshold
>>>>>> range.  If the repeated accesses only last a very short time, it makes
>>>>>> little sense to identify the pages as hot.  Right?
>>>>>
>>>>> The total allocated memory here is 192G and the chunk size is 1G. Each
>>>>> time, one such 1G chunk is picked at random for generating memory
>>>>> accesses. Within that 1G, 262144 random accesses are performed, and that
>>>>> set of 262144 accesses is repeated 512 times. I thought that should be
>>>>> enough to classify that chunk of memory as hot.
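[For concreteness, a minimal userspace sketch of the access pattern
described above. The sizes are taken from this thread, but the actual
benchmark source is not shown here, so the structure and names are
illustrative only.]

#include <stdint.h>
#include <stdlib.h>

#define TOTAL_BYTES     (192UL << 30)   /* 192G total allocation */
#define CHUNK_BYTES     (1UL << 30)     /* 1G chunk */
#define ACCESSES        262144          /* random accesses per iteration */
#define ITERATIONS      512             /* iterations per chunk */

int main(void)
{
        uint8_t *mem = malloc(TOTAL_BYTES);
        size_t nchunks = TOTAL_BYTES / CHUNK_BYTES;

        if (!mem)
                return 1;

        for (size_t n = 0; n < nchunks; n++) {
                /* pick one 1G chunk at random */
                uint8_t *chunk = mem +
                        ((size_t)rand() % nchunks) * CHUNK_BYTES;

                /* 262144 random accesses, repeated 512 times */
                for (int it = 0; it < ITERATIONS; it++)
                        for (int a = 0; a < ACCESSES; a++)
                                chunk[(size_t)rand() % CHUNK_BYTES]++;
        }

        free(mem);
        return 0;
}
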
>>>>
>>>> IIUC, some pages are accessed within a very short time (maybe within
>>>> 1ms).  This isn't repeated access over a long period.  I think that
>>>> pages accessed repeatedly over a long period are good candidates for
>>>> promotion, but pages accessed frequently within only a very short
>>>> time aren't.
>>>
>>> Here are the numbers for the 192nd chunk:
>>>
>>> Each iteration of 262144 random accesses takes ~10ms.
>>> 512 such iterations take ~5s.
>>> numa_scan_seq is 16 when this chunk is accessed.
>>> No page promotions were done from this chunk: every time,
>>> should_numa_migrate_memory() found the NUMA hint fault
>>> latency to be higher than the threshold.
>>>
>>> Are these time periods considered too short for the pages
>>> to be detected as hot and promoted?
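[Back-of-envelope on those numbers, assuming the default hot threshold
of 1s (kernel.numa_balancing_hot_threshold_ms=1000): 512 iterations x
~10ms does account for the reported ~5s of accesses, but if the chunk's
ptes were last scanned tens or hundreds of seconds earlier, each hint
fault measures a latency of that order (the 20s-630s quoted earlier in
the thread), so latency >= threshold holds for every fault and
should_numa_migrate_memory() rejects them all.]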
>> 
>> Yes, I think so.  This is bursty accessing, not repeated accessing.
>> IIUC, NUMA balancing based promotion only works for access repeated
>> over a long time, for example >100s.
>
> Hmm... a page is accessed 512 times over a period of 5s and is still
> not detected as hot. This would be understandable if fresh scanning
> couldn't be done because the accesses were bursty and hence couldn't
> be captured via NUMA hint faults. But here an access that was captured
> via a hint fault is being rejected as not hot, merely because the
> scanning was done a while back. That said, I do see the challenge here,
> since we depend on the scan time to obtain the frequency-of-access
> metric.

Consider some pages that will be accessed once every hour: should we
consider them hot or not?  Will your proposed method deal with that
correctly?

> BTW, for the same scenario above with numa_balancing_mode=1, the remote
> accesses do get detected and migration to the source node is attempted.
> It is a separate matter that the pages eventually can't be migrated in
> this specific scenario, as the src node is already full.
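[For context: numa_balancing mode 1 is the classic NUMA balancing mode,
which migrates pages toward the node of the accessing CPU, while mode 2
(NUMA_BALANCING_MEMORY_TIERING) is the tiering mode whose hot-threshold
behaviour is under discussion here; the mode is set via
/proc/sys/kernel/numa_balancing.]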

--
Best Regards,
Huang, Ying
