Message-ID: <87o7asfrm1.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 02 Apr 2024 10:03:34 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Bharata B Rao <bharata@....com>
Cc: <linux-mm@...ck.org>,  <linux-kernel@...r.kernel.org>,
  <akpm@...ux-foundation.org>,  <mingo@...hat.com>,
  <peterz@...radead.org>,  <mgorman@...hsingularity.net>,
  <raghavendra.kt@....com>,  <dave.hansen@...ux.intel.com>,
  <hannes@...xchg.org>
Subject: Re: [RFC PATCH 0/2] Hot page promotion optimization for large
 address space

Bharata B Rao <bharata@....com> writes:

> On 29-Mar-24 6:44 AM, Huang, Ying wrote:
>> Bharata B Rao <bharata@....com> writes:
> <snip>
>>> I don't think the pages are cold but rather that the existing mechanism
>>> fails to categorize them as hot. This is because the pages were scanned
>>> way before the accesses started happening. When repeated accesses are
>>> made to a chunk of memory that was scanned a while back, none of those
>>> accesses get classified as hot, because the scan time is way behind the
>>> current access time. That's the reason we are seeing latency values
>>> ranging from 20s to 630s, as shown above.
>> 
>> If repeated accesses continue, the page will be identified as hot when
>> it is scanned next time, even if we don't expand the threshold range.  If
>> the repeated accesses last only a very short time, it makes little sense
>> to identify the pages as hot.  Right?
>
> The total allocated memory here is 192G and the chunk size is 1G. Each
> time, one such 1G chunk is picked at random for generating memory accesses.
> Within that 1G, 262144 random accesses are performed, and that set of
> 262144 accesses is repeated 512 times. I thought that should be enough
> to classify that chunk of memory as hot.

IIUC, some pages are accessed within a very short time (maybe within 1ms).
That isn't repeated access over a long period.  I think that pages
accessed repeatedly over a long period are good candidates for promotion,
but pages accessed frequently within only a very short time aren't.
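
For reference, here is a simplified model of the latency check being
discussed.  It is loosely based on numa_hint_fault_latency() and
should_numa_migrate_memory() in kernel/sched/fair.c, but the names and
details below are illustrative, not the exact kernel code:

#include <stdbool.h>

/*
 * Simplified model of the latency-based hotness check (illustrative
 * sketch, not the kernel implementation).
 */
static inline unsigned int hint_fault_latency(unsigned int scan_time_ms,
                                              unsigned int fault_time_ms)
{
        /*
         * Latency is the fault time minus the time the PTE was last
         * scanned (made PROT_NONE).  If the page was scanned long
         * before the accesses began, this is large even for a page
         * that is now being accessed heavily -- hence the 20s..630s
         * latency values quoted above.
         */
        return fault_time_ms - scan_time_ms;
}

static inline bool page_is_hot(unsigned int scan_time_ms,
                               unsigned int fault_time_ms,
                               unsigned int threshold_ms)
{
        /* Only pages that fault soon after being scanned count as hot. */
        return hint_fault_latency(scan_time_ms, fault_time_ms) < threshold_ms;
}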

> But as we see, the scan time often lags the access time by a large
> value.
>
> Let me instrument the code further to gain more insight (if possible)
> into the scanning/fault time behavior here.
>
> Leaving the fault-count-based threshold aside, do you think there is
> value in updating the scan time for skipped pages/PTEs during every
> scan, so that the scan time remains current for all pages?

No, I don't think so.  That would make the hint page fault latency more
inaccurate.
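
To make the concern concrete, consider a hypothetical timeline (the
numbers are illustrative, not from a real trace):

/*
 * How refreshing the stored scan time for skipped PTEs would skew the
 * measured latency:
 *
 *   t = 0ms     PTE scanned and made PROT_NONE, scan_time = 0
 *   t = 1000ms  scan pass skips the PTE but refreshes scan_time = 1000
 *   t = 1005ms  hint fault: latency = 1005 - 1000 = 5ms
 *
 * The fault actually came 1005ms after the PTE was made PROT_NONE, but
 * the refreshed time stamp makes it look like a 5ms latency, so even a
 * rarely accessed page could pass the hotness threshold.
 */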

>> 
>> The bits available to record the scan time or the hint page fault
>> count are limited, so it's possible for them to overflow anyway.  We
>> can scale the time stamp if necessary (for example, from 1ms to 10ms).
>> But it's hard to scale a fault counter.  And nobody can guarantee that
>> the hint page fault frequency stays below 1/ms; if it's 10/ms, the
>> counter can overflow even within a short interval.
>
> Yes, with the approach I have taken, the time factor is out of the
> equation, and the notion of hotness is purely a function of the number
> of faults (or accesses).

Sorry, I don't get your idea here.  I think that the fault count may be
worse than time in quite a few cases.
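
To restate the scaling point above as a minimal sketch (the bit width
and shift are assumed for illustration, not the kernel's actual
constants):

#include <stdint.h>

#define TIME_BITS       14                       /* assumed width */
#define TIME_MASK       ((1u << TIME_BITS) - 1)

/*
 * A time stamp in a fixed number of bits can trade resolution for
 * range: coarsening the unit from 1ms to, say, 16ms (bucket_shift = 4,
 * in the spirit of PAGE_ACCESS_TIME_BUCKETS) extends the wrap-around
 * from ~16s to ~262s.
 */
static inline uint32_t store_time(uint32_t now_ms, unsigned int bucket_shift)
{
        return (now_ms >> bucket_shift) & TIME_MASK;
}

/*
 * A fault counter has no such knob: right-shifting a count discards
 * exactly the information (how many faults occurred) that a count-based
 * threshold needs, and the counter's growth rate is bounded only by the
 * fault rate, which can exceed 1/ms.
 */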

--
Best Regards,
Huang, Ying
