Date:   Mon, 21 Aug 2023 16:10:08 +0800
From:   Baolin Wang <baolin.wang@...ux.alibaba.com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     akpm@...ux-foundation.org, mgorman@...hsingularity.net,
        shy828301@...il.com, david@...hat.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] Extend migrate_misplaced_page() to support batch
 migration



On 8/21/2023 10:29 AM, Huang, Ying wrote:
> Baolin Wang <baolin.wang@...ux.alibaba.com> writes:
> 
>> Hi,
>>
>> Currently, on our ARM servers with NUMA enabled, we found that the cross-die
>> latency is somewhat higher and significantly impacts workload performance.
>> So on ARM servers we rely on NUMA balancing to avoid cross-die accesses.
>> I previously posted a patchset[1] to support speculative NUMA faults to
>> improve NUMA balancing's performance, following the principle of data
>> locality. Moreover, thanks to Huang Ying's patchset[2], which introduced
>> batch migration as a way to reduce the cost of TLB flushes, migrating
>> multiple pages at once during NUMA balancing will also benefit.
>>
>> So we plan to continue to support batch migration in do_numa_page() to
>> improve NUMA balancing's performance, but before adding a complicated batch
>> migration algorithm for NUMA balancing, some cleanup and preparation work
>> needs to be done first, which is what this patch set does. In short, this
>> patchset extends the migrate_misplaced_page() interface to support batch
>> migration, with no functional changes intended.
> 
> Will these cleanups benefit anything except batching migration?  If not,

I hope these cleanups can also benefit the compound page's NUMA 
balancing, which was discussed in thread[1]. IIUC, for the compound 
page's NUMA balancing, it is possible that only some of the pages are 
migrated successfully, so it is necessary for migrate_misplaced_page() 
to return the number of pages that were migrated successfully. (I have 
not looked into this in detail yet, so please correct me if I missed 
something; I will find time to study it more closely.) That is why I 
think these cleanups are useful beyond batch migration.

Yes, I will post the batch migration patches after more polishing and 
testing, but I think these cleanups are separate and straightforward, so 
I plan to submit them separately.

[1] 
https://lore.kernel.org/all/f8d47176-03a8-99bf-a813-b5942830fd73@arm.com/

> I suggest you post the whole series.  That way, people will be
> clearer about why we need these cleanups.
> 
> --
> Best Regards,
> Huang, Ying
> 
>> [1] https://lore.kernel.org/lkml/cover.1639306956.git.baolin.wang@linux.alibaba.com/t/#mc45929849b5d0e29b5fdd9d50425f8e95b8f2563
>> [2] https://lore.kernel.org/all/20230213123444.155149-1-ying.huang@intel.com/T/#u
>>
>> Baolin Wang (4):
>>    mm: migrate: move migration validation into numa_migrate_prep()
>>    mm: migrate: move the numamigrate_isolate_page() into do_numa_page()
>>    mm: migrate: change migrate_misplaced_page() to support multiple pages
>>      migration
>>    mm: migrate: change to return the number of pages migrated
>>      successfully
>>
>>   include/linux/migrate.h | 15 ++++++++---
>>   mm/huge_memory.c        | 19 +++++++++++---
>>   mm/memory.c             | 34 +++++++++++++++++++++++-
>>   mm/migrate.c            | 58 ++++++++---------------------------------
>>   4 files changed, 71 insertions(+), 55 deletions(-)
