Date: Thu, 20 Jun 2024 10:30:27 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Chris Li <chrisl@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,  Kairui Song
 <kasong@...cent.com>,  Ryan Roberts <ryan.roberts@....com>,  Kalesh Singh
 <kaleshsingh@...gle.com>,  linux-kernel@...r.kernel.org,
  linux-mm@...ck.org,  Barry Song <baohua@...nel.org>
Subject: Re: [PATCH v3 0/2] mm: swap: mTHP swap allocator base on swap
 cluster order

Chris Li <chrisl@...nel.org> writes:

> This is the short term solution "swap cluster order" listed on
> slide 8 of my "Swap Abstraction" discussion at the recent
> LSF/MM conference.
>
> When commit 845982eb264bc "mm: swap: allow storage of all mTHP
> orders" was introduced, it only allocated the mTHP swap entries
> from the new empty cluster list.  That has a fragmentation issue
> reported by Barry.
>
> https://lore.kernel.org/all/CAGsJ_4zAcJkuW016Cfi6wicRr8N9X+GJJhgMQdSMp+Ah+NSgNQ@mail.gmail.com/
>
> The reason is that all the empty clusters have been exhausted
> while there are plenty of free swap entries in clusters that are
> not 100% free.
>
> Remember the swap allocation order in the cluster.
> Keep track of the per-order non-full cluster lists for later
> allocation.
>
> User impact: for users that allocate and free mixed-order mTHP
> swap entries, it greatly improves the success rate of mTHP swap
> allocation after the initial phase.
>
> Barry provides a test program to show the effect:
> https://lore.kernel.org/linux-mm/20240615084714.37499-1-21cnbao@gmail.com/
>
> Without:
> $ mthp-swapout                                                          
> Iteration 1: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%   
> Iteration 2: swpout inc: 219, swpout fallback inc: 0, Fallback percentage: 0.00%                                                    
> Iteration 3: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%   
> Iteration 4: swpout inc: 219, swpout fallback inc: 0, Fallback percentage: 0.00%            
> Iteration 5: swpout inc: 110, swpout fallback inc: 117, Fallback percentage: 51.54%                                                 
> Iteration 6: swpout inc: 0, swpout fallback inc: 230, Fallback percentage: 100.00%          
> Iteration 7: swpout inc: 0, swpout fallback inc: 229, Fallback percentage: 100.00% 
> Iteration 8: swpout inc: 0, swpout fallback inc: 223, Fallback percentage: 100.00%                                                  
> Iteration 9: swpout inc: 0, swpout fallback inc: 224, Fallback percentage: 100.00%                                                  
> Iteration 10: swpout inc: 0, swpout fallback inc: 216, Fallback percentage: 100.00%                                                 
> Iteration 11: swpout inc: 0, swpout fallback inc: 212, Fallback percentage: 100.00%                                                 
> Iteration 12: swpout inc: 0, swpout fallback inc: 224, Fallback percentage: 100.00%                                                 
> Iteration 13: swpout inc: 0, swpout fallback inc: 214, Fallback percentage: 100.00%        
>
> $ mthp-swapout -s
> Iteration 1: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 2: swpout inc: 227, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 3: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 4: swpout inc: 224, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 5: swpout inc: 33, swpout fallback inc: 197, Fallback percentage: 85.65%
> Iteration 6: swpout inc: 0, swpout fallback inc: 229, Fallback percentage: 100.00%
> Iteration 7: swpout inc: 0, swpout fallback inc: 223, Fallback percentage: 100.00%
> Iteration 8: swpout inc: 0, swpout fallback inc: 219, Fallback percentage: 100.00%
> Iteration 9: swpout inc: 0, swpout fallback inc: 212, Fallback percentage: 100.00%
>
> With:
> $ mthp-swapout       
> Iteration 1: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%                                                    
> Iteration 2: swpout inc: 219, swpout fallback inc: 0, Fallback percentage: 0.00%                                                    
> Iteration 3: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%                                                    
> Iteration 4: swpout inc: 219, swpout fallback inc: 0, Fallback percentage: 0.00%       
> Iteration 5: swpout inc: 227, swpout fallback inc: 0, Fallback percentage: 0.00% 
> Iteration 6: swpout inc: 230, swpout fallback inc: 0, Fallback percentage: 0.00%            
> ...
> Iteration 94: swpout inc: 224, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 95: swpout inc: 221, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 96: swpout inc: 229, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 97: swpout inc: 219, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 98: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 99: swpout inc: 223, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 100: swpout inc: 224, swpout fallback inc: 0, Fallback percentage: 0.00%
>
> $ mthp-swapout -s
> Iteration 1: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 2: swpout inc: 227, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 3: swpout inc: 222, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 4: swpout inc: 224, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 5: swpout inc: 230, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 6: swpout inc: 229, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 7: swpout inc: 223, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 8: swpout inc: 219, swpout fallback inc: 0, Fallback percentage: 0.00%
> ...
> Iteration 94: swpout inc: 223, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 95: swpout inc: 212, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 96: swpout inc: 220, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 97: swpout inc: 220, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 98: swpout inc: 216, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 99: swpout inc: 223, swpout fallback inc: 0, Fallback percentage: 0.00%
> Iteration 100: swpout inc: 225, swpout fallback inc: 0, Fallback percentage: 0.00%

Unfortunately, the data was gathered using a specially designed test
program which always swaps in pages with the swapped-out size.  I don't
know whether such workloads exist in reality.  Otherwise, you need to
wait for mTHP swap-in to be merged first, and for people to reach
consensus that we should always swap in pages with the swapped-out size.

Alternatively, we can make some design adjustments to make the patchset
work in the current situation (mTHP swap-out, normal-page swap-in).

- One non-full cluster list for each order (same as the current design)

- When one swap entry is freed, check whether a free "order+1" swap
  entry has formed; if so, move the cluster to the "order+1" non-full
  cluster list.

- When allocating a swap entry of "order", get a cluster from the free
  list, then the "order", "order+1", ... non-full cluster lists.  If
  all are empty, fall back to order 0.

Do you think that this works?

> Reported-by: Barry Song <21cnbao@...il.com>
> Signed-off-by: Chris Li <chrisl@...nel.org>
> ---
> Changes in v3:
> - Using V1 as base.
> - Rename "next" to "list" for the list field, suggested by Ying.
> - Update comment for the locking rules for cluster fields and list,
>   suggested by Ying.
> - Allocate from the nonfull list before attempting free list, suggested
>   by Kairui.

Haven't looked into this.  It appears that this breaks the original
discard behavior which helps the performance of some SSDs; please refer
to commit 2a8f94493432 ("swap: change block allocation algorithm for
SSD").

And as pointed out by Ryan, this may reduce the opportunity for
sequential block device writes during swap-out, which may hurt SSD
performance too.

[snip]

--
Best Regards,
Huang, Ying
