Date:   Fri, 29 Jan 2021 14:29:27 -0800
From:   Harshad Shirwadkar <>
Cc:     Harshad Shirwadkar <>
Subject: [PATCH 0/4] Improve group scanning in CR 0 and CR 1 passes

This patch series significantly improves the cr 0 and cr 1 passes of
the allocator. Currently, at cr 0 and cr 1, we perform linear lookups
to find matching groups. That's very inefficient for large file
systems with millions of block groups. At cr 0, we only care about
groups whose largest free order is >= the request's order, and at cr 1
we only care about groups whose average fragment size is > the request
size. So, this patch introduces new data structures that allow us to
perform the cr 0 lookup in constant time and the cr 1 lookup in
log(number of groups) time instead of linear time.

For cr 0, we add a list for each order, and each group is enqueued on
the list matching its largest free order. This allows us to look up a
match at cr 0 in constant time.
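
The per-order lists can be sketched in userspace C as below. This is a
simplified illustration, not the kernel code: the struct fields, names
(MB_NUM_ORDERS value aside, which patch 1 introduces as a macro), and
the singly linked list stand in for the kernel's group descriptors and
list_head machinery.

```c
#include <assert.h>
#include <stddef.h>

#define MB_NUM_ORDERS 14	/* assumed bound on free-extent orders */

/* hypothetical, simplified block group descriptor */
struct group {
	int group_no;
	int largest_free_order;	/* log2 of the largest free extent */
	struct group *next;	/* intrusive list link */
};

/* one list head per order; a group lives on heads[largest_free_order] */
static struct group *heads[MB_NUM_ORDERS];

static void enqueue_group(struct group *g)
{
	g->next = heads[g->largest_free_order];
	heads[g->largest_free_order] = g;
}

/*
 * cr 0 lookup: probe only the lists for order >= request order.
 * Each probe is O(1) and there are at most MB_NUM_ORDERS of them,
 * independent of the number of groups.
 */
static struct group *find_group_cr0(int request_order)
{
	for (int o = request_order; o < MB_NUM_ORDERS; o++)
		if (heads[o])
			return heads[o];
	return NULL;
}
```

With millions of groups, this replaces a linear scan of every group
with at most MB_NUM_ORDERS list probes.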

For cr 1, we add a new rb tree of groups sorted by average fragment
size. This allows us to look up a match for cr 1 in log(number of
groups) time.
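
The lookup the rb tree enables can be illustrated as follows. The
kernel patch uses an rb tree; this userspace sketch substitutes a
sorted array with binary search, which gives the same O(log n) query,
and the struct and function names are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* hypothetical entry: a group keyed by its average free-fragment size */
struct frag_entry {
	unsigned int avg_frag_size;	/* in blocks */
	int group_no;
};

/*
 * cr 1 lookup: find the first group whose average fragment size is
 * >= the request size.  tab[] must be sorted by avg_frag_size; the
 * binary search runs in O(log n), n = number of groups.
 */
static int find_group_cr1(const struct frag_entry *tab, size_t n,
			  unsigned int request)
{
	size_t lo = 0, hi = n;		/* search the half-open range [lo, hi) */

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		if (tab[mid].avg_frag_size < request)
			lo = mid + 1;	/* fragments too small, go right */
		else
			hi = mid;	/* candidate, tighten from the right */
	}
	return lo < n ? tab[lo].group_no : -1;	/* -1: no suitable group */
}
```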

Verified that there are no regressions in smoke tests (-g quick -c 4k).

Also, to demonstrate the effectiveness of the patch series, the
following experiment was performed:

Created a highly fragmented disk of size 65TB. The disk had no
contiguous 2MB regions. The following command was run consecutively 3
times:

time dd if=/dev/urandom of=file bs=2M count=10

Here are the results with and without the cr 0/1 optimizations:

| Run     | Without CR 0/1 Optimizations | With CR 0/1 Optimizations |
|---------|------------------------------|---------------------------|
| 1st run | 5m1.871s                     | 2m47.642s                 |
| 2nd run | 2m28.390s                    | 0m0.611s                  |
| 3rd run | 2m26.530s                    | 0m1.255s                  |

Signed-off-by: Harshad Shirwadkar <>

Harshad Shirwadkar (4):
  ext4: add MB_NUM_ORDERS macro
  ext4: drop s_mb_bal_lock and convert protected fields to atomic
  ext4: improve cr 0 / cr 1 group scanning
  ext4: add proc files to monitor new structures

 fs/ext4/ext4.h    |  12 +-
 fs/ext4/mballoc.c | 330 ++++++++++++++++++++++++++++++++++++++++++----
 fs/ext4/mballoc.h |   6 +
 fs/ext4/sysfs.c   |   2 +
 4 files changed, 324 insertions(+), 26 deletions(-)

