Message-ID: <20220907174333.tnnq3xk4o3w76s5x@riteshh-domain>
Date:   Wed, 7 Sep 2022 23:13:33 +0530
From:   "Ritesh Harjani (IBM)" <ritesh.list@...il.com>
To:     Jan Kara <jack@...e.cz>
Cc:     Ted Tso <tytso@....edu>, linux-ext4@...r.kernel.org,
        Thorsten Leemhuis <regressions@...mhuis.info>,
        Ojaswin Mujoo <ojaswin@...ux.ibm.com>,
        Stefan Wahren <stefan.wahren@...e.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        stable@...r.kernel.org
Subject: Re: [PATCH 1/5] ext4: Make mballoc try target group first even with
 mb_optimize_scan

On 22/09/06 05:29PM, Jan Kara wrote:
> One of the side-effects of mb_optimize_scan was that the optimized
> functions to select the next group to try were called even before we
> tried the goal group. As a result we no longer allocate files close to
> their corresponding inodes, nor do we try to expand the currently
> allocated extent in the same group. This results in a reaim regression
> with the workfile.disk workload of up to 8% with many clients on my
> test machine:
> 
>                      baseline               mb_optimize_scan
> Hmean     disk-1       2114.16 (   0.00%)     2099.37 (  -0.70%)
> Hmean     disk-41     87794.43 (   0.00%)    83787.47 *  -4.56%*
> Hmean     disk-81    148170.73 (   0.00%)   135527.05 *  -8.53%*
> Hmean     disk-121   177506.11 (   0.00%)   166284.93 *  -6.32%*
> Hmean     disk-161   220951.51 (   0.00%)   207563.39 *  -6.06%*
> Hmean     disk-201   208722.74 (   0.00%)   203235.59 (  -2.63%)
> Hmean     disk-241   222051.60 (   0.00%)   217705.51 (  -1.96%)
> Hmean     disk-281   252244.17 (   0.00%)   241132.72 *  -4.41%*
> Hmean     disk-321   255844.84 (   0.00%)   245412.84 *  -4.08%*
> 
> This is also causing a huge regression (time increased by a factor of
> 5 or so) when untarring an archive with lots of small files on some
> eMMC storage cards.
> 
> Fix the problem by making sure we try the goal group first.
> 

Yup, this is definitely a bug. We were never trying the goal group then,
except maybe for rotational devices (due to ac_groups_linear_remaining).

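For anyone following along, here is a minimal sketch of the ordering the
patch restores. This is only a toy model, not the actual mballoc code:
try_group(), GOAL_GROUP and NGROUPS are names made up for illustration.
The point is simply that the goal group is attempted before any linear
or optimized scan of the remaining groups.

/*
 * Minimal, self-contained sketch of the group-selection order the patch
 * restores.  NOT real mballoc code: try_group(), GOAL_GROUP and NGROUPS
 * are placeholders invented for this illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define NGROUPS    16
#define GOAL_GROUP  3	/* group near the file's inode / existing extent */

/* Pretend only the goal group can satisfy the request. */
static bool try_group(int group)
{
	return group == GOAL_GROUP;
}

static int allocate(void)
{
	/* The fix: always attempt the goal group first... */
	if (try_group(GOAL_GROUP))
		return GOAL_GROUP;

	/*
	 * ...and only then fall back to scanning other groups (the real
	 * code would pick them via the linear or mb_optimize_scan logic,
	 * modelled here as a plain linear walk for brevity).
	 */
	for (int group = 0; group < NGROUPS; group++)
		if (try_group(group))
			return group;

	return -1;	/* no group could satisfy the allocation */
}

int main(void)
{
	printf("allocated from group %d\n", allocate());
	return 0;
}
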
Looks right to me.
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
