Date:   Thu, 22 Sep 2022 14:02:07 +0200
From:   Jan Kara <jack@...e.cz>
To:     Boyang Xue <bxue@...hat.com>
Cc:     linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        Lukas Czerner <lczerner@...hat.com>
Subject: Re: [bug report] disk quota exceed after multiple write/delete loops

Hello!

On Tue 23-08-22 12:16:46, Boyang Xue wrote:
> On the latest kernel 6.0.0-0.rc2, I find the user quota limit in an
> ext4 mount is unstable, that after several successful "write file then
> delete" loops, it will finally fail with "Disk quota exceeded". This
> bug can be reproduced on at least kernel-6.0.0-0.rc2 and
> kernel-5.14.0-*, but can't be reproduced on kernel-4.18.0 based RHEL8
> kernel.

<snip reproducer> 

> Run log on kernel-6.0.0-0.rc2
> ```
> (...skip successful Run#[1-2]...)
> *** Run#3 ***
> --- Quota before writing file ---
> Disk quotas for user quota_test_user1 (uid 1003):
>      Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
>      /dev/loop0       0  200000  300000               0    2000    3000
> --- ---
> dd: error writing '/mntpt/test_300m': Disk quota exceeded
> 299997+0 records in
> 299996+0 records out
> 307195904 bytes (307 MB, 293 MiB) copied, 1.44836 s, 212 MB/s

So this shows that we failed to allocate the last filesystem block. I
suspect this happens because the file gets allocated from several free space
extents and so one extra indirect tree block needs to be allocated (or
something like that). To verify this, you can check the created file with
"filefrag -v".

Anyway, I don't think it is quite correct to assume the filesystem can fit
300000 data blocks within a 300000-block quota, because the metadata
overhead gets accounted into the quota as well and the user has no direct
control over that. So you should probably give the filesystem some slack
space in your tests to cover the metadata overhead.
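
For example (just a sketch; the extra 1000 blocks are an arbitrary margin,
not a computed value):

    # Leave some headroom above the 300000 data blocks for metadata
    # charged to the user (e.g. extent tree blocks).
    setquota -u quota_test_user1 200000 301000 2000 3000 /dev/loop0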

> --- Quota after writing file ---
> Disk quotas for user quota_test_user1 (uid 1003):
>      Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
>      /dev/loop0  300000* 200000  300000   7days       1    2000    3000
> --- ---
> --- Quota after deleting file ---
> Disk quotas for user quota_test_user1 (uid 1003):
>      Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
>      /dev/loop0       0  200000  300000               0    2000    3000
> --- ---
> ```

								Honza


-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
