Message-ID: <87zh8gct29.fsf@mail.parknet.co.jp>
Date:   Sat, 04 Jul 2020 04:11:26 +0900
From:   OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
To:     Anupam Aggarwal <anupam.al@...sung.com>
Cc:     AMIT SAHRAWAT <a.sahrawat@...sung.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size

Anupam Aggarwal <anupam.al@...sung.com> writes:

>>So what was the root cause of the slowness on the big directory?
>
> The problem happened on a FAT32-formatted 32GB USB 3.0 pendrive which
> has 20GB of data; the cluster size is 16KB. It has one corrupted
> directory whose size, as calculated by fat_calc_dir_size(), is
> 1146896384 bytes, i.e. 1.06GB.
>
> When traversal of the corrupted directory starts, the directory
> entries look corrupted and lookup fails for them. Some directory
> entry names have the format abc/xyz; the following are a few of the
> observed directory entry names:

[...]

> During the search for a single name in fat_search_long(), the whole
> corrupted directory of size 1.06GB is traversed, which takes around
> 230 to 240 seconds and finally ends up returning ENOENT.
> 
> Multiple lookups in the corrupted directory then make "ls -lR"
> never-ending; e.g. in an overnight test of running "ls -lR" on the
> USB drive with the corrupted directory, around 200 such lookups took
> 14 hours and "ls -lR" was still running.
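
For scale: at 32 bytes per on-disk entry, the 1146896384-byte chain
(70001 clusters of 16KB) holds 1146896384/32 = 35840512 entries, and
every failed lookup touches all of them. A minimal user-space sketch of
that scan pattern, assuming hypothetical read_cluster()/next_cluster()
helpers (this shows the cost model only; it is not the kernel's
fat_search_long()):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define DIR_ENTRY_SIZE      32             /* on-disk FAT dirent */
#define CLUSTER_SIZE        (16 * 1024)    /* 16KB, as in the report */
#define ENTRIES_PER_CLUSTER (CLUSTER_SIZE / DIR_ENTRY_SIZE)

/* Hypothetical helpers, not the kernel API: read one directory
 * cluster into buf; follow the FAT chain (0 = end-of-chain). */
bool read_cluster(uint32_t clus, uint8_t buf[CLUSTER_SIZE]);
uint32_t next_cluster(uint32_t clus);

/* Linear lookup of an 8.3 name: a miss walks the entire chain,
 * i.e. all 35840512 entries of the corrupted directory. */
bool lookup(uint32_t start, const uint8_t name[11])
{
    uint8_t buf[CLUSTER_SIZE];

    for (uint32_t c = start; c; c = next_cluster(c)) {
        if (!read_cluster(c, buf))
            return false;
        for (unsigned i = 0; i < ENTRIES_PER_CLUSTER; i++) {
            const uint8_t *de = buf + i * DIR_ENTRY_SIZE;
            if (de[0] == 0x00)        /* end-of-directory marker */
                return false;
            if (de[0] == 0xe5)        /* deleted entry, skip */
                continue;
            if (!memcmp(de, name, 11))
                return true;          /* 8.3 name match */
        }
    }
    return false;
}

Reading 1.06GB per miss is consistent with the reported 230 to 240
seconds at roughly 5MB/s of effective throughput from the pendrive.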

This sounds like a totally corrupted FAT image, and the directory may
have a non-simple loop (e.g. there is a hardlink of a directory).

If so, I'm not sure we can detect it without a heavyweight check. Of
course, the user should run fsck before mounting anyway. However, if
the fs can detect the corruption and stop early, that would be better.
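
To make "heavyweight" concrete: a loop within a single cluster chain
can be caught with Floyd's tortoise-and-hare while walking the FAT, as
in the sketch below (next_cluster() is a hypothetical helper, not a
kernel API). The cross-directory case mentioned above, a "hardlink" of
a directory, is harder: it needs a visited-set over every cluster
reachable from the root, which is what makes the full check expensive.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: follow the FAT chain one step; returns 0 at
 * end-of-chain. On a corrupted image a chain may loop forever. */
uint32_t next_cluster(uint32_t clus);

/* Floyd's tortoise-and-hare over one directory's cluster chain. */
bool chain_has_loop(uint32_t start)
{
    uint32_t slow = start;
    uint32_t fast = start;

    while (fast) {
        slow = next_cluster(slow);       /* advance one step */
        fast = next_cluster(fast);       /* advance two steps */
        if (fast)
            fast = next_cluster(fast);
        if (fast && fast == slow)
            return true;                 /* chain revisits a cluster */
    }
    return false;                        /* clean end-of-chain */
}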

BTW, if you run fsck, are the corrupted directories and the issue at
least gone?

Anyway, fsck would be the main way. On the other hand, if we want to
add a mitigation for this corruption, we would need to see much more
detail about it. Can you put the corrupted image somewhere accessible
(only the metadata is needed) so we can reproduce it?

> The total number of directory entries in the corrupted directory of
> size 1146896384 bytes is 1146896384/32 = 35840512, so a lookup that
> scans 35840512 entries is prohibitively expensive; therefore we have
> put a size check on the directory in fat_calc_dir_size() and
> prevented the traversal by returning -EIO.
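
For reference, the FAT specification caps a directory at 65536 entries
(65536 * 32 bytes = 2MiB), so one plausible shape for the check
described above is the sketch below; the names are hypothetical, not
the patch's actual code:

#include <errno.h>

#define FAT_DIR_ENTRY_SIZE   32
#define FAT_MAX_DIR_ENTRIES  65536   /* FAT spec: max entries per dir */
#define FAT_MAX_DIR_SIZE     \
    ((long long)FAT_MAX_DIR_ENTRIES * FAT_DIR_ENTRY_SIZE)

/* 'size' is the byte count from walking the directory's cluster
 * chain, as fat_calc_dir_size() computes it. */
static int check_dir_size(long long size)
{
    if (size > FAT_MAX_DIR_SIZE)      /* > 2MiB cannot be a valid dir */
        return -EIO;                  /* fail before any traversal */
    return 0;
}

The corrupted directory's 1146896384 bytes exceed the 2MiB cap by a
factor of about 547, so the lookup would fail immediately instead of
after a 230-second scan.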
> 
> While browsing the corrupted directory (\CorruptedDIR) on a Windows
> 10 PC, 2623 directory entries were listed and their timestamps were
> wrong.

What happens if you recursively traverse the directories on Windows?
Does this issue happen on Windows too?

Thanks.
-- 
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
