Date:   Tue, 30 Jun 2020 18:03:55 +0530
From:   AMIT SAHRAWAT <a.sahrawat@...sung.com>
To:     OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
        Anupam Aggarwal <anupam.al@...sung.com>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE:(2) [PATCH] fs: fat: add check for dir size in fat_calc_dir_size

There are many implementations that don't follow the spec strictly. And
when I tested in the past, Windows also allowed reading a directory
beyond that limit. I can't recall, though, whether that was a real-world
case or just a test case.
>> Thanks Ogawa. Yes, there are many implementations, presumably each with its own variants.
But using the standard Linux version on these systems, with such a USB drive connected, introduces issues (importantly because users also use these drives on Windows).
I am not sure whether this is something new on the Windows side.
But extending a directory beyond the limit clearly causes a regression in FAT usage on Linux.
It makes FAT-backed storage virtually unresponsive for minutes in these cases,
and importantly keeps putting pressure on memory due to the growing number of buffer heads (already a known issue with the FAT fs).
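
To put rough, illustrative numbers on that pressure (an assumed
corrupted size, not figures measured from the failing disk), here is a
back-of-the-envelope sketch in plain C, assuming a corrupted i_size of
4 GiB and 4 KiB logical blocks:

/* Back-of-the-envelope sketch (assumed numbers, not measured on the
 * failing disk): how many 32-byte entries and logical blocks a single
 * full-directory scan touches when i_size is corrupted to 4 GiB. */
#include <stdio.h>

int main(void)
{
	unsigned long long dir_size = 4ULL << 30;     /* hypothetical corrupted i_size */
	unsigned long long entries  = dir_size >> 5;  /* 32-byte directory entries */
	unsigned long long blocks   = dir_size >> 12; /* 4 KiB logical blocks */

	/* Each failed lookup scans all of these, and every block read
	 * pins a buffer head. */
	printf("%llu entries, %llu blocks per scan\n", entries, blocks);
	return 0;
}

This prints "134217728 entries, 1048576 blocks per scan", repeated for
every name 'ls -lR' resolves, which is consistent with the minutes-long
stalls described above.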
 
So if there is no strong reason to apply the limit, I don't think it is
good to impose one.
>> The reason for us to share this is because of the unresponsive behaviour observed with FAT fs on our systems.
This is not a new issue, we have been observing this for quite sometime (may be around 1year+).
Finally, we got hold of disk which is making this 100% reproducible.
We thought of applying this to the mainline, as our FAT is aligned with main kernel.

(BTW, the current code should already detect corruption that causes an
infinite loop.)
>>
No, no such error was reported on our side.
We had to trace the code to find where a simple 'ls -lR' got stuck.

Thanks & Regards,
Amit Sahrawat
 
 
--------- Original Message ---------
Sender : OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Date : 2020-06-30 16:38 (GMT+5:30)
Title : Re: [PATCH] fs: fat: add check for dir size in fat_calc_dir_size
 
Anupam Aggarwal <anupam.al@...sung.com> writes:
 
> The maximum directory size in the FAT filesystem is FAT_MAX_DIR_SIZE
> (2097152 bytes). It is possible that, due to corruption, the directory
> size calculated in fat_calc_dir_size() is greater than FAT_MAX_DIR_SIZE,
> i.e. it can be in the GBs, so directory traversal can take a long time.
> For example, when the command "ls -lR" is executed on a corrupted
> FAT-formatted USB drive, fat_search_long() will look up a filename from
> position 0 to the end of the corrupted directory size; multiple such
> lookups lead to a long directory traversal.
>
> Add a sanity check for the directory size in fat_calc_dir_size() and
> return an -EIO error, which prevents lookups in the corrupted directory.
>
> Signed-off-by: Anupam Aggarwal <anupam.al@...sung.com>
> Signed-off-by: Amit Sahrawat <a.sahrawat@...sung.com>
 
There are many implementations that don't follow the spec strictly. And
when I tested in the past, Windows also allowed reading a directory
beyond that limit. I can't recall, though, whether that was a real-world
case or just a test case.
 
So if there is no strong reason to apply the limit, I don't think it is
good to impose one. (BTW, the current code should already detect
corruption that causes an infinite loop.)
 
Thanks.
 
> ---
>  fs/fat/inode.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/fs/fat/inode.c b/fs/fat/inode.c
> index a0cf99d..9b2e81e 100644
> --- a/fs/fat/inode.c
> +++ b/fs/fat/inode.c
> @@ -490,6 +490,13 @@ static int fat_calc_dir_size(struct inode *inode)
>                  return ret;
>          inode->i_size = (fclus + 1) << sbi->cluster_bits;
>  
> +        if (i_size_read(inode) > FAT_MAX_DIR_SIZE) {
> +                fat_fs_error(inode->i_sb,
> +                             "%s corrupted directory (invalid size %lld)\n",
> +                             __func__, i_size_read(inode));
> +                return -EIO;
> +        }
> +
>          return 0;
>  }
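
For reference, the limit follows from the FAT spec's cap of 65536
directory entries of 32 bytes each: 65536 * 32 = 2097152 bytes. Below is
a minimal userspace sketch of the same bound check, assuming the
kernel's constants FAT_MAX_DIR_ENTRIES and MSDOS_DIR_BITS (the real
change is the kernel patch above):

/* A minimal userspace sketch, assuming the kernel's constants:
 * the FAT spec caps a directory at 65536 entries of 32 bytes each,
 * so FAT_MAX_DIR_SIZE = 65536 << 5 = 2097152 bytes. */
#include <stdio.h>

#define MSDOS_DIR_BITS      5	/* log2 of a 32-byte directory entry */
#define FAT_MAX_DIR_ENTRIES 65536
#define FAT_MAX_DIR_SIZE    (FAT_MAX_DIR_ENTRIES << MSDOS_DIR_BITS)

/* Mirrors the check the patch adds after the cluster-chain walk. */
static int check_dir_size(long long size)
{
	if (size > FAT_MAX_DIR_SIZE) {
		fprintf(stderr, "corrupted directory (invalid size %lld)\n", size);
		return -5; /* -EIO */
	}
	return 0;
}

int main(void)
{
	printf("limit = %d bytes\n", FAT_MAX_DIR_SIZE);             /* 2097152 */
	printf("at limit: %d\n", check_dir_size(FAT_MAX_DIR_SIZE)); /* 0 */
	printf("4 GiB:    %d\n", check_dir_size(4LL << 30));        /* -5 */
	return 0;
}

A size at the limit passes; any cluster-chain walk that yields a
multi-GB size fails fast with -EIO instead of being scanned entry by
entry.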
 
-- 
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
 
