Message-ID: <l2ckvuxugdhoq3wf3s7hufwn7q3togt7tususj23te4fc75h5d@itemgw27odar>
Date: Tue, 6 May 2025 13:33:36 +0200
From: Jan Kara <jack@...e.cz>
To: Zhang Yi <yi.zhang@...weicloud.com>
Cc: Jan Kara <jack@...e.cz>, Matthew Wilcox <willy@...radead.org>,
	Liebes Wang <wanghaichi0403@...il.com>, ojaswin@...ux.ibm.com,
	Theodore Ts'o <tytso@....edu>, linux-fsdevel@...r.kernel.org,
	syzkaller@...glegroups.com,
	Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: kernel BUG in zero_user_segments

On Tue 06-05-25 10:25:06, Zhang Yi wrote:
> On 2025/5/1 19:19, Jan Kara wrote:
> > On Wed 30-04-25 04:14:32, Matthew Wilcox wrote:
> >> On Tue, Apr 29, 2025 at 03:55:18PM +0800, Zhang Yi wrote:
> >>> After debugging, I found that this problem is caused by punching a hole
> >>> with an offset larger than max_end on a corrupted ext4 inode whose
> >>> i_size is larger than maxbytes. That results in a negative length in
> >>> truncate_inode_partial_folio(), which triggers this BUG.
> >>
> >> It seems to me like we're asking for trouble when we allow an inode with
> >> an i_size larger than max_end to be instantiated. There are probably
> >> other places which assume it is smaller than max_end. We should probably
> >> decline to create the bad inode in the first place?
> >
> > Indeed, a somewhat less quirky fix could be to make ext4_max_bitmap_size()
> > return a limit that is one block smaller. Something like:
> >
> > 	/* Compute how many blocks we can address by block tree */
> > 	res += ppb;
> > 	res += ppb * ppb;
> > 	res += ((loff_t)ppb) * ppb * ppb;
> > +	/*
> > +	 * Hole punching assumes it can map the block past end of hole to
> > +	 * tree offsets
> > +	 */
> > +	res -= 1;
> > 	/* Compute how many metadata blocks are needed */
> > 	meta_blocks = 1;
> > 	meta_blocks += 1 + ppb;
> >
> > The slight caveat is that in theory there could be filesystems out there
> > with files that large, and then we'd stop allowing access to them. But I
> > guess the chances are so low that it's probably worth trying.
> >
>
> Hmm, I suppose this approach could pose some risks to our legacy products,
> and it makes me feel uneasy. Personally, I am more inclined toward the
> current solution, unless we decide to fix ext4_ind_remove_space()
> directly. :)

OK. I'm just curious: are you using indirect-block based inodes and using
them up to the current s_bitmap_maxbytes size? :)

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
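
For readers reconstructing the failure mode discussed above, here is a minimal,
self-contained userspace sketch (not the kernel code; all names and values are
illustrative stand-ins) of the arithmetic: a corrupted i_size beyond the
indirect-block limit lets a punch-hole range start past max_end, so clamping
the end of the range produces a negative length.

#include <stdio.h>

int main(void)
{
	/* Made-up values that only mimic the relationship in the report:
	 * a corrupted indirect-block inode whose i_size exceeds the
	 * bitmap-addressable limit (a stand-in for s_bitmap_maxbytes). */
	long long max_end = 4LL << 30;        /* hypothetical addressable limit */
	long long i_size  = max_end + 8192;   /* corrupted: past the limit */

	/* A hole punched near i_size starts beyond max_end... */
	long long offset = i_size - 4096;
	long long end    = i_size;

	/* ...so clamping the end of the range to max_end leaves end < offset,
	 * i.e. a negative length handed to the partial-folio truncation path. */
	if (end > max_end)
		end = max_end;

	printf("offset=%lld end=%lld length=%lld\n", offset, end, end - offset);
	return 0;
}

Running it prints a negative length, which, per the report above, is the
condition that ends up tripping the BUG in zero_user_segments() once such a
range reaches truncate_inode_partial_folio().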