Message-ID: <20250729093558.GG222315@ZenIV>
Date: Tue, 29 Jul 2025 10:35:58 +0100
From: Al Viro <viro@...iv.linux.org.uk>
To: Edward Adam Davis <eadavis@...com>
Cc: hirofumi@...l.parknet.co.jp, linkinjeon@...nel.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	sj1557.seo@...sung.com,
	syzbot+d3c29ed63db6ddf8406e@...kaller.appspotmail.com,
	syzkaller-bugs@...glegroups.com
Subject: Re: [PATCH V2] fat: Prevent the race of read/write the FAT32 entry

On Tue, Jul 29, 2025 at 02:17:10PM +0800, Edward Adam Davis wrote:
> syzbot reports data-race in fat32_ent_get/fat32_ent_put. 
> 
> 	CPU0(Task A)			CPU1(Task B)
> 	====				====
> 	vfs_write
> 	new_sync_write
> 	generic_file_write_iter
> 	fat_write_begin
> 	block_write_begin		vfs_statfs
> 	fat_get_block			statfs_by_dentry
> 	fat_add_cluster			fat_statfs
> 	fat_ent_write			fat_count_free_clusters
> 	fat32_ent_put			fat32_ent_get
> 
> Task A's write operation on CPU0 and Task B's read operation on CPU1 occur
> simultaneously, generating a race condition.
> 
> Add READ/WRITE_ONCE to solve the race condition that occurs when accessing
> the FAT32 entry.
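
[ For readers without the tree in front of them, the accessors in question
boil down to roughly the following (paraphrased from fs/fat/fatent.c from
memory, so take the details with a grain of salt); going by the patch
description, V2 presumably wraps the marked load and store in
READ_ONCE()/WRITE_ONCE():

	static int fat32_ent_get(struct fat_entry *fatent)
	{
		/*
		 * One aligned 32-bit load; presumably the part V2 turns
		 * into le32_to_cpu(READ_ONCE(*fatent->u.ent32_p)).
		 */
		int next = le32_to_cpu(*fatent->u.ent32_p) & 0x0fffffff;

		if (next >= BAD_FAT32)
			next = FAT_ENT_EOF;
		return next;
	}

	static void fat32_ent_put(struct fat_entry *fatent, int new)
	{
		/* keep the reserved top nibble of the on-disk entry */
		new |= le32_to_cpu(*fatent->u.ent32_p) & ~0x0fffffff;
		/* one aligned 32-bit store; presumably WRITE_ONCE() in V2 */
		*fatent->u.ent32_p = cpu_to_le32(new);
		mark_buffer_dirty_inode(fatent->bhs[0], fatent->fat_inode);
	}
]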

	Solve it in which sense?  fat32_ent_get() and fat32_ent_put()
are already atomic wrt each other; neither this nor your previous
variant changes anything whatsoever.  And if you are talking about the
results of *multiple* fat32_ent_get(), with some assumptions made by
fat_count_free_clusters() that somehow get screwed by the modifications
from fat_add_cluster(), your patch does not prevent any of that (not
that you explained what kind of assumptions those would be).
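
[ To make that second point concrete - a free-cluster count is, in essence,
a walk over all the entries, and no per-entry annotation keeps a concurrent
fat32_ent_put() from landing between iterations.  Grossly simplified, with
entry_at() a made-up helper standing in for the real buffer_head walk:

	static int count_free_schematic(struct msdos_sb_info *sbi)
	{
		int entry, free = 0;

		for (entry = FAT_START_ENT; entry < sbi->max_cluster; entry++) {
			/* atomic for this one entry, READ_ONCE() or not... */
			if (fat32_ent_get(entry_at(sbi, entry)) == FAT_ENT_FREE)
				free++;
			/*
			 * ...but a concurrent fat32_ent_put() can hit any
			 * entry right here; the final count mixes the old
			 * and new states of the table either way.
			 */
		}
		return free;
	}
]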

	Long story short - accesses to individual entries are already
atomic wrt each other; the fact that they happen simultaneously _might_
be a symptom of insufficient serialization, but neither version of your
patch resolves that in any way - it just prevents the tool from reporting
its suspicions.

	It does not give fat_count_free_clusters() a stable state of
the entire table, assuming it needs one.  It might, at that - I haven't
looked into that code since way back.  But unless I'm missing something,
the only thing your patch does is make your (rather blunt) tool STFU.
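
[ And if a stable view of the whole table does turn out to matter, it would
have to come from serializing the scan against allocations for its entire
duration, not from annotating individual loads and stores.  Schematically,
reusing the made-up helper above, with "fat_alloc_lock" a purely
hypothetical name for whatever already serializes allocations:

	mutex_lock(&sbi->fat_alloc_lock);
	free = count_free_schematic(sbi);
	mutex_unlock(&sbi->fat_alloc_lock);
]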

	If there is a race, explain what sequence of events leads to
incorrect behaviour and explain why your proposed change prevents that
incorrect behaviour.

	Note that if that behaviour is "amount of free space reported
by statfs(2) depends upon how far the ongoing write(2) got, it
is *not* incorrect - that's exactly what the userland has asked for.
If it's "statfs(2) gets confused into reporting an amount of free space
that wouldn't have been accurate for any moment of time (or, worse yet,
crashes, etc.)" - yes, that would be a problem, but it could not be
solved by preventing simultaneous access to *single* entries, if it
happens at all.
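
[ The first case is easy to see from userspace; a toy like the one below
(paths made up, error handling skipped) will happily print different
f_bfree values depending on how far the concurrent writer got, and that
is the expected behaviour, not a bug:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <fcntl.h>
	#include <sys/types.h>
	#include <sys/vfs.h>
	#include <sys/wait.h>

	int main(void)
	{
		if (fork() == 0) {
			/* child: grow a file on the (made-up) vfat mount */
			char buf[64 * 1024];
			int fd = open("/mnt/fat/grow",
				      O_CREAT | O_WRONLY | O_TRUNC, 0644);

			memset(buf, 'x', sizeof(buf));
			for (int i = 0; fd >= 0 && i < 4096; i++)
				write(fd, buf, sizeof(buf));
			exit(0);
		}

		for (int i = 0; i < 20; i++) {
			/* parent: sample statfs(2) while the write runs */
			struct statfs st;

			if (statfs("/mnt/fat", &st) == 0)
				printf("free blocks: %llu\n",
				       (unsigned long long)st.f_bfree);
			usleep(100 * 1000);
		}
		wait(NULL);
		return 0;
	}
]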
