Message-ID: <76bedae6-22ea-4abc-8c06-b424ceb39217@t-8ch.de>
Date: Tue, 20 Sep 2022 12:33:31 +0200
From: Thomas Weißschuh <thomas@...ch.de>
To: linux-fsdevel@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
linux-nfs@...r.kernel.org, thomas.weissschuh@...deus.com
Subject: O_LARGEFILE / EOVERFLOW on tmpfs / NFS
Hi everybody,
it seems there is some inconsistency in how large files opened *without*
O_LARGEFILE are handled on different filesystems.
On ext4/btrfs/xfs a large file opened without O_LARGEFILE results in an
EOVERFLOW error being reported (as documented by open(2) and open(3p)).
On tmpfs/NFS the file is opened successfully but the values returned for
lseek() are bogus.
(See the reproducer attached to this mail.)
This has been reproduced on 5.19.8 but the sources look the same on current
torvalds/master.
Is this intentional? To me it seems this should fail with EOVERFLOW everywhere.
Looking at the sources, the O_LARGEFILE flag is checked in generic_file_open()
but not all filesystems call this function.
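(The check itself is tiny; as a userspace model of what generic_file_open()
does — with MAX_NON_LFS being 2^31 - 1 and the O_LARGEFILE bit value taken
from x86, both of which are assumptions baked into this sketch:)

```c
#include <assert.h>
#include <errno.h>

/* Userspace model of the check in generic_file_open() (fs/open.c). */
#define MAX_NON_LFS	((1UL << 31) - 1)	/* largest non-LFS offset */
#define O_LARGEFILE_BIT	0100000			/* x86 value; arch-dependent */

/* Returns -EOVERFLOW if an open with these flags of a file of the
 * given size should be refused, 0 otherwise. */
static int check_open(unsigned int flags, unsigned long long size)
{
	if (!(flags & O_LARGEFILE_BIT) && size > MAX_NON_LFS)
		return -EOVERFLOW;
	return 0;
}
```

Filesystems that don't route their open through generic_file_open() (or do
an equivalent check) simply never run this, which would explain the tmpfs/NFS
behaviour.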
If this is a bug would it make sense to hoist this check into the VFS layer so
not all filesystems have to call this manually?
Another question would be about backwards compatibility, because fixing it
would prevent applications from opening files they could open before.
On the other hand they could have experienced silent data corruption before.
Thanks,
Thomas
View attachment "test.c" of type "text/plain" (775 bytes)