Date:   Sat, 13 Oct 2018 21:25:09 +0000
From:   bugzilla-daemon@...zilla.kernel.org
To:     linux-ext4@...r.kernel.org
Subject: [Bug 200681] [inline_data] read() does not see what write() has just
 written through different FD in the same thread

https://bugzilla.kernel.org/show_bug.cgi?id=200681

--- Comment #8 from Theodore Tso (tytso@....edu) ---
Thanks for the repro.  It looks like the critical factor is not the size of
the filesystem, but the blocksize.  It just so happens that using a file
system of size 8M causes mke2fs to default to a 1k block size.  This can be
seen by testing with a file system created by:

mke2fs -Fq -t ext4 -O inline_data -I 1024 -b 1024 /dev/vdc 

I've done some initial investigation, and something really strange is going on.
 If an fsync(2) is forced after the writes, the problem goes away.  By
changing the size of the reads, it's the second read which returns all
zeros --- until the first read is made larger than 16k, at which point it
looks like it's the last 4k page which is all zeros.
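For reference, here is a minimal C sketch of the access pattern in the bug
title: a write through one fd, followed immediately by a read through a
second fd in the same thread, with no fsync in between.  The path, buffer
size, and single large read are illustrative only and may not match the
reporter's actual reproducer:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SIZE (20 * 1024)  /* large enough to force inline_data -> extents */

int main(void)
{
        const char *path = "/mnt/ext4test/testfile"; /* example mount point */
        static char wbuf[SIZE], rbuf[SIZE];

        memset(wbuf, 'A', SIZE);

        int wfd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
        if (wfd < 0 || write(wfd, wbuf, SIZE) != SIZE) {
                perror("write side");
                return 1;
        }
        /* deliberately no fsync(wfd); an fsync here hides the problem */

        int rfd = open(path, O_RDONLY);        /* second fd, same thread */
        ssize_t n = (rfd < 0) ? -1 : read(rfd, rbuf, SIZE);
        if (n != SIZE || memcmp(wbuf, rbuf, SIZE) != 0)
                printf("mismatch: read %zd bytes, data differs from write\n", n);
        else
                printf("ok: read back %zd bytes matching the write\n", n);
        return 0;
}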

Unfortunately, it's not clear what is happening.  The best I can say at this
point is that the problem appears to be related to inline_data: when a large
write forces a conversion from inline_data to a normal file, and the
blocksize is != the page size, something goes very wrong.

To date, I've only been testing inline_data with a blocksize of 4k, and
that's probably why I haven't seen any problems like this.

I'm curious how you found this bug; were you deliberately using a 1k block
size, or were you trying to use inline_data with very small file systems?

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
