Message-ID: <bb5f1ed84df1686aebdba5d60ab0e162@3xo.fr>
Date: Wed, 23 Apr 2025 18:28:47 +0200
From: Nicolas Baranger <nicolas.baranger@....fr>
To: Paulo Alcantara <pc@...guebit.com>
Cc: Christoph Hellwig <hch@...radead.org>, hch@....de, David Howells
 <dhowells@...hat.com>, netfs@...ts.linux.dev, linux-cifs@...r.kernel.org,
 linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org, Steve French
 <smfrench@...il.com>, Jeff Layton <jlayton@...nel.org>, Christian Brauner
 <brauner@...nel.org>
Subject: Re: [netfs/cifs - Linux 6.14] loop on file cat + file copy when files
 are on CIFS share

Hi Paulo

Thanks for your answer and for all the explanations and help.

I'm happy you found those 2 bugs and are starting to patch them.
Reading your answer, I'd like to recall that I had already found a bug 
in CIFS DIO starting from Linux 6.10 (when cifs started to use netfs to 
do its I/O), and it was fixed by David and Christoph.
Full story here: 
https://lore.kernel.org/all/14271ed82a5be7fcc5ceea5f68a10bbd@manguebit.com/T/

> I've noticed that you disabled caching with 'cache=none', is there any
> particular reason for that?


Yes, it's related to the use case described in the other bug report:
For the backup servers, I've got some KSMBD CIFS shares on which there 
are some 4TB+ sparse files (back-files) which are LUKS-encrypted and 
BTRFS-formatted.
The CIFS share is mounted on the servers, and each server attaches its 
own back-file as a block device and makes its backup inside this 
encrypted disk file.
For performance reasons, the disk files must use 4KB blocks and be 
attached on the servers with the losetup DIO option (plus the 4K 
sector-size option).
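For reference, the attach step looks roughly like this (paths and device
names are made up for illustration; `--direct-io` and `--sector-size`
are the util-linux losetup options for DIO and 4K sectors):

```shell
# Attach the sparse back-file from the CIFS mount as a loop device,
# with direct I/O enabled and a 4 KiB logical sector size.
LOOPDEV=$(losetup --find --show --direct-io=on --sector-size=4096 \
    /mnt/backups/server1-back.img)

# Open the LUKS container on the loop device, then mount the BTRFS
# filesystem that lives inside it.
cryptsetup open "$LOOPDEV" server1-backup
mount /dev/mapper/server1-backup /srv/backup
```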
When I use anything other than 'cache=none', the BTRFS filesystem on 
the back-file sometimes gets corrupted, and I also need to mount the 
BTRFS filesystem with 'space_cache=v2' to avoid filesystem corruption.
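As a sketch (the device-mapper name and mount point are hypothetical),
the BTRFS mount inside the back-file would be:

```shell
# Mount the BTRFS filesystem that lives inside the LUKS-encrypted
# back-file with the v2 free-space cache to avoid the corruption
# seen with the v1 space cache.
mount -t btrfs -o space_cache=v2 /dev/mapper/server1-backup /srv/backup
```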

> Have you also set rsize, wsize and bsize mount options?  If so, why?

After a lot of testing, the mount buffer values rsize=65536, 
wsize=65536, bsize=16777216 are the ones that provide the best 
performance with no corruption on the back-file filesystem, and with 
these options a ~2TB backup is possible in a few hours during the 
nightly ~1 AM -> ~5 AM timeframe.
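Put together (server name, share and credentials file are made up), the
CIFS mount line with those tested buffer values looks like:

```shell
# CIFS mount with caching disabled and the buffer sizes that tested
# best: 64 KiB read/write sizes and a 16 MiB I/O block size.
mount -t cifs //ksmbd-server/backups /mnt/backups \
    -o cache=none,rsize=65536,wsize=65536,bsize=16777216,credentials=/etc/cifs-creds
```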

For me it's important that kernel async DIO on netfs continues to work, 
as it's used by my whole production backup system (the transfer speed 
with DIO is between 10 and 25 times that without it).

I will try the patch "[PATCH] netfs: Fix setting of transferred bytes 
with short DIO reads", thanks

Let me know if you need further explanations,

Kind regards
Nicolas Baranger


On 2025-04-22 01:45, Paulo Alcantara wrote:

> Nicolas Baranger <nicolas.baranger@....fr> writes:
> 
>> If you need more traces or details on (both?) issues:
>> 
>> - 1) the infinite-loop issue during 'cat' or 'copy' since Linux 6.14.0
>> 
>> - 2) (don't know if it's related) the very high number of small TCP 
>> packets transmitted in the SMB transaction (more than a hundred) for a 
>> 5-byte file transfer under Linux 6.13.8
> 
> According to your mount options and network traces, cat(1) is 
> attempting to read 16M from the 'toto' file, in which case netfslib 
> will create 256 subrequests to handle 64K (rsize=65536) reads from the 
> 'toto' file.
> 
> The first 64K read at offset 0 succeeds and the server returns 5 
> bytes; the client then sets NETFS_SREQ_HIT_EOF to indicate that this 
> subrequest hit the EOF.  The next subrequests will still be processed 
> by netfslib and sent to the server, but they all fail with 
> STATUS_END_OF_FILE.
> 
> So, the problem is with short DIO reads in netfslib, which are not 
> being handled correctly.  It is returning a fixed number of bytes read 
> to every read(2) call in your cat command, 16711680 bytes, which is 
> the offset of the last subrequest.  This makes cat(1) retry forever, 
> as netfslib is failing to return the correct number of bytes read, 
> including EOF.
> 
> While testing a potential fix, I also found other problems with DIO in
> cifs.ko, so I'm working with Dave to get the proper fixes for both
> netfslib and cifs.ko.
> 
> I've noticed that you disabled caching with 'cache=none', is there any
> particular reason for that?
> 
> Have you also set rsize, wsize and bsize mount options?  If so, why?
> 
> If you want to keep 'cache=none', then a possible workaround for you
> would be making rsize and wsize always greater than bsize.  The default
> values (rsize=4194304,wsize=4194304,bsize=1048576) would do it.
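(A quick shell sketch to check the arithmetic in the explanation above:
a 16 MiB block split into 64 KiB subrequests gives 256 of them, and the
offset of the last one is exactly the 16711680 bytes being returned.)

```shell
# Sanity-check the subrequest arithmetic from the explanation above.
bsize=$((16 * 1024 * 1024))   # bsize=16777216, the 16 MiB I/O block
rsize=$((64 * 1024))          # rsize=65536, one 64 KiB read subrequest

echo $((bsize / rsize))                  # 256 subrequests per 16 MiB read
echo $(( (bsize / rsize - 1) * rsize ))  # 16711680, the bogus byte count
```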
