Message-ID: <ZDuNqQgpHUw+gi9G@infradead.org>
Date: Sat, 15 Apr 2023 22:54:49 -0700
From: Christoph Hellwig <hch@...radead.org>
To: "Darrick J. Wong" <djwong@...nel.org>
Cc: Christoph Hellwig <hch@...radead.org>, Miklos Szeredi <miklos@...redi.hu>,
	Bernd Schubert <bschubert@....com>, axboe@...nel.dk,
	io-uring@...r.kernel.org, linux-ext4@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-xfs@...r.kernel.org,
	dsingh@....com
Subject: Re: [PATCH 1/2] fs: add FMODE_DIO_PARALLEL_WRITE flag

On Fri, Apr 14, 2023 at 08:36:12AM -0700, Darrick J. Wong wrote:
> IIUC uring wants to avoid the situation where someone sends 300 writes
> to the same file, all of which end up in background workers, and all of
> which then contend on exclusive i_rwsem. Hence it has some hashing
> scheme that executes io requests serially if they hash to the same value
> (which iirc is the inode number?) to prevent resource waste.
>
> This flag turns off that hashing behavior on the assumption that each of
> those 300 writes won't serialize on the other 299 writes, hence it's ok
> to start up 300 workers.
>
> (apologies for precoffee garbled response)

It might be useful if someone (Jens?) could clearly document the
assumptions for this flag.
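[Editor's note: to make the behavior under discussion concrete, below is a minimal kernel-style C sketch of how a filesystem might opt in to FMODE_DIO_PARALLEL_WRITE at open time and how io_uring's async-work preparation could then skip its per-inode serialization hash. The helper and field names (io_wq_hash_work(), hash_reg_file, the example open routine) are assumptions drawn from the io-wq code as generally understood around this thread, not a quote of the patch series itself.]

	/*
	 * Sketch only, not the merged implementation.
	 *
	 * Filesystem side: a filesystem whose direct I/O writes do not all
	 * contend on an exclusive i_rwsem can advertise that at open time.
	 * "example_file_open" is a hypothetical open routine.
	 */
	static int example_file_open(struct inode *inode, struct file *file)
	{
		file->f_mode |= FMODE_DIO_PARALLEL_WRITE;
		return generic_file_open(inode, file);
	}

	/*
	 * io_uring side: when queueing a write to a background worker,
	 * normally requests against the same regular file are hashed
	 * (keyed on the inode) so they run serially on one io-wq worker
	 * instead of spawning hundreds of workers that would all block on
	 * the same lock. If the file promises parallel direct I/O writes,
	 * that hashing can be skipped.
	 */
	static void io_prep_async_work_sketch(struct io_kiocb *req,
					      const struct io_issue_def *def)
	{
		if (req->file && (req->flags & REQ_F_ISREG)) {
			bool should_hash = def->hash_reg_file;

			if (should_hash && (req->file->f_flags & O_DIRECT) &&
			    (req->file->f_mode & FMODE_DIO_PARALLEL_WRITE))
				should_hash = false;

			if (should_hash)
				io_wq_hash_work(&req->work,
						file_inode(req->file));
		}
	}

[End editor's note.]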