Message-ID: <08d11895-cc40-43da-0437-09d3a831b27b@fastmail.fm>
Date: Fri, 17 Jun 2022 11:25:49 +0200
From: Bernd Schubert <bernd.schubert@...tmail.fm>
To: Miklos Szeredi <miklos@...redi.hu>,
Dharmendra Singh <dharamhans87@...il.com>
Cc: Vivek Goyal <vgoyal@...hat.com>, linux-fsdevel@...r.kernel.org,
fuse-devel <fuse-devel@...ts.sourceforge.net>,
linux-kernel@...r.kernel.org, Bernd Schubert <bschubert@....com>,
Dharmendra Singh <dsingh@....com>
Subject: Re: [PATCH v5 1/1] Allow non-extending parallel direct writes on the
same file.
Hi Miklos,
On 6/17/22 09:36, Miklos Szeredi wrote:
> On Fri, 17 Jun 2022 at 09:10, Dharmendra Singh <dharamhans87@...il.com> wrote:
>
>> This patch relaxes the exclusive lock for direct non-extending writes
>> only. File size extending writes might not need the lock either,
>> but we are not entirely sure if there is a risk to introduce any
>> kind of regression. Furthermore, benchmarking with fio does not
>> show a difference between patch versions that take on file size
>> extension a) an exclusive lock and b) a shared lock.
>
> I'm okay with this, but ISTR Bernd noted a real-life scenario where
> this is not sufficient. Maybe that should be mentioned in the patch
> header?
The above comment is actually directly from me.

We didn't check whether fio extends the file before the runs, but even
if it did, my current thinking is that where we previously serialized
all n threads, we now get an alternation of
- n-1 threads running in parallel + 1 thread waiting
- n-1 threads blocked + 1 thread running
(the shared vs. exclusive locking behind this is sketched below).
I think we will come back to this anyway if we continue to see slow
IO with MPIIO. Right now we want to get our patches merged first and
will then create an updated module for RHEL8 (+derivatives) customers.
Our benchmark machines are also running plain RHEL8 kernels - without
backporting the module first we don't know yet what the actual impact
on things like io500 will be.
Shall we still extend the commit message, or are we good to go?
Thanks,
Bernd