Message-ID: <CAGudoHHWP2o+sqih1Ra4WVAW4Fvoq9VSufRA6j7Ex4F1RJ66sw@mail.gmail.com>
Date: Thu, 27 Feb 2025 17:34:43 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: "Sapkal, Swapnil" <swapnil.sapkal@....com>
Cc: Oleg Nesterov <oleg@...hat.com>, Manfred Spraul <manfred@...orfullife.com>,
Linus Torvalds <torvalds@...ux-foundation.org>, Christian Brauner <brauner@...nel.org>,
David Howells <dhowells@...hat.com>, WangYuli <wangyuli@...ontech.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
K Prateek Nayak <kprateek.nayak@....com>, "Shenoy, Gautham Ranjal" <gautham.shenoy@....com>, Neeraj.Upadhyay@....com
Subject: Re: [PATCH] pipe_read: don't wake up the writer if the pipe is still full
On Thu, Feb 27, 2025 at 5:20 PM Sapkal, Swapnil <swapnil.sapkal@....com> wrote:
> I tried reproducing the issue with both the scenarios mentioned below.
>
> > 1. with 1 fd instead of 20:
> >
> > /usr/bin/hackbench -g 16 -f 1 --threads --pipe -l 100000 -s 100
> >
>
> With this I was not able to reproduce the issue. I tried almost 5000
> iterations.
>
Ok, noted.
> > 2. with a size which divides 4096 evenly (e.g., 128):
> >
> > /usr/bin/hackbench -g 1 -f 20 --threads --pipe -l 100000 -s 128
>
> I was not able to reproduce the issue with 1 group. But I thought you
> wanted to change only the message size to 128 bytes.
Yes indeed, thanks for catching the problem.
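(For what it's worth, the arithmetic that makes 128 convenient: each pipe
buffer slot holds one 4096-byte page, and 4096 / 128 = 32, so the messages
pack into a slot exactly instead of leaving a partially filled buffer
behind.)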
> When I retain the number of groups to 16 and change the message size to
> 128, it took me around 150 iterations to reproduce this issue (with 100
> bytes it was 20 iterations). The exact command was
>
> /usr/bin/hackbench -g 16 -f 20 --threads --pipe -l 100000 -s 128
>
> I will try to sprinkle some trace_printk's in the code where the state
> of the pipe changes. I will report here if I find something.
>
Thanks.
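In case it helps with the trace_printk sprinkling, a rough sketch of the
kind of state dump that could go next to the spots where the pipe state
changes (the helper name and call sites are made up, not a real patch;
it assumes it sits in fs/pipe.c next to the existing helpers, so the
needed headers are already included):

static inline void pipe_trace_state(struct pipe_inode_info *pipe,
                                    const char *where)
{
        /* dump the ring indices and whether the pipe currently counts as full */
        trace_printk("%s: head=%u tail=%u max_usage=%u full=%d\n",
                     where, pipe->head, pipe->tail, pipe->max_usage,
                     pipe_full(pipe->head, pipe->tail, pipe->max_usage));
}

Called as e.g. pipe_trace_state(pipe, "pipe_read: before writer wakeup"),
the output ends up in the ftrace buffer and should make it easy to see
whether the reader wakes the writer while the ring is still full.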
So to be clear, this is Oleg's bug; I am only watching from the
sidelines out of curiosity about what's going on. As it usually goes
with these, I very much expect that once the dust settles the fix will
be roughly a one-liner.
:)
--
Mateusz Guzik <mjguzik gmail.com>