Date:   Mon, 18 Apr 2022 12:43:43 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Zhihao Cheng <chengzhihao1@...wei.com>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Jens Axboe <axboe@...nel.dk>
Cc:     Al Viro <viro@...iv.linux.org.uk>, Christoph Hellwig <hch@....de>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        yukuai3@...wei.com
Subject: Re: [PATCH v2] fs-writeback: writeback_sb_inodes:Recalculate 'wrote' according skipped pages

[ Adding some scheduler people - the background here is an ABBA
deadlock: a plug never gets flushed, so the IO never starts, and the
buffer lock thus never gets released. That's simplified, see
     https://lore.kernel.org/all/20220415013735.1610091-1-chengzhihao1@huawei.com/
  and
     https://bugzilla.kernel.org/show_bug.cgi?id=215837
   for details ]

On Mon, Apr 18, 2022 at 2:14 AM Zhihao Cheng <chengzhihao1@...wei.com> wrote:
>
> In my test, 'need_resched()' (which is imported by 590dca3a71 "fs-writeback:
> unplug before cond_resched in writeback_sb_inodes") in function
> 'writeback_sb_inodes()' seldom comes true, unless cond_resched() is deleted
> from write_cache_pages().

So I'm not reacting to the patch, but just to this part of the message...

I forget the exact history of plugging, but at some point (long long
ago - we're talking pre-git days) it was device-specific and always
released on a timeout (or, obviously, explicitly unplugged).

And then later it became per-process, and always released by task-work
on any schedule() call.

But that "any schedule" behavior has since gone away. It did so
gradually, and long ago:

  73c101011926 ("block: initial patch for on-stack per-task plugging")
  6631e635c65d ("block: don't flush plugged IO on forced preemtion scheduling")

And that's *mostly* perfectly fine, but the problem ends up being that
not everything necessarily triggers the flushing at all.

In fact, if you call "__schedule()" directly (rather than
"schedule()") I think you may end up avoiding the flush entirely. I'm
looking at do_task_dead() and schedule_idle() and the
preempt_schedule() cases.

Similarly, tsk_is_pi_blocked() will disable the plug flush.
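
For reference, the relevant control flow looks roughly like this
(heavily abridged from my reading of kernel/sched/core.c, so not
verbatim - the workqueue/io_wq notification details are elided):

  static inline void sched_submit_work(struct task_struct *tsk)
  {
          /* Still runnable (a preemption-style reschedule): no flush. */
          if (task_is_running(tsk))
                  return;

          /* Blocked boosting a PI mutex owner: no flush either. */
          if (tsk_is_pi_blocked(tsk))
                  return;

          /* Only here does any plugged IO actually get submitted. */
          blk_flush_plug(tsk->plug, true);
  }

  asmlinkage __visible void __sched schedule(void)
  {
          struct task_struct *tsk = current;

          /* The flush, if any, happens here before __schedule() ... */
          sched_submit_work(tsk);
          do {
                  preempt_disable();
                  /* ... so callers that go straight to __schedule() skip it. */
                  __schedule(SM_NONE);
                  sched_preempt_enable_no_resched();
          } while (need_resched());
          sched_update_worker(tsk);
  }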

Back when it was a timer, the flushing was eventually guaranteed.

And then we would flush on any re-schedule, even if it was about
preemption and the process might stay on the CPU.

But these days we can be in the situation where we really don't flush
at all - the process may be scheduled away, but if it's still
runnable, the blk plug won't be flushed.

To make things *really* confusing, doing an io_schedule() will force a
plug flush even if the process might stay runnable. So io_schedule()
has those old legacy "unconditional flush" guarantees that a normal
schedule() no longer does.
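
For comparison, the io_schedule() side is roughly this (again
abridged, and from memory, so take the details with a grain of salt):

  int io_schedule_prepare(void)
  {
          int old_iowait = current->in_iowait;

          current->in_iowait = 1;
          /* Flushed up front, whether or not we end up actually sleeping. */
          blk_flush_plug(current->plug, true);

          return old_iowait;
  }

  void __sched io_schedule(void)
  {
          int token;

          token = io_schedule_prepare();
          schedule();
          io_schedule_finish(token);
  }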

Also note how the plug is per-process, so when another process *does*
block (because it's waiting for some resource), that doesn't end up
really unplugging the actual IO which was started by somebody else.
Even if that other process is using io_schedule().
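
To make that concrete, the reported deadlock has roughly this shape
(very much simplified - see the links at the top for the real traces):

  writeback task A                    some other task B
  ----------------                    -----------------
  blk_start_plug(&plug);
  lock_buffer(bh);
  submit_bh(...);   <- bio sits in A's plug
  ... keeps going, stays runnable,
  never hits a flushing schedule() ...
                                      lock_buffer(bh);  <- blocks; its
                                        io_schedule() flushes B's own
                                        (empty) plug, not A's, so the IO
                                        never starts and the buffer lock
                                        never gets released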

Which all brings us back to how we have that hacky thing in
writeback_sb_inodes() that does

        if (need_resched()) {
                /*
                 * We're trying to balance between building up a nice
                 * long list of IOs to improve our merge rate, and
                 * getting those IOs out quickly for anyone throttling
                 * in balance_dirty_pages().  cond_resched() doesn't
                 * unplug, so get our IOs out the door before we
                 * give up the CPU.
                 */
                blk_flush_plug(current->plug, false);
                cond_resched();
        }

and that currently *mostly* ends up protecting us and flushing the
plug when doing big writebacks. But as you can see from the email I'm
quoting, it doesn't always work very well, because "need_resched()"
may end up being cleared by some other scheduling point, and it is
entirely meaningless when preemption is enabled anyway.
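
(For the concrete failure mode here my reading is roughly this, for
filesystems that go through the generic write_cache_pages() helper:

  writeback_sb_inodes()
    __writeback_single_inode()
      do_writepages()
        write_cache_pages()
          ...
          cond_resched();   <- goes through __schedule() directly, so no
                               plug flush, but it does clear the resched flag
    ...
    if (need_resched()) {   <- now false, so blk_flush_plug() never runs
            ...
    }

so whether we ever flush ends up depending on scheduling timing.)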

So I think that's basically just a random voodoo programming thing
that has protected us in the past in some situations.

Now, Zhihao has a patch that fixes the problem by doing better
accounting to limit the writeback:

    https://lore.kernel.org/all/20220418092824.3018714-1-chengzhihao1@huawei.com/

which is the email I'm answering, but I did want to bring in the
scheduler people to the discussion to see if people have ideas.

I think the writeback accounting fix is the right thing to do
regardless, but that whole need_resched() dance in
writeback_sb_inodes() is, I think, a sign that we do have real issues
here. The whole "flush the plug if we need to reschedule" approach is
simply a fundamentally broken concept when there are other
rescheduling points.

Comments?

The answer may just be that "the code in writeback_sb_inodes() is
fundamentally broken and should be removed".

But the fact that we have that code at all makes me quite nervous
about this. And we clearly *do* have situations where the writeback
code seems to cause nasty unplugging delays.

So I'm not convinced that "fix up the writeback accounting" is the
real and final fix.

I don't really have answers or suggestions, I just wanted people to
look at this in case they have ideas.

                    Linus
