Message-ID: <CAD=FV=V8m-mpJsFntCciqtq7xnvhmnvPdTvxNuBGBT3-cDdabQ@mail.gmail.com>
Date:   Tue, 2 May 2023 14:08:10 -0700
From:   Doug Anderson <dianders@...omium.org>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Christian Brauner <brauner@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>,
        Yu Zhao <yuzhao@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v3] migrate_pages: Avoid blocking for IO in MIGRATE_SYNC_LIGHT

Hi,

On Sat, Apr 29, 2023 at 3:14 AM Hillf Danton <hdanton@...a.com> wrote:
>
> On 28 Apr 2023 13:54:38 -0700 Douglas Anderson <dianders@...omium.org>
> > The MIGRATE_SYNC_LIGHT mode is intended to block for things that will
> > finish quickly but not for things that will take a long time. Exactly
> > how long is too long is not well defined, but waits of tens of
> > milliseconds are likely non-ideal.
> >
> > When putting a Chromebook under memory pressure (opening over 90 tabs
> > on a 4GB machine) it was fairly easy to see delays of > 100 ms waiting
> > for some locks in the kcompactd code path. While the laptop wasn't
> > amazingly usable in this state, it was still limping along and this
> > state isn't something artificial. Sometimes we simply end up with a
> > lot of memory pressure.
>
> Was kcompactd woken up for PAGE_ALLOC_COSTLY_ORDER?

I put some more traces in and reproduced it again. I saw something
that looked like this:

1. balance_pgdat() called wakeup_kcompactd() with order=10, and that call
made it all the way to the end and actually woke kcompactd (previous
calls to wakeup_kcompactd() had returned early).

2. kcompactd started and completed kcompactd_do_work() without blocking.

3. kcompactd called proactive_compact_node() and blocked there for
~92 ms in one case, ~120 ms in another, and ~131 ms in a third (rough
timing sketch below).
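
For the curious, those numbers came from the extra ad-hoc tracing I
mentioned. A rough sketch of one way to measure it, assuming a made-up
wrapper in mm/compaction.c around the existing proactive_compact_node()
call and an arbitrary 10 ms reporting threshold (illustration only, not
the actual traces):

#include <linux/ktime.h>
#include <linux/mmzone.h>

/* Hypothetical helper: time one proactive compaction pass. */
static void timed_proactive_compact_node(pg_data_t *pgdat)
{
	ktime_t start = ktime_get();
	s64 delta_ms;

	proactive_compact_node(pgdat);

	delta_ms = ktime_ms_delta(ktime_get(), start);
	if (delta_ms > 10)	/* arbitrary "slow" threshold */
		trace_printk("proactive_compact_node took %lld ms\n",
			     delta_ms);
}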


> > Putting the same Chromebook under memory pressure while it was running
> > Android apps (though not stressing them) showed a much worse result
> > (NOTE: this was on an older kernel but the code paths here are similar).
> > Android apps on ChromeOS currently run from a 128K-block,
> > zlib-compressed, loopback-mounted squashfs disk. If we get a page
> > fault from something backed by the squashfs filesystem we could end up
> > holding a folio lock while reading enough from disk to decompress 128K
> > (and then decompressing it using the somewhat slow zlib algorithms).
> > That reading goes through the ext4 subsystem (because it's a loopback
> > mount) before eventually ending up in the block subsystem. This extra
> > jaunt adds overhead. Without much work I could see cases where
> > we ended up blocked on a folio lock for over a second. With more
> > extreme memory pressure I could see up to 25 seconds.
>
> In the same kcompactd code path above?

It was definitely in kcompactd. I can go back and trace through this
too, if it's useful, but I suspect it's the same.


> > We considered adding a timeout in the case of MIGRATE_SYNC_LIGHT for
> > the two locks that were seen to be slow [1], and that generated much
> > discussion. After that discussion, it was decided that we should avoid
> > waiting for the two locks during MIGRATE_SYNC_LIGHT if they were being
> > held for IO. We'll continue with the unbounded wait for the fuller
> > SYNC modes.
> >
> > With this change, I couldn't see any slow waits on these locks with my
> > previous testcases.
>
> Well, this is the upside of this change, but given the win, what is
> the cost paid? For example, the changes in compaction failure and
> success counts [1].
>
> [1] https://lore.kernel.org/lkml/20230418191313.268131-1-hannes@cmpxchg.org/

That looks like an interesting series. Obviously it would need to be
tested, but my hunch is that the ${SUBJECT} patch would work well with
that series. Specifically, with Johannes's series it seems more
important for the kcompactd thread to be working fruitfully. Having it
blocked for a long time when there is other useful work it could be
doing still seems wrong. With the ${SUBJECT} patch it's not that we'll
never come back and try again; we'll just wait until a future
iteration when (hopefully) the locks are easier to acquire. In the
meantime, we keep looking for other pages to migrate.
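
To make that concrete, the folio-lock half of the change boils down to
roughly the sketch below (paraphrased, not the literal diff; treating
"not uptodate" as the "held for IO" heuristic is the assumption here,
and the second of the two locks is handled analogously):

	if (!folio_trylock(src)) {
		if (mode == MIGRATE_ASYNC)
			goto out;

		/*
		 * MIGRATE_SYNC_LIGHT: waiting for transient lock holders
		 * is fine, but a folio that isn't uptodate is very likely
		 * locked while being read in, i.e. held for IO.  Skip it
		 * rather than sleeping on the lock; a later pass can retry.
		 */
		if (mode == MIGRATE_SYNC_LIGHT && !folio_test_uptodate(src))
			goto out;

		folio_lock(src);
	}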

-Doug
