Message-ID: <YUhztA8TmplTluyQ@casper.infradead.org>
Date: Mon, 20 Sep 2021 12:42:44 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Linux-MM <linux-mm@...ck.org>, NeilBrown <neilb@...e.de>,
Theodore Ts'o <tytso@....edu>,
Andreas Dilger <adilger.kernel@...ger.ca>,
"Darrick J . Wong" <djwong@...nel.org>,
Michal Hocko <mhocko@...e.com>,
Dave Chinner <david@...morbit.com>,
Rik van Riel <riel@...riel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Jonathan Corbet <corbet@....net>,
Linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/5] Remove dependency on congestion_wait in mm/
On Mon, Sep 20, 2021 at 09:54:31AM +0100, Mel Gorman wrote:
> This has been lightly tested only and the testing was useless as the
> relevant code was not executed. The workload configurations I had that
> used to trigger these corner cases no longer work (yey?) and I'll need
> to implement a new synthetic workload. If someone is aware of a realistic
> workload that forces reclaim activity to the point where reclaim stalls
> then kindly share the details.
The stereotypical "stalling on I/O" problem is to plug in one of the
crap USB drives you were given at a trade show and simply

	dd if=/dev/zero of=/dev/sdb
	sync
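
To watch the backlog build while that runs, something along these lines
(just a sketch; Dirty/Writeback are the fields /proc/meminfo reports)
shows dirty pages piling up and then draining very slowly once the
device is saturated:

	watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'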
You can also set up qemu to have extremely slow I/O performance:
https://serverfault.com/questions/675704/extremely-slow-qemu-storage-performance-with-qcow2-images
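
If you go the qemu route, the -drive throttling options are probably the
simplest lever. Rough sketch (assumes a reasonably recent qemu; the image
path and the limits are placeholders):

	qemu-system-x86_64 -enable-kvm -m 2G \
		-drive file=slow.qcow2,format=qcow2,if=virtio,cache=none,throttling.bps-total=1048576,throttling.iops-total=50

That caps the disk at roughly 1MB/s and 50 IOPS, which should be slow
enough for writeback to back up under any real reclaim pressure.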