Date:	Thu, 20 Dec 2012 11:12:08 +0000
From:	Mel Gorman <>
To:	Zlatko Calusic <>
Cc:	Linus Torvalds <>,
	Andrew Morton <>,
	Hugh Dickins <>, linux-mm <>,
	Linux Kernel Mailing List <>
Subject: Re: [PATCH] mm: do not sleep in balance_pgdat if there's no i/o

On Thu, Dec 20, 2012 at 12:17:07AM +0100, Zlatko Calusic wrote:
> On a 4GB RAM machine, where the Normal zone is much smaller than the
> DMA32 zone, the Normal zone becomes fragmented over time. This
> requires relatively more pressure in balance_pgdat to get the zone
> above the required watermark. Unfortunately, the congestion_wait()
> call there slows it down for a completely wrong reason: it expects
> heavy writeback/swapout even when there is none (the far more common
> case). After a few days, as fragmentation progresses, this flawed
> logic translates into very high CPU iowait times even though there
> is no I/O congestion at all. If THP is enabled, the problem occurs
> sooner, but I was able to see it even on !THP kernels, just by
> giving it a bit more time to occur.
>
> The proper way to deal with this is to not wait unless there is
> congestion. Thanks to Mel Gorman, we already have a function that
> perfectly fits the job. The patch was tested on a machine that
> nicely revealed the problem after only one day of uptime, and it has
> been working great since.
> ---
>  mm/vmscan.c |   12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

Acked-by: Mel Gorman <>

Mel Gorman