Message-ID: <exportbomb.1186077923@pinky>
Date:	Thu, 02 Aug 2007 19:17:42 +0100
From:	Andy Whitcroft <apw@...dowen.org>
To:	Andrew Morton <akpm@...l.org>
Cc:	Mel Gorman <mel@....ul.ie>, Andy Whitcroft <apw@...dowen.org>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH 0/2] Synchronous Lumpy Reclaim V3

[This is a re-spin based on feedback from akpm.]

As Mel pointed out, when reclaim is applied at higher orders a
significant amount of IO may be started.  As this IO takes finite
time to drain, reclaim will consider more areas than are ultimately
needed to satisfy the request.  This leads to more reclaim than
strictly required and to reduced success rates.

I was able to confirm Mel's test results on local systems.
These show that even under light load the success rates drop off far
more than expected.  Testing with a modified version of his patch
(which follows), I was able to allocate almost all of ZONE_MOVABLE
on a near-idle system.  I ran 5 test passes sequentially following
system boot (the system has 29 hugepages in ZONE_MOVABLE):

  2.6.23-rc1              11  8  6  7  7
  sync_lumpy              28 28 29 29 26

These show that, although hugely better than the near-0% success
rate normally expected, we can only allocate about a quarter of the
zone.  Using synchronous reclaim for these allocations we get close
to 100%, as expected.

I have also run our standard high order tests and these show no
regressions in allocation success rates at rest, and some significant
improvements under load.

Following this email are two patches, both should be considered as
bug fixes to lumpy reclaim for 2.6.23:

ensure-we-count-pages-transitioning-inactive-via-clear_active_flags:
  this is a bug fix for lumpy reclaim, correcting the VM event
  accounting when it marks pages inactive, and

Wait-for-page-writeback-when-directly-reclaiming-contiguous-areas:
  updates reclaim, making direct reclaim synchronous when applied
  at orders above PAGE_ALLOC_COSTLY_ORDER.

The patches are against 2.6.23-rc1.  Andrew, please consider them
for -mm and for pushing to mainline.

-apw