Message-ID: <20121206190114.GE17258@suse.de>
Date: Thu, 6 Dec 2012 19:01:14 +0000
From: Mel Gorman <mgorman@...e.de>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Henrik Rydberg <rydberg@...omail.se>, Jan Kara <jack@...e.cz>,
	linux-mm <linux-mm@...ck.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Oops in 3.7-rc8 isolate_free_pages_block()

On Thu, Dec 06, 2012 at 10:41:14AM -0800, Linus Torvalds wrote:
> On Thu, Dec 6, 2012 at 10:32 AM, Henrik Rydberg <rydberg@...omail.se> wrote:
> >>
> >> Henrik, does that - corrected - patch (*instead* of the previous one,
> >> not in addition to) also fix your issue?
> >
> > Yes - I can no longer trigger the failpath, so it seems to work. Mel,
> > enjoy the rest of the talk. ;-)
> >
> > Generally, I am a bit surprised that no one hit this before, given that
> > it was quite easy to trigger. I will check 3.6 as well.
>
> Actually, looking at it some more, I think that two-liner patch had
> *ANOTHER* bug.
>
> Because the other line seems buggy as well.
>
> Instead of
>
>     end_pfn = ALIGN(pfn + pageblock_nr_pages, pageblock_nr_pages);
>
> I think it should be
>
>     end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
>
> instead. ALIGN() already aligns upwards (but the "+1" is needed in
> case pfn is already at a pageblock_nr_pages boundary, at which point
> ALIGN() would have just returned that same boundary).
>
> Hmm? Mel, please confirm.

FFS. Yes, confirmed.

In answer to Henrik's wondering why others have not reported this --
reproducing it requires a large enough hole with the right alignment to
have compaction walk into a PFN range with no memmap. The required size
and alignment depend on the memory model -- 4M for FLATMEM and 128M for
SPARSEMEM on x86. It needs a "lucky" machine.
---8<---
mm: compaction: check pfn_valid when entering a new MAX_ORDER_NR_PAGES
block during isolation for free

Commit 0bf380bc ("mm: compaction: check pfn_valid when entering a new
MAX_ORDER_NR_PAGES block during isolation for migration") added a check
for pfn_valid() when isolating pages for migration as the scanner does
not necessarily start pageblock-aligned. Since commit c89511ab ("mm:
compaction: Restart compaction from near where it left off"), the free
scanner has the same problem. This patch makes sure that the pfn range
passed to isolate_freepages_block() is within the same block so that
pfn_valid() checks are unnecessary.

Reported-by: Henrik Rydberg <rydberg@...omail.se>
Signed-off-by: Mel Gorman <mgorman@...e.de>
---
 mm/compaction.c | 10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 9eef558..694eaab 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -713,7 +713,15 @@ static void isolate_freepages(struct zone *zone,

 		/* Found a block suitable for isolating free pages from */
 		isolated = 0;
-		end_pfn = min(pfn + pageblock_nr_pages, zone_end_pfn);
+
+		/*
+		 * As pfn may not start aligned, pfn + pageblock_nr_pages
+		 * may cross a MAX_ORDER_NR_PAGES boundary and miss
+		 * a pfn_valid check. Ensure isolate_freepages_block()
+		 * only scans within a pageblock.
+		 */
+		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+		end_pfn = min(end_pfn, zone_end_pfn);
 		isolated = isolate_freepages_block(cc, pfn, end_pfn,
 						   freelist, false);
 		nr_freepages += isolated;
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/