Message-ID: <20111018185913.GE15908@one.firstfloor.org>
Date: Tue, 18 Oct 2011 20:59:13 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Dave Jones <davej@...hat.com>, Andi Kleen <andi@...stfloor.org>,
p.herz@...fihost.ag, linux-kernel@...r.kernel.org
Subject: Re: Vanilla-Kernel 3 - page allocation failure
> We get reports like this fairly regularly, usually accompanied by
> "But I had lots of free memory and/or swap!"
I think the backtrace is also really bad. It makes the message look like a
crash and cries "please report me", even though there's usually no good
reason to report it.
I understand it can sometimes be useful for debugging, but most of the
time it is unnecessary and just confusing. One improvement would be a
heuristic that decides when to print the backtrace, and skips it in the
common situations.
>
> The order/mode stuff is completely opaque to end-users, who have no
> clue that there are different types of memory, and exhausting one type
> can happen even when plenty of other memory is free.
order should probably be replaced with a user-readable size, agreed.
order:2 = "16 KB"
[note if anybody wants to reply now it should be "16 KiB", don't bother;
i'll ignore you]
>
> I've been toying with the idea of hacking up a patch to turn those mode
> flags into printing things like "mode:GFP_ATOMIC|GFP_NOIO" instead though, as I can
> never remember those flags off the top of my head.
> Still won't help end-users, but it would at least speed up diagnosing reports.
Better to decode it: "from interrupt handler", "inside a file system".
Unfortunately there's no flag that distinguishes GFP_ATOMIC in an actual
interrupt handler from code with broken locking abusing it. Perhaps there
should be.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.