Message-ID: <20091031201158.GB29536@elf.ucw.cz>
Date: Sat, 31 Oct 2009 21:11:59 +0100
From: Pavel Machek <pavel@....cz>
To: David Rientjes <rientjes@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>, stable@...nel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Frans Pop <elendil@...net.nl>, Jiri Kosina <jkosina@...e.cz>,
Sven Geggus <lists@...hsschwanzdomain.de>,
Karol Lewandowski <karol.k.lewandowski@...il.com>,
Tobias Oetiker <tobi@...iker.ch>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Rik van Riel <riel@...hat.com>,
Christoph Lameter <cl@...ux-foundation.org>,
Stephan von Krawczynski <skraw@...net.com>,
kernel-testers@...r.kernel.org
Subject: Re: [PATCH 2/3] page allocator: Do not allow interrupts to use ALLOC_HARDER

On Sat 2009-10-31 12:51:14, David Rientjes wrote:
> On Sat, 31 Oct 2009, Pavel Machek wrote:
>
> > > Giving rt tasks access to memory reserves is necessary to reduce
> > > latency, but the privilege does not apply to interrupts that
> > > subsequently get run on the same cpu.
> >
> > If an rt task needs to allocate memory like that, then it's broken
> > anyway...
>
> Um, no, it's a matter of the kernel implementation.  We allow such tasks
> to allocate deeper into reserves to keep the page allocator from
> incurring a significant penalty when direct reclaim is required.
> Background reclaim has already commenced at this point in the
> slowpath.

But we can't guarantee that enough memory will be available in the
reserves. So if a realtime task relies on them, it is broken, and will
fail to meet its deadlines from time to time.
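
For reference, the check being debated lives in gfp_to_alloc_flags() in
mm/page_alloc.c. Below is a minimal userspace sketch of that decision;
rt_task(), in_interrupt() and the flag values here are simplified
stand-ins for illustration, not the real kernel definitions.

#include <stdbool.h>
#include <stdio.h>

#define ALLOC_WMARK_MIN  0x01u  /* may use pages down to the min watermark */
#define ALLOC_HARDER     0x02u  /* may dig further below the min watermark */

struct task { bool realtime; };

static bool rt_task(const struct task *p) { return p->realtime; }

/* The kernel checks preempt_count(); modeled here as an explicit flag. */
static bool in_interrupt(bool irq_context) { return irq_context; }

static unsigned int gfp_to_alloc_flags(const struct task *p, bool can_sleep,
				       bool irq_context)
{
	unsigned int alloc_flags = ALLOC_WMARK_MIN;

	if (!can_sleep) {
		/* Atomic (GFP_ATOMIC-style) allocations may dig deeper. */
		alloc_flags |= ALLOC_HARDER;
	} else if (rt_task(p) && !in_interrupt(irq_context)) {
		/*
		 * The point of the patch: an rt task gets ALLOC_HARDER to
		 * cut its allocation latency, but an interrupt that merely
		 * runs on top of an rt task must not inherit the privilege,
		 * hence the !in_interrupt() check.
		 */
		alloc_flags |= ALLOC_HARDER;
	}
	return alloc_flags;
}

int main(void)
{
	struct task rt = { .realtime = true };

	printf("rt task in process context: flags=%#x\n",
	       gfp_to_alloc_flags(&rt, true, false));
	printf("interrupt atop the rt task: flags=%#x\n",
	       gfp_to_alloc_flags(&rt, true, true));
	return 0;
}

Note how the privilege is per-context rather than per-cpu, which is the
distinction David describes above.
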
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html