Message-ID: <9a8748490803021529m695f91egcc9e4dba13a5c911@mail.gmail.com>
Date: Mon, 3 Mar 2008 00:29:19 +0100
From: "Jesper Juhl" <jesper.juhl@...il.com>
To: "Peter Zijlstra" <a.p.zijlstra@...llo.nl>
Cc: LKML <linux-kernel@...r.kernel.org>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
linux-mm@...ck.org, "Ingo Molnar" <mingo@...e.hu>
Subject: Re: [PATCH] leak less memory in failure paths of alloc_rt_sched_group()
On 03/03/2008, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
>
> On Mon, 2008-03-03 at 00:09 +0100, Jesper Juhl wrote:
> > In kernel/sched.c::alloc_rt_sched_group() we currently do some paired
> > memory allocations, and if one fails we bail out without freeing the
> > previous one.
> >
> > If we fail inside the loop we should probably roll the whole thing back.
> > This patch does not do that, it simply frees the first member of the
> > paired alloc if the second fails. This is not perfect, but it's a simple
> > change that will, at least, result in us leaking a little less than we
> > currently do when an allocation fails.
> >
> > So, not perfect, but better than what we currently have.
> > Please consider applying.
>
>
> Doesn't the following handle that:
>
> sched_create_group()
> {
> 	...
> 	if (!alloc_rt_sched_group())
> 		goto err;
> 	...
>
> err:
> 	free_sched_group();
> }
>
> free_sched_group()
> {
> 	...
> 	free_rt_sched_group();
> 	...
> }
>
> free_rt_sched_group()
> {
> 	free all relevant stuff
> }
>
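If I read that right, the structure you describe amounts to roughly the
following in plain C (all names here are simplified stand-ins, not the actual
kernel code). The key requirement is that the free path tolerates a partially
allocated group, which works because freeing a NULL pointer is a no-op:

```c
/* Simplified sketch of the centralized-cleanup structure under
 * discussion; names and fields are stand-ins, not the real
 * kernel/sched.c code. */
#include <stdlib.h>

struct group_stub {
	int *rt_a;
	int *rt_b;
};

/* Safe on partially allocated groups: free(NULL) is a no-op. */
static void free_rt_sched_group_stub(struct group_stub *g)
{
	free(g->rt_a);
	free(g->rt_b);
	g->rt_a = g->rt_b = NULL;
}

static int alloc_rt_sched_group_stub(struct group_stub *g, int fail_second)
{
	g->rt_a = malloc(sizeof(*g->rt_a));
	g->rt_b = fail_second ? NULL : malloc(sizeof(*g->rt_b));
	return (g->rt_a && g->rt_b) ? 0 : -1;
}

/* The caller unwinds everything through one central error label. */
static int sched_create_group_stub(struct group_stub *g, int fail_second)
{
	if (alloc_rt_sched_group_stub(g, fail_second))
		goto err;
	return 0;
err:
	free_rt_sched_group_stub(g);
	return -1;
}
```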
Hmmm, it might. I must admit I only looked at alloc_rt_sched_group() in
isolation, and what I saw looked like leaks. It seems I need to do a
more thorough reading of the code to be dead sure.
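
For reference, what my patch does is just this kind of local rollback (again
with simplified stand-in names, not the real code): if the second of a pair of
allocations fails, free the first before bailing out, so at least that pair is
not leaked even if nobody unwinds the rest:

```c
/* Simplified sketch of the patch's local-rollback approach; the
 * names are stand-ins, not the real kernel/sched.c identifiers. */
#include <stdlib.h>

struct pair_stub {
	int *first;
	int *second;
};

static int alloc_pair_stub(struct pair_stub *p, int fail_second)
{
	p->first = malloc(sizeof(*p->first));
	if (!p->first)
		return -1;

	p->second = fail_second ? NULL : malloc(sizeof(*p->second));
	if (!p->second) {
		free(p->first);	/* don't leak the first member */
		p->first = NULL;
		return -1;
	}
	return 0;
}
```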
--
Jesper Juhl <jesper.juhl@...il.com>
Don't top-post http://www.catb.org/~esr/jargon/html/T/top-post.html
Plain text mails only, please http://www.expita.com/nomime.html
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/