[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <x497hyz9d2g.fsf@segfault.boston.devel.redhat.com>
Date: Fri, 26 Jun 2009 12:25:59 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [PATCH 2/2] cfq-iosched: get rid of the need for __GFP_FAIL in cfq_find_alloc_queue()
Jens Axboe <jens.axboe@...cle.com> writes:
> Setup an emergency fallback cfqq that we allocate at IO scheduler init
> time. If the slab allocation fails in cfq_find_alloc_queue(), we'll just
> punt IO to that cfqq instead. This ensures that cfq_find_alloc_queue()
> never fails without having to ensure free memory.
>
> Signed-off-by: Jens Axboe <jens.axboe@...cle.com>
> ---
> block/cfq-iosched.c | 124 +++++++++++++++++++++++++++-----------------------
> 1 files changed, 67 insertions(+), 57 deletions(-)
>
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index c760ae7..91e7e0b 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> + /*
> + * Fallback dummy cfqq for extreme OOM conditions
> + */
> + struct cfq_queue oom_cfqq;
OK, so you're embedding a cfqq into the cfqd. That's 136 bytes, so I
guess that's not too bad.
> + /*
> + * Our fallback cfqq if cfq_find_alloc_queue() runs into OOM issues.
> + * Grab a permanent reference to it, so that the normal code flow
> + * will not attempt to free it.
> + */
> + cfq_init_cfqq(cfqd, &cfqd->oom_cfqq, 1, 0);
> + atomic_inc(&cfqd->oom_cfqq.ref);
> +
I guess this is so we never try to free it, good. ;)
One issue I have with this patch is that, if a task happens to run into
this condition, there is no way out. It will always have the oom_cfqq
as it's cfqq. Can't we fix that if we recover from the OOM condition?
Cheers,
Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists