Message-Id: <20090614.213419.35998061.konishi.ryusuke@gmail.com>
Date: Sun, 14 Jun 2009 21:34:19 +0900 (JST)
From: Ryusuke Konishi <konishi.ryusuke@....ntt.co.jp>
To: penberg@...helsinki.fi
Cc: konishi.ryusuke@....ntt.co.jp, albertito@...tiri.com.ar,
llucax@...il.com, linux-kernel@...r.kernel.org, users@...fs.org
Subject: Re: NILFS2 get stuck after bio_alloc() fail
Hi,
On Sun, 14 Jun 2009 10:00:06 +0300, Pekka Enberg <penberg@...helsinki.fi> wrote:
> On Sun, Jun 14, 2009 at 9:30 AM, Ryusuke Konishi
> <konishi.ryusuke@....ntt.co.jp> wrote:
>> The original GFP flag was GFP_NOIO, but it was replaced with
>> GFP_NOWAIT in a preliminary release in February 2008 because a user
>> experienced a system memory shortage caused by the bio_alloc() call.
>>
>> Even though nilfs_alloc_seg_bio() repeatedly calls bio_alloc(),
>> halving the number of bio vectors on each failure, this fallback
>> did not work well.
>>
>> I'm in two minds: should I change it back to GFP_NOIO, or should I
>> switch the gfp flag as follows?
>
> As far as I can tell, the only difference between GFP_NOIO and
> GFP_NOWAIT here is that the former will trigger the mempool_alloc()
> page reclaim path. But I am not sure I understand why switching to
> GFP_NOWAIT helped with the memory shortage. What exactly was the
> problem there?
>
> Pekka
I will confirm the details later; I cannot dig into the records right
now because of a planned outage at the office. The release in which I
replaced GFP_NOIO with GFP_NOWAIT is considerably old, and the true
problem might have been somewhere else.
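
That said, if I remember include/linux/gfp.h correctly, the only bit
that differs between the two flags is __GFP_WAIT, which is what allows
mempool_alloc() to sleep and wait for elements to be returned to the
pool:

	/* include/linux/gfp.h (2.6.30-era), quoted from memory: */
	#define GFP_ATOMIC	(__GFP_HIGH)
	#define GFP_NOWAIT	(GFP_ATOMIC & ~__GFP_HIGH) /* == 0, never sleeps */
	#define GFP_NOIO	(__GFP_WAIT)               /* may sleep/reclaim */

So with GFP_NOWAIT, bio_alloc() simply returns NULL when the bio
mempool is empty, instead of waiting for bios in flight to complete.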
For reference, the retry loop in question is as follows. It allocates
a bio for log writing:
	bio = bio_alloc(GFP_NOWAIT, nr_vecs);
	if (bio == NULL) {
		/* fall back: halve the vector count until it succeeds */
		while (!bio && (nr_vecs >>= 1))
			bio = bio_alloc(GFP_NOWAIT, nr_vecs);
	}
where nr_vecs is less than or equal to bio_get_nr_vecs(bdev).
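
As for the gfp switch I mentioned above, one form it could take is
something like the following. This is just an untested sketch of one
possibility (the GFP_NOIO fallback to a single-vector bio is an
arbitrary choice here), not code from any released version:

	bio = bio_alloc(GFP_NOWAIT, nr_vecs);
	if (!bio) {
		while (!bio && (nr_vecs >>= 1))
			bio = bio_alloc(GFP_NOWAIT, nr_vecs);
		if (!bio)
			/* last resort: may sleep in mempool_alloc() */
			bio = bio_alloc(GFP_NOIO, 1);
	}

That way the opportunistic attempts would stay nonblocking, and only
the minimum-sized request would be allowed to wait on the mempool.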
Thanks,
Ryusuke Konishi