Message-ID: <YXaTBrhEqTZhTJYX@dhcp22.suse.cz>
Date: Mon, 25 Oct 2021 13:20:38 +0200
From: Michal Hocko <mhocko@...e.com>
To: Uladzislau Rezki <urezki@...il.com>
Cc: NeilBrown <neilb@...e.de>,
Linux Memory Management List <linux-mm@...ck.org>,
Dave Chinner <david@...morbit.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@...radead.org>,
linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Ilya Dryomov <idryomov@...il.com>,
Jeff Layton <jlayton@...nel.org>
Subject: Re: [RFC 2/3] mm/vmalloc: add support for __GFP_NOFAIL
On Mon 25-10-21 11:48:41, Uladzislau Rezki wrote:
> On Fri, Oct 22, 2021 at 09:49:08AM +1100, NeilBrown wrote:
[...]
> > If, as you say, the precision doesn't matter that much, then maybe
> > msleep(0)
> > which would sleep to the start of the next jiffy. Does that look a bit
> > weird? If so, the msleep(1) would be ok.
> >
> Agree, msleep(1) looks much better rather than converting 1 jiffy to
> milliseconds. Result should be the same.
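To make the options concrete, here is a quick sketch (illustration only,
not part of the patch; the nofail_backoff() helper name is made up):

#include <linux/delay.h>	/* msleep() */
#include <linux/jiffies.h>	/* jiffies_to_msecs() */
#include <linux/sched.h>	/* schedule_timeout_uninterruptible() */

/* Back off for roughly one jiffy before retrying a __GFP_NOFAIL vmalloc. */
static void nofail_backoff(void)
{
	/*
	 * Variants discussed above:
	 *   msleep(jiffies_to_msecs(1));  - convert one jiffy to ms first
	 *   msleep(1);                    - rely on msleep() rounding the 1ms
	 *                                   request up to at least one jiffy
	 * or simply ask the scheduler for one jiffy (what the diff below uses):
	 */
	schedule_timeout_uninterruptible(1);
}
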
I would really prefer if this were not the main point of arguing here.
Unless you feel strongly about msleep, I would go with schedule_timeout
here because it is a more widely used interface in the mm code and also
because relying on msleep's rounding behavior feels subtle to me. Here
is what I have staged now.

Are there any other concerns you see with this or other patches in the
series?

Thanks!
---
commit c1a7e40e6b56fed5b9e716de7055b77ea29d89d0
Author: Michal Hocko <mhocko@...e.com>
Date:   Wed Oct 20 10:12:45 2021 +0200

    fold me "mm/vmalloc: add support for __GFP_NOFAIL"

    Add a short sleep before retrying. 1 jiffy is a completely random
    timeout. Ideally the retry would wait for an explicit event - e.g. a
    change to the vmalloc space if the failure was caused by space
    fragmentation or depletion. But there are multiple different reasons
    to retry and this could become much more complex. Keep the retry
    simple for now and just sleep to prevent hogging CPUs.

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0fb5413d9239..a866db0c9c31 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2944,6 +2944,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	do {
 		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
 			page_shift);
+		if ((gfp_mask & __GFP_NOFAIL) && (ret < 0))
+			schedule_timeout_uninterruptible(1);
 	} while ((gfp_mask & __GFP_NOFAIL) && (ret < 0));
 
 	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
@@ -3034,8 +3035,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, vm_struct allocation failed",
 			real_size);
-		if (gfp_mask & __GFP_NOFAIL)
+		if (gfp_mask & __GFP_NOFAIL) {
+			schedule_timeout_uninterruptible(1);
 			goto again;
+		}
 		goto fail;
 	}
 
--
Michal Hocko
SUSE Labs