Message-ID: <9fdd44c2-a10e-23f0-a71c-bf8f3e6fc384@linux.intel.com>
Date: Tue, 30 Jul 2019 14:13:25 -0700
From: sathyanarayanan kuppuswamy
<sathyanarayanan.kuppuswamy@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>,
Uladzislau Rezki <urezki@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 1/1] mm/vmalloc.c: Fix percpu free VM area search
criteria
On 7/30/19 1:54 PM, Dave Hansen wrote:
> On 7/30/19 1:46 PM, Uladzislau Rezki wrote:
>>> + /*
>>> + * If required width exceeds current VA block, move
>>> + * base downwards and then recheck.
>>> + */
>>> + if (base + end > va->va_end) {
>>> + base = pvm_determine_end_from_reverse(&va, align) - end;
>>> + term_area = area;
>>> + continue;
>>> + }
>>> +
>>> /*
>>> * If this VA does not fit, move base downwards and recheck.
>>> */
>>> - if (base + start < va->va_start || base + end > va->va_end) {
>>> + if (base + start < va->va_start) {
>>> va = node_to_va(rb_prev(&va->rb_node));
>>> base = pvm_determine_end_from_reverse(&va, align) - end;
>>> term_area = area;
>>> --
>>> 2.21.0
>>>
>> I guess it is a NUMA-related issue, I mean when we have several
>> areas/sizes/offsets. Is that correct?
> I don't think NUMA has anything to do with it. The vmalloc() area
> itself doesn't have any NUMA properties I can think of. We don't, for
> instance, partition it into per-node areas that I know of.
>
> I did encounter this issue on a system with ~100 logical CPUs, which is
> a moderate amount these days.
I agree with Dave. I don't think this issue is related to NUMA. The
problem here is the logic we use to find an appropriate vm_area that
satisfies the offset and size requirements of the pcpu memory allocator.
In my test case, I can reproduce this issue if we make a request with
offset (ffff000000) and size (600000).
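
For reference, here is a simplified sketch (not the kernel code) of the
fit decision the patched loop makes for each candidate block. The struct
va_range, the enum and check_fit() below are hypothetical stand-ins for
vmap_area and the logic inside pcpu_get_vm_areas(); the real code also
realigns 'base' via pvm_determine_end_from_reverse() before rechecking.

    struct va_range {
            unsigned long va_start;
            unsigned long va_end;
    };

    enum fit_action {
            FIT_OK,                 /* [base+start, base+end) fits in this block */
            FIT_RETRY_SAME_VA,      /* width too large: lower base, recheck same block */
            FIT_MOVE_TO_PREV_VA     /* start below block: step to previous block */
    };

    static enum fit_action check_fit(unsigned long base, unsigned long start,
                                     unsigned long end,
                                     const struct va_range *va)
    {
            /*
             * New check from the patch: the required width alone does not
             * fit in this block, so base must move down before rechecking
             * against the *same* block.
             */
            if (base + end > va->va_end)
                    return FIT_RETRY_SAME_VA;

            /*
             * Remaining check: the start falls below this block, so the
             * search must continue in the previous block.
             */
            if (base + start < va->va_start)
                    return FIT_MOVE_TO_PREV_VA;

            return FIT_OK;
    }

The point of splitting the old combined check is that the two failure
modes need different recovery: exceeding va_end only requires lowering
base, while undershooting va_start requires stepping to the previous
block.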
>
--
Sathyanarayanan Kuppuswamy
Linux kernel developer