Message-ID: <833be473-a065-4402-f369-f946b6f4e312@oracle.com>
Date: Mon, 27 Jul 2020 19:42:59 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <songmuchun@...edance.com>
Cc: mhocko@...nel.org, rientjes@...gle.com, mgorman@...e.de,
walken@...gle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Jianchao Guo <guojianchao@...edance.com>
Subject: Re: [PATCH v3] mm/hugetlb: add mempolicy check in the reservation
routine
On 7/27/20 5:19 PM, Andrew Morton wrote:
> On Sat, 25 Jul 2020 16:07:49 +0800 Muchun Song <songmuchun@...edance.com> wrote:
>
>> In the reservation routine, we only check whether the cpuset meets
>> the memory allocation requirements, but we ignore the mempolicy of
>> the MPOL_BIND case. As a result, an mmap of hugetlb memory can succeed
>> while the subsequent memory allocation fails due to mempolicy
>> restrictions, and the process receives SIGBUS. This can be reproduced
>> by the following steps.
>>
>> 1) Compile the test case.
>> cd tools/testing/selftests/vm/
>> gcc map_hugetlb.c -o map_hugetlb
>>
>> 2) Pre-allocate huge pages. Suppose there are 2 NUMA nodes in the
>> system. Each node will pre-allocate one huge page.
>> echo 2 > /proc/sys/vm/nr_hugepages
>>
>> 3) Run the test case (mmap 4MB). We receive the SIGBUS signal.
>> numactl --membind=0 ./map_hugetlb 4
>>
>> With this patch applied, the mmap in step 3) will fail with
>> "mmap: Cannot allocate memory".
>
> This doesn't compile with CONFIG_NUMA=n - there is no implementation of
> get_task_policy().
>
> I think it needs more than a simple build fix - can we please rework
> the patch so that its impact (mainly code size) on non-NUMA machines is
> minimized?
I'll let Muchun see if there is a more elegant fix. However, a relatively
simple build fix such as the one below does not have much of an impact on
code size:
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8069ca47c18c..4bfbddfee0d3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3455,12 +3455,14 @@ static unsigned int allowed_mems_nr(struct hstate *h)
 {
 	int node;
 	unsigned int nr = 0;
-	struct mempolicy *mpol = get_task_policy(current);
-	nodemask_t *mpol_allowed;
+	nodemask_t *mpol_allowed = NULL;
 	unsigned int *array = h->free_huge_pages_node;
+#ifdef CONFIG_NUMA
+	struct mempolicy *mpol = get_task_policy(current);
 	gfp_t gfp_mask = htlb_alloc_mask(h);
 
 	mpol_allowed = policy_nodemask(gfp_mask, mpol);
+#endif
 
 	for_each_node_mask(node, cpuset_current_mems_allowed) {
 		if (!mpol_allowed ||
Here are the non-NUMA versions of the routine before Muchun's patch and
after.
Dump of assembler code for function cpuset_mems_nr:
0xffffffff8126a3a0 <+0>: callq 0xffffffff81060f80 <__fentry__>
0xffffffff8126a3a5 <+5>: xor %eax,%eax
0xffffffff8126a3a7 <+7>: mov %gs:0x17bc0,%rdx
0xffffffff8126a3b0 <+16>: testb $0x1,0x778(%rdx)
0xffffffff8126a3b7 <+23>: jne 0xffffffff8126a3ba <cpuset_mems_nr+26>
0xffffffff8126a3b9 <+25>: retq
0xffffffff8126a3ba <+26>: mov (%rdi),%eax
0xffffffff8126a3bc <+28>: retq
End of assembler dump.
Dump of assembler code for function allowed_mems_nr:
0xffffffff8126a3a0 <+0>: callq 0xffffffff81060f80 <__fentry__>
0xffffffff8126a3a5 <+5>: xor %eax,%eax
0xffffffff8126a3a7 <+7>: mov %gs:0x17bc0,%rdx
0xffffffff8126a3b0 <+16>: testb $0x1,0x778(%rdx)
0xffffffff8126a3b7 <+23>: jne 0xffffffff8126a3ba <allowed_mems_nr+26>
0xffffffff8126a3b9 <+25>: retq
0xffffffff8126a3ba <+26>: mov 0x6c(%rdi),%eax
0xffffffff8126a3bd <+29>: retq
End of assembler dump.
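
For completeness, here is roughly how allowed_mems_nr() reads with the
build fix applied. The tail of the function is not visible in the hunk
above, so the loop body below is a sketch of the intended check (count a
node's free huge pages only when the mempolicy does not exclude it)
rather than a verbatim copy of Muchun's patch:

static unsigned int allowed_mems_nr(struct hstate *h)
{
	int node;
	unsigned int nr = 0;
	nodemask_t *mpol_allowed = NULL;
	unsigned int *array = h->free_huge_pages_node;
#ifdef CONFIG_NUMA
	struct mempolicy *mpol = get_task_policy(current);
	gfp_t gfp_mask = htlb_alloc_mask(h);

	/* Non-NULL only when the policy restricts allowed nodes (MPOL_BIND) */
	mpol_allowed = policy_nodemask(gfp_mask, mpol);
#endif

	for_each_node_mask(node, cpuset_current_mems_allowed) {
		/* Skip nodes the task's mempolicy does not allow */
		if (!mpol_allowed || node_isset(node, *mpol_allowed))
			nr += array[node];
	}

	return nr;
}

With CONFIG_NUMA=n, mpol_allowed stays NULL, so the function degenerates
to summing free_huge_pages_node[] over cpuset_current_mems_allowed, which
is why the assembly above is nearly identical to the old cpuset_mems_nr().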
--
Mike Kravetz