lists.openwall.net — Open Source and information security mailing list archives
Date: Thu, 30 Aug 2018 10:41:56 -0400
From: jglisse@...hat.com
To: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org,
	Ralph Campbell <rcampbell@...dia.com>,
	Jérôme Glisse <jglisse@...hat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
	Balbir Singh <bsingharora@...il.com>,
	stable@...r.kernel.org
Subject: [PATCH 3/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v2

From: Ralph Campbell <rcampbell@...dia.com>

Private ZONE_DEVICE pages use a special pte entry and thus are not
present. Properly handle this case in map_pte(); it is already handled
in check_pte(), but the map_pte() part was most probably lost in a
rebase.

Without this patch the slow migration path cannot migrate private
ZONE_DEVICE memory back to regular memory. This was found after stress
testing migration back to system memory. It can ultimately lead the CPU
into an infinite page fault loop on the special swap entry.

Changes since v1:
  - properly lock pte directory in map_pte()

Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Balbir Singh <bsingharora@...il.com>
Cc: stable@...r.kernel.org
---
 mm/page_vma_mapped.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae3c2a35d61b..bd67e23dce33 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -21,7 +21,14 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 			if (!is_swap_pte(*pvmw->pte))
 				return false;
 		} else {
-			if (!pte_present(*pvmw->pte))
+			if (is_swap_pte(*pvmw->pte)) {
+				swp_entry_t entry;
+
+				/* Handle un-addressable ZONE_DEVICE memory */
+				entry = pte_to_swp_entry(*pvmw->pte);
+				if (!is_device_private_entry(entry))
+					return false;
+			} else if (!pte_present(*pvmw->pte))
 				return false;
 		}
 	}
--
2.17.1