Message-ID: <lsq.1500213406.195321499@decadent.org.uk>
Date: Sun, 16 Jul 2017 14:56:46 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Janosch Frank" <frankja@...ux.vnet.ibm.com>,
"Martin Schwidefsky" <schwidefsky@...ibm.com>,
"Christian Borntraeger" <borntraeger@...ibm.com>
Subject: [PATCH 3.16 005/178] KVM: s390: Fix guest migration for huge
guests resulting in panic
3.16.46-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Janosch Frank <frankja@...ux.vnet.ibm.com>
commit 2e4d88009f57057df7672fa69a32b5224af54d37 upstream.
While we technically cannot run huge page guests right now, we can
set up a guest with huge pages. Trying to migrate it will trigger a
VM_BUG_ON and, if the kernel is not configured to panic on a BUG, it
will happily try to work on non-existent page table entries.
With this patch, we always return "dirty" if we encounter a large page
when migrating. This at least fixes the immediate problem until we
have proper handling for both kinds of pages.
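A consolidated sketch of the idea, for readers skimming the hunk further
down: walk the pgd/pud/pmd levels by hand and, if the pmd maps a large
page, report the page as dirty instead of descending into a page table
that does not exist. This is only a reading aid drawn from the diff
below, not extra code; the PTE-level test-and-clear after the walk is
unchanged by the patch and omitted here.

bool gmap_test_and_clear_dirty(unsigned long address, struct gmap *gmap)
{
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;

	/* Walk (allocating if necessary) the upper levels of the table. */
	pgd = pgd_offset(gmap->mm, address);
	pud = pud_alloc(gmap->mm, pgd, address);
	if (!pud)
		return false;
	pmd = pmd_alloc(gmap->mm, pud, address);
	if (!pmd)
		return false;

	/* A large pmd has no page table beneath it: just report "dirty". */
	if (pmd_large(*pmd))
		return true;

	/* ... the existing locked PTE handling continues here ... */
}

Returning "true" for a large pmd over-reports dirty pages, but at worst
that costs some extra copying during migration; it avoids the
VM_BUG_ON/panic and the walk over non-existent page table entries
described above.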
Fixes: 15f36eb ("KVM: s390: Add proper dirty bitmap support to S390 kvm.")
Signed-off-by: Janosch Frank <frankja@...ux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@...ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@...ibm.com>
[bwh: Backported to 3.16:
- Use gmap->mm, address and pte instead of mm, addr and ptep respectively
- Adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
arch/s390/mm/pgtable.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -1411,11 +1411,28 @@ EXPORT_SYMBOL_GPL(s390_enable_skey);
*/
bool gmap_test_and_clear_dirty(unsigned long address, struct gmap *gmap)
{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
pte_t *pte;
spinlock_t *ptl;
bool dirty = false;
- pte = get_locked_pte(gmap->mm, address, &ptl);
+ pgd = pgd_offset(gmap->mm, address);
+ pud = pud_alloc(gmap->mm, pgd, address);
+ if (!pud)
+ return false;
+ pmd = pmd_alloc(gmap->mm, pud, address);
+ if (!pmd)
+ return false;
+ /* We can't run guests backed by huge pages, but userspace can
+ * still set them up and then try to migrate them without any
+ * migration support.
+ */
+ if (pmd_large(*pmd))
+ return true;
+
+ pte = pte_alloc_map_lock(gmap->mm, pmd, address, &ptl);
if (unlikely(!pte))
return false;