Message-ID: <F05A63B8-4070-4528-B8A1-DAA66EBF94D9@vmware.com>
Date: Wed, 26 Apr 2017 23:56:22 +0000
From: Nadav Amit <namit@...are.com>
To: Andy Lutomirski <luto@...nel.org>,
"x86@...nel.org" <x86@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Borislav Petkov" <bp@...en8.de>, Rik van Riel <riel@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
Michal Hocko <mhocko@...e.com>,
Sasha Levin <sasha.levin@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3 1/4] x86/vm86/32: Switch to flush_tlb_mm_range() in
mark_screen_rdonly()
It may be benign, but I don’t think that flushing the TLB without
holding the ptl or the mmap_sem (for no apparent reason) is a good
practice.
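For illustration only (a minimal sketch, not what the patch as posted does): the ranged flush could be issued before dropping mmap_sem, e.g.:

	pte_unmap_unlock(pte, ptl);
out:
	/* Sketch: flush the 32-page VGA window while mmap_sem is still
	 * held, so the flush is ordered against concurrent changes to
	 * the mm rather than happening after the lock is released. */
	flush_tlb_mm_range(mm, 0xA0000, 0xA0000 + 32*PAGE_SIZE, 0UL);
	up_write(&mm->mmap_sem);
}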
On 4/22/17, 12:01 AM, "Andy Lutomirski" <luto@...nel.org> wrote:
mark_screen_rdonly() is the last remaining caller of flush_tlb().
flush_tlb_mm_range() is potentially faster and isn't obsolete.
Compile-tested only because I don't know whether software that uses
this mechanism even exists.
Cc: Rik van Riel <riel@...hat.com>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Nadav Amit <namit@...are.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Sasha Levin <sasha.levin@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Andy Lutomirski <luto@...nel.org>
---
arch/x86/kernel/vm86_32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
index 23ee89ce59a9..3eda76b3c835 100644
--- a/arch/x86/kernel/vm86_32.c
+++ b/arch/x86/kernel/vm86_32.c
@@ -193,7 +193,7 @@ static void mark_screen_rdonly(struct mm_struct *mm)
 	pte_unmap_unlock(pte, ptl);
 out:
 	up_write(&mm->mmap_sem);
-	flush_tlb();
+	flush_tlb_mm_range(mm, 0xA0000, 0xA0000 + 32*PAGE_SIZE, 0UL);
 }
--
2.9.3
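(For reference, a sketch of the call being substituted, assuming the prototype as declared in arch/x86/include/asm/tlbflush.h in kernels of this era:)

/* Sketch: ranged flush used by the patch. The last argument is a VM_*
 * flag hint (e.g. VM_HUGETLB); 0UL here since no hint applies. */
extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
			       unsigned long end, unsigned long vmflag);

/* Flushes only the 32-page VGA window at 0xA0000 for this mm, instead
 * of flushing the entire TLB as flush_tlb() did. */
flush_tlb_mm_range(mm, 0xA0000, 0xA0000 + 32*PAGE_SIZE, 0UL);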