Message-ID: <ZeppLlDeTro6zpIg@google.com>
Date: Fri, 8 Mar 2024 01:26:06 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Thomas Gleixner <tglx@...utronix.de>, 
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Peter Zijlstra <peterz@...radead.org>, 
	Andy Lutomirski <luto@...nel.org>, "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, x86@...nel.org, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/3] x86/mm: make sure LAM is up-to-date during
 context switching

On Thu, Mar 07, 2024 at 10:29:29PM +0000, Yosry Ahmed wrote:
> On Thu, Mar 07, 2024 at 01:39:53PM -0800, Dave Hansen wrote:
> > On 3/7/24 13:04, Yosry Ahmed wrote:
> > > I thought about doing inc_mm_tlb_gen() when LAM is updated, but it felt
> > > hacky and more importantly doesn't make it clear in switch_mm_irqs_off()
> > > that we correctly handle LAM updates. We can certainly add a comment,
> > > but I think an explicit check for CPU LAM vs. mm LAM is much clearer.
> > > 
> > > WDYT?
> > 
> > The mm generations are literally there so that if the mm changes that
> > all the CPUs know they need an update.  Changing LAM enabling is 100%
> > consistent with telling other CPUs that they need an update.
> > 
> > I'd be curious if Andy feels differently though.
> 
> The mm generations are TLB-specific and all the code using them assumes
> as much (e.g. look at the checks in switch_mm_irqs_off() when prev ==
> next). We can go around and update comments and/or function names to
> make them more generic, but this seems excessive. If we don't, the code
> becomes less clear imo.
> 
> I agree that the use case here is essentially the same (let other
> CPUs know they need to write CR3), but I still think that since the LAM
> case is just a simple one-time enablement, an explicit check in
> switch_mm_irqs_off() would be clearer.
> 
> Just my 2c, let me know what you prefer :)
> 
> > 
> > >> Considering how fun this code path is, a little effort at an actual
> > >> reproduction would be really appreciated.
> > > 
> > > I tried reproducing it but gave up quickly. We need a certain sequence
> > > of events to happen:
> > > 
> > > CPU 1					CPU 2
> > > kthread_use_mm()
> > > 					/* user thread enables LAM */
> > > 					context_switch()
> > > context_switch() /* to user thread */
> > 
> > First, it would be fine to either create a new kthread for reproduction
> > purposes or to hack an existing one.  For instance, have the LAM
> > prctl() take an extra ref on the mm and stick it in a global variable:
> > 
> > 	mmgrab(current->mm);
> > 	global_mm = current->mm;
> > 
> > Then in the kthread, grab the mm and use it:
> > 
> > 	while (!global_mm);
> > 	kthread_use_mm(global_mm);
> > 	... check for the race
> > 	mmdrop(global_mm);
> > 
> > You can also hackily wait for the thread to move with a stupid spin loop:
> > 
> > 	while (smp_processor_id() != 1);
> > 
> > and then actually move it with sched_setaffinity() from userspace.  That
> > can make it easier to get that series of events to happen in lockstep.
> 
> I will take a stab at doing something similar and let you know, thanks.

I came up with a kernel patch that I *think* may reproduce the problem
with enough iterations. Userspace only needs to enable LAM, so I think
the selftest can be enough to trigger it.
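
For reference, the userspace side of the trigger is just the LAM arch_prctl()
(untested sketch; assumes ARCH_ENABLE_TAGGED_ADDR with LAM_U57's 6 tag bits,
same as what the existing selftest requests):

	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdio.h>
	#include <asm/prctl.h>		/* ARCH_ENABLE_TAGGED_ADDR */

	static void enable_lam_u57(void)
	{
		/* Request 6 tag bits (LAM_U57); this ends up in
		 * prctl_enable_tagged_addr() below. */
		if (syscall(SYS_arch_prctl, ARCH_ENABLE_TAGGED_ADDR, 6))
			perror("arch_prctl(ARCH_ENABLE_TAGGED_ADDR)");
	}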

However, there is no hardware with LAM at my disposal, and IIUC I cannot
use QEMU without KVM to run a kernel with LAM. I was planning to do more
testing before sending a non-RFC version, but apparently I cannot do
any testing beyond building at this point (including reproducing) :/

Let me know how you want to proceed. I can send a non-RFC v1 based on
the feedback I got on the RFC, but it will only be build tested.

For the record, here is the diff that I *think* may reproduce the bug:

diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 33b268747bb7b..c37a8c26a3c21 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -750,8 +750,25 @@ static long prctl_map_vdso(const struct vdso_image *image, unsigned long addr)
 
 #define LAM_U57_BITS 6
 
+static int kthread_fn(void *_mm)
+{
+	struct mm_struct *mm = _mm;
+
+	/*
+	 * Wait for LAM to be enabled then schedule. Hopefully we will context
+	 * switch directly into the task that enabled LAM due to CPU pinning.
+	 */
+	kthread_use_mm(mm);
+	while (!test_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags));
+	schedule();
+	return 0;
+}
+
 static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
 {
+	struct task_struct *kthread_task;
+	int kthread_cpu;
+
 	if (!cpu_feature_enabled(X86_FEATURE_LAM))
 		return -ENODEV;
 
@@ -782,10 +799,22 @@ static int prctl_enable_tagged_addr(struct mm_struct *mm, unsigned long nr_bits)
 		return -EINVAL;
 	}
 
+	/* Pin the task to the current CPU */
+	set_cpus_allowed_ptr(current, cpumask_of(smp_processor_id()));
+
+	/* Run a kthread on another CPU and wait for it to start */
+	kthread_cpu = cpumask_next_wrap(smp_processor_id(), cpu_online_mask, 0, false);
+	kthread_task = kthread_run_on_cpu(kthread_fn, mm, kthread_cpu, "lam_repro_kthread");
+	while (!task_is_running(kthread_task));
+
 	write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
 	set_tlbstate_lam_mode(mm);
 	set_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags);
 
+	/* Move the task to the kthread CPU */
+	set_cpus_allowed_ptr(current, cpumask_of(kthread_cpu));
+
 	mmap_write_unlock(mm);
 
 	return 0;
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 51f9f56941058..3afb53f1a1901 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -593,7 +593,7 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
 		if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
 				next_tlb_gen)
-			return;
+			BUG_ON(new_lam != tlbstate_lam_cr3_mask());
 
 		/*
 		 * TLB contents went out of date while we were in lazy

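For completeness, the inc_mm_tlb_gen() alternative discussed above would
roughly amount to the following on top of prctl_enable_tagged_addr()
(untested sketch, not part of this patch):

 	write_cr3(__read_cr3() | mm->context.lam_cr3_mask);
 	set_tlbstate_lam_mode(mm);
 	set_bit(MM_CONTEXT_LOCK_LAM, &mm->context.flags);
+	/*
+	 * Bump the mm's TLB generation so CPUs lazily running this mm fall
+	 * into the "TLB contents went out of date" path in
+	 * switch_mm_irqs_off() and rewrite CR3 with the new LAM mask.
+	 */
+	inc_mm_tlb_gen(mm);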
