Message-ID: <20140902125810.GB18534@redhat.com>
Date:	Tue, 2 Sep 2014 14:58:10 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	Suresh Siddha <sbsiddha@...il.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Al Viro <viro@...iv.linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Fenghua Yu <fenghua.yu@...el.com>,
	Bean Anderson <bean@...lsystems.com>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] x86, fpu: irq_fpu_usable: kill all checks except
	!in_kernel_fpu

On 09/02, Suresh Siddha wrote:
>
> On Fri, Aug 29, 2014 at 11:17 AM, Oleg Nesterov <oleg@...hat.com> wrote:
> > ONCE AGAIN, THIS IS MORE A QUESTION THAN A PATCH.
>
> This patch, I think, needs more thought for sure. Please see below.

Of course.

> > interrupted_kernel_fpu_idle() does:
> >
> >         if (use_eager_fpu())
> >                 return true;
> >
> >         return !__thread_has_fpu(current) &&
> >                 (read_cr0() & X86_CR0_TS);
> >
> > and it is absolutely not clear why these 2 cases differ so much.
> >
> > To recap, the use_eager_fpu() case is buggy: __save_init_fpu() in
> > __kernel_fpu_begin() can race with math_state_restore(), which does
> > __thread_fpu_begin() + restore_fpu_checking(). So we should fix this
> > race anyway, and we can't require __thread_has_fpu() == false like the
> > !use_eager_fpu() case does; otherwise kernel_fpu_begin() will not
> > work when it interrupts the idle thread (this would reintroduce the
> > performance regression fixed by 5187b28f).
> >
> > Probably math_state_restore() can use kernel_fpu_disable/enable(),
> > which sets/clears in_kernel_fpu, or it can disable irqs. Either way,
> > we should fix this bug.
> >
> > And once we fix this bug, why does the !use_eager_fpu() case still
> > need the much stricter check? Why can't we handle the
> > __thread_has_fpu(current) case the same way?
> >
> > The comment deleted by this change says:
> >
> >         and TS must be set so that the clts/stts pair does nothing
> >
> > and could explain the difference, but I cannot understand it (again,
> > assuming that we fix the race(s) mentioned above).
> >
> > Take user_fpu_begin(), say. Yes, kernel_fpu_begin/end() can restore
> > X86_CR0_TS. But shouldn't that be fine?
>
> No. The reason is that has_fpu state and cr0.TS can get out of sync.
>
> Let's say you get an interrupt after the clts() in __thread_fpu_begin()
> called as part of user_fpu_begin().
>
> Because of this proposed change, irq_fpu_usable() returns true, so the
> interrupt can end up using the FPU, and kernel_fpu_end() will set
> cr0.TS again. After the return from the interrupt we resume execution
> at __thread_set_has_fpu(), so from that point on has_fpu is set but
> cr0.TS is set as well. Now any schedule() in this state (say,
> immediately after the preempt_enable() at the end of user_fpu_begin())
> is dangerous: we can get a DNA (device-not-available) fault in the
> middle of __switch_to(), which can lead to subtle bugs.

Thanks. Yes, I thought about this race, but I didn't realize that a DNA
fault inside __switch_to() is dangerous.

Thanks a lot. OK, this is trivially fixable. I had this v2 in mind from
the very beginning because I was not really sure that an unconditional
stts() in kernel_fpu_end() is safe (though I didn't realize why). We just
need to record X86_CR0_TS in the same per-cpu in_kernel_fpu:

	static DEFINE_PER_CPU(int, in_kernel_fpu);

	void __kernel_fpu_begin(void)
	{
		struct task_struct *me = current;

		this_cpu_write(in_kernel_fpu, 1);

		if (__thread_has_fpu(me)) {
			__save_init_fpu(me);
		} else if (!use_eager_fpu()) {
			this_cpu_write(fpu_owner_task, NULL);
			/* remember that TS was set so __kernel_fpu_end() can restore it */
			if (read_cr0() & X86_CR0_TS) {
				this_cpu_write(in_kernel_fpu, 2);
				clts();
			}
		}
	}

	void __kernel_fpu_end(void)
	{
		struct task_struct *me = current;

		if (__thread_has_fpu(me)) {
			if (WARN_ON(restore_fpu_checking(me)))
				drop_init_fpu(me);
		} else if (!use_eager_fpu()) {
			/* set TS back only if __kernel_fpu_begin() cleared it */
			if (this_cpu_read(in_kernel_fpu) == 2)
				stts();
		}

		this_cpu_write(in_kernel_fpu, 0);
	}

Or, perhaps better, we can turn user_fpu_begin()->preempt_disable()
into kernel_fpu_disable().
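
Something like this, perhaps. Just a sketch, relying on the fact that
after this series irq_fpu_usable() checks nothing but !in_kernel_fpu:

	static void kernel_fpu_disable(void)
	{
		/* must not nest with __kernel_fpu_begin() or itself */
		WARN_ON(this_cpu_read(in_kernel_fpu));
		this_cpu_write(in_kernel_fpu, 1);
	}

	static void kernel_fpu_enable(void)
	{
		this_cpu_write(in_kernel_fpu, 0);
	}

While kernel_fpu_disable() is in effect on this CPU, irq_fpu_usable()
returns false, so an interrupt can no longer enter __kernel_fpu_begin()
and race with us.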

Do you think this can work?

> Other than this patch, the rest of the changes look OK to me. Can you
> please resend this patchset with the math_state_restore() race
> addressed as well?

OK, will do, but probably next week.

Would you agree if I add kernel_fpu_disable/enable() to fix this race?
It can probably have more users (see above); for example, something like
the sketch below in math_state_restore() itself.
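
Again just a sketch of only the part that matters here, error handling
omitted, assuming the kernel_fpu_disable/enable() above:

	void math_state_restore(void)
	{
		struct task_struct *tsk = current;

		/* make this non-reentrant wrt. __kernel_fpu_begin() from irq */
		kernel_fpu_disable();

		__thread_fpu_begin(tsk);
		if (unlikely(restore_fpu_checking(tsk)))
			drop_init_fpu(tsk);

		kernel_fpu_enable();
	}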

Thanks!

Oleg.
