Message-ID: <20201124175242.GE1021337@google.com>
Date: Tue, 24 Nov 2020 12:52:42 -0500
From: Joel Fernandes <joel@...lfernandes.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Nishanth Aravamudan <naravamudan@...italocean.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Vineeth Pillai <viremana@...ux.microsoft.com>,
Aaron Lu <aaron.lwe@...il.com>,
Aubrey Li <aubrey.intel@...il.com>, tglx@...utronix.de,
linux-kernel@...r.kernel.org, mingo@...nel.org,
torvalds@...ux-foundation.org, fweisbec@...il.com,
keescook@...omium.org, kerrnel@...gle.com,
Phil Auld <pauld@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>, vineeth@...byteword.org,
Chen Yu <yu.c.chen@...el.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Agata Gruza <agata.gruza@...el.com>,
Antonio Gomez Iglesias <antonio.gomez.iglesias@...el.com>,
graf@...zon.com, konrad.wilk@...cle.com, dfaggioli@...e.com,
pjt@...gle.com, rostedt@...dmis.org, derkling@...gle.com,
benbjiang@...cent.com,
Alexandre Chartre <alexandre.chartre@...cle.com>,
James.Bottomley@...senpartnership.com, OWeisse@...ch.edu,
Dhaval Giani <dhaval.giani@...cle.com>,
Junaid Shahid <junaids@...gle.com>, jsbarnes@...gle.com,
chris.hyser@...cle.com, Ben Segall <bsegall@...gle.com>,
Josh Don <joshdon@...gle.com>, Hao Luo <haoluo@...gle.com>,
Tom Lendacky <thomas.lendacky@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Tim Chen <tim.c.chen@...el.com>,
"Paul E . McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH -tip 18/32] kernel/entry: Add support for core-wide
protection of kernel-mode
Hi Peter,
On Tue, Nov 24, 2020 at 05:09:06PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 17, 2020 at 06:19:48PM -0500, Joel Fernandes (Google) wrote:
> > Core-scheduling prevents hyperthreads in usermode from attacking each
> > other, but it does not do anything about one of the hyperthreads
> > entering the kernel for any reason. This leaves the door open for MDS
> > and L1TF attacks with concurrent execution sequences between
> > hyperthreads.
> >
> > This patch therefore adds support for protecting all syscall and IRQ
> > kernel mode entries. Care is taken to track the outermost usermode exit
> > and entry using per-cpu counters. In cases where one of the hyperthreads
> > enters the kernel, no additional IPIs are sent. Further, IPIs are avoided
> > when not needed; for example, idle and non-cookie HTs do not need to be
> > forced into kernel mode.
> >
> > More information about attacks:
> > For MDS, it is possible for syscalls, IRQ and softirq handlers to leak
> > data to either host or guest attackers. For L1TF, it is possible to leak
> > to guest attackers. There is no possible mitigation involving flushing
> > of buffers to avoid this, since the execution of attacker and victim
> > happens concurrently on 2 or more HTs.
>
> Oh gawd; this is horrible...
Yeah, it's the same issue we discussed earlier this year :)
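To recap the mechanism from the changelog: a per-CPU nesting depth is
tracked so that only the outermost kernel entry/exit touches the core-wide
"unsafe" count. A minimal sketch of that idea (names with a _demo suffix
are illustrative, and the global atomic stands in for the rq-lock
serialization the actual patch uses):

	/* Illustrative only; the real patch keeps the core-wide count
	 * per core, under the rq lock, rather than in a global atomic. */
	static atomic_t core_unsafe_nest_demo = ATOMIC_INIT(0);
	static DEFINE_PER_CPU(unsigned int, kernel_depth_demo);

	void demo_unsafe_enter(void)
	{
		/* Only the outermost kernel entry on this CPU bumps the
		 * core-wide count. */
		if (this_cpu_inc_return(kernel_depth_demo) == 1)
			atomic_inc(&core_unsafe_nest_demo);
	}

	void demo_unsafe_exit(void)
	{
		/* Only the outermost return to usermode drops it. */
		if (this_cpu_dec_return(kernel_depth_demo) == 0)
			atomic_dec(&core_unsafe_nest_demo);
	}
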
> > +bool sched_core_wait_till_safe(unsigned long ti_check)
> > +{
> > + bool restart = false;
> > + struct rq *rq;
> > + int cpu;
> > +
> > + /* We clear the thread flag only at the end, so no need to check for it. */
> > + ti_check &= ~_TIF_UNSAFE_RET;
> > +
> > + cpu = smp_processor_id();
> > + rq = cpu_rq(cpu);
> > +
> > + if (!sched_core_enabled(rq))
> > + goto ret;
> > +
> > + /* Down grade to allow interrupts to prevent stop_machine lockups.. */
> > + preempt_disable();
> > + local_irq_enable();
> > +
> > + /*
> > + * Wait till the core of this HT is not in an unsafe state.
> > + *
> > + * Pair with raw_spin_lock/unlock() in sched_core_unsafe_enter/exit().
> > + */
> > + while (smp_load_acquire(&rq->core->core_unsafe_nest) > 0) {
> > + cpu_relax();
> > + if (READ_ONCE(current_thread_info()->flags) & ti_check) {
> > + restart = true;
> > + break;
> > + }
> > + }
>
> What's that ACQUIRE for?
I was concerned about something like the following on a weakly-ordered arch:

Kernel                                    Attacker
------                                    --------
write unsafe=1

kernel code does stores                   while (unsafe == 1); (ACQUIRE)
        ^                                         ^
        | needs to be ordered                     | needs to be ordered
        v                                         v
write unsafe=0 (RELEASE)                  Attacker code.
Here, I want the accesses made by kernel code to be ordered with respect to
the write to the unsafe nesting counter, so that the attacker code does not
see those accesses after it observes the counter going to zero.
It could be argued it's a theoretical concern, but I wanted to play it safe.
In particular, the existing uarch buffer flushing before entering the
attacker code might already make it impossible in practice for the attacker
to do anything bad, even without the additional memory barriers.
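To spell out the pairing in litmus form (a hedged sketch; "secret" and
"unsafe_nest" are stand-in names, and in the actual patch the release side
comes from the unlock in sched_core_unsafe_exit()):

	static unsigned long secret;      /* stand-in for kernel data    */
	static unsigned int unsafe_nest;  /* stand-in for the nest count */

	/* Victim HT, on its way out of the kernel: */
	WRITE_ONCE(secret, 42);                  /* kernel stores         */
	smp_store_release(&unsafe_nest, 0);      /* RELEASE: orders them  */

	/* Sibling HT, spinning before returning to the attacker: */
	while (smp_load_acquire(&unsafe_nest) > 0)  /* ACQUIRE */
		cpu_relax();
	/* Attacker code runs here; its loads cannot be hoisted above the
	 * point where unsafe_nest is observed as 0. */
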
> > +
> > + /* Upgrade it back to the expectations of entry code. */
> > + local_irq_disable();
> > + preempt_enable();
> > +
> > +ret:
> > + if (!restart)
> > + clear_tsk_thread_flag(current, TIF_UNSAFE_RET);
> > +
> > + return restart;
> > +}
>
> So if TIF_NEED_RESCHED gets set, we'll break out and reschedule, cute.
Thanks.
> > + /* Do nothing more if the core is not tagged. */
> > + if (!rq->core->core_cookie)
> > + goto unlock;
> > +
> > + for_each_cpu(i, smt_mask) {
> > + struct rq *srq = cpu_rq(i);
> > +
> > + if (i == cpu || cpu_is_offline(i))
> > + continue;
> > +
> > + if (!srq->curr->mm || is_task_rq_idle(srq->curr))
> > + continue;
> > +
> > + /* Skip if HT is not running a tagged task. */
> > + if (!srq->curr->core_cookie && !srq->core_pick)
> > + continue;
> > +
> > + /*
> > + * Force sibling into the kernel by IPI. If work was already
> > + * pending, no new IPIs are sent. This is Ok since the receiver
> > + * would already be in the kernel, or on its way to it.
> > + */
> > + irq_work_queue_on(&srq->core_irq_work, i);
>
> Why irq_work though? Why not smp_send_reschedule(i)?
I tried this approach. The main issue is avoiding a new IPI when a previous
one was already sent: I needed some mechanism (shared variables) to detect
that a kick is still in flight and suppress the duplicate.
With irq_work, I get irq_work_is_busy(), which helps with exactly that.
With the other approaches I tried, I ran into issues with CSD locking, and
it turns out irq_work already implements the CSD locking the way I wanted.
So, rather than reinvent the wheel, I stuck with irq_work and it worked well.
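For reference, roughly what the kick side looks like (a sketch with
illustrative names; irq_work_queue_on() claims the work item internally and
returns false if it is still pending, which is exactly the duplicate-IPI
suppression above):

	#include <linux/irq_work.h>
	#include <linux/percpu.h>

	static void core_kick_fn(struct irq_work *work)
	{
		/* The handler can be empty: taking the IPI is what forces
		 * the sibling into kernel entry, where it does the wait. */
	}

	static DEFINE_PER_CPU(struct irq_work, core_kick) =
		IRQ_WORK_INIT(core_kick_fn);

	static void kick_sibling(int cpu)
	{
		/* Returns false if the previous kick is still pending, so
		 * no second IPI is sent while one is in flight. */
		irq_work_queue_on(per_cpu_ptr(&core_kick, cpu), cpu);
	}
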
thanks,
- Joel