Message-ID: <20160916153131.GG5016@twins.programming.kicks-ass.net>
Date: Fri, 16 Sep 2016 17:31:31 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>,
Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
Borislav Petkov <bp@...en8.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Brian Gerst <brgerst@...il.com>, Jann Horn <jann@...jh.net>,
Ingo Molnar <mingo@...nel.org>, live-patching@...r.kernel.org
Subject: Re: [PATCH 08/12] x86/dumpstack: Pin the target stack in
save_stack_trace_tsk()
On Fri, Sep 16, 2016 at 08:12:40AM -0700, Andy Lutomirski wrote:
> On Fri, Sep 16, 2016 at 12:47 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Thu, Sep 15, 2016 at 02:19:38PM -0500, Josh Poimboeuf wrote:
> >> My idea was to use task_rq_lock() to lock the runqueue and then check
> >> tsk->on_cpu. I think Peter wasn't too keen on it.
> >
> > That basically allows a DoS on the scheduler, since a user can run tasks
> > on every cpu (through sys_sched_setaffinity()). Then doing while (1) cat
> > /proc/$PID/stack would saturate the rq->lock on every CPU.
> >
> > The more tasks the merrier.
>
> Is this worse than it would be if this code used preempt_disable()
> (which I think it did until very recently)?
Much worse, since the proposed task_rq_lock() not only disables
preemption, it also disables IRQs and takes 2 locks. And hogging the
rq->lock impairs other tasks' ability to schedule.