Message-ID: <CALCETrW8=+ZeBPVO4BU86ENC0D5Q6_FcR44Bryosz-RdpAqnqQ@mail.gmail.com>
Date: Fri, 16 Sep 2016 08:32:44 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>,
Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
Borislav Petkov <bp@...en8.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Brian Gerst <brgerst@...il.com>, Jann Horn <jann@...jh.net>,
Ingo Molnar <mingo@...nel.org>, live-patching@...r.kernel.org
Subject: Re: [PATCH 08/12] x86/dumpstack: Pin the target stack in save_stack_trace_tsk()
On Fri, Sep 16, 2016 at 8:31 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Fri, Sep 16, 2016 at 08:12:40AM -0700, Andy Lutomirski wrote:
>> On Fri, Sep 16, 2016 at 12:47 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>> > On Thu, Sep 15, 2016 at 02:19:38PM -0500, Josh Poimboeuf wrote:
>
>> >> My idea was to use task_rq_lock() to lock the runqueue and then check
>> >> tsk->on_cpu. I think Peter wasn't too keen on it.
>> >
>> > That basically allows a DoS on the scheduler, since a user can run tasks
>> > on every CPU (through sys_sched_setaffinity()). A loop like
>> > "while true; do cat /proc/$PID/stack; done" would then saturate the
>> > rq->lock on every CPU.
>> >
>> > The more tasks the merrier.
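Concretely, the rejected idea quoted above would look roughly like the
sketch below. Note that task_rq_lock()/task_rq_unlock() and struct
rq_flags are scheduler-internal, and walk_stack_of() is a hypothetical
stand-in for the actual unwinding code:

/*
 * Sketch only: pin the task off-CPU by holding its runqueue lock,
 * and walk the stack only if the task is not currently running.
 */
static void save_stack_trace_tsk_pinned(struct task_struct *tsk,
					struct stack_trace *trace)
{
	struct rq_flags rf;
	struct rq *rq;

	rq = task_rq_lock(tsk, &rf);
	if (!tsk->on_cpu)	/* can't start running while rq->lock is held */
		walk_stack_of(tsk, trace);
	task_rq_unlock(rq, tsk, &rf);
}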
>>
>> Is this worse than it would be if this code used preempt_disable()
>> (which I think it did until very recently)?
>
> Much worse, since the proposed task_rq_lock() not only disables
> preemption, it also disables IRQs and takes two locks. And hogging the
> rq->lock degrades other tasks' ability to schedule.
>
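For reference, task_rq_lock() has roughly the shape below (simplified
from kernel/sched/core.c around v4.8; the migration-retry details are
trimmed), which is where the IRQ-off section and the two locks come
from:

/* Simplified sketch of task_rq_lock() from kernel/sched/core.c. */
struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
{
	struct rq *rq;

	for (;;) {
		/* Lock 1, with IRQs off for the whole critical section: */
		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
		rq = task_rq(p);
		/* Lock 2: the runqueue the task currently belongs to: */
		raw_spin_lock(&rq->lock);
		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
			return rq;	/* stable: p can't migrate while held */
		raw_spin_unlock(&rq->lock);
		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
	}
}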
Fair enough.

I'm not sure I care quite enough about /proc/PID/stack to personally
dig through the scheduler and find a way to cleanly say "please don't
run this task for a little while".
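For context, the patch at the top of this thread sidesteps the
scheduler entirely: rather than preventing the task from running, it
pins the task's stack memory so a concurrent exit can't free it
mid-walk. A sketch of that shape (the actual unwinding call is
elided):

/*
 * Sketch of the approach in the patch under discussion: pin the stack
 * pages rather than the task.  The task may still run (so the trace
 * can be stale), but the walk can no longer touch freed memory.
 */
void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
{
	if (!try_get_task_stack(tsk))
		return;		/* task already exited; stack is gone */

	/* ... walk tsk's stack as before ... */

	put_task_stack(tsk);
}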