Message-ID: <CACT4Y+aNLHrYj1pYbkXO7CKESLeB-5enkSDK7ksgkMA3KtwJ+w@mail.gmail.com>
Date: Fri, 5 Jul 2019 17:48:31 +0200
From: Dmitry Vyukov <dvyukov@...gle.com>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: "Theodore Ts'o" <tytso@....edu>,
syzbot <syzbot+4bfbbf28a2e50ab07368@...kaller.appspotmail.com>,
Andreas Dilger <adilger.kernel@...ger.ca>,
David Miller <davem@...emloft.net>, eladr@...lanox.com,
Ido Schimmel <idosch@...lanox.com>,
Jiri Pirko <jiri@...lanox.com>,
John Stultz <john.stultz@...aro.org>,
linux-ext4@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
netdev <netdev@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: INFO: rcu detected stall in ext4_write_checks
On Fri, Jul 5, 2019 at 5:17 PM Paul E. McKenney <paulmck@...ux.ibm.com> wrote:
>
> On Fri, Jul 05, 2019 at 03:24:26PM +0200, Dmitry Vyukov wrote:
> > On Thu, Jun 27, 2019 at 12:47 AM Theodore Ts'o <tytso@....edu> wrote:
> > >
> > > More details about what is going on. First, it requires root, because
> > > one of the things required is using sched_setattr (which is enough to
> > > shoot yourself in the foot):
> > >
> > > sched_setattr(0, {size=0, sched_policy=0x6 /* SCHED_??? */, sched_flags=0, sched_nice=0, sched_priority=0, sched_runtime=2251799813724439, sched_deadline=4611686018427453437, sched_period=0}, 0) = 0
> > >
> > > This is setting the scheduler policy to be SCHED_DEADLINE, with a
> > > runtime parameter of 2251799.813724439 seconds (or 26 days) and a
> > > deadline of 4611686018.427453437 seconds (or 146 *years*). This means
> > > a particular kernel thread can run for up to 26 **days** before it is
> > > scheduled away, and if a kernel thread gets woken up or sent a signal,
> > > no worries, it will wake up after roughly seven times the interval that Rip
> > > Van Winkle spent snoozing in a cave in the Catskill Mountains (in
> > > Washington Irving's short story).
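
For reference, the call above corresponds roughly to the userspace sketch
below (sched_setattr has no glibc wrapper, so struct sched_attr and the raw
syscall are spelled out by hand; size is sizeof(attr) here rather than the 0
shown in the trace, everything else mirrors the strace line):

#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Local copy of the uapi struct sched_attr layout. */
struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;   /* ns */
        uint64_t sched_deadline;  /* ns */
        uint64_t sched_period;    /* ns */
};

int main(void)
{
        struct sched_attr attr = {
                .size           = sizeof(attr),
                .sched_policy   = 6,                      /* SCHED_DEADLINE */
                .sched_runtime  = 2251799813724439ULL,    /* ~26 days */
                .sched_deadline = 4611686018427453437ULL, /* ~146 years */
                /* sched_period left at 0, as in the trace */
        };

        /* pid 0 == calling thread; needs root / CAP_SYS_NICE. */
        if (syscall(__NR_sched_setattr, 0, &attr, 0))
                perror("sched_setattr");
        return 0;
}
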
> > >
> > > We then kick off a half-dozen threads all running:
> > >
> > > sendfile(fd, fd, &pos, 0x8080fffffffe);
> > >
> > > (and since count is a ridiculously large number, this gets cut down to):
> > >
> > > sendfile(fd, fd, &pos, 2147479552);
> > >
> > > Is it any wonder that we are seeing RCU stalls? :-)
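
(The cut-down comes from the kernel capping a single read/write at
MAX_RW_COUNT, i.e. INT_MAX rounded down to a page boundary. A quick sketch of
the arithmetic, assuming 4K pages; macro names are prefixed to avoid clashing
with libc ones:)

#include <limits.h>
#include <stdio.h>

/* Mirrors MAX_RW_COUNT from include/linux/fs.h (INT_MAX & PAGE_MASK). */
#define KPAGE_SIZE      4096UL
#define KPAGE_MASK      (~(KPAGE_SIZE - 1))
#define MAX_RW_COUNT    ((unsigned long)INT_MAX & KPAGE_MASK)

int main(void)
{
        unsigned long long requested = 0x8080fffffffeULL;   /* count passed to sendfile() */
        unsigned long capped = requested > MAX_RW_COUNT ?
                               MAX_RW_COUNT : (unsigned long)requested;

        printf("%lu\n", capped);   /* prints 2147479552 */
        return 0;
}
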
> >
> > +Peter, Ingo for sched_setattr and +Paul for rcu
> >
> > First of all: is this a semi-intended result of root (CAP_SYS_NICE)
> > doing a local DoS by abusing sched_setattr? It would be perfectly
> > reasonable for it to starve other processes, but I am not sure about
> > rcu. In the end, the high-prio process can use rcu itself, and then it
> > will simply exhaust system memory by stalling rcu. So it seems
> > that rcu stalls should not
> > happen as a result of weird sched_setattr values. If that is the case,
> > what needs to be fixed? sched_setattr? rcu? sendfile?
>
> Does the (untested, probably does not even build) patch shown below help?
> This patch assumes that the kernel was built with CONFIG_PREEMPT=n.
> And that I found all the tight loops on the do_sendfile() code path.
The config used when this happened is referenced from here:
https://syzkaller.appspot.com/bug?extid=4bfbbf28a2e50ab07368
and it contains:
CONFIG_PREEMPT=y
So... what does this mean? The loop should have been preempted without
the cond_resched() then, right?
> > If this is semi-intended, the only option I see is to disable
> > something in syzkaller: sched_setattr entirely, or drop CAP_SYS_NICE,
> > or ...? Any preference either way?
>
> Long-running tight loops in the kernel really should contain
> cond_resched() or better.
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> diff --git a/fs/splice.c b/fs/splice.c
> index 25212dcca2df..50aa3286764a 100644
> --- a/fs/splice.c
> +++ b/fs/splice.c
> @@ -985,6 +985,7 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
>                          sd->pos = prev_pos + ret;
>                          goto out_release;
>                  }
> +                cond_resched();
>          }
>
>  done:
>
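
To make the suggestion concrete: in a hypothetical long-running kernel loop
the pattern would look roughly like the sketch below (illustration only;
struct item and handle_one() are made-up placeholders, not code from any real
file):

#include <linux/sched.h>

struct item { void *data; };                    /* hypothetical */
static void handle_one(struct item *it) { }     /* hypothetical slow per-item work */

static void process_many_items(struct item *items, unsigned long n)
{
        unsigned long i;

        for (i = 0; i < n; i++) {
                handle_one(&items[i]);
                cond_resched();  /* voluntary preemption point: lets RCU and
                                  * other runnable tasks make progress */
        }
}
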