Date: Fri, 5 Jul 2019 08:16:58 -0700
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: "Theodore Ts'o" <tytso@....edu>,
	syzbot <syzbot+4bfbbf28a2e50ab07368@...kaller.appspotmail.com>,
	Andreas Dilger <adilger.kernel@...ger.ca>, David Miller <davem@...emloft.net>,
	eladr@...lanox.com, Ido Schimmel <idosch@...lanox.com>, Jiri Pirko <jiri@...lanox.com>,
	John Stultz <john.stultz@...aro.org>, linux-ext4@...r.kernel.org,
	LKML <linux-kernel@...r.kernel.org>, netdev <netdev@...r.kernel.org>,
	syzkaller-bugs <syzkaller-bugs@...glegroups.com>, Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>
Subject: Re: INFO: rcu detected stall in ext4_write_checks

On Fri, Jul 05, 2019 at 03:24:26PM +0200, Dmitry Vyukov wrote:
> On Thu, Jun 27, 2019 at 12:47 AM Theodore Ts'o <tytso@....edu> wrote:
> >
> > More details about what is going on.  First, it requires root, because
> > one of the things required is calling sched_setattr (which is enough to
> > shoot yourself in the foot):
> >
> > sched_setattr(0, {size=0, sched_policy=0x6 /* SCHED_??? */, sched_flags=0, sched_nice=0, sched_priority=0, sched_runtime=2251799813724439, sched_deadline=4611686018427453437, sched_period=0}, 0) = 0
> >
> > This sets the scheduler policy to SCHED_DEADLINE, with a runtime
> > parameter of 2251799.813724439 seconds (or 26 days) and a deadline of
> > 4611686018.427453437 seconds (or 146 *years*).  This means a particular
> > kernel thread can run for up to 26 **days** before it is scheduled
> > away, and if the thread gets woken up or sent a signal, no worries: it
> > will wake up within roughly seven times the interval that Rip Van
> > Winkle spent snoozing in a cave in the Catskill Mountains (in
> > Washington Irving's short story).
> >
> > We then kick off a half-dozen threads all running:
> >
> > sendfile(fd, fd, &pos, 0x8080fffffffe);
> >
> > (and since count is a ridiculously large number, this gets cut down to):
> >
> > sendfile(fd, fd, &pos, 2147479552);
> >
> > Is it any wonder that we are seeing RCU stalls?  :-)
>
> +Peter, Ingo for sched_setattr and +Paul for rcu
>
> First of all: is it a semi-intended result of a root (CAP_SYS_NICE)
> process doing a local DoS by abusing sched_setattr?  It would be
> perfectly reasonable for it to starve other processes, but I am not
> sure about rcu.  After all, the high-priority process can use rcu
> itself, and then it will simply exhaust system memory by stalling rcu.
> So it seems that rcu stalls should not happen as a result of weird
> sched_setattr values.  If that is the case, what needs to be fixed?
> sched_setattr?  rcu?  sendfile?

Does the (untested, probably does not even build) patch shown below
help?  This patch assumes that the kernel was built with
CONFIG_PREEMPT=n, and that I found all the tight loops on the
do_sendfile() code path.

> If this is semi-intended, the only option I see is to disable
> something in syzkaller: sched_setattr entirely, or drop CAP_SYS_NICE,
> or ...?  Any preference either way?

Long-running tight loops in the kernel really should contain
cond_resched() or better.

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/fs/splice.c b/fs/splice.c
index 25212dcca2df..50aa3286764a 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -985,6 +985,7 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
 			sd->pos = prev_pos + ret;
 			goto out_release;
 		}
+		cond_resched();
 	}
 
 done:
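
For reference, here is a minimal userspace sketch (not part of the report)
of roughly what the reproducer's SCHED_DEADLINE setting could look like
from C, using the runtime/deadline values quoted above.  The struct layout
follows the sched_setattr(2) man page; older glibc has no wrapper, so the
raw syscall is used.  Error handling and the SCHED_DEADLINE constant are
spelled out locally to keep the sketch self-contained.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SCHED_DEADLINE	6	/* value 0x6 from the strace output above */

/* Layout per sched_setattr(2); all times are in nanoseconds. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  = 2251799813724439ULL,    /* ~26 days  */
		.sched_deadline = 4611686018427453437ULL, /* ~146 years */
		.sched_period   = 0,                      /* period defaults to deadline */
	};

	/* pid 0 == calling thread; needs CAP_SYS_NICE (i.e. root). */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");
	return 0;
}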
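
Likewise, a hedged sketch of the sendfile() hammering described above:
a handful of threads repeatedly splicing a file onto itself with the
oversized count that the kernel clamps to 2147479552.  The filename,
thread count, and seed write are illustrative, not taken from the
syzkaller reproducer.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/sendfile.h>
#include <unistd.h>

static void *hammer(void *arg)
{
	int fd = *(int *)arg;
	off_t pos = 0;

	for (;;)
		/* count 0x8080fffffffe is clamped by the kernel to 2147479552 */
		sendfile(fd, fd, &pos, 0x8080fffffffeULL);
	return NULL;
}

int main(void)
{
	pthread_t tid[6];
	int fd = open("repro.txt", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0)
		return 1;
	(void)write(fd, "syzkaller", 9);	/* give sendfile() something to copy */

	for (int i = 0; i < 6; i++)		/* "half-dozen threads" */
		pthread_create(&tid[i], NULL, hammer, &fd);
	pause();
	return 0;
}

Run under the SCHED_DEADLINE parameters above, these loops keep the CPU
busy inside splice_direct_to_actor() for very long stretches, which is
why the proposed patch adds cond_resched() inside that loop.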