Message-ID: <20160701172854.GS12473@ubuntu>
Date:	Fri, 1 Jul 2016 10:28:54 -0700
From:	Viresh Kumar <viresh.kumar@...aro.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	vlevenetz@...sol.com, vaibhav.hiremath@...aro.org,
	alex.elder@...aro.org, johan@...nel.org
Subject: Re: [Query] Preemption (hogging) of the work handler

Thanks for the quick reply, Tejun; I really appreciate it.

On 01-07-16, 12:22, Tejun Heo wrote:
> Hello, Viresh.
> 
> On Fri, Jul 01, 2016 at 09:59:59AM -0700, Viresh Kumar wrote:
> > The system watchdog uses a delayed work (1 second) for petting the
> > watchdog (resetting its counter), and if the work doesn't reset the
> > counter in time (within another 1 second), the watchdog resets the
> > system.
> > 
> > Petting-time: 1 second
> > Watchdog Reset-time: 2 seconds
> > 
> > The wq is allocated with:
> >         wdog_wq = alloc_workqueue("wdog", WQ_HIGHPRI, 0);
> 
> You probably want WQ_MEM_RECLAIM there to guarantee that it can run
> quickly under memory pressure.  In reality, this shouldn't matter too
> much as there aren't many competing highpri work items and thus it's
> likely that there are ready highpri workers waiting for work items.

Sure. I thought we should also have WQ_UNBOUND here to let the handler
run on any CPU.
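
For context, the petting path is basically the following (a minimal
sketch with your suggested flags added; pet_watchdog_work() and wdog_wq
are the names used above, while hw_watchdog_kick() is just a placeholder
for the real driver's register write):

#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

/* Placeholder for the real driver's hardware access. */
extern void hw_watchdog_kick(void);

static struct workqueue_struct *wdog_wq;
static struct delayed_work wdog_dwork;

static void pet_watchdog_work(struct work_struct *work)
{
	hw_watchdog_kick();			/* reset the hardware counter */

	/* Re-arm for 1 second from now; the hardware fires after 2. */
	queue_delayed_work(wdog_wq, &wdog_dwork, HZ);
}

static int wdog_init(void)
{
	/* WQ_MEM_RECLAIM gives us a rescuer thread under memory pressure. */
	wdog_wq = alloc_workqueue("wdog", WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
	if (!wdog_wq)
		return -ENOMEM;

	INIT_DELAYED_WORK(&wdog_dwork, pet_watchdog_work);
	queue_delayed_work(wdog_wq, &wdog_dwork, HZ);
	return 0;
}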

> > kernel: 3.10 (Yeah, you can rant at me for that, but it's not
> > something I can decide on :)
> 
> Android?

Yeah.

> > - But somehow, the timer isn't programmed for the right time.
> > 
> > - Something is happening between the time the work handler starts
> >   running and the time we read jiffies in the add_timer() function,
> >   which gets called from within queue_delayed_work().
> > 
> > - For example, if the value of jiffies in the pet_watchdog_work()
> >   handler (before calling queue_delayed_work()) is, say, 1000000, then
> >   the value of jiffies after the call to queue_delayed_work() has
> >   returned becomes 1000310. That is, it sometimes increases by more
> >   than 300, which is 1 second in our setup. I have seen this delta
> >   vary from 50 to 350. If it crosses 300, the watchdog resets the
> >   system (as it was programmed for 2 seconds).
> 
> That's weird.  Once the work item starts executing, there isn't much
> which can delay it.  queue_delayed_work() doesn't even take any lock
> before reading jiffies.  In the failing cases, what's jiffies right
> before and after pet_watchdog_work()?  Can that take long?

I have verified that, and that part isn't taking long even in the cases
where we reboot.

Sometimes (not always), I have read jiffies around the
local_irq_save() in queue_delayed_work(), and jiffies there had a delta
of 300 :)
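
The instrumentation is essentially just this (reusing the names from
the sketch above; the threshold and the print are a local debugging
hack, not something we'd ship):

static void pet_watchdog_work(struct work_struct *work)
{
	unsigned long before, after;

	hw_watchdog_kick();

	before = jiffies;
	queue_delayed_work(wdog_wq, &wdog_dwork, HZ);
	after = jiffies;

	/* 300 jiffies is 1 second here; anything near that pushes the
	 * next pet past the 2 second hardware window. */
	if (after - before > HZ / 2)
		pr_warn("wdog: queue_delayed_work() took %lu jiffies\n",
			after - before);
}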

> > So we aren't able to queue the next timer in time, and that causes all
> > these problems. I haven't yet concluded why that is so.
> > 
> > Questions:
> > 
> > - I understand that the wq handler can be preempted, but can it be this bad?
> 
> It doesn't get preempted more than any other kthread w/ -20 nice
> value, so in most systems it shouldn't get preempted at all.

Hmm..

> > - Is it fine to use the wq handler for petting the watchdog? Or should
> >   that only be done with the help of interrupt handlers?
> 
> It's absolutely fine, but you'd probably want WQ_HIGHPRI |
> WQ_MEM_RECLAIM.

Sure.

> > - Any other clues you can give which can help us figure out what's
> >   going on?
> > 
> > Thanks in advance and sorry to bother you :)
> 
> I'd watch the sched tracepoints and see what actually is going on.  The
> described scenario should work completely fine.

Hmm, I will see if I can get those in.
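
For collecting them, I'm thinking of a small userspace helper along
these lines (a rough sketch; it assumes debugfs is mounted at
/sys/kernel/debug, while on newer kernels tracefs may instead live at
/sys/kernel/tracing):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TRACEFS "/sys/kernel/debug/tracing"

/* Write a short string to a tracefs control file, bail out on error. */
static void write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
}

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd;

	write_str(TRACEFS "/events/sched/enable", "1");	/* all sched tracepoints */
	write_str(TRACEFS "/tracing_on", "1");

	/* Stream the trace; redirect stdout to a file and reproduce the hang. */
	fd = open(TRACEFS "/trace_pipe", O_RDONLY);
	if (fd < 0) {
		perror("trace_pipe");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);

	return 0;
}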

-- 
viresh
