Message-ID: <CA+55aFyzsMYcRX3V5CEWB4Zb-9BuRGCjib3DMXuX5y9nBWiZ1w@mail.gmail.com>
Date:	Wed, 14 Oct 2015 13:10:33 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <cl@...ux.com>,
	Michal Hocko <mhocko@...e.cz>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	Shaohua Li <shli@...com>, linux-mm <linux-mm@...ck.org>
Subject: Re: [GIT PULL] workqueue fixes for v4.3-rc5

On Wed, Oct 14, 2015 at 12:38 PM, Tejun Heo <tj@...nel.org> wrote:
>
> Doesn't seem that way.  This is from 597d0275736d^ - right before
> TIMER_NOT_PINNED is introduced.  add_timer() eventually calls into
> __mod_timer().
>
>                 if (likely(base->running_timer != timer)) {
>                         /* See the comment in lock_timer_base() */
>                         timer_set_base(timer, NULL);
>                         spin_unlock(&base->lock);
>                         base = new_base;
>                         spin_lock(&base->lock);
>                         timer_set_base(timer, base);
>
> It looks like the timers for work items will be reliably queued on the
> local CPU.

.. unless they are running on another cpu at the time, yes.

Also, the new base is not necessarily the current cpu base, although I
think the exceptions to that are pretty rare (i.e. you have to enable
timer migration etc).

Which I guess might not actually happen with workqueue timers due to
the extra workqueue locking thing, but I'm not sure. It's going to be
very unlikely, regardless, I agree.

> Heh, I don't think much in this area is intended.  It's mostly all
> historical accidents and failures to get things cleaned up in time.

No argument there.

> So, the following two things bother me about this.
>
> * Given that this is the first reported case of breakage, I don't
>   think this is gonna cause lots of critical issues; however, the
>   only thing this indicates is that there simply haven't been enough
>   cases where timers actually migrate.  If we end up migrating timers
>   more actively in the future, it's possible that we'll see more
>   breakages which will likely be subtler.

I agree that that's a real concern.

At the same time, some of the same issues that are pushing people to
move timers around (put idle cores to deeper sleeps etc) would also
argue for moving delayed work around to other cpus if possible.

So I agree that there is a push to make timer cpu targets more dynamic
in a way we historically didn't really have. At the same time, I think
the same forces that want to move timers around would actually likely
want to move delayed work around too...

> * This makes queue_delayed_work() behave differently from queue_work()
>   and when I checked years ago the local queueing guarantee was
>   definitely being depended upon by some users.

Yes. But the delayed work really is different. By definition, we know
that the current cpu is busy and active _right_now_, and so keeping
work on that cpu isn't obviously wrong.

But it's *not* obviously right to schedule something on that
particular cpu a few seconds from now, when it might be happily asleep
and there might be better cpus to bother..

> I do want to get rid of the local queueing guarantee for all work
> items.  That said, I don't think this is the right way to do it.

Hmm. I guess that for being past rc5, taking your patch is the safe
thing. I really don't like it very much, though.

                    Linus