Date:	Wed, 14 Oct 2015 12:10:29 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Tejun Heo <tj@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michal Hocko <mhocko@...e.cz>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	Shaohua Li <shli@...com>, linux-mm <linux-mm@...ck.org>
Subject: Re: [GIT PULL] workqueue fixes for v4.3-rc5

On Wed, Oct 14, 2015 at 11:59 AM, Christoph Lameter <cl@...ux.com> wrote:
> On Wed, 14 Oct 2015, Linus Torvalds wrote:
>
>> And "schedule_delayed_work()" uses WORK_CPU_UNBOUND.
>
> Uhhh. Someone changed that?

It always did.  This is from 2007:

int fastcall schedule_delayed_work(struct delayed_work *dwork,
                                        unsigned long delay)
{
        timer_stats_timer_set_start_info(&dwork->timer);
        return queue_delayed_work(keventd_wq, dwork, delay);
}
...
int fastcall queue_delayed_work(struct workqueue_struct *wq,
                        struct delayed_work *dwork, unsigned long delay)
{
        timer_stats_timer_set_start_info(&dwork->timer);
        if (delay == 0)
                return queue_work(wq, &dwork->work);

        return queue_delayed_work_on(-1, wq, dwork, delay);
}
...
int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
                        struct delayed_work *dwork, unsigned long delay)
{
....
                timer->function = delayed_work_timer_fn;

                if (unlikely(cpu >= 0))
                        add_timer_on(timer, cpu);
                else
                        add_timer(timer);
}
...
void delayed_work_timer_fn(unsigned long __data)
{
        int cpu = smp_processor_id();
        ...
        __queue_work(per_cpu_ptr(wq->cpu_wq, cpu), &dwork->work);
}


So notice how it always just used "add_timer()", and then queued the
work on whatever CPU workqueue the timer happened to run on.
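
For comparison, the modern helpers make the same thing explicit. This
is roughly what they look like today (paraphrasing
include/linux/workqueue.h from memory, so treat the exact signatures
as approximate):

static inline bool schedule_delayed_work(struct delayed_work *dwork,
                                         unsigned long delay)
{
        /* queue on the default workqueue, no particular CPU asked for */
        return queue_delayed_work(system_wq, dwork, delay);
}

static inline bool queue_delayed_work(struct workqueue_struct *wq,
                                      struct delayed_work *dwork,
                                      unsigned long delay)
{
        /* WORK_CPU_UNBOUND: caller doesn't care which CPU runs it */
        return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}

Same semantics as the 2007 code: WORK_CPU_UNBOUND just plays the role
the "cpu == -1" case played above.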

Now, 99.9% of the time the timer just gets added to the current CPU's
queue, so yes, in practice the work ended up running on the same CPU
almost all the time. There are exceptions (timers can get migrated,
and an already-active timer stays on the CPU it was originally
scheduled on when it gets updated, rather than moving to the current
CPU), but they are hard to hit.
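
And if a caller really does depend on same-CPU execution, it has to
say so explicitly rather than rely on timer placement. A minimal
sketch - the work item and helper names here are made up, and it
assumes the caller runs with preemption disabled so smp_processor_id()
is stable:

static struct delayed_work my_work;     /* made-up example work item */

static void rearm_on_this_cpu(void)
{
        /*
         * Explicitly queue on the CPU we are currently running on.
         * smp_processor_id() is only safe because we assume the
         * caller has preemption disabled here.
         */
        schedule_delayed_work_on(smp_processor_id(), &my_work, HZ);
}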

But the code clearly didn't provide that "same CPU" behavior
intentionally, and just going by the naming of things I would also say
it was never implied.

                    Linus