Date:   Fri, 3 Nov 2017 06:59:11 -0700
From:   Tejun Heo <tj@...nel.org>
To:     Stephen Rothwell <sfr@...b.auug.org.au>
Cc:     Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
        "H. Peter Anvin" <hpa@...or.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Linux-Next Mailing List <linux-next@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Tal Shorer <tal.shorer@...il.com>,
        Frederic Weisbecker <frederic@...nel.org>
Subject: Re: linux-next: build failure after merge of the workqueues tree

On Thu, Nov 02, 2017 at 02:34:40PM +1100, Stephen Rothwell wrote:
> Hi Tejun,
> 
> After merging the workqueues tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
> 
> kernel/workqueue.c: In function 'workqueue_init_early':
> kernel/workqueue.c:5561:56: error: 'cpu_isolated_map' undeclared (first use in this function)
>   cpumask_andnot(wq_unbound_cpumask, cpu_possible_mask, cpu_isolated_map);
>                                                         ^
> 
> Caused by commit
> 
>   b5149873a0c2 ("workqueue: respect isolated cpus when queueing an unbound work")
> 
> interacting with commit
> 
>   edb9382175c3 ("sched/isolation: Move isolcpus= handling to the housekeeping code")
> 
> from the tip tree.
> 
> I am not sure how to fix this, so I have reverted b5149873a0c2 for today.

I'm reverting it from my tree.  Tal, can you please spin up a new
patch against the sched branch?  Let's either route it through the
sched branch or try again after rc1.
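[Editorial note: a respin against the sched branch would presumably replace the removed `cpu_isolated_map` with the housekeeping API that edb9382175c3 introduces. The following is a hypothetical sketch, not a tested patch; the flag name `HK_FLAG_DOMAIN` and the use of `housekeeping_cpumask()` are assumed from the tip tree's sched/isolation rework.]

```c
/* Hypothetical respin of b5149873a0c2 against the sched branch.
 * edb9382175c3 removes cpu_isolated_map and instead exposes the
 * isolcpus= boot-parameter mask through housekeeping_cpumask().
 */
#include <linux/sched/isolation.h>

	/* in workqueue_init_early(), replacing the failing
	 * cpumask_andnot(wq_unbound_cpumask, cpu_possible_mask,
	 *                cpu_isolated_map);
	 */
	cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(HK_FLAG_DOMAIN));
```

The housekeeping mask already excludes the isolated CPUs from cpu_possible_mask, so the andnot becomes a plain copy.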

Thanks.

-- 
tejun
