Message-ID: <CALZhoSTzGH+mAB-Om=3dQTgQJriDmH+is_DEAzWNN7rRpzNFuQ@mail.gmail.com>
Date:	Tue, 21 Jan 2014 10:07:58 +0800
From:	Lei Wen <adrian.wenl@...il.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Viresh Kumar <viresh.kumar@...aro.org>,
	Kevin Hilman <khilman@...aro.org>,
	Lists linaro-kernel <linaro-kernel@...ts.linaro.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Linaro Networking <linaro-networking@...aro.org>,
	Tejun Heo <tj@...nel.org>
Subject: Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?

On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker
<fweisbec@...il.com> wrote:
> On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote:
>> On 20 January 2014 19:29, Lei Wen <adrian.wenl@...il.com> wrote:
>> > Hi Viresh,
>>
>> Hi Lei,
>>
>> > I have one question regarding unbound workqueue migration in your case.
>> > You use hotplug to migrate the unbound work to other CPUs, but its
>> > cpumask would still be 0xf, since it cannot be changed by cpuset.
>> >
>> > My question is: how do you prevent this unbound work from migrating
>> > back to your isolated CPU? It seems to me there is no such mechanism
>> > in the kernel; do I understand it wrong?
>>
>> These workqueues are normally queued back from the workqueue handler, and
>> we normally queue them on the local CPU; that's the default behavior of
>> the workqueue subsystem. So they end up on the same CPU again and again.
>
> But for workqueues having a global affinity, I think they can be rescheduled
> later on the old CPUs. I'm not sure about that, though, so I'm Cc'ing Tejun.

Agreed. Since the worker thread's affinity allows it to run on all CPUs,
there is nothing to prevent the scheduler from migrating it back.
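
For illustration, a minimal sketch (mine, not from Viresh's setup) of the
self-requeueing pattern in question; queue_work() targets WORK_CPU_UNBOUND,
which prefers the local CPU, so the work keeps following whichever CPU last
ran it:

#include <linux/workqueue.h>

static void my_work_fn(struct work_struct *work);
static DECLARE_WORK(my_work, my_work_fn);

static void my_work_fn(struct work_struct *work)
{
	/* ... do the periodic job ... */

	/* Re-queue ourselves. queue_work() uses WORK_CPU_UNBOUND,
	 * i.e. "local CPU preferred", so nothing here keeps the work
	 * off an isolated CPU once the scheduler moves the worker
	 * there. */
	queue_work(system_wq, &my_work);
}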

But one point: I see Viresh has already set up two cpusets with scheduler
load balancing disabled, so shouldn't that stop task migration between those
two groups, since the sched domains are changed?

What's more, I ran a similar test: I assigned cores 0-2 to cpuset1 and core 3
to cpuset2, then hot-unplugged core 3 afterwards. I found that the cpuset's
cpus member becomes empty even after I hotplug core 3 back in.
So is it a bug?
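
In case it helps to reproduce, here is roughly the setup I used, written out
as C; the cgroup mount point (/sys/fs/cgroup/cpuset) and the group names are
my assumptions, adjust for your system:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fprintf(f, "%s", val) < 0) {
		perror(path);
		exit(1);
	}
	fclose(f);
}

int main(void)
{
	/* two groups: cores 0-2 and core 3 */
	mkdir("/sys/fs/cgroup/cpuset/cpuset1", 0755);
	mkdir("/sys/fs/cgroup/cpuset/cpuset2", 0755);

	write_str("/sys/fs/cgroup/cpuset/cpuset1/cpuset.cpus", "0-2");
	write_str("/sys/fs/cgroup/cpuset/cpuset1/cpuset.mems", "0");
	write_str("/sys/fs/cgroup/cpuset/cpuset2/cpuset.cpus", "3");
	write_str("/sys/fs/cgroup/cpuset/cpuset2/cpuset.mems", "0");

	/* disabling load balancing in the root cpuset is what splits
	 * the sched domains between the two groups */
	write_str("/sys/fs/cgroup/cpuset/cpuset.sched_load_balance", "0");

	/* then hot-unplug core 3 and bring it back:
	 *   echo 0 > /sys/devices/system/cpu/cpu3/online
	 *   echo 1 > /sys/devices/system/cpu/cpu3/online
	 * after which cpuset2/cpuset.cpus reads back empty here */
	return 0;
}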

Thanks,
Lei

>
> Also, one of the plans is to extend the sysfs interface of workqueues to
> override their affinity. If any of you wants to try something there, that
> would be welcome. We also want to work on timer affinity. Perhaps we don't
> need a user interface for that, or maybe something on top of full dynticks
> to express that we want unbound timers to run on housekeeping CPUs only.
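
If it's useful as a starting point, an untested sketch of what such an
affinity override could do internally with the existing attrs API
(apply_workqueue_attrs() from the 3.10 NUMA work; the helper name and the
housekeeping mask are my inventions):

#include <linux/workqueue.h>

static int pin_wq_to_housekeeping(struct workqueue_struct *wq,
				  const struct cpumask *housekeeping)
{
	struct workqueue_attrs *attrs;
	int ret;

	/* build and apply a new set of attrs; this only makes sense
	 * for WQ_UNBOUND workqueues */
	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (!attrs)
		return -ENOMEM;

	cpumask_copy(attrs->cpumask, housekeeping);
	ret = apply_workqueue_attrs(wq, attrs);
	free_workqueue_attrs(attrs);
	return ret;
}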