Date:	Sat, 17 Sep 2011 09:29:46 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Ripduman Sohan <Ripduman.Sohan@...cam.ac.uk>
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org
Subject: Re: [PATCH] workqueue: Restore cpus_allowed mask for sleeping
 workqueue rescue threads

Hello, Ripduman.

On Thu, Sep 15, 2011 at 05:14:30PM +0100, Ripduman Sohan wrote:
> The rescuer being left bound to the last CPU it was active on is not a
> problem.  As I pointed out in the commit log, the issue is that the
> allowed_cpus mask is not restored when rescuers return to sleep, which
> leaves the presented set of CPUs the process may run on inconsistent
> with the actual one.
> 
> Perhaps an explanation is in order.  I am working on a system where we
> constantly sample process run-state (including the process
> Cpus_Allowed field in /proc/<pid>/status) to build a forward plan of
> where the process _may_ run in the future.  In situations of high
> memory pressure (common on our setup), where the rescuers ran often,
> the plan began to deviate significantly from the calculated schedule
> because rescuer threads were marked as runnable only on a single CPU
> when in reality they would bounce across CPUs.
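> 
> For reference, the sampler essentially does the equivalent of the
> following (a minimal sketch, not our production code; it just scans
> the Cpus_allowed fields out of /proc/<pid>/status):
> 
> #include <stdio.h>
> #include <string.h>
> 
> int main(int argc, char **argv)
> {
> 	char path[64], line[256];
> 	FILE *f;
> 
> 	if (argc != 2) {
> 		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
> 		return 1;
> 	}
> 	snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
> 	f = fopen(path, "r");
> 	if (!f) {
> 		perror("fopen");
> 		return 1;
> 	}
> 	/* status exposes both the hex mask and the list form */
> 	while (fgets(line, sizeof(line), f))
> 		if (!strncmp(line, "Cpus_allowed", 12))
> 			fputs(line, stdout);
> 	fclose(f);
> 	return 0;
> }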

But cpus_allowed doesn't mean where the task *may* run in the future.
It indicates on which cpus the task is allowed to run *now*, and the
mask is allowed to change.
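
To illustrate (a minimal user-space sketch, nothing to do with the
workqueue code itself): affinity is just the task's current mask and
can be rewritten at any point, so a sampled Cpus_allowed value is only
a snapshot.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	/* current mask for this task */
	if (sched_getaffinity(0, sizeof(set), &set) == 0)
		printf("allowed on %d cpu(s)\n", CPU_COUNT(&set));

	/* restrict to cpu 0; Cpus_allowed in /proc/self/status
	 * reflects the change from this point on */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	if (sched_getaffinity(0, sizeof(set), &set) == 0)
		printf("now allowed on %d cpu(s)\n", CPU_COUNT(&set));
	return 0;
}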

> I've currently put in a special-case exception in our code to account
> for the fact that rescuer threads may run on _any_ CPU regardless of
> the current cpus_allowed mask but I thought it would be useful to
> correct it.  I'm happy to continue with my current approach if you
> deem the patch irrelevant.

I'm not necessarily against the patch if it helps a valid use case,
but let's do that if and when the use case becomes relevant enough,
and I don't think it is yet.  Please feel free to raise the issue
again when the situation changes.

Thank you.

-- 
tejun
