Date:	Wed, 6 Aug 2008 00:46:29 -0500
From:	Paul Jackson <pj@....com>
To:	Max Krasnyansky <maxk@...lcomm.com>
Cc:	mingo@...e.hu, linux-kernel@...r.kernel.org, menage@...gle.com,
	a.p.zijlstra@...llo.nl, vegard.nossum@...il.com,
	lizf@...fujitsu.com
Subject: Re: [PATCH] cpuset: Rework sched domains and CPU hotplug handling
 (2.6.27-rc1)

Max, replying to pj:
> > ...  Such naming controversies are usually
> > a sign of code duplication or improper factoring.
> I'm not sure what you're referring to. There was no back and forth.

I was referring to an earlier discussion, which resulted in a patch
which includes the line:

    __rebuild_sched_domains() has been renamed to async_rebuild_sched_domains().

> What I'm saying is that I do not think it's the best
> to change all the paths to be async.

But you seemed to be saying that the reason it was not
best to do so was that it was best not to change more
than necessary.

Perhaps I still don't understand your metric.  Apparently
it is not CPU cycles, nor "least change", but something else?

> I still do not see a good reason why. IMO exceptions are acceptable.
> Only domain rebuilds triggered by cpuset fs writes need to be async.
> I do not see a good technical reason why the rest needs to be converted
> and retested.

Well, your metric seems clear enough there - minimize effort of
code conversion and testing.


How about this ... two routines that are nearly identical and parallel,
even in their names, except that one is async and the other is not:

==================================================================

/*
 * Rebuild scheduler domains, asynchronously in a separate thread.
 *
 * If the flag 'sched_load_balance' of any cpuset with non-empty
 * 'cpus' changes, or if the 'cpus' allowed changes in any cpuset
 * which has that flag enabled, or if any cpuset with a non-empty
 * 'cpus' is removed, then call this routine to rebuild the
 * scheduler's dynamic sched domains.
 *
 * The rebuild_sched_domains() and partition_sched_domains()
 * routines must nest cgroup_lock() inside get_online_cpus(),
 * but such cpuset changes as these must nest that locking the
 * other way, holding cgroup_lock() for much of the code.
 *
 * So in order to avoid an ABBA deadlock, the cpuset code handling
 * these user changes delegates the actual sched domain rebuilding
 * to a separate workqueue thread, which ends up processing the
 * above rebuild_sched_domains_thread() function.
 */
static void async_rebuild_sched_domains(void)
{
        queue_work(cpuset_wq, &rebuild_sched_domains_work);
}

/*
 * Accomplishes the same scheduler domain rebuild as the above
 * async_rebuild_sched_domains(), however it directly calls the
 * rebuild routine inline, rather than calling it via a separate
 * asynchronous work thread.
 *
 * This can only be called from code that is not holding
 * cgroup_mutex (not nested in a cgroup_lock() call.)
 */
void inline_rebuild_sched_domains(void)
{
        rebuild_sched_domains_thread(NULL);
}

==================================================================
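
For illustration, here is a minimal user-space sketch (pthreads, not
kernel code) of the lock-ordering problem the comment above describes.
The names hotplug_lock, cgroup_mutex, async_rebuild() and
cpuset_write_cpus() are stand-ins of my own, not the actual kernel
symbols, and both locks are modelled as plain mutexes (the real
get_online_cpus() is a read-side reference count, but the ordering
issue is the same).  The point is only that a writer already holding
cgroup_mutex hands the rebuild to a worker thread, which then takes
the two locks in the canonical order:

==================================================================

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the two kernel locks (both modelled as plain mutexes). */
static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ get_online_cpus() */
static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER; /* ~ cgroup_lock()     */

/* A single-slot stand-in for the workqueue. */
static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wq_cond = PTHREAD_COND_INITIALIZER;
static int rebuild_pending;

/* The rebuild itself: takes the locks in the canonical order. */
static void do_rebuild(void)
{
        pthread_mutex_lock(&hotplug_lock);
        pthread_mutex_lock(&cgroup_mutex);
        puts("worker: rebuilding sched domains");
        pthread_mutex_unlock(&cgroup_mutex);
        pthread_mutex_unlock(&hotplug_lock);
}

/* Worker thread: waits for queued work, then rebuilds. */
static void *worker_thread(void *unused)
{
        (void)unused;
        pthread_mutex_lock(&wq_lock);
        while (!rebuild_pending)
                pthread_cond_wait(&wq_cond, &wq_lock);
        rebuild_pending = 0;
        pthread_mutex_unlock(&wq_lock);

        do_rebuild();
        return NULL;
}

/* Analogue of async_rebuild_sched_domains(): only queue the work. */
static void async_rebuild(void)
{
        pthread_mutex_lock(&wq_lock);
        rebuild_pending = 1;
        pthread_cond_signal(&wq_cond);
        pthread_mutex_unlock(&wq_lock);
}

/*
 * A cpuset-fs write path: it already holds cgroup_mutex, so taking
 * hotplug_lock here would be the reversed (ABBA) order.  It defers
 * the rebuild instead.
 */
static void cpuset_write_cpus(void)
{
        pthread_mutex_lock(&cgroup_mutex);
        puts("writer: cpuset changed, deferring domain rebuild");
        async_rebuild();
        pthread_mutex_unlock(&cgroup_mutex);
}

int main(void)
{
        pthread_t worker;

        pthread_create(&worker, NULL, worker_thread, NULL);
        cpuset_write_cpus();
        pthread_join(worker, NULL);
        return 0;
}

==================================================================

Compile with: cc sketch.c -lpthread.  The writer never acquires
hotplug_lock while holding cgroup_mutex, so the reversed acquisition
order, and hence the ABBA deadlock, never occurs.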

> To be fair the fact that you had trouble understanding my code does
> not automatically mean that it was not artistic ;-).

Quite so ... my mental capacities are modest and easily distracted ;).

Likely this explains in part why I fuss so much over keeping code
straightforward, with minimal twists, turns, and duplications
with non-essential variations in detail.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.940.382.4214
