Message-Id: <B99DC9EB-9502-4E31-B829-6EB7EFC6212C@earthlink.net>
Date:	Tue, 30 Jun 2009 18:36:33 -0700
From:	Mitchell Erblich <erblichs@...thlink.net>
To:	linux-kernel@...r.kernel.org
Subject: CFS Scheduler : Period : for NCPUs : Code Suggestion Change

        This is NOT A PATCH.

	PLEASE CC me on any reply, as I am not currently subscribed
	to the linux-kernel mailing list.

	This code snippet is taken from what is believed to be a
	semi-current copy of the source (the fxr.watson.org
	cross-reference site).

	Upon a quick walk through the CFS scheduler code, the decision
	to stretch the period should ALSO depend on the number of
	online/active CPUs.

	The period should be scaled by the number of online CPUs.
	With this change, NCPUs times as many tasks can run before
	the period needs to be increased.

	On first thought, NR_CPUS gives the number of CPUs in the
	system; however, this may differ from the number of CPUs
	actually online, thus..

	Change #1: place after line 425 (note: ncpu must be
	initialized to zero before it is incremented)
	int cpu, ncpu = 0;

	Change #2: place before line 427
	for_each_online_cpu(cpu) {
		ncpu++;
	}
	nr_running /= ncpu;
	

	This may not be from the latest source, but should be accurate.

	sched_fair.c :

   415  * The idea is to set a period in which each task runs once.
   416  *
   417  * When there are too many tasks (sysctl_sched_nr_latency) we have to stretch
   418  * this period because otherwise the slices get too small.
   419  *
   420  * p = (nr <= nl) ? l : l*nr/nl
   421  */
   422 static u64 __sched_period(unsigned long nr_running)
   423 {
   424         u64 period = sysctl_sched_latency;
   425         unsigned long nr_latency = sched_nr_latency;
   426

   427         if (unlikely(nr_running > nr_latency)) {
   428                 period = sysctl_sched_min_granularity;
   429                 period *= nr_running;
   430         }
   431
   432         return period;
   433 }


   435 /*
   436  * We calculate the wall-time slice from the period by taking a part
   437  * proportional to the weight.
   438  *
   439  * s = p*P[w/rw]
   440  */
   441 static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
   442 {
   443         unsigned long nr_running = cfs_rq->nr_running;
   444
   445         if (unlikely(!se->on_rq))
   446                 nr_running++;
   447
   448         return calc_delta_weight(__sched_period(nr_running), se);
   449 }
   450


		Thank you, Mitchell Erblich

	
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
