Message-ID: <4DD5CE05.3030500@cn.fujitsu.com>
Date: Fri, 20 May 2011 10:12:21 +0800
From: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To: Paul Turner <pjt@...gle.com>
CC: linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@...il.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>, Pavel Emelyanov <xemul@...nvz.org>
Subject: Test for CFS Bandwidth Control V6
Hi Paul,
I'm sorry for sending this mail as a new thread, but I didn't receive your V6 patchset from LKML.
It seems the patchset cannot be applied as-is; there is a conflict between patch 3 and patch 5:
========Quote========
[patch 03/15] sched: introduce primitives to account for CFS bandwidth tracking
+#ifdef CONFIG_CFS_BANDWIDTH
+	int runtime_enabled;
+	s64 runtime_remaining;
+#endif
 #endif
 };

+#ifdef CONFIG_CFS_BANDWIDTH
+static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
+{
+	return &tg->cfs_bandwidth;
+}
+
+static inline u64 default_cfs_period(void);
+
[patch 05/15] sched: add a timer to handle CFS bandwidth refresh
@@ -394,12 +400,38 @@ static inline struct cfs_bandwidth *tg_c
 #ifdef CONFIG_CFS_BANDWIDTH
 static inline u64 default_cfs_period(void);
+static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
+
+static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
+{
+	struct cfs_bandwidth *cfs_b =
+		container_of(timer, struct cfs_bandwidth, period_timer);
+	ktime_t now;
+	int overrun;
========End quote========
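If I am reading the reject correctly, patch 5's context expects default_cfs_period() to come right after the #ifdef CONFIG_CFS_BANDWIDTH line, while patch 3 has already put tg_cfs_bandwidth() in between, so the hunk no longer applies. Below is roughly what that region of kernel/sched.c looks like after I merged it by hand; this is only a sketch (the rest of the timer handler is taken unchanged from patch 5), so please correct me if the intended resolution is different:
========Quote hand merge ========
#ifdef CONFIG_CFS_BANDWIDTH
static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
{
	return &tg->cfs_bandwidth;
}

static inline u64 default_cfs_period(void);
static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);

static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
{
	struct cfs_bandwidth *cfs_b =
		container_of(timer, struct cfs_bandwidth, period_timer);
	ktime_t now;
	int overrun;
	/* ... rest of the handler as in patch 5 ... */
========End quote========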
I downloaded the patchset from the Internet; did I miss a newer version?
After fixing the conflict by hand as above, I did some testing, and the test below can crash the box:
========Quote cpu_hotplug.sh ========
#!/bin/bash

ROOT_PATH="/mnt"
def_quota=30000
def_period=100000
pid=

# Start a cpu hog (cat /dev/zero > /dev/null) at the given nice level ($1),
# pinned to the given cpu ($2); pass -1 to skip the pinning.
creat_process()
{
	nice -n $1 cat /dev/zero > /dev/null &
	pid=$!
	if [ $2 -ne -1 ]; then
		taskset -pc $2 $pid &> /dev/null
	fi
}

HOTPLUG_PATH=$ROOT_PATH/cpu-hotplug

mount -t cgroup -o cpu none $ROOT_PATH
mkdir $HOTPLUG_PATH

# allow 30ms of runtime per 100ms period for the group
echo $def_quota > $HOTPLUG_PATH/cpu.cfs_quota_us
echo $def_period > $HOTPLUG_PATH/cpu.cfs_period_us

# create 3 tasks pinned to each of cpu 1, 2 and 3, all in the limited group
for ((i = 0; i < 3; i++))
do
	creat_process -6 1
	echo $pid > $HOTPLUG_PATH/tasks
done

for ((i = 0; i < 3; i++))
do
	creat_process -6 2
	echo $pid > $HOTPLUG_PATH/tasks
done

for ((i = 0; i < 3; i++))
do
	creat_process -6 3
	echo $pid > $HOTPLUG_PATH/tasks
done

# offline and re-online each cpu the throttled tasks are pinned to
echo 0 > /sys/devices/system/cpu/cpu1/online
echo 1 > /sys/devices/system/cpu/cpu1/online
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu3/online
echo 1 > /sys/devices/system/cpu/cpu3/online

killall -9 cat
rmdir $HOTPLUG_PATH
umount $ROOT_PATH
======== End quote cpu_hotplug.sh ========
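To summarize what the script does: it creates nine cpu hogs in a bandwidth-limited group (quota 30000us per 100000us period, i.e. 30% of one cpu), pins three of them to each of CPUs 1, 2 and 3, and then offlines and re-onlines those three CPUs in turn before cleaning up. The /mnt mount point and the cpu numbers are just what my test box uses; the script needs root and at least four online cpus.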
Sorry to disturb you if the bug is already known.
Thanks!