Message-ID: <53731D12.7040804@linux.vnet.ibm.com>
Date: Wed, 14 May 2014 15:36:50 +0800
From: Michael wang <wangyun@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>
CC: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
Alex Shi <alex.shi@...aro.org>, Paul Turner <pjt@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>
Subject: Re: [ISSUE] sched/cgroup: Does cpu-cgroup still works fine nowadays?
Hi, Peter
On 05/13/2014 10:23 PM, Peter Zijlstra wrote:
[snip]
>
> If you want to investigate !spinners, replace the ABC with slightly more
> complex loads like: https://lkml.org/lkml/2012/6/18/212
I've reworked it a little: enabled multiple threads and added a mutex,
please check the code below for details.

I built it with:
gcc -o my_tool cgroup_tool.c -lpthread
My distro mounts the cpu subsystem under '/sys/fs/cgroup/cpu'; create the groups like:
mkdir /sys/fs/cgroup/cpu/A
mkdir /sys/fs/cgroup/cpu/B
mkdir /sys/fs/cgroup/cpu/C
and then:
echo $$ > /sys/fs/cgroup/cpu/A/tasks ; ./my_tool -l
echo $$ > /sys/fs/cgroup/cpu/B/tasks ; ./my_tool -l
echo $$ > /sys/fs/cgroup/cpu/C/tasks ; ./my_tool 50
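
(Side note: for scripted runs the tasks-file step could also be done by a tiny
cgexec-style wrapper -- just a rough sketch of my own, not part of the
reproducer, assuming the cgroup-v1 cpu hierarchy mounted at /sys/fs/cgroup/cpu
as above and the groups already created:

	/* cgexec_lite.c -- sketch: move ourselves into
	 * /sys/fs/cgroup/cpu/<group>/tasks (cgroup-v1), then exec the given
	 * command so it inherits the group membership. */
	#include <stdio.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		char path[256];
		FILE *f;

		if (argc < 3) {
			fprintf(stderr, "usage: %s <group> <cmd> [args...]\n", argv[0]);
			return 1;
		}

		snprintf(path, sizeof(path), "/sys/fs/cgroup/cpu/%s/tasks", argv[1]);
		f = fopen(path, "w");
		if (!f) {
			perror(path);
			return 1;
		}
		fprintf(f, "%d\n", getpid());	/* attach the calling process */
		if (fclose(f)) {		/* buffered write is flushed here */
			perror(path);
			return 1;
		}

		execvp(argv[2], &argv[2]);	/* only returns on error */
		perror("execvp");
		return 1;
	}

built and used like: gcc -o cgexec_lite cgexec_lite.c ; ./cgexec_lite A ./my_tool -l)
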
the results in top are around:

            A      B      C
    CPU%    550    550    100

When only './my_tool 50' was running, it required around 300%.
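
For reference, the rough numbers I would expect (assuming default cpu.shares
of 1024 for every group, and a 12-CPU box, which the figures above suggest
since thread_num = nr_cpus / 2 = 6):

    total capacity        : 12 cpus     = 1200%
    fair share per group  : 1200% / 3   = ~400%
    demand of group C     : 6 * 50%     = ~300%

so C asks for less than its fair share, yet it only gets around 100% once
A and B start spinning.
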
And this can also be reproduced with a dbench + stress combination like:
echo $$ > /sys/fs/cgroup/cpu/A/tasks ; dbench 6
echo $$ > /sys/fs/cgroup/cpu/B/tasks ; stress -c 6
echo $$ > /sys/fs/cgroup/cpu/C/tasks ; stress -c 6
Now it seems more like a generic problem... I'll keep investigating; please
let me know if you have any suggestions :)
Regards,
Michael Wang
#include <sys/time.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

pthread_mutex_t my_mutex;

/* current time in microseconds */
unsigned long long stamp(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (unsigned long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

/* burn 'spin' usec, then sleep for the rest of each 'total' usec period,
 * serialized on my_mutex so the threads are not pure spinners */
void consume(int spin, int total)
{
	unsigned long long begin, now;

	begin = stamp();

	for (;;) {
		pthread_mutex_lock(&my_mutex);
		now = stamp();
		if ((long long)(now - begin) > spin) {
			pthread_mutex_unlock(&my_mutex);
			usleep(total - spin);
			pthread_mutex_lock(&my_mutex);
			begin += total;
		}
		pthread_mutex_unlock(&my_mutex);
	}
}

struct my_data {
	int spin;
	int total;
};

void *my_fn_sleepy(void *arg)
{
	struct my_data *data = (struct my_data *)arg;

	consume(data->spin, data->total);
	return NULL;
}

void *my_fn_loop(void *arg)
{
	while (1) {};
	return NULL;
}

int main(int argc, char **argv)
{
	int period = 100000; /* 100ms */
	int frac;
	struct my_data data;
	pthread_t last_thread;
	int thread_num = sysconf(_SC_NPROCESSORS_ONLN) / 2;
	void *(*my_fn)(void *arg) = &my_fn_sleepy;

	if (thread_num <= 0 || thread_num > 1024) {
		fprintf(stderr, "insane processor(half) size %d\n", thread_num);
		return -1;
	}

	if (argc == 2 && !strcmp(argv[1], "-l")) {
		my_fn = &my_fn_loop;
		printf("loop mode enabled\n");
		goto loop_mode;
	}

	if (argc < 2) {
		fprintf(stderr, "%s <frac> [<period>]\n"
			"  frac -- [1-100] %% of time to burn\n"
			"  period -- [usec] period of burn/sleep cycle\n",
			argv[0]);
		return -1;
	}

	frac = atoi(argv[1]);
	if (argc > 2)
		period = atoi(argv[2]);

	if (frac > 100)
		frac = 100;
	if (frac < 1)
		frac = 1;

	data.spin = (period * frac) / 100;
	data.total = period;

loop_mode:
	pthread_mutex_init(&my_mutex, NULL);

	while (thread_num--) {
		if (pthread_create(&last_thread, NULL, my_fn, &data)) {
			fprintf(stderr, "Create thread failed\n");
			return -1;
		}
	}

	printf("Threads never stop, CTRL + C to terminate\n");
	pthread_join(last_thread, NULL);
	pthread_mutex_destroy(&my_mutex); /* never reached, threads run forever */

	return 0;
}