Message-ID: <20160308174856.GA28862@gmail.com>
Date: Tue, 8 Mar 2016 18:48:56 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Wang Nan <wangnan0@...wei.com>, Ingo Molnar <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
He Kuang <hekuang@...wei.com>,
Alexei Starovoitov <ast@...nel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Brendan Gregg <brendan.d.gregg@...il.com>,
Jiri Olsa <jolsa@...nel.org>,
Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
Namhyung Kim <namhyung@...nel.org>,
Zefan Li <lizefan@...wei.com>, pi3orama@....com
Subject: Re: [RESEND PATCH 0/5] perf core: Support overwrite ring buffer

* Dmitry Vyukov <dvyukov@...gle.com> wrote:
> On Tue, Mar 8, 2016 at 6:37 PM, Ingo Molnar <mingo@...nel.org> wrote:
> >
> > * Dmitry Vyukov <dvyukov@...gle.com> wrote:
> >
> >> > fomalhaut:~/go/src/github.com/google/syzkaller> ps aux | grep -i syz
> >> > mingo 1374 0.0 0.0 118476 2376 pts/2 S+ 18:23 0:00 grep --color=auto -i syz
> >> >
> >> > and with no kernel messages in dmesg - and with a fully functional system.
> >> >
> >> > I'm running the 16-task load on a 120 CPU system - should I increase it to 120?
> >> > Does the code expect to saturate the system?
> >>
> >> No, it does not expect to saturate the system. Set "procs" to 480, or
> >> something like that.
> >
> > Does not seem to help much:
> >
> > fomalhaut:~> vmstat 10
> > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > r b swpd free buff cache si so bi bo in cs us sy id wa st
> >
> > 1 0 0 257465904 219940 4736092 0 0 0 102 16022 4396 0 1 99 0 0
> > 2 0 0 257452144 220496 4755052 0 0 2 3649 14286 4627 0 1 99 0 0
> > 2 0 0 257473408 221188 4770824 0 0 15 1898 17175 4474 0 1 99 0 0
> >
> > Only around 1% system utilization. Should I go for 1,000 or more? :)
> >
> > Peter, do you have experience with running syzkaller on larger CPU count
> > Intel systems?
>
>
> Try to set "dropprivs": false in config.
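
Did that - i.e. roughly this in the config (a sketch; other fields
elided, "procs" per your earlier suggestion, syzkaller configs being
JSON):

    {
        ...
        "procs": 480,
        "dropprivs": false
    }
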
Things got a lot more lively after that!
But most of the overhead seems to come from systemd trying to dump core or
something like that:
85872 mingo 20 0 34712 3016 2656 S 4.6 0.0 0:00.14 systemd-coredum
85440 mingo 20 0 34712 3028 2664 S 4.2 0.0 0:00.13 systemd-coredum
85751 mingo 20 0 34712 3076 2716 S 4.2 0.0 0:00.13 systemd-coredum
85840 mingo 20 0 34712 2988 2624 S 4.2 0.0 0:00.13 systemd-coredum
85861 mingo 20 0 34712 3080 2720 S 4.2 0.0 0:00.13 systemd-coredum
85954 mingo 20 0 34712 3028 2664 S 4.2 0.0 0:00.13 systemd-coredum
and I have:
fomalhaut:~/go/src/github.com/google/syzkaller> ulimit -c
0

Weird ... has any of you seen such behavior?
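
(One guess: core_pattern here is probably a pipe to systemd-coredump -
the systemd default - and for piped core dumps the kernel invokes the
helper even when the core limit is 0; RLIMIT_CORE only gates plain
core files, see core(5). A quick way to check, and to get plain core
files back for the test run - a sketch, with the default systemd
handler path assumed:

    cat /proc/sys/kernel/core_pattern
    # likely: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e
    echo core > /proc/sys/kernel/core_pattern    # as root
)
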
Thanks,
Ingo