Date:   Mon, 16 Oct 2023 12:51:52 -0300
From:   Arnaldo Carvalho de Melo <acme@...nel.org>
To:     Namhyung Kim <namhyung@...nel.org>
Cc:     Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
        Ian Rogers <irogers@...gle.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v3] perf bench sched pipe: Add -G/--cgroups option

On Mon, Oct 16, 2023 at 12:45:17PM -0300, Arnaldo Carvalho de Melo wrote:
> On Mon, Oct 16, 2023 at 12:38:12PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Mon, Oct 16, 2023 at 11:35:35AM +0200, Ingo Molnar wrote:
> > > * Namhyung Kim <namhyung@...nel.org> wrote:
> > > 
> > > > +	/* try cgroup v2 interface first */
> > > > +	if (threaded)
> > > > +		fd = openat(cgrp->fd, "cgroup.threads", O_WRONLY);
> > > > +	else
> > > > +		fd = openat(cgrp->fd, "cgroup.procs", O_WRONLY);
> > > > +
> > > > +	/* try cgroup v1 if failed */
> > > > +	if (fd < 0)
> > > > +		fd = openat(cgrp->fd, "tasks", O_WRONLY);
> > > > +
> > > > +	if (fd < 0) {
> > > > +		char mnt[PATH_MAX];
> > > > +
> > > > +		printf("Failed to open cgroup file in %s\n", cgrp->name);
> > > > +
> > > > +		if (cgroupfs_find_mountpoint(mnt, sizeof(mnt), "perf_event") == 0)
> > > > +			printf(" Hint: create the cgroup first, like 'mkdir %s/%s'\n",
> > > > +			       mnt, cgrp->name);
> > > 
> > > Ok, this works too I suppose.
> > > 
> > > Acked-by: Ingo Molnar <mingo@...nel.org>
> > 
> > I'm not getting that:
> > 
> > [root@...e ~]# perf bench sched pipe -l 10000 -G AAA,BBB
> > # Running 'sched/pipe' benchmark:
> > no access to cgroup /sys/fs/cgroup/AAA
> > cannot open sender cgroup: AAA
> >  Usage: perf bench sched pipe <options>
> > 
> >     -G, --cgroups <SEND,RECV>
> >                           Put sender and receivers in given cgroups
> > [root@...e ~]#
> > 
> > It's better now as it bails out, but it is not emitting any message that
> > helps with running the test. Well, there is that /sys/fs/cgroup/AAA
> > path, lemme try doing a mkdir:
> > 
> > [root@...e ~]# perf bench sched pipe -l 10000 -G AAA,BBB
> > # Running 'sched/pipe' benchmark:
> > no access to cgroup /sys/fs/cgroup/BBB
> > cannot open receiver cgroup: BBB
> >  Usage: perf bench sched pipe <options>
> > 
> >     -G, --cgroups <SEND,RECV>
> >                           Put sender and receivers in given cgroups
> > [root@...e ~]#
> > 
> > [root@...e ~]# perf bench sched pipe -l 10000 -G AAA,BBB
> > # Running 'sched/pipe' benchmark:
> > [root@...e ~]#
> > 
> > It seems to be bailing out, but it doesn't run the test nor emit any
> > warning.
> 
> (gdb) run bench sched pipe -l 10000
> Starting program: /root/bin/perf bench sched pipe -l 10000
> # Running 'sched/pipe' benchmark:
> [Detaching after fork from child process 33618]
> 
> Breakpoint 1, bench_sched_pipe (argc=0, argv=0x7fffffffe3d8) at bench/sched-pipe.c:259
> 259		if (threads[0].cgroup_failed || threads[1].cgroup_failed)
> (gdb) p threads[0].cgroup_failed
> $1 = 137
> (gdb) p threads[1].cgroup_failed
> $2 = false
> (gdb) n
> 260			return 0;
> (gdb)
> 
> But I'm not even using cgroups?
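
Presumably cgroup_failed never gets initialized when -G isn't given, so the
parent reads whatever was left in that memory (137 above), treats it as true
and silently returns 0. A minimal standalone sketch of that pattern and the
obvious guard, with hypothetical names (this is not the sched-pipe source nor
the patch mentioned below):

/*
 * Hypothetical sketch: a flag that is only written on the cgroup path has
 * to start out as false, and is only worth checking when cgroups were
 * actually requested.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct worker {
	int nr;
	bool cgroup_failed;	/* written only when -G/--cgroups is used */
};

static void setup_worker(struct worker *w, int nr, const char *cgrp_name)
{
	memset(w, 0, sizeof(*w));	/* cgroup_failed starts out false */
	w->nr = nr;

	if (cgrp_name) {
		/* stand-in for the real "join cgroup" step failing */
		fprintf(stderr, "cannot open cgroup: %s\n", cgrp_name);
		w->cgroup_failed = true;
	}
}

int main(int argc, char **argv)
{
	/* cgroup names come from -G SEND,RECV; NULL means no cgroups */
	const char *cgrps[2] = { argc > 1 ? argv[1] : NULL,
				 argc > 2 ? argv[2] : NULL };
	struct worker workers[2];

	for (int i = 0; i < 2; i++)
		setup_worker(&workers[i], i, cgrps[i]);

	/* only bail out on cgroup failures if cgroups were requested */
	if ((cgrps[0] || cgrps[1]) &&
	    (workers[0].cgroup_failed || workers[1].cgroup_failed))
		return 1;

	printf("# Running 'sched/pipe' benchmark:\n");
	return 0;
}

Either the memset or the extra cgrps[] guard alone would avoid the silent
bail-out; the point is just that a bool read before any write is garbage.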

So, with the patch below 'perf bench sched pipe -l 1000' is back working
for me:

[root@...e ~]# perf bench sched pipe  -l 1000
# Running 'sched/pipe' benchmark:
# Executed 1000 pipe operations between two processes

     Total time: 0.007 [sec]

       7.671000 usecs/op
         130361 ops/sec
[root@...e ~]#
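
(Sanity check on those numbers: 1000 ops x 7.671 usecs/op ~= 7.7 msecs,
roughly the 0.007 [sec] total, and 1 sec / 7.671 usecs ~= 130,361 ops/sec.)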

Now back to testing with cgroups.

- Arnaldo
