Date:	Wed, 5 Jan 2011 09:32:37 +0100
From:	Stephane Eranian <eranian@...gle.com>
To:	Li Zefan <lizf@...fujitsu.com>
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org, mingo@...e.hu,
	paulus@...ba.org, davem@...emloft.net, fweisbec@...il.com,
	perfmon2-devel@...ts.sf.net, eranian@...il.com,
	robert.richter@....com, acme@...hat.com
Subject: Re: [PATCH 4/5] perf_events: add cgroup support (v7)

Li,

Thanks for your comments. Good catch on perf_detach_cgroup(); a sketch
of the fix I have in mind is at the end of this mail. I will resubmit
this patch (part 4/5).

On Wed, Jan 5, 2011 at 4:13 AM, Li Zefan <lizf@...fujitsu.com> wrote:
> Stephane Eranian wrote:
>> This kernel patch adds the ability to filter monitoring based on
>> container groups (cgroups). This is for use in per-cpu mode only.
>>
>> The cgroup to monitor is passed as a file descriptor in the pid
>> argument to the syscall. The file descriptor must be opened on
>> the cgroup's directory in the cgroup filesystem. For instance, if
>> the cgroup name is foo and cgroupfs is mounted at /cgroup, then
>> the file descriptor is opened on /cgroup/foo. Cgroup mode is
>> activated by passing PERF_FLAG_PID_CGROUP in the flags argument
>> to the syscall.
>>
>> For instance, to measure in cgroup foo on CPU1, assuming
>> cgroupfs is mounted under /cgroup:
>>
>> struct perf_event_attr attr;
>> int cgroup_fd, fd;
>>
>> cgroup_fd = open("/cgroup/foo", O_RDONLY);
>> fd = perf_event_open(&attr, cgroup_fd, 1, -1, PERF_FLAG_PID_CGROUP);
>> close(cgroup_fd);
>>
>> Signed-off-by: Stephane Eranian <eranian@...gle.com>
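
For the archives, here is a slightly fuller, compilable version of the
snippet above. The cycles event and the /cgroup mount point are only
examples, per-cpu monitoring needs adequate privilege, and since glibc
has no wrapper for the syscall it goes through syscall(2):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* glibc provides no wrapper, so invoke the syscall directly */
static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	int cgroup_fd, fd;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = PERF_TYPE_HARDWARE;	/* example event */
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	/* fd on the cgroup directory in cgroupfs */
	cgroup_fd = open("/cgroup/foo", O_RDONLY);
	if (cgroup_fd < 0) {
		perror("open /cgroup/foo");
		return 1;
	}

	/* cgroup mode: the pid argument carries the cgroup fd, cpu = 1 */
	fd = sys_perf_event_open(&attr, cgroup_fd, 1, -1,
				 PERF_FLAG_PID_CGROUP);
	close(cgroup_fd);	/* not needed once the event exists */
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/* ... run the workload on CPU1, then read() the u64 count ... */
	close(fd);
	return 0;
}
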
>
> For the cgroup part:
>
> Acked-by: Li Zefan <lizf@...fujitsu.com>
>
> a few comments below:
>
>>
> ...
>> +config CGROUP_PERF
>> +     bool "Enable perf_event per-cpu per-container group (cgroup) monitoring"
>> +     depends on PERF_EVENTS && CGROUPS
>
> Depending on CGROUPS is already implied by the enclosing "if CGROUPS" block.
>
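
Right, I will drop the redundant "depends on CGROUPS" in the respin.
Since the entry sits under the "if CGROUPS" block, it presumably
reduces to:

config CGROUP_PERF
	bool "Enable perf_event per-cpu per-container group (cgroup) monitoring"
	depends on PERF_EVENTS

with the help text unchanged.
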
>> +     help
>> +       This option extends the per-cpu mode to restrict monitoring to
>> +       threads which belong to the cgroup specified and run on the
>> +       designated cpu.
>> +
>> +       Say N if unsure.
>> +
>>  menuconfig CGROUP_SCHED
>>       bool "Group CPU scheduler"
>>       depends on EXPERIMENTAL
> ...
>> +static inline int perf_cgroup_connect(int fd, struct perf_event *event,
>> +                                   struct perf_event_attr *attr,
>> +                                   struct perf_event *group_leader)
>> +{
>> +     struct perf_cgroup *cgrp;
>> +     struct cgroup_subsys_state *css;
>> +     struct file *file;
>> +     int ret = 0, fput_needed;
>> +
>> +     file = fget_light(fd, &fput_needed);
>> +     if (!file)
>> +             return -EBADF;
>> +
>> +     css = cgroup_css_from_dir(file, perf_subsys_id);
>> +     if (IS_ERR(css))
>> +             return PTR_ERR(css);
>> +
>> +     cgrp = container_of(css, struct perf_cgroup, css);
>> +     event->cgrp = cgrp;
>> +
>> +     /*
>> +      * all events in a group must monitor
>> +      * the same cgroup because a thread belongs
>> +      * to only one perf cgroup at a time
>> +      */
>> +     if (group_leader && group_leader->cgrp != cgrp) {
>> +             perf_detach_cgroup(event);
>
> This doesn't seem right, because at this point we haven't taken
> the cgroup refcount yet.
>
>> +             ret = -EINVAL;
>> +     } else {
>> +             /* must be done before we fput() the file */
>> +             perf_get_cgroup(event);
>> +     }
>> +     fput_light(file, fput_needed);
>> +     return ret;
>> +}
>
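
Concretely, here is the reordering I have in mind for the respin
(sketch only, on top of the hunk quoted above; the resubmitted 4/5
will carry the real change). The idea is to assign event->cgrp and
take the reference only once the group-leader check has passed, so
the error path never drops a reference it did not take:

	cgrp = container_of(css, struct perf_cgroup, css);

	/*
	 * all events in a group must monitor the same cgroup
	 * because a thread belongs to only one perf cgroup at a time
	 */
	if (group_leader && group_leader->cgrp != cgrp) {
		ret = -EINVAL;
	} else {
		event->cgrp = cgrp;
		/* must be done before we fput() the file */
		perf_get_cgroup(event);
	}
	fput_light(file, fput_needed);
	return ret;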