Date:	Thu, 4 Feb 2010 17:32:31 +0100
From:	Stephane Eranian <eranian@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
	davem@...emloft.net, fweisbec@...il.com, robert.richter@....com,
	perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH] perf_events: AMD event scheduling (v2)

>> >> +     nb_id = amd_get_nb_id(cpu);
>> >> +     if (nb_id == BAD_APICID)
>> >> +             return;
>> >> +
>> >> +     cpu1 = &per_cpu(cpu_hw_events, cpu);
>> >> +     cpu1->amd_nb = NULL;
>> >> +
>> >> +     raw_spin_lock(&amd_nb_lock);
>> >> +
>> >> +     for_each_online_cpu(i) {
>> >> +             cpu2 = &per_cpu(cpu_hw_events, i);
>> >> +             nb = cpu2->amd_nb;
>> >> +             if (!nb)
>> >> +                     continue;
>> >> +             if (nb->nb_id == nb_id)
>> >> +                     goto found;
>> >> +     }
>> >> +
>> >> +     nb = amd_alloc_nb(cpu, nb_id);
>> >> +     if (!nb) {
>> >> +             pr_err("perf_events: failed to allocate NB storage for CPU%d\n", cpu);
>> >> +             raw_spin_unlock(&amd_nb_lock);
>> >> +             return;
>> >> +     }
>> >> +found:
>> >> +     nb->refcnt++;
>> >> +     cpu1->amd_nb = nb;
>> >> +
>> >> +     raw_spin_unlock(&amd_nb_lock);
>> >
>> > Can't this be simplified by using the cpu to node mask?
>>
>> You mean to find the NB that corresponds to a CPU?
>
> Yep, saves having to poke at all cpus.

That's not what the loop is about.

amd_get_nb_id() is the function that returns the NB id for a
given CPU. It is not clear to me whether the NB and the NUMA
node always overlap perfectly. What matters here is the HW
configuration, not how the NUMA nodes were set up.

We allocate one amd_nb struct per NB. Once we know our NB id, we
look for an already-allocated struct for that NB: if one exists we
just refcnt++, otherwise we must allocate it. That is what the
loop is for.
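For context, the shared per-NB state looks roughly like this (a
sketch only; the exact field names and array sizes in the patch
may differ slightly):

    struct amd_nb {
            int                     nb_id;   /* NorthBridge id, from amd_get_nb_id() */
            int                     refcnt;  /* online CPUs currently attached to this NB */
            /* one slot per NB counter, shared by all cores on the NB */
            struct perf_event       *owners[X86_PMC_IDX_MAX];
            struct event_constraint event_constraints[X86_PMC_IDX_MAX];
    };

All cores that report the same nb_id point their cpu_hw_events at
the same struct, which is why the refcount is needed.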

>> > Also, I think this is buggy in that:
>> >
>> >  perf_disable();
>> >  event->pmu->disable(event);
>> >  ...
>> >  event->pmu->enable(event);
>> >  perf_enable();
>> >
>> > can now fail, I think we need to move the put_event_constraint() from
>> > x86_pmu_disable() into x86_perf_enable() or something.
>>
>> Constraints are reserved during x86_scheduling(), not during enable().
>> So if you had a conflict it was detected earlier than that.
>
> What I'm worried about is:
>
> CPU A and B are of the same NorthBridge and all node counters are taken.
>
> CPU-A                   CPU-B
>
> perf_disable();
> event->pmu->disable(event)
>  x86_pmu.put_event_constraint() /* free slot n */
>
>                        event->pmu->enable(event);
>                          x86_schedule_events();
>                            x86_pmu.get_event_constraint() /* grab slot n */
>
> event->pmu->enable(event)
>  x86_schedule_events()
>    x86_pmu.get_event_constraint() /* FAIL */
> perf_enable();
>
> That means you cannot disable/enable the same event within a
> perf_disable/enable section.
>
Yes, but I would say that is because of the logic behind the
enable/disable interface. Here you are disabling "for a short
period" and you know you are going to re-enable, yet you are using
an API that is a generic enable/disable. You would need to pass
some argument to disable() to say "temporary", i.e., "stop but
don't release the resource".
