Message-ID: <174496592184.31282.8164200843782365980.tip-bot2@tip-bot2>
Date: Fri, 18 Apr 2025 08:45:21 -0000
From: "tip-bot2 for Sandipan Das" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Peter Zijlstra <peterz@...radead.org>, Sandipan Das <sandipan.das@....com>,
Ingo Molnar <mingo@...nel.org>, x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: perf/core] perf/x86/intel/uncore: Use HRTIMER_MODE_HARD for
detecting overflows

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     05c9b0cbe4b822c42382d27e3f73918600594882
Gitweb:        https://git.kernel.org/tip/05c9b0cbe4b822c42382d27e3f73918600594882
Author:        Sandipan Das <sandipan.das@....com>
AuthorDate:    Fri, 18 Apr 2025 09:13:00 +05:30
Committer:     Ingo Molnar <mingo@...nel.org>
CommitterDate: Fri, 18 Apr 2025 10:35:33 +02:00

perf/x86/intel/uncore: Use HRTIMER_MODE_HARD for detecting overflows

hrtimer handlers can be deferred to softirq context and affect timely
detection of counter overflows. Hence switch to HRTIMER_MODE_HARD.

Disabling and re-enabling IRQs in the hrtimer handler is not required
as pmu->start() and pmu->stop() can no longer intervene while updating
event->hw.prev_count.

Suggested-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Sandipan Das <sandipan.das@....com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Acked-by: Peter Zijlstra <peterz@...radead.org>
Link: https://lore.kernel.org/r/0ad4698465077225769e8edd5b2c7e8f48f636d5.1744906694.git.sandipan.das@amd.com
---
 arch/x86/events/intel/uncore.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index a34e50f..5811e17 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -305,17 +305,11 @@ static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer)
 {
 	struct intel_uncore_box *box;
 	struct perf_event *event;
-	unsigned long flags;
 	int bit;
 
 	box = container_of(hrtimer, struct intel_uncore_box, hrtimer);
 	if (!box->n_active || box->cpu != smp_processor_id())
 		return HRTIMER_NORESTART;
-	/*
-	 * disable local interrupt to prevent uncore_pmu_event_start/stop
-	 * to interrupt the update process
-	 */
-	local_irq_save(flags);
 
 	/*
 	 * handle boxes with an active event list as opposed to active
@@ -328,8 +322,6 @@ static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer)
 	for_each_set_bit(bit, box->active_mask, UNCORE_PMC_IDX_MAX)
 		uncore_perf_event_update(box, box->events[bit]);
 
-	local_irq_restore(flags);
-
 	hrtimer_forward_now(hrtimer, ns_to_ktime(box->hrtimer_duration));
 	return HRTIMER_RESTART;
 }
@@ -337,7 +329,7 @@ static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer)
 void uncore_pmu_start_hrtimer(struct intel_uncore_box *box)
 {
 	hrtimer_start(&box->hrtimer, ns_to_ktime(box->hrtimer_duration),
-		      HRTIMER_MODE_REL_PINNED);
+		      HRTIMER_MODE_REL_PINNED_HARD);
 }
 
 void uncore_pmu_cancel_hrtimer(struct intel_uncore_box *box)
@@ -347,7 +339,7 @@ void uncore_pmu_cancel_hrtimer(struct intel_uncore_box *box)
 
 static void uncore_pmu_init_hrtimer(struct intel_uncore_box *box)
 {
-	hrtimer_setup(&box->hrtimer, uncore_pmu_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_setup(&box->hrtimer, uncore_pmu_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 }
 
 static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type,
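
For readers less familiar with the hrtimer modes involved, below is a
minimal sketch of the hard-mode pattern the patch switches to, written
as a stand-alone module against a tree that already provides
hrtimer_setup() (as this one does). The demo_* identifiers and the 1 ms
period are illustrative only and not taken from the kernel sources:

/*
 * Illustrative only -- not part of the commit above. A minimal module
 * using the same hard-mode hrtimer pattern as the uncore PMU code.
 */
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/module.h>

static struct hrtimer demo_timer;
static u64 demo_period_ns = 1000000;	/* 1 ms, arbitrary for the demo */

static enum hrtimer_restart demo_hrtimer_fn(struct hrtimer *hrtimer)
{
	/*
	 * With a *_HARD mode the callback expires in hard interrupt
	 * context and cannot be deferred to softirq context, so no
	 * local_irq_save()/local_irq_restore() pair is needed here.
	 */
	hrtimer_forward_now(hrtimer, ns_to_ktime(demo_period_ns));
	return HRTIMER_RESTART;
}

static int __init demo_init(void)
{
	/* Hard-mode setup, mirroring uncore_pmu_init_hrtimer() */
	hrtimer_setup(&demo_timer, demo_hrtimer_fn, CLOCK_MONOTONIC,
		      HRTIMER_MODE_REL_HARD);
	/* Pinned hard-mode start, mirroring uncore_pmu_start_hrtimer() */
	hrtimer_start(&demo_timer, ns_to_ktime(demo_period_ns),
		      HRTIMER_MODE_REL_PINNED_HARD);
	return 0;
}

static void __exit demo_exit(void)
{
	hrtimer_cancel(&demo_timer);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The design point is the same as in the changelog above: because a
*_HARD hrtimer fires in hard interrupt context, the handler no longer
has to mask local interrupts to keep pmu->start()/pmu->stop()-style
callers from intervening while it updates its counters.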