Message-ID: <ZfhXUuwcEC148cdx@arm.com>
Date: Mon, 18 Mar 2024 15:01:38 +0000
From: Ionela Voinescu <ionela.voinescu@....com>
To: Beata Michalska <beata.michalska@....com>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	vanshikonda@...amperecomputing.com, sudeep.holla@....com,
	will@...nel.org, catalin.marinas@....com,
	vincent.guittot@...aro.org, sumitg@...dia.com,
	yang@...amperecomputing.com, lihuisong@...wei.com
Subject: Re: [PATCH v3 2/3] arm64: Provide an AMU-based version of
 arch_freq_get_on_cpu

Hey,

On Thursday 14 Mar 2024 at 00:46:19 (+0100), Beata Michalska wrote:
[..]
> > >  static void amu_scale_freq_tick(void)
> > >  {
> > > +	struct amu_cntr_sample *amu_sample = this_cpu_ptr(&cpu_amu_samples);
> > >  	u64 prev_core_cnt, prev_const_cnt;
> > >  	u64 core_cnt, const_cnt, scale;
> > >  
> > > -	prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
> > > -	prev_core_cnt = this_cpu_read(arch_core_cycles_prev);
> > > +	prev_const_cnt = amu_sample->arch_const_cycles_prev;
> > > +	prev_core_cnt = amu_sample->arch_core_cycles_prev;
> > > +
> > > +	write_seqcount_begin(&amu_sample->seq);
> > 
> > The critical section here does not need to be this extensive, right?
> > 
> > The arch_freq_get_on_cpu() function only uses the frequency scale factor
> > and the last_update value, so this need only be placed above
> > "this_cpu_write(arch_freq_scale,..", if I'm not missing anything.
> 
> You're not missing anything. The write side critical section could span only
> those two, but having it extended gives the readers a chance to catch an
> ongoing update, and as they are not really performance sensitive I thought it
> might be a good option, especially if we can save the cycles of not needing to
> poke the cpufreq driver. That said, if the critical section is to span only
> those two, it does not really change much and can be dropped.
> 
> > 
> > >  
> > >  	update_freq_counters_refs();
> > >  
> > > -	const_cnt = this_cpu_read(arch_const_cycles_prev);
> > > -	core_cnt = this_cpu_read(arch_core_cycles_prev);
> > > +	const_cnt = amu_sample->arch_const_cycles_prev;
> > > +	core_cnt = amu_sample->arch_core_cycles_prev;
> > >  
> > > +	/*
> > > +	 * This should not happen unless the AMUs have been reset and the
> > > +	 * counter values have not been restored - unlikely
> > > +	 */
> > >  	if (unlikely(core_cnt <= prev_core_cnt ||
> > >  		     const_cnt <= prev_const_cnt))
> > > -		return;
> > > +		goto leave;
> > >  
> > >  	/*
> > >  	 *	    /\core    arch_max_freq_scale
> > > @@ -182,6 +204,10 @@ static void amu_scale_freq_tick(void)
> > >  
> > >  	scale = min_t(unsigned long, scale, SCHED_CAPACITY_SCALE);
> > >  	this_cpu_write(arch_freq_scale, (unsigned long)scale);
> > > +
> > > +	amu_sample->last_update = jiffies;
> > > +leave:
> > > +	write_seqcount_end(&amu_sample->seq);
> > >  }
> > >  
> > >  static struct scale_freq_data amu_sfd = {
> > > @@ -189,6 +215,61 @@ static struct scale_freq_data amu_sfd = {
> > >  	.set_freq_scale = amu_scale_freq_tick,
> > >  };
> > >  
> > > +#define AMU_SAMPLE_EXP_MS	20
> > > +
> > > +unsigned int arch_freq_get_on_cpu(int cpu)
> > > +{
> > > +	struct amu_cntr_sample *amu_sample;
> > > +	unsigned long last_update;
> > > +	unsigned int seq;
> > > +	unsigned int freq;
> > > +	u64 scale;
> > > +
> > > +	if (!cpumask_test_cpu(cpu, amu_fie_cpus) || !arch_scale_freq_ref(cpu))
> > > +		return 0;
> > > +
> > > +retry:
> > > +	amu_sample = per_cpu_ptr(&cpu_amu_samples, cpu);
> > > +
> > > +	do {
> > > +		seq = raw_read_seqcount_begin(&amu_sample->seq);
> > > +		last_update = amu_sample->last_update;
> > > +	} while (read_seqcount_retry(&amu_sample->seq, seq));
> > 
> > Related to the point above, this retry loop should also contain
> > "scale = arch_scale_freq_capacity(cpu)", otherwise there's not much point
> > in the synchronisation, as far as I can tell.
> I'm not entirely sure why we would need to include the scale factor within
> the read critical section. The aim here is to make sure we see the update if
> one is ongoing, and that the update to the timestamp is observed along with
> the one to the scale factor, which is what write_seqcount_end() guarantees
> (although the latter is not a hard sell, as the update happens with interrupts
> disabled). If we later fetch a newer scale factor, that's perfectly fine; we
> do not want to see a stale one. Again, I can drop the seqcount (which is
> slightly abused in this case, I must admit) at the cost of potentially missing
> some updates.

Replying here for both comments, as they are related.

I fully agree, but I would be more inclined to drop the seqcount. Whether we
hit or miss an update that lands in the last few ns of the 20ms deadline is a
game of chance either way, with or without an extended write critical section.
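
For example (totally untested, just to illustrate what I mean), without the
seqcount the tail of amu_scale_freq_tick() could simply become the below; the
WRITE_ONCE()/READ_ONCE() pair is only there to avoid store/load tearing on
last_update:

	scale = min_t(unsigned long, scale, SCHED_CAPACITY_SCALE);
	this_cpu_write(arch_freq_scale, (unsigned long)scale);

	/* Pairs with READ_ONCE() in arch_freq_get_on_cpu() */
	WRITE_ONCE(amu_sample->last_update, jiffies);
}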

> > 
> > For x86, arch_freq_get_on_cpu() uses the counter deltas, and it would be
> > bad if values from different ticks were used. But here the only benefit
> > of synchronisation is to make sure that we're using the scale factor
> > computed at the last update time. For us, even skipping the
> > synchronisation logic would still be acceptable, as we'd be ensuring that
> > there was a tick in the past 20ms and we'd always use the most recent
> > value of the frequency scale factor.
> How would we ensure there was a tick in the last 20ms?

I just meant that we'd observe the presence of a tick in the last 20ms (if
there was one); we don't necessarily need to guarantee that we use the scale
factor obtained at that exact tick. We could use the latest one, as you
mentioned above as well.
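
Something along these lines for the reader, that is (again untested and on top
of your patch; returning 0 for a stale sample is just a placeholder for
whatever fallback to the cpufreq driver we settle on, and the scale to kHz
conversion at the end is my assumption of the usual scale * ref_freq >>
SCHED_CAPACITY_SHIFT):

unsigned int arch_freq_get_on_cpu(int cpu)
{
	struct amu_cntr_sample *amu_sample;
	unsigned long last_update, freq;

	if (!cpumask_test_cpu(cpu, amu_fie_cpus) || !arch_scale_freq_ref(cpu))
		return 0;

	amu_sample = per_cpu_ptr(&cpu_amu_samples, cpu);

	/* Pairs with WRITE_ONCE() in amu_scale_freq_tick() */
	last_update = READ_ONCE(amu_sample->last_update);

	/* No tick on this CPU in the last 20ms - consider the sample stale */
	if (time_is_before_jiffies(last_update +
				   msecs_to_jiffies(AMU_SAMPLE_EXP_MS)))
		return 0;

	/* Use the most recent scale factor, whichever tick produced it */
	freq = arch_scale_freq_capacity(cpu) * arch_scale_freq_ref(cpu);

	return freq >> SCHED_CAPACITY_SHIFT;
}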

Thanks,
Ionela.
