Message-ID: <20190226023016.ooyjsiopsyc6vg5s@vireshk-i7>
Date:   Tue, 26 Feb 2019 08:00:16 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Qais Yousef <qais.yousef@....com>
Cc:     Rafael Wysocki <rjw@...ysocki.net>, linux-pm@...r.kernel.org,
        Vincent Guittot <vincent.guittot@...aro.org>, mka@...omium.org,
        juri.lelli@...il.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 4/5] cpufreq: Register notifiers with the PM QoS
 framework

On 25-02-19, 12:14, Qais Yousef wrote:
> On 02/25/19 14:39, Viresh Kumar wrote:
> > On 25-02-19, 08:58, Qais Yousef wrote:
> > > On 02/25/19 10:01, Viresh Kumar wrote:
> > > > > > +	min = dev_pm_qos_read_value(cpu_dev, DEV_PM_QOS_MIN_FREQUENCY);
> > > > > > +	max = dev_pm_qos_read_value(cpu_dev, DEV_PM_QOS_MAX_FREQUENCY);
> > > > > > +
> > > > > > +	if (min > new_policy->min)
> > > > > > +		new_policy->min = min;
> > > > > > +	if (max < new_policy->max)
> > > > > > +		new_policy->max = max;
> > 
> > > And this is why we need to check here that the PM QoS value doesn't conflict
> > > with the current min/max, right? Until the current notifier code is removed
> > > they could trip over each other.
> > 
> > No. The above if/else block is already removed as part of patch 5/5. It was
> > required because of a conflict between the userspace-specific min/max and the
> > QoS min/max, which are migrated to use QoS by patch 5/5.
> > 
> > The cpufreq notifier mechanism already lets users play with min/max and that is
> > already safe from conflicts.
> > 
> > 
> > > It would be nice to add a comment here about PM QoS managing and remembering
> > > values
> > 
> > I am not sure if that would add any value. Some documentation update may be
> > useful for people looking for details, though; I shall do that after all the
> > changes get in and things become a bit more stable.
> > 
> 
> Up to you. But not everyone is familiar with the code, and a one-line comment
> that points to where the aggregation happens would be helpful for someone
> scanning this code IMHO.

Okay, will add something then.
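
Something along these lines, perhaps (just a sketch of the comment, not the
final wording):

	/*
	 * PM QoS aggregates all min/max frequency requests per device and
	 * dev_pm_qos_read_value() returns the effective constraint, so no
	 * extra conflict handling is needed here.
	 */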

> > > and that we need to be careful that both mechanisms don't trip over
> > > each other until this transient period is over.
> > 
> > The second mechanism will die very soon once this is merged; migrating its
> > users shouldn't be a big challenge AFAICT. I didn't attempt that because I
> > didn't want to waste time updating things in case this version also doesn't
> > make sense to others.
> > 
> > > I have a nit too. It would be nice to explicitly state this is
> > > CPU_{MIN,MAX}_FREQUENCY. I can see someone else adding {MIN,MAX}_FREQUENCY for
> > > something else (memory maybe?).
> > 
> > This is not CPU-specific; it works for any device. The same interface shall be
> > used by devfreq as well, which wanted to use freq-constraints initially.
> > 
> 
> I don't get that to be honest. I probably have to read more.
> 
> Are you saying that when a MIN_FREQUENCY constraint is applied, the same value
> will be applied to both cpufreq and devfreq? Isn't that too coarse?

Oh no. A QoS constraint is added like this:

        dev_pm_qos_add_request(dev, req, DEV_PM_QOS_MIN_FREQUENCY, min);

Now dev here can be any device struct: a CPU's, a GPU's, or anything else's. All
the MIN freq requests are stored and processed per device, so for a CPU in
cpufreq all we will see are the MIN requests for the CPUs. And so the macro
needs to be a bit generic and shouldn't have the word CPU in it.
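
Just to make the aggregation a bit more concrete, here is a rough sketch (not
part of this series; the example users, values and kHz units are assumptions
for illustration only) of two independent users constraining the min frequency
of CPU0's device. PM QoS stores every request per device, and cpufreq only ever
reads the aggregated value:

#include <linux/cpu.h>
#include <linux/pm_qos.h>

/* Hypothetical example: two independent requests against the same CPU device */
static struct dev_pm_qos_request thermal_req, userspace_req;

static int example_add_constraints(void)
{
	struct device *cpu_dev = get_cpu_device(0);
	int ret;

	if (!cpu_dev)
		return -ENODEV;

	/* Hypothetical thermal user: at least 800 MHz (800000 kHz) */
	ret = dev_pm_qos_add_request(cpu_dev, &thermal_req,
				     DEV_PM_QOS_MIN_FREQUENCY, 800000);
	if (ret < 0)
		return ret;

	/* Hypothetical userspace user: at least 1.2 GHz */
	ret = dev_pm_qos_add_request(cpu_dev, &userspace_req,
				     DEV_PM_QOS_MIN_FREQUENCY, 1200000);
	if (ret < 0) {
		dev_pm_qos_remove_request(&thermal_req);
		return ret;
	}

	/*
	 * The effective min is the highest of the two requests, so
	 * dev_pm_qos_read_value(cpu_dev, DEV_PM_QOS_MIN_FREQUENCY)
	 * now returns 1200000.
	 */
	return 0;
}

The same calls against a devfreq device's struct device would constrain that
device instead, which is why the request type name is device-generic.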

Hope I was able to clarify your doubt a bit. Thanks.

-- 
viresh
