Date:   Fri, 25 Oct 2019 03:25:53 +0200
From:   Frederic Weisbecker <frederic@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Jacek Anaszewski <jacek.anaszewski@...il.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        "Rafael J . Wysocki" <rjw@...ysocki.net>,
        Rik van Riel <riel@...riel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Yauheni Kaliuta <yauheni.kaliuta@...hat.com>,
        Ingo Molnar <mingo@...nel.org>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Pavel Machek <pavel@....cz>
Subject: Re: [PATCH 11/14] sched/kcpustat: Introduce vtime-aware kcpustat
 accessor for CPUTIME_SYSTEM

On Thu, Oct 24, 2019 at 01:50:34PM +0200, Peter Zijlstra wrote:
> On Wed, Oct 16, 2019 at 04:56:57AM +0200, Frederic Weisbecker wrote:
> 
> > +static int kcpustat_field_vtime(u64 *cpustat,
> > +				struct vtime *vtime,
> > +				enum cpu_usage_stat usage,
> > +				int cpu, u64 *val)
> > +{
> > +	unsigned int seq;
> > +	int err;
> > +
> > +	do {
> > +		seq = read_seqcount_begin(&vtime->seqcount);
> > +
> > +		/*
> > +		 * We raced against context switch, fetch the
> > +		 * kcpustat task again.
> > +		 */
> > +		if (vtime->cpu != cpu && vtime->cpu != -1) {
> > +			err = -EAGAIN;
> > +			continue;
> 
> Did that want to be break?
> 
> > +		}
> > +
> > +		/*
> > +		 * Two possible things here:
> > +		 * 1) We are seeing the scheduling out task (prev) or any past one.
> > +		 * 2) We are seeing the scheduling in task (next) but it hasn't
> > +		 *    passed through vtime_task_switch() yet so the pending
> > +		 *    cputime of the prev task may not be flushed yet.
> > +		 *
> > +		 * Case 1) is ok but 2) is not. So wait for a safe VTIME state.
> > +		 */
> > +		if (vtime->state == VTIME_INACTIVE) {
> > +			err = -EAGAIN;
> > +			continue;
> 
> Idem.

Well, both were meant to be continue. Since this is a do/while loop,
continue jumps straight to the read_seqcount_retry() test, so it does
the same as break, except that if we raced with the updater we try
again with the same task.

But since we're re-checking anyway, we may as well reload the task,
so I'll turn those into break.

> 
> If so, you can do return -EAGAIN here, and return 0 at the end and get
> rid of err.
> 
> Also, if you're spin-waiting here, there should probably be a
> cpu_relax() before the return -EAGAIN.
> 
> And in case that is so, you probably want the rcu_read_lock() section
> below _inside_ the do{}while loop, such that the RCU section doesn't
> cover the entire spin-wait.

Good point!
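(For reference, a sketch of the shape being suggested, reusing the
identifiers from the quoted hunks; this is not buildable outside the
kernel tree and not the patch that was eventually posted. The helper
returns -EAGAIN directly, the caller spins with cpu_relax(), and the
rcu_read_lock() section sits inside the retry loop so RCU does not
cover the whole spin-wait:

```c
static int kcpustat_field_vtime(u64 *cpustat, struct vtime *vtime,
				enum cpu_usage_stat usage,
				int cpu, u64 *val)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&vtime->seqcount);

		/* Raced against context switch: have the caller
		 * fetch the kcpustat task again. */
		if (vtime->cpu != cpu && vtime->cpu != -1)
			return -EAGAIN;

		/* prev's pending cputime may not be flushed yet. */
		if (vtime->state == VTIME_INACTIVE)
			return -EAGAIN;

		/* ... snapshot *val from cpustat/vtime ... */
	} while (read_seqcount_retry(&vtime->seqcount, seq));

	return 0;
}

/* Caller: RCU section inside the loop, cpu_relax() between retries. */
for (;;) {
	struct task_struct *curr;
	int err;

	rcu_read_lock();
	curr = rcu_dereference(cpu_rq(cpu)->curr);
	err = kcpustat_field_vtime(cpustat, &curr->vtime, usage, cpu, &val);
	rcu_read_unlock();

	if (err != -EAGAIN)
		break;
	cpu_relax();
}
```
)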

> > +
> > +	do {
> > +		struct rq *rq = cpu_rq(cpu);
> > +		struct task_struct *curr;
> > +		struct vtime *vtime;
> > +
> > +		curr = rcu_dereference(rq->curr);
> 
> This is indeed safe now (relies on commit
> 
>   5311a98fef7d ("tasks, sched/core: RCUify the assignment of rq->curr")

Yeah and that has simplified the patchset a lot.

Thanks!
