Message-ID: <20160906190129.GO10153@twins.programming.kicks-ass.net>
Date:   Tue, 6 Sep 2016 21:01:29 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Stanislaw Gruszka <sgruszka@...hat.com>
Cc:     linux-kernel@...r.kernel.org,
        Giovanni Gherdovich <ggherdovich@...e.cz>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Mike Galbraith <mgalbraith@...e.de>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Rik van Riel <riel@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH v3] sched/cputime: Protect some other sum_exec_runtime
 reads on 32 bit cpus

On Tue, Sep 06, 2016 at 02:49:08PM +0200, Stanislaw Gruszka wrote:
> diff --git a/kernel/exit.c b/kernel/exit.c
> index 2f974ae..a46f96f 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -134,7 +134,7 @@ static void __exit_signal(struct task_struct *tsk)
>  	sig->inblock += task_io_get_inblock(tsk);
>  	sig->oublock += task_io_get_oublock(tsk);
>  	task_io_accounting_add(&sig->ioac, &tsk->ioac);
> -	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
> +	sig->sum_sched_runtime += read_sum_exec_runtime(tsk);
>  	sig->nr_threads--;
>  	__unhash_process(tsk, group_dead);
>  	write_sequnlock(&sig->stats_lock);

If I'm not mistaken, at this point @tsk is dead, so sum_exec_runtime
will never be updated again.
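
(For reference, a minimal sketch of the kind of helper the series adds;
the name read_sum_exec_runtime comes from the patch, but the body below
is only illustrative and assumes the rq-lock approach used earlier in
the series:)

	#ifdef CONFIG_64BIT
	static inline u64 read_sum_exec_runtime(struct task_struct *t)
	{
		/* A 64-bit load is atomic here, no locking needed. */
		return t->se.sum_exec_runtime;
	}
	#else
	static u64 read_sum_exec_runtime(struct task_struct *t)
	{
		struct rq_flags rf;
		struct rq *rq;
		u64 ns;

		/* Serialize against update_curr() so both halves are consistent. */
		rq = task_rq_lock(t, &rf);
		ns = t->se.sum_exec_runtime;
		task_rq_unlock(rq, t, &rf);

		return ns;
	}
	#endif

If the counter can never be updated again, the plain read is already
consistent and taking the rq lock here is pure overhead.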

> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index b93c72d..4d080f2 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -702,7 +682,7 @@ out:
>  void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
>  {
>  	struct task_cputime cputime = {
> -		.sum_exec_runtime = p->se.sum_exec_runtime,
> +		.sum_exec_runtime = read_sum_exec_runtime(p),
>  	};
>  
>  	task_cputime(p, &cputime.utime, &cputime.stime);

Right, even if @p == current, this can race with an interrupt (the tick)
updating sum_exec_runtime.
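
(To make the race concrete, a simplified view of the updater; the real
update_curr() in kernel/sched/fair.c does more bookkeeping than this:)

	/* Tick / scheduler path: the 64-bit counter is advanced here. */
	static void update_curr(struct cfs_rq *cfs_rq)
	{
		struct sched_entity *curr = cfs_rq->curr;
		u64 now = rq_clock_task(rq_of(cfs_rq));
		u64 delta_exec = now - curr->exec_start;

		curr->exec_start = now;
		curr->sum_exec_runtime += delta_exec;
	}

On 32-bit the unprotected read is two loads; if the tick fires between
them and carries sum_exec_runtime across a 32-bit boundary, the caller
ends up combining a stale half with a fresh one.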

> diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
> index 39008d7..a2d753b 100644
> --- a/kernel/time/posix-cpu-timers.c
> +++ b/kernel/time/posix-cpu-timers.c
> @@ -848,7 +848,7 @@ static void check_thread_timers(struct task_struct *tsk,
>  	tsk_expires->virt_exp = expires_to_cputime(expires);
>  
>  	tsk_expires->sched_exp = check_timers_list(++timers, firing,
> -						   tsk->se.sum_exec_runtime);
> +						   read_sum_exec_runtime(tsk));
>  
>  	/*
>  	 * Check for the special case thread timers.

Here @tsk == current and IRQs are disabled, so sum_exec_runtime cannot
be updated concurrently.
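
(A sketch of the invariant that makes the plain read safe here; the
WARN_ON_ONCE() is only for illustration, it is not part of the patch:)

	u64 sum;

	/*
	 * check_thread_timers() runs from the tick on @tsk itself with
	 * IRQs off, so update_curr() on this CPU, the only writer of
	 * tsk->se.sum_exec_runtime, cannot run under us.
	 */
	WARN_ON_ONCE(tsk != current || !irqs_disabled());
	sum = tsk->se.sum_exec_runtime;		/* a plain read cannot tear */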

> @@ -1115,7 +1115,7 @@ static inline int fastpath_timer_check(struct task_struct *tsk)
>  		struct task_cputime task_sample;
>  
>  		task_cputime(tsk, &task_sample.utime, &task_sample.stime);
> -		task_sample.sum_exec_runtime = tsk->se.sum_exec_runtime;
> +		task_sample.sum_exec_runtime = read_sum_exec_runtime(tsk);
>  		if (task_cputime_expired(&task_sample, &tsk->cputime_expires))
>  			return 1;
>  	}

Same.

