Message-ID: <20091113153653.GA4403@dhcp-lab-161.englab.brq.redhat.com>
Date:	Fri, 13 Nov 2009 16:36:54 +0100
From:	Stanislaw Gruszka <sgruszka@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
	Américo Wang <xiyou.wangcong@...il.com>,
	linux-kernel@...r.kernel.org, Oleg Nesterov <oleg@...hat.com>,
	Spencer Candland <spencer@...ehost.com>,
	Balbir Singh <balbir@...ibm.com>
Subject: Re: [PATCH] sys_times: fix utime/stime decreasing on thread exit

On Fri, Nov 13, 2009 at 02:16:59PM +0100, Peter Zijlstra wrote:
> > To fix this, we use pure tsk->{u,s}time values in __exit_signal(). This means
> > reverting:
> > 
> > commit 49048622eae698e5c4ae61f7e71200f265ccc529
> > Author: Balbir Singh <balbir@...ux.vnet.ibm.com>
> > Date:   Fri Sep 5 18:12:23 2008 +0200
> > 
> >     sched: fix process time monotonicity
> > 
> > which was also a fix for some utime/stime decreasing issues. However,
> > I _believe_ the issues that commit wanted to fix were caused by
> > Problem 1), and this patch does not make them happen again.
> 
> It would be very good to verify that believe and make it a certainty.
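
For context, the revert in __exit_signal() boils down to something like the
following (a hand-written sketch against kernel/exit.c of that time, not the
actual patch):

-	sig->utime = cputime_add(sig->utime, task_utime(tsk));
-	sig->stime = cputime_add(sig->stime, task_stime(tsk));
+	sig->utime = cputime_add(sig->utime, tsk->utime);
+	sig->stime = cputime_add(sig->stime, tsk->stime);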

Balbir, is there any chance to avoid the task_[usg]time() usage here? Could
you be so kind as to point me to the reproducer program/script you used
when working on the "sched: fix process time monotonicity" commit?
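
FWIW, what I would expect such a reproducer to look like is roughly the
following (my own untested sketch, not necessarily what you used): spawn
threads that burn some CPU and exit, while the parent keeps sampling
times() and checks that utime/stime never go backwards.

/* reproducer sketch: watch utime/stime across thread exits
 * (build with -lpthread); exits non-zero on the first decrease */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/times.h>

static void *burn(void *arg)
{
	volatile unsigned long i;

	(void)arg;
	for (i = 0; i < 50000000UL; i++)
		;	/* consume some CPU time, then exit */
	return NULL;
}

int main(void)
{
	struct tms prev = { 0 }, cur;
	pthread_t t;
	int n;

	for (n = 0; n < 1000; n++) {
		pthread_create(&t, NULL, burn, NULL);
		pthread_join(t, NULL);	/* thread exits and gets reaped here */

		times(&cur);
		if (cur.tms_utime < prev.tms_utime ||
		    cur.tms_stime < prev.tms_stime) {
			printf("decreased: utime %ld -> %ld, stime %ld -> %ld\n",
			       (long)prev.tms_utime, (long)cur.tms_utime,
			       (long)prev.tms_stime, (long)cur.tms_stime);
			exit(1);
		}
		prev = cur;
	}
	printf("no decrease observed\n");
	return 0;
}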

> Otherwise we need to do the opposite and propagate task_[usg]time() to
> all other places... :/
>
> /me quickly stares at fs/proc/array.c:do_task_stat(), which is what top
> uses to get the times..
> 
> That simply uses thread_group_cputime() properly under siglock and would
> thus indeed require the use of task_[usg]time() in order to avoid the
> stupid hiding 'exploit'..
> 
> Oh bugger,.. 
> 
> I think we do indeed need something like the below, not sure if all
> task_[usg]time() calls are now under siglock, if not they ought to be,
> otherwise there's a race with them updating p->prev_[us]time.
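
(Right - task_[us]time() is not a pure read: it rescales p->utime against
se.sum_exec_runtime and then does a read-modify-write max() on
p->prev_utime, roughly like the simplified sketch below, so unsynchronized
callers can indeed race on prev_[us]time. Sketch only, from memory of
kernel/sched.c of that era, not verbatim.)

	/* simplified sketch of task_utime(): the final max() update of
	 * p->prev_utime is a read-modify-write, hence the need for
	 * siglock (or similar) around concurrent callers */
	cputime_t task_utime(struct task_struct *p)
	{
		clock_t utime = cputime_to_clock_t(p->utime);
		clock_t total = utime + cputime_to_clock_t(p->stime);
		u64 temp = (u64)nsec_to_clock_t(p->se.sum_exec_runtime);

		if (total) {
			temp *= utime;		/* scale runtime by utime/total */
			do_div(temp, total);
		}

		p->prev_utime = max(p->prev_utime,
				    clock_t_to_cputime((clock_t)temp));
		return p->prev_utime;
	}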
> 
> ---
> 
> diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
> index 5c9dc22..9b1d715 100644
> --- a/kernel/posix-cpu-timers.c
> +++ b/kernel/posix-cpu-timers.c
> @@ -170,11 +170,11 @@ static void bump_cpu_timer(struct k_itimer *timer,
>  
>  static inline cputime_t prof_ticks(struct task_struct *p)
>  {
> -	return cputime_add(p->utime, p->stime);
> +	return cputime_add(task_utime(p), task_stime(p));
>  }
>  static inline cputime_t virt_ticks(struct task_struct *p)
>  {
> -	return p->utime;
> +	return task_utime(p);
>  }
>  
>  int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *tp)

Something was wrong with the formatting, by the way - the patch came through line-wrapped.

> @@ -248,8 +248,8 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
>  
>  	t = tsk;
>  	do {
> -		times->utime = cputime_add(times->utime, t->utime);
> -		times->stime = cputime_add(times->stime, t->stime);
> +		times->utime = cputime_add(times->utime, task_utime(t));
> +		times->stime = cputime_add(times->stime, task_stime(t));
>  		times->sum_exec_runtime += t->se.sum_exec_runtime;
>  
>  		t = next_thread(t);
[snip]

I confirmed that the patch fixes the problem, using the reproducer from this thread.

But I don't like it much. It's sad that we cannot make the transition in the
opposite direction and remove task_{u,s}time() instead.

A few months ago I was thinking about removing cputime_t and using
long long instead; now I see many more reasons for doing this, but I still
lack the skills/time for it - oh dear.
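
(The appeal being that cputime_t hides even trivial arithmetic behind
per-arch macros - the generic version is roughly the sketch below, from
memory, not an exact quote - whereas a plain u64/long long in a fixed unit
would let us use ordinary +, - and comparisons everywhere.)

	/* rough sketch of include/asm-generic/cputime.h from that time */
	typedef unsigned long cputime_t;

	#define cputime_zero			(0UL)
	#define cputime_add(__a, __b)		((__a) + (__b))
	#define cputime_sub(__a, __b)		((__a) - (__b))
	#define cputime_lt(__a, __b)		((__a) < (__b))
	#define cputime_to_clock_t(__ct)	jiffies_to_clock_t(__ct)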

Stanislaw
