Message-ID: <Pine.LNX.4.64.0703252107590.31900@linmac.oyster.ru>
Date:	Sun, 25 Mar 2007 21:14:11 +0400 (MSD)
From:	malc <av1474@...tv.ru>
To:	Con Kolivas <kernel@...ivas.org>
cc:	Ingo Molnar <mingo@...e.hu>,
	linux list <linux-kernel@...r.kernel.org>, zwane@...radead.org,
	ck list <ck@....kolivas.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch] sched: accurate user accounting

On Mon, 26 Mar 2007, Con Kolivas wrote:

> On Monday 26 March 2007 01:19, malc wrote:
>> On Mon, 26 Mar 2007, Con Kolivas wrote:
>>> So before we go any further with this patch, can you try the following
>>> one and see if this simple sanity check is enough?
>>
>> Sure (compiling the kernel now); too bad the old axiom that testing cannot
>> confirm the absence of bugs still holds.
>>
>> I have one nit and one request for clarification. Question first (I
>> admit I haven't looked at the code surrounding the patch; maybe things
>> would have been self-evident if I had):
>>
>> What this patch amounts to is that the accounting logic is moved from
>> the timer interrupt to the place where the scheduler switches tasks (or
>> something to that effect)?
>
> Both the scheduler tick and the context switch now. So yes, it adds overhead
> as I said; we already do update_cpu_clock on context switch, but it's not
> this complex.
>
>> [..snip..]
>>
>>>  * These are the 'tuning knobs' of the scheduler:
>>> @@ -3017,8 +3018,53 @@ EXPORT_PER_CPU_SYMBOL(kstat);
>>> static inline void
>>> update_cpu_clock(struct task_struct *p, struct rq *rq, unsigned long long
>>> now) {
>>> -	p->sched_time += now - p->last_ran;
>>> +	struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
>>> +	cputime64_t time_diff;
>>> +
>>> 	p->last_ran = rq->most_recent_timestamp = now;
>>> +	/* Sanity check. It should never go backwards or ruin accounting */
>>> +	if (unlikely(now < p->last_ran))
>>> +		return;
>>> +	time_diff = now - p->last_ran;
>>
>> A nit. Anything wrong with:
>>
>> time_diff = now - p->last_ran;
>> if (unlikely(LESS_THAN_ZERO(time_diff)))
>>          return;
>
> Does LESS_THAN_ZERO work on a cputime64_t on all arches? I can't figure that
> out just by looking myself, which is why I did it the other way.

I have no idea what type cputime64_t really is, so I used this imaginary
LESS_THAN_ZERO thing.
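
That said, I believe on most arches cputime64_t ends up being a plain
unsigned 64-bit integer, in which case a LESS_THAN_ZERO-style test on the
difference could not work anyway: the subtraction simply wraps. A tiny
standalone sketch (plain userspace C, not kernel code) of what I mean:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* pretend timestamps, arbitrary units */
		uint64_t last_ran = 1000;
		uint64_t now = 990;	/* clock appears to go backwards */

		/* with an unsigned type this wraps to a huge value,
		 * it never becomes "negative" */
		uint64_t time_diff = now - last_ran;
		printf("time_diff = %llu\n", (unsigned long long)time_diff);

		/* the only reliable check is on the operands themselves */
		if (now < last_ran)
			printf("clock went backwards, skip accounting\n");
		return 0;
	}

So the comparison really has to be done on `now' and the old `last_ran',
not on the difference.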

Erm... I just looked at the code, and suddenly it stopped making any sense
at all:

         p->last_ran = rq->most_recent_timestamp = now;
         /* Sanity check. It should never go backwards or ruin accounting */
         if (unlikely(now < p->last_ran))
                 return;
         time_diff = now - p->last_ran;

First `now' is assigned to `p->last_ran', and the very next line compares
those two values, and then the difference is taken. I quite frankly am
either very tired or failing to see the point: since the assignment happens
first, the check can never trigger, so time_diff is either always zero or
there's always a race here.
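
Presumably the check was meant to run against the *old* value of
p->last_ran, i.e. the difference has to be computed before the timestamp is
overwritten. Something along these lines (untested, just to show the
ordering; whether last_ran should still be refreshed when the clock appears
to go backwards is a separate question):

	cputime64_t time_diff;

	/* sanity check against the old timestamp, before it is updated */
	if (unlikely(now < p->last_ran)) {
		p->last_ran = rq->most_recent_timestamp = now;
		return;
	}
	time_diff = now - p->last_ran;
	p->last_ran = rq->most_recent_timestamp = now;
	/* ... rest of the accounting using time_diff ... */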

--
vale
