Message-ID: <56FAEC3D.1070300@redhat.com>
Date:	Tue, 29 Mar 2016 17:57:33 -0300
From:	Daniel Bristot de Oliveira <bristot@...hat.com>
To:	Steven Rostedt <rostedt@...dmis.org>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Juri Lelli <juri.lelli@....com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-rt-users <linux-rt-users@...r.kernel.org>
Subject: Re: [PATCH V2 3/3] sched/deadline: Tracepoints for deadline scheduler



On 03/29/2016 05:29 PM, Steven Rostedt wrote:
>>> Yes, we don't want to get rid of the old one. But it shouldn't break
>>> anything if we extend it. I'm thinking of extending it with a dynamic
>>> array to store the deadline task values (runtime, period). And for non
>>> deadline tasks, the array would be empty (size zero). I think that
>>> could be doable and maintain backward compatibility.
>>
>> Why the complexity? Why not just tack those 32 bytes on and get on with
>> life?
>
> 32 bytes that are zero and meaningless for 99.999% of scheduling?

I agree. Not only because of the extra bytes, but also because of the
extra information, which is useless to the 99.999% of users who are not
tracing deadline tasks.
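
Just to make the idea concrete, here is a rough sketch of what the
dynamic-array variant could look like (this is not the actual patch,
only an illustration of the TRACE_EVENT mechanism, and the real
sched_switch entry has more fields than shown here):

	TP_STRUCT__entry(
		__array(	char,	next_comm,	TASK_COMM_LEN	)
		__field(	pid_t,	next_pid			)
		/* two u64s for deadline tasks, zero-length otherwise */
		__dynamic_array(u64,	dl,	dl_task(next) ? 2 : 0	)
	),

	TP_fast_assign(
		memcpy(__entry->next_comm, next->comm, TASK_COMM_LEN);
		__entry->next_pid = next->pid;
		if (dl_task(next)) {
			u64 *dl = __get_dynamic_array(dl);

			dl[0] = next->dl.dl_runtime;
			dl[1] = next->dl.dl_period;
		}
	),

A consumer can look at __get_dynamic_array_len(dl): zero means "not a
deadline task", 16 means runtime and period are present. So the common
case costs nothing beyond the 4-byte __data_loc word in the entry.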

> The scheduling tracepoint is probably the most commonly used tracepoint,
> and one of the most frequently hit. 32 bytes of wasted space per event can
> cause a lot of tracing to be missed.

And any change to it, now or in the future, will cause confusion for
99.999% of raw sched_switch users. That is not even counting those who
wrote fragile applications that will simply break, and those who wrote
careful applications and will probably have to keep several versions of
their handlers to stay backward compatible with old kernels.
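
To put rough numbers on the space argument (the ~70 bytes per
sched_switch event below is only a ballpark, counting the ring buffer
header; I did not measure it precisely):

	 ~70 bytes/event -> a 1 MiB per-CPU buffer holds ~15,000 events
	~102 bytes/event -> the same buffer holds only   ~10,300 events

So roughly a third of the trace window is gone for everybody, while the
extra fields only say something when a deadline task is on one side of
the switch.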

If it needs to be generic, I vote for a dynamic set of data, handled
"per-scheduler", as Steven mentioned before... (even though "generic but
per-scheduler" sounds a bit contradictory).
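
To show what I mean by "per-scheduler" (purely hypothetical sketch: the
->trace_data_len()/->trace_fill_data() hooks below do not exist, they
just mark where the class-specific knowledge would live so sched_switch
itself stays policy-agnostic):

	__dynamic_array(char, class_data,
			next->sched_class->trace_data_len ?
			next->sched_class->trace_data_len(next) : 0)

	TP_fast_assign(
		/* hypothetical hook: each sched_class fills its own blob */
		if (next->sched_class->trace_fill_data)
			next->sched_class->trace_fill_data(next,
					__get_dynamic_array(class_data));
	),

Each scheduling class would then decide what, if anything, it wants to
expose, and plain CFS or RT tasks would keep emitting a zero-length
array.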

-- Daniel
