Date:	Fri, 2 May 2008 17:33:13 +0200
From:	Mariusz Kozlowski <m.kozlowski@...land.pl>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Dan Noe <dpn@...merica.net>, torvalds@...ux-foundation.org,
	rjw@...k.pl, davem@...emloft.net, linux-kernel@...r.kernel.org,
	jirislaby@...il.com
Subject: Re: Slow DOWN, please!!!

Hello,

> > Speaking of energy and time of a tester. I'd like to know where these resources
> > should be directed from the arch point of view. Once I had a plan to buy as
> > many arches as I could get and run a farm of test boxes 8-) But that's hard
> > because of various reasons (money, time, room, energy). What arches need more
> > attention? Which are forgotten? Which are going away? For example, does buying
> > an AlphaServer DS20 (hey - it's cheap) and running tests on it make sense
> > these days?
> 
> A lot of bugs are not architecture specific. Or when they are architecture
> specific they only affect some specific machines in that architecture.

Yes, there is a certain amount of bugs that I see only on a specific architecture.
Those that are reproducible or have an easy test case I do report to LKML, but
there are also bugs I see rarely, or just once - they never come back and sometimes,
as a bonus, leave no trace - and those I usually don't report. Providing a test case
is a challenge, and one can really learn a lot from it.

> But really a lot of bugs should happen on most architectures. Just focussing
> on lots of boxes is not necessarily productive.

What I meant was one box per architecture, preferably an SMP one where possible - so
the number of required boxes stays limited. This way, instead of just cross-compiling,
I could actually _run_ the kernel. On the other hand, if some arch is close to dead
and has no foreseeable future, then there is no point in testing it.

Also, my thinking was that bugs on architectures other than x86 can sometimes point
to more generic problems. Well - I'll buy just a few more and that's it ;)

> My recommendation would be to concentrate on deeper testing (more coverage)
> on the architectures you have.

Can do.
 
> An interesting project, for example, would be to play with the kernel gcov patch that
> was recently reposted (I hope it makes mainline eventually). Apply that patch,
> run all the test suites and tests you usually run on your favourite test box,
> and check how much of the code compiled into your kernel was really tested,
> using the coverage information. Then think: what additional tests can you do to get 
> more coverage? Write tests then? Or just write descriptions of what is not tested 
> and send them to the list, as a project for others looking to contribute to the 
> kernel.

Sounds like a plan - will look into that.
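[For reference, a minimal config sketch of such a coverage setup - this assumes
the gcov support in roughly the shape it eventually took in mainline
(CONFIG_GCOV_KERNEL, with per-object coverage data exported through debugfs);
the exact option names in the patch as posted may differ:]

```
# Kernel .config fragment for coverage-instrumented test kernels
# (assumption: mainline-style gcov option names; check the applied patch)
CONFIG_DEBUG_FS=y
CONFIG_GCOV_KERNEL=y
# Instrument the whole tree rather than selected directories:
CONFIG_GCOV_PROFILE_ALL=y
```

[Booted with that, coverage counters appear under debugfs
(mount -t debugfs none /sys/kernel/debug, then look under
/sys/kernel/debug/gcov), and the standard userspace lcov/genhtml pair can
turn a test run into a browsable report, e.g.
"lcov --capture --directory /sys/kernel/debug/gcov --output-file kernel.info"
followed by "genhtml kernel.info", making the untested code easy to spot.]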
 
	Mariusz aka arch'aeologist ;)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
