Message-ID: <MWHPR13MB0895B92C9B4807D94E1E6B04FDE30@MWHPR13MB0895.namprd13.prod.outlook.com>
Date:   Fri, 6 Mar 2020 19:49:38 +0000
From:   "Bird, Tim" <Tim.Bird@...y.com>
To:     Shuah Khan <skhan@...uxfoundation.org>,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
CC:     Kevin Hilman <khilman@...libre.com>
Subject: kselftest selftest issues and clarifications


> -----Original Message-----
> From: Shuah Khan
> 
> On 2/28/20 10:50 AM, Bird, Tim wrote:
> >
> >
> >> -----Original Message-----
> >> From:  Shuah Khan
> >>
> >> Integrating Kselftest into Kernel CI rings depends on the Kselftest build
> >> and install framework supporting Kernel CI use-cases. I am kicking off
> >> an effort to support Kselftest runs in Kernel CI rings. Running these
> >> tests in Kernel CI rings will help the quality of kernel releases, both
> >> stable and mainline.
> >>
> >> What is required for full support?
> >>
> >> 1. Cross-compilation & relocatable build support
> >> 2. Generates objects in objdir/kselftest without cluttering main objdir
> >> 3. Leave source directory clean
> >> 4. Installs correctly in objdir/kselftest/kselftest_install and adds
> >>      itself to run_kselftest.sh script generated during install.
> >>
> >> Note that install step is necessary for all files to be installed for
> >> run time support.
> >>
> >> I looked into the current status and identified problems. The work is
> >> minimal to add full support. Out of 80+ tests, 7 fail to cross-build
> >> and 1 fails to install correctly.
> >>
> >> List is below:
> >>
> >> Tests that fail to build: bpf, capabilities, kvm, memfd, mqueue, timens, vm
> >> Tests that fail to install: android (partial failure)
> >> Tests that leave the source directory dirty: bpf, seccomp
> >>
> >> I have patches ready for the following issues:
> >>
> >> Kselftest objects (test dirs) clutter top level object directory.
> >> seccomp_bpf generates objects in the source directory.
> >>
> >> I created a topic branch to collect all the patches:
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/?h=kernelci
> >>
> >> I am going to start working on build problems. If anybody is
> >> interested in helping me with this effort, don't hesitate to
> >> contact me. My first priority is fixing build and install, and
> >> then I'll look into tests that leave the source directory dirty.
> >
> > I'm interested in this.  I'd like the same cleanups in order to run
> > kselftest in Fuego, and I can try it with additional toolchains
> > and boards.  Unfortunately, in terms of running tests, almost all
> > the boards in my lab are running old kernels.  So the tests results
> > aren't useful for upstream work.  But I can still test
> > compilation and install issues, for the kselftest tests themselves.
> >
> 
> Testing compilation and install issues is very valuable. This is one
> area that hasn't had much test coverage compared to running tests. So it
> would be great if you can help with build/install on linux-next to catch
> problems in new tests. I am finding that older tests have been stable,
> and as new tests come in, we tend to miss catching these types of
> problems.
> 
> Especially cross-builds and installs on arm64 and others.

OK.  I've got 2 different arm64 compilers, with wildly different SDK setups,
so hopefully this will be useful.

> >>
> >> Detailed report can be found here:
> >>
> >> https://drive.google.com/file/d/11nnWOKIzzOrE4EiucZBn423lzSU_eNNv/view?usp=sharing
> >
> > Is there anything you'd like me to look at specifically?  Do you want me to start
> > at the bottom of the list and work up?  I could look at 'vm' or 'timens'.
> >
> 
> Yes, you can start with vm and timens.

I wrote a test for Fuego and ran into a few interesting issues.  Also, I have a question
about the best place to start, and your preference for reporting results.  Your feedback
on any of this would be appreciated:

Here are some issues and questions I ran into:
1) overwriting of CC in lib.mk
This line in tools/testing/selftests/lib.mk caused me some grief:
CC := $(CROSS_COMPILE)gcc

One of my toolchains pre-defines CC with a bunch of extra flags, so this didn't work for
that toolchain.
I'm still debugging this.  I'm not sure why the weird definition of CC works for the rest
of the kernel but not with kselftest.  But I may submit some kind of patch to make this
CC assignment conditional (that is, only do the assignment if it's not already defined).
Let me know what you think.
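
For reference, the kind of change I have in mind is just an untested sketch,
but since CC always has a built-in default in make (so ?= wouldn't help), I'd
guard the assignment with an origin check, something like:

# only set CC when make's built-in default is in effect, so a CC
# supplied by the environment or an SDK setup script is left alone
ifeq ($(origin CC), default)
CC := $(CROSS_COMPILE)gcc
endif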

2) ability to get list of targets would be nice
It would be nice if there were a mechanism to get the list of default targets from
kselftest.  I added the following for my own tests, so that I don't have to hard-code
my loop over the individual selftests:

diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 63430e2..9955e71 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -246,4 +246,7 @@ clean:
 		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean;\
 	done;
 
+show_targets:
+	@echo $(TARGETS)
+
 .PHONY: khdr all run_tests hotplug run_hotplug clean_hotplug run_pstore_crash install clean

This is pretty simple.  I can submit this as a proper patch, if you're willing to take
something like it, and we can discuss details if you'd rather see this done another way.
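
As an example of how I'd use it, my wrapper script would do roughly the
following (just a shell sketch, assuming the patch above is applied):

# build each default selftest target individually
for target in $(make -s --no-print-directory -C tools/testing/selftests show_targets) ; do
        make ARCH=$ARCHITECTURE TARGETS="$target" -C tools/testing/selftests
done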

3) different ways to invoke kselftest
There are a number of different ways to invoke kselftest.  I'm currently using the
'-C' method for both building and installing.
make ARCH=$ARCHITECTURE TARGETS="$target" -C tools/testing/selftests
make ARCH=$ARCHITECTURE TARGETS="$target" -C tools/testing/selftests install

I think there are now targets for kselftest in the top-level Makefile.
Do you have a preferred method you'd like me to test?  Or would you like
me to run my tests with multiple methods?
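
For reference, my reading of the top-level Makefile is that the rough
equivalents would be the lines below, though the top-level 'kselftest'
target looks like it runs the tests after building rather than just
building them (please correct me if I've misread it):

make ARCH=$ARCHITECTURE TARGETS="$target" kselftest
make ARCH=$ARCHITECTURE TARGETS="$target" kselftest-install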

And I'm using a KBUILD_OUTPUT environment variable, rather than O=.
Let me know if you'd like me to build a matrix of these different build methods.

4) what tree(s) would you like me to test?
I think you mentioned that you'd like to see the tests against 'linux-next'.
Right now I've been doing tests against the 'torvalds' mainline tree, and
the 'linux-kselftest' tree, master branch.  Let me know if there are other
branches or trees you'd like me to test.

5) where would you like test results?
In the short term, I'm testing the compile and install of the tests
and working on the ones that fail for me (I'm getting 17 or 18
failures, depending on the toolchain I'm using, for some of my boards).
However, I'm still debugging my setup; I hope I can get that down
to the same ones you are seeing shortly.

Longer-term I plan to set up a CI loop for these tests for Fuego, and publish some
kind of matrix of results and reports on my own server (https://birdcloud.org/).
I'm generating HTML tables now that work with Fuego's Jenkins
configuration, but I could send the data elsewhere if desired.

This is still under construction.  Would you like me to publish results also to
kcidb, or some other repository?  I might be able to publish my
results to KernelCI, but I'll end up with a customized report for kselftest
that will allow drilling down to see output for individual compile or
install failures.  I'm not sure how much of that would be supported in
the KernelCI interface.  But I recognize you'd probably rather not
have to go to multiple places to see results.

Also, in terms of periodic results, do you want any e-mails
sent to the linux-kselftest list?  I thought I'd hold off for now,
and wait for the compile/install fixes to settle down, so that
future e-mails would only report regressions or issues with new tests.
We can discuss this later, as I don't plan to do this quite
yet (and would only do an e-mail after checking with you anyway).

Thanks for any feedback you can provide.
 -- Tim

P.S. Also, please let me know who is working on this on the KernelCI
side (if it's not Kevin), so I can CC them on future discussions.
