Message-ID: <20160821025510.GB9693@thunk.org>
Date: Sat, 20 Aug 2016 22:55:10 -0400
From: Theodore Ts'o <tytso@....edu>
To: Dmitry Monakhov <dmonakhov@...nvz.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] Add dockerfile

Ok, I've checked in a Dockerfile into the xfstests-bld repository, and
played with it some, and I have a couple of observations:

First of all, despite some work cleaning up the Dockerfile, the
resulting image is somewhere between 150% and 200% larger than it
would be if we built root_fs.img outside of Docker.  A bunch of the
wasted space is simply because we have to include a 47 MB
xfstests.tar.gz file which then has to get re-inserted into the
root_fs image via the --update-xfstests command-line option.
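
(As a quick way of seeing where the space goes, something like the
following shows the per-layer sizes; I'm using the image name from
your example below, so substitute whatever tag you actually built:)

    docker history dmonakhov/xfstests-bld
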
Secondly, as an automated build procedure, it's rather lacking in two
ways: for one, it doesn't do the gen-image step, and for another, it
only builds the 64-bit x86_64 binaries --- and I normally like to use
the 32-bit i386 image, since that can be used to test both 32-bit and
64-bit kernels, and it also forces us to test the 32/64-bit ioctl
compat code.  So it's really only useful as a CI mechanism --- and in
order to use it I have to give Docker read/write access to my github
repositories.  I might consider creating a throwaway repository on
bitbucket just for the CI effort, but it's not high on my priority
list, since it doesn't test the gen-image part of the build process.

Third, it's a bit more inconvenient to use than the comments in your
Dockerfile would imply. The command:

    docker run -i -t --privileged --rm dmonakhov/xfstests-bld \
        kvm-xfstests.sh --kernel /tmp/bzImage --update-files \
        --update-xfstests-tar smoke

... won't work, because the image won't have /tmp/bzImage in it.  So
you would need to add "-v /tmp/bzImage:/tmp/bzImage" to the command
line, making it even more unwieldy.
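
In other words, to get the example from the comments to actually run,
it would have to look something like this (assuming the bzImage really
is sitting at /tmp/bzImage on the host):

    docker run -i -t --privileged --rm -v /tmp/bzImage:/tmp/bzImage \
        dmonakhov/xfstests-bld kvm-xfstests.sh --kernel /tmp/bzImage \
        --update-files --update-xfstests-tar smoke
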
BTW, with the changes I've just committed, we can drop the --kernel
option, since we now default to ~/linux, and so the command would look
like this:

    docker run -i -t --rm -v /build/ext4:/root/linux --privileged \
        tytso/xfstests-bld kvm-xfstests --update-files \
        --update-xfstests smoke

So if I am going to publish something to the Docker Hub, it would be
an addition to my current release process, where I would build the
root_fs using my existing Debian build chroots, and just create a
minimal tytso/kvm-xfstests image which would have just the needed
files, and would probably end up weighing in at around 200-225 MB.
The user wouldn't need to specify the --update-files and
--update-xfstests flags, and it would start faster.
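
A rough sketch of what I have in mind (the base image, package list,
and paths below are guesses at this point, not the actual release
recipe):

    # Minimal kvm-xfstests appliance image (sketch only)
    FROM debian:jessie
    RUN apt-get update && \
        apt-get install -y --no-install-recommends qemu-kvm && \
        rm -rf /var/lib/apt/lists/*
    # root_fs.img is pre-built outside of Docker with gen-image
    COPY root_fs.img /root/kvm-xfstests/test-appliance/root_fs.img
    COPY kvm-xfstests /usr/local/bin/kvm-xfstests
    ENTRYPOINT ["/usr/local/bin/kvm-xfstests"]

That keeps the xfstests tarball out of the image entirely; only the
pre-built root_fs gets shipped.
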
On the other hand, I'm not at all convinced this is actually a great
way to run kvm-xfstests; for one thing, the log file is trapped inside
the Docker container, and so you would need to manually extract it in
order to keep a history of past test runs. (This is also the
challenge of just sharding the test runs; collating the test results
becomes a big pain.) And the whole concept of running a VM inside a
docker container reminds me a bit of the "Hitler uses Docker" rant at:
https://youtu.be/PivpCKEiQOQ?t=2m27s
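
(The usual workarounds apply: drop the --rm and pull the results out
afterwards with "docker cp", or bind-mount a host directory over
wherever the logs end up inside the container.  Something like the
following, with the in-container log path being a guess on my part:)

    docker cp <container-id>:/root/kvm-xfstests/logs /var/tmp/xfs-logs

    # or, set it up before the run:
    docker run ... -v /var/tmp/xfs-logs:/root/kvm-xfstests/logs ... smoke
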
I'm also not wild about encouraging people to run random Docker images
they download over the network with docker run --privileged. That's
right up there with running with scissors, encouraging people to give
Docker read/write access to their github accounts, using the same
password across hundreds of web sites, etc.....

If the real goal is to allow people to shard the tests so they can be
run across multiple VM's, it might be better to give kvm-xfstests some
options so that each instance uses a different set of disks, and
either a different set of network ports or no network ports at all.
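
(Purely as a strawman, and none of this exists today, but I'm thinking
of something along these lines:)

    # hypothetical --instance option: each instance would get its own
    # vdb/vdc/vdd disk images and its own console/monitor ports, so
    # two runs don't stomp on each other
    kvm-xfstests --instance 1 smoke &
    kvm-xfstests --instance 2 smoke &
    wait
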
Cheers,
- Ted