Message-ID: <554FC664.8030807@draigBrady.com>
Date: Sun, 10 May 2015 21:58:12 +0100
From: Pádraig Brady <P@...igBrady.com>
To: Alexey Dobriyan <adobriyan@...il.com>
CC: Michal Marek <mmarek@...e.cz>, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3] tags: much faster, parallel "make tags"
On 10/05/15 14:26, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
>> On 08/05/15 14:26, Alexey Dobriyan wrote:
>
>>> exuberant()
>>> {
>>> - all_target_sources | xargs $1 -a \
>>> + rm -f .make-tags.*
>>> +
>>> + all_target_sources >.make-tags.src
>>> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>>
>> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standardized.
Note getconf doesn't honor CPU affinity, which may be fine here?
$ taskset -c 0 getconf _NPROCESSORS_ONLN
4
$ taskset -c 0 nproc
1
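As an aside, if both the standard interface and affinity awareness are wanted,
a minimal (untested) sketch could prefer nproc and fall back to getconf:

# untested: prefer nproc (affinity aware); fall back to getconf, then to 1
NR_CPUS=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)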
>>> + NR_LINES=$(wc -l <.make-tags.src)
>>> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
>>> +
>>> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>>
>> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
>
> -nl/ can't count and always makes the first file somewhat bigger, which is
> suspicious. What else can't it do right?
It avoids the overhead of reading all the data and counting the lines,
by splitting the data into approximately equal numbers of lines, as detailed at:
http://gnu.org/s/coreutils/split
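To make that concrete, a minimal (untested) sketch of the chunking step,
assuming the per-chunk ctags runs and the final sort from the patch stay as
they are:

# untested: let split divide the list into $(nproc) roughly equal chunks
# itself, instead of pre-counting lines with wc -l
all_target_sources >.make-tags.src
split -a 6 -d -n l/$(nproc) .make-tags.src .make-tags.src.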
>>> + sort .make-tags.* >>$2
>>> + rm -f .make-tags.*
>>
>> Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
>> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
>> It's a bit awkward and was discussed at:
>> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
>> Summarising that: if not using merge you can:
>>
>> tlines=$(($(wc -l < "$2") + 1))
>> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>>
>> Or if merge is appropriate then:
>>
>> tlines=$(($(wc -l < "$2") + 1))
>> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
>> p.p.s. You may want to `trap cleanup EXIT` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> A naive
>
> trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.
Is $(jobs -p) generating the correct list?
On an interactive shell here it is.
Perhaps you need to explicitly use #!/bin/sh -m
at the start to enable job control like that?
Another option would be to append each background $! pid
to a list and kill that list.
Note also you may want to `wait` after the kill too.
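A minimal (untested) sketch of that pid-list approach, with a hypothetical
per-chunk ctags invocation and temp file names:

# untested: record each background pid explicitly instead of relying on
# job control, then kill and reap them from the trap
pids=""
trap 'kill $pids 2>/dev/null; wait; rm -f .make-tags.*' TERM INT
for chunk in .make-tags.src.*; do
        xargs ctags -a -f "$chunk.tags" <"$chunk" &  # hypothetical per-chunk run
        pids="$pids $!"
done
wait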
cheers,
Pádraig.