Message-ID: <87si1h6g3o.fsf@ashishki-desk.ger.corp.intel.com>
Date:	Thu, 28 Jan 2016 14:30:51 +0200
From:	Alexander Shishkin <alexander.shishkin@...ux.intel.com>
To:	Dmitry Vyukov <dvyukov@...gle.com>, syzkaller@...glegroups.com,
	vegard.nossum@...cle.com, catalin.marinas@....com,
	taviso@...gle.com, will.deacon@....com,
	linux-kernel@...r.kernel.org, quentin.casasnovas@...cle.com,
	kcc@...gle.com, edumazet@...gle.com, glider@...gle.com,
	keescook@...gle.com, bhelgaas@...gle.com, sasha.levin@...cle.com,
	akpm@...ux-foundation.org, drysdale@...gle.com,
	linux-arm-kernel@...ts.infradead.org, ard.biesheuvel@...aro.org,
	ryabinin.a.a@...il.com, kirill@...temov.name,
	Peter Zijlstra <peterz@...radead.org>
Cc:	Dmitry Vyukov <dvyukov@...gle.com>
Subject: Re: [PATCH v6] kernel: add kcov code coverage

Dmitry Vyukov <dvyukov@...gle.com> writes:

> +	fd = open("/sys/kernel/debug/kcov", O_RDWR);
> +	if (fd == -1)
> +		perror("open"), exit(1);
> +	/* Set up trace mode and trace size. */
> +	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
> +		perror("ioctl"), exit(1);
> +	/* Mmap buffer shared between kernel- and user-space. */
> +	cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
> +				     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> +	if ((void*)cover == MAP_FAILED)
> +		perror("mmap"), exit(1);
> +	/* Enable coverage collection on the current thread. */
> +	if (ioctl(fd, KCOV_ENABLE, 0))
> +		perror("ioctl"), exit(1);
> +	/* Reset coverage from the tail of the ioctl() call. */
> +	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
> +	/* That's the target syscall. */
> +	read(-1, NULL, 0);
> +	/* Read number of PCs collected. */
> +	n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
> +	for (i = 0; i < n; i++)
> +		printf("0x%lx\n", cover[i + 1]);

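(For reference, the part of the example that the quote leaves out is just
the usual setup and teardown around it; a rough sketch below, with the
ioctl numbers taken from the patch's uapi definitions, so double-check
them against the header the patch actually adds. The quoted snippet slots
in at the "..." mark.)

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
#define KCOV_ENABLE	_IO('c', 100)
#define KCOV_DISABLE	_IO('c', 101)
#define COVER_SIZE	(64 << 10)	/* buffer size, in unsigned longs */

int main(void)
{
	unsigned long *cover, n, i;
	int fd;

	... /* the quoted snippet goes here */

	/* Disable coverage collection for the current thread. */
	if (ioctl(fd, KCOV_DISABLE, 0))
		perror("ioctl"), exit(1);
	/* Free resources. */
	if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
		perror("munmap"), exit(1);
	if (close(fd))
		perror("close"), exit(1);
	return 0;
}
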
Kirill is right: this does look a lot like a candidate for a perf
PMU. Most of the legwork you do in this patch is already taken care of
by perf, afaict: enabling/disabling, context tracking, and a ring buffer
for exporting data to userspace. You'd also get other things, like
privilege separation, for free.
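
For reference, the userspace side of that plumbing is roughly the below.
This is a minimal sketch only: the event type, sample period and buffer
size are placeholders, not what a kcov-like PMU would actually use.

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	struct perf_event_mmap_page *meta;
	long page = sysconf(_SC_PAGESIZE);
	size_t len = (1 + 8) * page;	/* 1 metadata page + 2^3 data pages */
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_CPU_CLOCK;	/* placeholder event */
	attr.sample_period = 100000;		/* ns between samples */
	attr.sample_type = PERF_SAMPLE_IP;
	attr.disabled = 1;

	/* Current thread, any CPU; perf handles the context switches. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd == -1)
		perror("perf_event_open"), exit(1);

	/* Ring buffer shared with the kernel. */
	meta = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (meta == MAP_FAILED)
		perror("mmap"), exit(1);

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	read(-1, NULL, 0);			/* the syscall under test */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	/* Sample records sit between data_tail and data_head in the data
	 * pages that follow the metadata page. */
	printf("bytes of sample data: %llu\n",
	       (unsigned long long)meta->data_head);

	munmap(meta, len);
	close(fd);
	return 0;
}

The open/mmap/ioctl dance and the data_head/data_tail protocol come for
free; the only new bit would be whatever writes coverage into the buffer.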

Moreover, this can already be achieved by means of hardware-assisted
instruction tracing such as BTS or PT on Intel CPUs (BTS will literally
output instruction pointer addresses into a ring buffer). ARM CoreSight
ETM/PTM support is also on its way. That's not to say that this work is
not useful (and it's gotta be, or who will debug the debugger), but more
to make a case for a perf-based implementation.
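
To make that concrete, this is roughly what it looks like with intel_bts
today. Again only a sketch: it assumes the raw from/to/flags record layout
the BTS driver writes into the AUX area, ignores buffer wrap-around, and
needs the intel_bts PMU plus sufficient privileges.

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct bts_branch { __u64 from, to, misc; };	/* one raw BTS record */

int main(void)
{
	struct perf_event_attr attr;
	struct perf_event_mmap_page *meta;
	long page = sysconf(_SC_PAGESIZE);
	size_t base_len = (1 + 4) * page, aux_len = 64 * page;
	struct bts_branch *br;
	unsigned int type;
	__u64 i, n;
	void *aux;
	FILE *f;
	int fd;

	/* Dynamic PMUs advertise their type number in sysfs. */
	f = fopen("/sys/bus/event_source/devices/intel_bts/type", "r");
	if (!f || fscanf(f, "%u", &type) != 1)
		perror("intel_bts"), exit(1);
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;
	attr.sample_period = 1;
	attr.disabled = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd == -1)
		perror("perf_event_open"), exit(1);

	/* Regular buffer first, then tell perf where the AUX area goes
	 * and map that too. */
	meta = mmap(NULL, base_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (meta == MAP_FAILED)
		perror("mmap"), exit(1);
	meta->aux_offset = base_len;
	meta->aux_size = aux_len;
	aux = mmap(NULL, aux_len, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, base_len);
	if (aux == MAP_FAILED)
		perror("mmap aux"), exit(1);

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	read(-1, NULL, 0);			/* the syscall under test */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	/* BTS dumps fixed-size branch records straight into the AUX area. */
	br = aux;
	n = meta->aux_head / sizeof(*br);
	for (i = 0; i < n; i++)
		printf("0x%llx -> 0x%llx\n",
		       (unsigned long long)br[i].from,
		       (unsigned long long)br[i].to);
	return 0;
}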

Regards,
--
Alex
