Open Source and information security mailing list archives
 
Date:	Tue, 4 Feb 2014 20:31:48 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Andy Lutomirski <luto@...capital.net>
cc:	x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: Remove hpet vclock support

On Fri, 31 Jan 2014, Andy Lutomirski wrote:

> The HPET is so amazingly slow that this is barely a win.  It adds

That's nonsense. It's definitely a win to access the HPET directly,
especially on older systems with TSC wreckage. Why do you want to
force a syscall when we can read the data straight from user space?
The systems which are forced to use the HPET have slower syscalls
than those which have a proper TSC.

> complexity, and it scares me a tiny bit to map a piece of crappy
> hardware where every userspace program can poke at it (even if said
> poking is read-only).  Let's just remove it.

What's scary about granting user space access to a read-only resource?
 
> This is probably a tiny speedup on kvmclock systems.
 
Oh, now you are coming to the real point of your change:

    You want to save a few cycles in a guest system.

So you justify that with completely unsubstantiated assumptions and a
pointless blurb about scariness.

We are not removing existing functionality which is useful for parts
of our userbase just to make financial online gambling faster.

>  #ifdef CONFIG_PARAVIRT_CLOCK
>  
>  static notrace const struct pvclock_vsyscall_time_info *get_pvti(int cpu)
> @@ -157,8 +151,6 @@ notrace static inline u64 vgetsns(int *mode)
>  	cycles_t cycles;
>  	if (gtod->clock.vclock_mode == VCLOCK_TSC)
>  		cycles = vread_tsc();
> -	else if (gtod->clock.vclock_mode == VCLOCK_HPET)
> -		cycles = vread_hpet();
>  #ifdef CONFIG_PARAVIRT_CLOCK
>  	else if (gtod->clock.vclock_mode == VCLOCK_PVCLOCK)
>  		cycles = vread_pvclock(mode);

I really doubt that your tiny speedup is even measurable, simply
because the branch prediction won't fail and the cache footprint is
not that excitingly smaller. I could be wrong, as usual, but it's up
to you to provide proper numbers which prove that:

    - there is no impact for the systems which are actually forced to
      use the HPET

    - there is a measurable improvement for the PV clock case

and not before you have tried reversing the evaluation order of
VCLOCK_HPET and VCLOCK_PVCLOCK.

Thanks,

	tglx
