Message-Id: <20060731.035716.39159213.davem@davemloft.net>
Date:	Mon, 31 Jul 2006 03:57:16 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	johnpol@....mipt.ru
Cc:	herbert@...dor.apana.org.au, drepper@...hat.com,
	zach.brown@...cle.com, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: [RFC 1/4] kevent: core files.

From: Evgeniy Polyakov <johnpol@....mipt.ru>
Date: Mon, 31 Jul 2006 14:50:37 +0400

> At syscall time, kevent copies 40 bytes for each event plus a 12-byte
> header (number of events, timeout, and command number).  That's likely
> two cache lines if only one event is reported.
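
A quick back-of-envelope check of that figure (a standalone sketch,
assuming 64-byte cache lines and the sizes quoted above rather than
the actual kevent structures):

#include <stdio.h>

#define CACHE_LINE	64	/* assumed L1 line size */

int main(void)
{
	unsigned int bytes = 12 + 40 * 1;	/* header + one event */

	/* Worst case for an unaligned copy: the 52 bytes straddle a
	 * line boundary and touch two 64-byte lines. */
	unsigned int worst_lines =
		(bytes + 2 * (CACHE_LINE - 1)) / CACHE_LINE;

	printf("%u bytes -> up to %u cache lines\n", bytes, worst_lines);
	return 0;
}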

Do you know how many cachelines are dirtied by system call
entry and exit on a typical system?

On sparc64 it is a minimum of 3 64-byte cachelines just to save and
restore the system-call-time cpu register state.  If the application
is deep in a call chain, register windows might spill, and each such
register window dirties 2 more cachelines as it is dumped to the
stack (a window is 16 registers of 8 bytes, i.e. 128 bytes, exactly
two 64-byte lines).
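
In code, that tally looks like this (a sketch; the 3-line floor is
the figure above, and the 16 saved registers of 8 bytes each are the
standard sparc64 window save area):

#include <stdio.h>

#define CACHE_LINE	64
#define WINDOW_BYTES	(16 * 8)	/* 8 locals + 8 ins, 8 bytes each */

int main(void)
{
	unsigned int spills;

	for (spills = 0; spills <= 3; spills++)
		/* 3 lines of syscall-time register state, plus two
		 * 64-byte lines per register window spilled to stack. */
		printf("%u window spill(s): %u cachelines dirtied\n",
		       spills, 3 + spills * (WINDOW_BYTES / CACHE_LINE));
	return 0;
}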

I am not even talking about the other basic necessities of doing
a system call, such as touching various task_struct and thread_info
state to check for pending signals and so on.

System call overhead is non-trivial, especially when you are using
it to move only a few small objects into and out of the kernel.

So I would say that for up to 4 or 5 events, system call overhead
alone touches as many cache lines as the events themselves.
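
Putting the two sides together, under the same assumptions as the
sketches above (64-byte lines, a 12-byte header plus 40 bytes per
event, worst-case alignment):

#include <stdio.h>

#define CACHE_LINE	64

int main(void)
{
	unsigned int n;

	for (n = 1; n <= 5; n++) {
		unsigned int bytes = 12 + 40 * n;
		/* worst case: the copy straddles line boundaries */
		unsigned int lines =
			(bytes + 2 * (CACHE_LINE - 1)) / CACHE_LINE;

		printf("%u event(s): %3u bytes, up to %u lines "
		       "(vs. at least 3 for entry/exit alone)\n",
		       n, bytes, lines);
	}
	return 0;
}

At 4 or 5 events the payload side reaches 4-5 lines, the same range
as the 3-plus lines dirtied by syscall entry and exit alone.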
