Message-ID: <87wpteq3m3.fsf_-_@doppelsaurus.mobileactivedefense.com>
Date: Wed, 18 Nov 2015 23:39:32 +0000
From: Rainer Weikusat <rweikusat@...ileactivedefense.com>
To: David Miller <davem@...emloft.net>
Cc: rweikusat@...ileactivedefense.com, jbaron@...mai.com,
dvyukov@...gle.com, syzkaller@...glegroups.com, mkubecek@...e.cz,
viro@...iv.linux.org.uk, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, hannes@...essinduktion.org,
dhowells@...hat.com, paul@...l-moore.com, salyzyn@...roid.com,
sds@...ho.nsa.gov, ying.xue@...driver.com, netdev@...r.kernel.org,
kcc@...gle.com, glider@...gle.com, andreyknvl@...gle.com,
sasha.levin@...cle.com, jln@...gle.com, keescook@...gle.com,
minipli@...glemail.com
Subject: more statistics (was: [PATCH] unix: avoid use-after-free in ep_remove_wait_queue (w/ Fixes:))
Rainer Weikusat <rw@...pelsaurus.mobileactivedefense.com> writes:
[...]
> Some more information on this: Running the test program included below
> 20 times on my 'work' system (otherwise idle, after logging in via VT
> with no GUI running; quad-core AMD A10-5700 @ 3393.984 MHz; patched
> 4.3) resulted in the following throughput statistics[*]:
Since the results were too variable with only 20 runs, I've also tested
this with 100 runs for three kernels: stock 4.3, 4.3 plus the published
patch, and 4.3 plus the published patch plus the "just return EAGAIN"
modification. The 1st and the 3rd perform about identically for the
test program I used (slightly modified version included below); the 2nd
is markedly slower. This is most easily visible when grouping the
printed data rates (B/s) 'by millions' (a sketch of the grouping code
is included after the tables):
stock 4.3
---------
13000000.000-13999999.000 3 (3%)
14000000.000-14999999.000 82 (82%)
15000000.000-15999999.000 15 (15%)
4.3 + patch
-----------
13000000.000-13999999.000 54 (54%)
14000000.000-14999999.000 35 (35%)
15000000.000-15999999.000 7 (7%)
16000000.000-16999999.000 1 (1%)
18000000.000-18999999.000 1 (1%)
22000000.000-22999999.000 2 (2%)
4.3 + modified patch
--------------------
13000000.000-13999999.000 3 (3%)
14000000.000-14999999.000 82 (82%)
15000000.000-15999999.000 14 (14%)
24000000.000-24999999.000 1 (1%)
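For reference, the grouping above can be reproduced along these lines
(a minimal sketch, not the script actually used; it reads one printed
rate per line from stdin and prints the non-empty million-sized
buckets):

#include <stdio.h>

int main(void)
{
	unsigned counts[64] = { 0 };
	unsigned total, i;
	double rate;

	/* count each rate into its million-sized bucket */
	total = 0;
	while (scanf("%lf", &rate) == 1) {
		i = (unsigned)(rate / 1000000.0);
		if (i < 64) {
			++counts[i];
			++total;
		}
	}

	/* print non-empty buckets with their share of the total */
	for (i = 0; i < 64; ++i)
		if (counts[i])
			printf("%u000000.000-%u999999.000\t%u (%u%%)\n",
			       i, i, counts[i], counts[i] * 100 / total);

	return 0;
}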
IMHO, the 3rd option would be the way to go if it was considered
acceptable (ie, despite the fact that it returns spurious errors in
'rare cases').
modified test program
=====================
#include <inttypes.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

enum {
	MSG_SZ = 16,
	MSGS = 1000000
};

static char msg[MSG_SZ];

/* convert a struct timeval to microseconds */
static uint64_t tv2u(struct timeval *tv)
{
	uint64_t u;

	u = tv->tv_sec;
	u *= 1000000;
	return u + tv->tv_usec;
}

int main(void)
{
	struct timeval start, stop;
	uint64_t t_diff;
	double rate;
	int sks[2];
	unsigned remain;
	char buf[MSG_SZ];

	socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sks);

	if (fork() == 0) {
		/* child: read until EOF, then print the data rate in B/s */
		close(*sks);

		gettimeofday(&start, 0);
		while (read(sks[1], buf, sizeof(buf)) > 0);
		gettimeofday(&stop, 0);

		t_diff = tv2u(&stop);
		t_diff -= tv2u(&start);

		rate = MSG_SZ * MSGS;	/* total bytes transferred */
		rate /= t_diff;		/* bytes per microsecond */
		rate *= 1000000;	/* bytes per second */

		printf("%f\n", rate);
		fflush(stdout);
		_exit(0);
	}

	/* parent: send MSGS messages, then close to signal EOF */
	close(sks[1]);

	remain = MSGS;
	do write(*sks, msg, sizeof(msg)); while (--remain);
	close(*sks);

	wait(NULL);
	return 0;
}