Message-Id: <1337878751-22942-1-git-send-email-jack@suse.cz>
Date:	Thu, 24 May 2012 18:59:09 +0200
From:	Jan Kara <jack@...e.cz>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: [PATCH 0/2 v4] Flexible proportions


  Hello,

  here is the next iteration of my flexible proportions code. I've addressed
all of Peter's comments.
Changes since v3:
  * changed fprop_fraction_foo() to avoid using percpu_counter_sum()
  * changed __fprop_inc_percpu_max() to avoid 64-bit division (the maximum
    allowed fraction is now expressed as max_frac/FPROP_FRAC_BASE; a sketch
    of the idea follows the changelogs)
  * avoid drifting of the period timer
  * better handle cases where the period timer fires long after the intended
    time by aging by the number of periods that really passed
Changes since v2:
  * use a timer instead of a workqueue for triggering the period switch
  * arm the timer only if aging didn't zero out all fractions, re-arm it when
    a new event arrives
  * set the period length to 3s
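
  (For illustration only, not code from the patch: one way to check a
proportion against the max_frac/FPROP_FRAC_BASE limit without a 64-bit
division is to cross-multiply both sides of the comparison; the helper name
and the FPROP_FRAC_BASE value below are assumptions.)

#include <stdbool.h>
#include <stdint.h>

#define FPROP_FRAC_BASE 1024UL		/* assumed value of the base */

/* true if num/den is still below max_frac/FPROP_FRAC_BASE */
static bool below_max_frac(unsigned long num, unsigned long den,
			   unsigned long max_frac)
{
	if (max_frac >= FPROP_FRAC_BASE)
		return true;		/* no limit configured */
	/* num/den < max_frac/BASE  <=>  num*BASE < max_frac*den */
	return (uint64_t)num * FPROP_FRAC_BASE < (uint64_t)max_frac * den;
}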

  Some introduction for first-time readers:

  The idea of this patch set is to provide code for computing event proportions
where the aging period does not depend on the number of events happening (so
that aging works well both with fast storage and slow USB sticks in the same
system).

  The basic idea is that we compute proportions as:
p_j = (\Sum_{i>=0} x_{i,j}/2^{i+1}) / (\Sum_{i>=0} x_i/2^{i+1})

  where x_{i,j} is the number of j's events in the i-th last time period and
x_i is the total number of events in the i-th last time period.
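
  To make this concrete, here is a minimal userspace sketch of the idea
(illustration only, not the kernel patch itself): each event type j keeps a
decaying sum num[j] = \Sum_i x_{i,j}/2^{i+1}, the global structure keeps
den = \Sum_i x_i/2^{i+1}, and both are aged by halving at every period
switch. The names, the fixed NTYPES and the demo numbers are made up:

#include <stdio.h>

#define NTYPES 2

static unsigned long num[NTYPES];	/* per-type decaying event sums  */
static unsigned long den;		/* global decaying event sum     */
static unsigned long cur[NTYPES];	/* events seen in running period */

static void add_event(int j)
{
	cur[j]++;
}

/* Called once per fixed-length period, independently of the event rate. */
static void switch_period(void)
{
	unsigned long total = 0;
	int j;

	for (j = 0; j < NTYPES; j++) {
		/* fold the finished period in and age older ones by 1/2 */
		num[j] = (num[j] + cur[j]) / 2;
		total += cur[j];
		cur[j] = 0;
	}
	den = (den + total) / 2;
}

/* p_j scaled to 'base' (e.g. base = 1024) */
static unsigned long proportion(int j, unsigned long base)
{
	return den ? num[j] * base / den : 0;
}

int main(void)
{
	int i, e;

	/* type 0 generates events 3x faster than type 1 */
	for (i = 0; i < 4; i++) {
		for (e = 0; e < 300; e++)
			add_event(0);
		for (e = 0; e < 100; e++)
			add_event(1);
		switch_period();
	}
	/* proportions converge towards roughly 3/4 and 1/4 of the base */
	printf("p_0 = %lu/1024, p_1 = %lu/1024\n",
	       proportion(0, 1024), proportion(1, 1024));
	return 0;
}

  Note that the period switch happens after a fixed time interval, regardless
of how many events fast or slow devices generated in it, which is the whole
point.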

  Note that when the x_i's are all the same, equal to t (as is the case with
the current proportion code), this expression simplifies to the expression
defining the current proportions, which is:
p_j = (\Sum_{i>=0} x_{i,j}/2^{i+1}) / t

  where t is the length of the aging period. (The denominator collapses to t
because \Sum_{i>=0} 1/2^{i+1} = 1.)

  In fact, if we are in the middle of a period, the proportion computed by
the current code is:
p_j = (x_{0,j} + \Sum_{i>=1} x_{i,j}/2^{i+1}) / (t' + t)

  where x_{0,j} is the number of j's events in the running period, t' is the
total number of events in the running period, and t is the length of the
aging period. So there is even more similarity.

  Similarly to the current proportion code, it is simple to compute an updated
proportion after several periods have elapsed. For each proportion we store
the numerator of our fraction and the number of the period in which the
proportion was last updated. In the global proportion structure we compute
the denominator of the fraction, which is the same for all event types. So
catching up with missed periods boils down to shifting the numerator by the
number of missed periods and that's it. For more details, please see the code.
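
  A minimal sketch of that catch-up step (the names are illustrative, not
taken from the patch): because the event type saw no events in the missed
periods, each missed period just halves its numerator, i.e. a right shift:

/* Age 'numerator' from the period it was last seen in up to 'now'. */
static void catch_up(unsigned long *numerator,
		     unsigned int *seen_period, unsigned int now)
{
	unsigned int missed = now - *seen_period;

	if (missed < 8 * sizeof(*numerator))
		*numerator >>= missed;	/* each missed period halves it */
	else
		*numerator = 0;		/* everything aged away */
	*seen_period = now;
}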

  I've also run a few tests (I've created a userspace wrapper that allows me to
run the proportion code in userspace and arbitrarily generate events for it) to
compare the behavior of the old and new code. You can see them at
http://beta.suse.com/private/jack/flex_proportions/ . In all the tests the new
code showed faster convergence to the current event proportions (I tried to
set period_shift realistically for the fixed proportions). Also, the last test
shows that if period_shift is decreased, the current proportions become more
sensitive to short-term fluctuations in the event rate, so just decreasing
period_shift isn't a good solution to slower convergence. If anyone has another
idea what to try, I can do that - it should be simple enough to implement in
my testing tool.

								Honza
