Message-ID: <CAHp75VchWpr+76rBCmXctpYi5a06pr-MT==FrNDy03g=boxfog@mail.gmail.com>
Date:   Tue, 26 Jun 2018 20:04:34 +0300
From:   Andy Shevchenko <andy.shevchenko@...il.com>
To:     Dmitry Safonov <dima@...sta.com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Arnd Bergmann <arnd@...db.de>, David Airlie <airlied@...ux.ie>,
        Dmitry Safonov <0x7f454c46@...il.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Jani Nikula <jani.nikula@...ux.intel.com>,
        Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
        Rodrigo Vivi <rodrigo.vivi@...el.com>,
        "Theodore Ts'o" <tytso@....edu>,
        Thomas Gleixner <tglx@...utronix.de>,
        intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCHv2] lib/ratelimit: Lockless ratelimiting

On Tue, Jun 26, 2018 at 7:24 PM, Dmitry Safonov <dima@...sta.com> wrote:
> Currently ratelimit_state is protected with spin_lock. If the .lock is
> taken at the moment of ___ratelimit() call, the message is suppressed to
> make ratelimiting robust.
>
> That results in the following issue:
>       CPU0                          CPU1
> ------------------           ------------------
> printk_ratelimit()           printk_ratelimit()
>         |                             |
>   try_spin_lock()              try_spin_lock()
>         |                             |
> time_is_before_jiffies()          return 0; // suppress
>
> So, a concurrent call of ___ratelimit() results in silently suppressing
> one of the messages, regardless of whether the limit is reached.
> And rs->missed is not increased in such a case, so the issue is hidden
> from the user.
>
> Convert ratelimiting to use atomic counters and drop spin_lock.
>
> Note: this might be unexpected, but during the first interval of a
> message storm one can print up to burst*2 messages. So it doesn't
> guarantee that the number of messages in *any* interval is less than
> burst. But that differs only slightly from the previous behavior,
> where one could start a burst=5 interval and print 4 messages in the
> last milliseconds of the interval + 5 new messages from the new
> interval (9 messages total in less than one interval):
>
>    msg0              msg1-msg4 msg0-msg4
>     |                     |       |
>     |                     |       |
>  |--o---------------------o-|-----o--------------------|--> (t)
>                           <------->
>                    Less than one interval
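For readers following along, the lockless scheme under discussion can be sketched in userspace C11 roughly as below. This is an assumed simplification, not the kernel code: the names `rl` and `rl_allow` are hypothetical, interval rollover is omitted, and the kernel uses its own atomic_t API rather than C11 atomics.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical userspace stand-in for the patched ratelimit_state:
 * atomic counters replace the spin_lock, so a concurrent caller is
 * never suppressed merely because another CPU holds the lock. */
struct rl {
	int burst;
	atomic_int printed;
	atomic_int missed;
};

static bool rl_allow(struct rl *rs)
{
	/* Claim a slot; an old value below burst means we may print. */
	if (atomic_fetch_add(&rs->printed, 1) < rs->burst)
		return true;

	/* Over the limit: give the slot back and count the miss. */
	atomic_fetch_sub(&rs->printed, 1);
	atomic_fetch_add(&rs->missed, 1);
	return false;
}
```

The burst*2 effect described above then falls out of the interval rollover: up to burst messages late in one window, plus a freshly reset printed counter allowing burst more at the start of the next.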

>  #define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {                \
> -               .lock           = __RAW_SPIN_LOCK_UNLOCKED(name.lock),  \

name is now redundant, isn't it?

>                 .interval       = interval_init,                        \
>                 .burst          = burst_init,                           \
> +               .printed        = ATOMIC_INIT(0),                       \
> +               .missed         = ATOMIC_INIT(0),                       \
>         }
>
>  #define RATELIMIT_STATE_INIT_DISABLED                                  \
> @@ -42,9 +41,10 @@ static inline void ratelimit_state_init(struct ratelimit_state *rs,
>  {
>         memset(rs, 0, sizeof(*rs));
>
> -       raw_spin_lock_init(&rs->lock);
>         rs->interval    = interval;
>         rs->burst       = burst;
> +       atomic_set(&rs->printed, 0);
> +       atomic_set(&rs->missed, 0);

Can it be

*rs = RATELIMIT_STATE_INIT(interval, burst);

?

(Yes, a '(struct ratelimit_state)' cast has to be added to the macro to allow this.)
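A minimal userspace sketch of that suggestion, assuming a simplified ratelimit_state (plain ints standing in for the kernel's atomic_t and jiffies fields): adding the '(struct ratelimit_state)' cast turns the initializer macro into a compound literal, so init collapses to one assignment and the memset() goes away.

```c
/* Simplified stand-in for struct ratelimit_state (assumption: plain
 * ints here; the kernel fields are atomic_t and jiffies-based). */
struct ratelimit_state {
	int interval;
	int burst;
	int printed;
	int missed;
};

/* The '(struct ratelimit_state)' cast makes this a compound literal,
 * usable on the right-hand side of an assignment. Unnamed members
 * (printed, missed) are implicitly zero-initialized. */
#define RATELIMIT_STATE_INIT(interval_init, burst_init)		\
	(struct ratelimit_state) {				\
		.interval	= (interval_init),		\
		.burst		= (burst_init),			\
	}

static void ratelimit_state_init(struct ratelimit_state *rs,
				 int interval, int burst)
{
	/* Replaces memset() + field-by-field setup in one shot. */
	*rs = RATELIMIT_STATE_INIT(interval, burst);
}
```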

>  }


>  static inline void ratelimit_state_exit(struct ratelimit_state *rs)
>  {
> +       int missed;
> +
>         if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE))
>                 return;
>
> -       if (rs->missed) {
> +       if ((missed = atomic_xchg(&rs->missed, 0)))

Perhaps

missed = ...
if (missed)

?
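In userspace C11 terms (a sketch only — `drain_missed` and `report_suppressed` are hypothetical stand-ins for the kernel helpers), the suggested shape is:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for atomic_xchg(&rs->missed, 0): atomically read the
 * counter and reset it to zero in one step. */
static int drain_missed(atomic_int *missed)
{
	return atomic_exchange(missed, 0);
}

static void report_suppressed(atomic_int *missed_ctr, const char *comm)
{
	int missed;

	/* Assignment on its own line, then a plain test -- rather
	 * than burying it in "if ((missed = atomic_xchg(...)))". */
	missed = drain_missed(missed_ctr);
	if (missed)
		printf("%s: %d output lines suppressed due to ratelimiting\n",
		       comm, missed);
}
```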

>                 pr_warn("%s: %d output lines suppressed due to ratelimiting\n",
> -                       current->comm, rs->missed);
> -               rs->missed = 0;
> -       }
> +                       current->comm, missed);
>  }

> +static void ratelimit_end_interval(struct ratelimit_state *rs, const char *func)
> +{
> +       rs->begin = jiffies;
> +
> +       if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) {

> +               unsigned missed = (unsigned)atomic_xchg(&rs->missed, 0);
> +
> +               if (missed)
> +                       pr_warn("%s: %u callbacks suppressed\n", func, missed);

Instead of casting, perhaps

int missed = ...

? I think you already have a guard against it going below zero. Or did
I miss something?
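That is, something like this (a userspace sketch with C11 atomics; `end_interval_report` is a hypothetical name): if ->missed is only ever incremented and drained back to zero, it never goes negative, so a plain int works and the cast disappears.

```c
#include <stdatomic.h>
#include <stdio.h>

/* If the missed counter is only atomically incremented and then
 * drained with an exchange against 0, it cannot be negative, so a
 * plain int (no unsigned cast) is enough. */
static void end_interval_report(atomic_int *missed_ctr, const char *func)
{
	int missed = atomic_exchange(missed_ctr, 0);

	if (missed)
		fprintf(stderr, "%s: %d callbacks suppressed\n",
			func, missed);
}
```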

> +       }
> +}

-- 
With Best Regards,
Andy Shevchenko
