Message-ID: <20190416095852.GA31772@zn.tnic>
Date:   Tue, 16 Apr 2019 11:58:52 +0200
From:   Borislav Petkov <bp@...en8.de>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     linux-kernel@...r.kernel.org, linux-edac@...r.kernel.org,
        Tony Luck <tony.luck@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 2/2] ras: close the race condition with timer

On Mon, Apr 15, 2019 at 06:20:01PM -0700, Cong Wang wrote:
> cec_timer_fn() is a timer callback which reads ce_arr.array[]
> and updates its decay values. Elements can be added to or
> removed from this global array in parallel, although the array
> itself will not grow or shrink. del_lru_elem_unlocked() uses
> FULL_COUNT() as the key to find the right element to remove,
> which can race with the decay updates done by the timer.
> 
> Fix this by converting the mutex to a spinlock and holding it
> inside the timer.
> 
> Fixes: 011d82611172 ("RAS: Add a Corrected Errors Collector")
> Cc: Tony Luck <tony.luck@...el.com>
> Cc: Borislav Petkov <bp@...en8.de>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> ---
>  drivers/ras/cec.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
> index 61332c9aab5a..a82c9d08d47a 100644
> --- a/drivers/ras/cec.c
> +++ b/drivers/ras/cec.c
> @@ -117,7 +117,7 @@ static struct ce_array {
>  	};
>  } ce_arr;
>  
> -static DEFINE_MUTEX(ce_mutex);
> +static DEFINE_SPINLOCK(ce_lock);

Nah, pls keep the simplicity here and retain the mutex. Use a
workqueue instead: have the timer queue the spring cleaning from IRQ
context and do the actual work once back in preemptible context, where
taking the mutex is safe.
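
Something like this completely untested sketch on top of the current
cec.c (do_spring_cleaning(), ce_mutex, ce_arr, cec_mod_timer() and
timer_interval are the existing symbols there; cec_work/cec_work_fn are
made-up names just to show the idea):

	static void cec_work_fn(struct work_struct *work)
	{
		/*
		 * Process context: sleeping is fine, so serialize
		 * against cec_add_elem() with the existing mutex.
		 */
		mutex_lock(&ce_mutex);
		do_spring_cleaning(&ce_arr);
		mutex_unlock(&ce_mutex);
	}

	static DECLARE_WORK(cec_work, cec_work_fn);

	static void cec_timer_fn(struct timer_list *unused)
	{
		/* Softirq context - no sleeping, only kick the work. */
		schedule_work(&cec_work);

		/* Re-arm the timer as before. */
		cec_mod_timer(&cec_timer, timer_interval);
	}

That way the locking scheme stays exactly as it is now and the timer
never touches the array itself.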

Thx.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
