Date:	Mon, 5 Sep 2011 20:06:04 +0530
From:	"kautuk.c @samsung.com" <consul.kautuk@...il.com>
To:	Jan Kara <jack@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <jaxboe@...ionio.com>,
	Wu Fengguang <fengguang.wu@...el.com>,
	Dave Chinner <dchinner@...hat.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] mm/backing-dev.c: Call del_timer_sync instead of del_timer

Hi,

>  OK, I don't care much whether we have there del_timer() or
> del_timer_sync(). Let me just say that the race you are afraid of is
> probably not going to happen in practice so I'm not sure it's valid to be
> afraid of CPU cycles being burned needlessly. The timer is armed when a
> dirty inode is first attached to the default bdi's dirty list. Then the
> default bdi flusher thread would have to be woken up so that the following happens:
>        CPU1                            CPU2
>  timer fires -> wakeup_timer_fn()
>                                        bdi_forker_thread()
>                                          del_timer(&me->wakeup_timer);
>                                          wb_do_writeback(me, 0);
>                                          ...
>                                          set_current_state(TASK_INTERRUPTIBLE);
>  wake_up_process(default_backing_dev_info.wb.task);
>
>  Especially wb_do_writeback() is going to take a long time so just that
> single thing makes the race unlikely. Given del_timer_sync() is slightly
> more costly than del_timer() even for unarmed timer, it is questionable
> whether (chance race happens * CPU spent in extra loop) > (extra CPU spent
> in del_timer_sync() * frequency that code is executed in
> bdi_forker_thread())...
>

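To restate the code path in question, the shape of the bdi_forker_thread()
loop is roughly the following (a sketch based on the sequence above, not
the exact mm/backing-dev.c code):

	for (;;) {
		/* the call we are debating: del_timer() vs
		 * del_timer_sync() */
		del_timer(&me->wakeup_timer);

		wb_do_writeback(me, 0);		/* typically long-running */

		/* ... fork/kill bdi flusher threads as needed ... */

		set_current_state(TASK_INTERRUPTIBLE);
		schedule();		/* sleep until the next wakeup */
	}
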
Ok, so this means that we can compare the following two code paths:
i)   One extra iteration of the bdi_forker_thread() loop, versus
ii)  The time del_timer_sync() spends waiting for timer_fn on the other
     CPU to finish executing, plus a schedule() that results in a
     guaranteed sleep (see the del_timer_sync() sketch below).
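
For reference, del_timer_sync() is essentially the following retry loop
(paraphrased from kernel/timer.c of that era, with the lockdep annotation
omitted):

	int del_timer_sync(struct timer_list *timer)
	{
		for (;;) {
			/* succeeds unless timer_fn is currently
			 * running on another CPU */
			int ret = try_to_del_timer_sync(timer);

			if (ret >= 0)
				return ret;
			cpu_relax();	/* spin until timer_fn returns */
		}
	}

So the extra cost over del_timer() is a few passes through this loop while
wakeup_timer_fn() finishes on the other CPU.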

Treating both situations as a race until the task is ejected from the
runqueue (i.e., actually asleep), I think ii) is the better option,
don't you think?

Scenario i) results in one full execution of schedule() that does not
actually put the task to sleep (see the annotated sketch below). Also,
if another task is scheduled in, it could take a lot of CPU cycles
before we return to this (bdi-default) task.

Scenario ii) costs only a couple of extra iterations of the
del_timer_sync() loop, which responds quickly once timer_fn completes
on the other CPU, after which the call to schedule() removes the
current task from the runqueue with a guaranteed sleep.
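
Here is the window I mean in scenario i), annotated on the loop shape
above (again a sketch, not the exact code):

	del_timer(&me->wakeup_timer);	/* timer_fn may already be
					 * running on another CPU */
	wb_do_writeback(me, 0);

	set_current_state(TASK_INTERRUPTIBLE);
	/* <-- wakeup_timer_fn() calls wake_up_process() here, flipping
	 *     us back to TASK_RUNNING */
	schedule();	/* runs to completion, but the task never leaves
			 * the runqueue; the loop goes around once more */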

Is my reasoning correct/adequate?

I know that bdi_forker_thread() doesn't do much on its own anyway; I'm
just trying to understand your expert opinion(s) on this aspect of the
kernel code. :)