Message-ID: <20150203112733.GM26304@twins.programming.kicks-ass.net>
Date: Tue, 3 Feb 2015 12:27:33 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Benjamin LaHaise <bcrl@...ck.org>, linux-aio@...ck.org,
	Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] aio: fix sleeping while TASK_INTERRUPTIBLE

On Sun, Feb 01, 2015 at 05:18:17PM -0800, Linus Torvalds wrote:
> Ahh. That would be a bug, yes, but it wouldn't be one in the aio code.
>
> If somebody just does a "schedule()" and thinks that their own private
> events are the only thing that can wake it up, and doesn't use one of
> the millions of "wait_event_xyz()" variations to actually wait for the
> real completion, that is just buggy. Always. Always has been.
>
> So I wouldn't worry too much about it. It has never been correct to do
> that, and it's not one of the standard patterns for sleeping anyway.
> Which is not to say that it might not exist in some dank corner of the
> kernel, of course, but you shouldn't write code trying to make buggy
> code like that happy. If we ever find code like that, let's just fix
> it where the bug exists, not try to write odd code in places where it
> isn't.
>
> And I'd actually be a bit surprised to see that kind of really broken
> code. You really almost have to work at it. All our normal "sleep
> until X" patterns are much more obvious, and it's just _simpler_ to do
> the right thing with "wait_event()" than to mis-code an explicit "set
> task state and then just schedule without actually checking the thing
> you are waiting for".
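
For reference, the non-buggy pattern you mean is the wait_event() one,
which sets the task state, tests the condition and loops around
schedule() for you. A minimal sketch (the waitqueue and the 'done' flag
are made up for illustration):

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(example_wq);	/* hypothetical */
static bool example_done;			/* hypothetical */

static void example_waiter(void)
{
	/*
	 * wait_event() re-tests example_done after every wakeup, so a
	 * spurious or unrelated wakeup cannot make us return early.
	 */
	wait_event(example_wq, example_done);
}

static void example_completer(void)
{
	example_done = true;	/* make the condition true first */
	wake_up(&example_wq);	/* then kick any waiter */
}

Yet a quick look around the tree still finds things like:
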
block/bsg.c- prepare_to_wait(&bd->wq_done, &wait, TASK_UNINTERRUPTIBLE);
block/bsg.c- spin_unlock_irq(&bd->lock);
block/bsg.c: io_schedule();
block/bsg.c- finish_wait(&bd->wq_done, &wait);

Which is double buggy because:

 1) it doesn't loop
 2) it sets TASK_UNINTERRUPTIBLE _after_ testing for the sleep event.
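
A sketch of what a non-buggy version of that wait would look like; the
actual condition bsg tests is elided here, bsg_commands_done() is just a
made-up stand-in:

	DEFINE_WAIT(wait);

	spin_lock_irq(&bd->lock);
	for (;;) {
		/* set the task state _before_ testing the condition */
		prepare_to_wait(&bd->wq_done, &wait, TASK_UNINTERRUPTIBLE);
		if (bsg_commands_done(bd))	/* hypothetical check */
			break;
		spin_unlock_irq(&bd->lock);
		io_schedule();
		/* loop: re-test the condition after every wakeup */
		spin_lock_irq(&bd->lock);
	}
	spin_unlock_irq(&bd->lock);
	finish_wait(&bd->wq_done, &wait);
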
drivers/iommu/amd_iommu_v2.c-static void put_device_state_wait(struct device_state *dev_state)
drivers/iommu/amd_iommu_v2.c-{
drivers/iommu/amd_iommu_v2.c- DEFINE_WAIT(wait);
drivers/iommu/amd_iommu_v2.c-
drivers/iommu/amd_iommu_v2.c- prepare_to_wait(&dev_state->wq, &wait, TASK_UNINTERRUPTIBLE);
drivers/iommu/amd_iommu_v2.c- if (!atomic_dec_and_test(&dev_state->count))
drivers/iommu/amd_iommu_v2.c: schedule();
drivers/iommu/amd_iommu_v2.c- finish_wait(&dev_state->wq, &wait);
drivers/iommu/amd_iommu_v2.c-
drivers/iommu/amd_iommu_v2.c- free_device_state(dev_state);
drivers/iommu/amd_iommu_v2.c-}

No loop...
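
One way to get rid of the open-coded wait there entirely (a sketch, not
necessarily what the eventual fix will look like) is to make the final
put do the wake-up and let wait_event() do the looping and re-testing:

static void put_device_state(struct device_state *dev_state)
{
	/* sketch only; mirrors the existing device_state fields */
	if (atomic_dec_and_test(&dev_state->count))
		wake_up(&dev_state->wq);
}

static void put_device_state_wait(struct device_state *dev_state)
{
	put_device_state(dev_state);
	/* wait_event() re-checks the count after every wakeup */
	wait_event(dev_state->wq, !atomic_read(&dev_state->count));
	free_device_state(dev_state);
}
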
drivers/md/dm-bufio.c-static void __wait_for_free_buffer(struct dm_bufio_client *c)
drivers/md/dm-bufio.c-{
drivers/md/dm-bufio.c- DECLARE_WAITQUEUE(wait, current);
drivers/md/dm-bufio.c-
drivers/md/dm-bufio.c- add_wait_queue(&c->free_buffer_wait, &wait);
drivers/md/dm-bufio.c- set_task_state(current, TASK_UNINTERRUPTIBLE);
drivers/md/dm-bufio.c- dm_bufio_unlock(c);
drivers/md/dm-bufio.c-
drivers/md/dm-bufio.c: io_schedule();
drivers/md/dm-bufio.c-
drivers/md/dm-bufio.c- remove_wait_queue(&c->free_buffer_wait, &wait);
drivers/md/dm-bufio.c-
drivers/md/dm-bufio.c- dm_bufio_lock(c);
drivers/md/dm-bufio.c-}

No loop...
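
And a sketch of the looping variant for that one, keeping dm-bufio's
lock dance; buffer_available() is a made-up stand-in for whatever
condition the callers actually re-check:

static void __wait_for_free_buffer(struct dm_bufio_client *c)
{
	DECLARE_WAITQUEUE(wait, current);

	add_wait_queue(&c->free_buffer_wait, &wait);
	for (;;) {
		/* state first, then test, then sleep */
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (buffer_available(c))	/* hypothetical check */
			break;

		dm_bufio_unlock(c);
		io_schedule();
		dm_bufio_lock(c);
	}
	__set_current_state(TASK_RUNNING);
	remove_wait_queue(&c->free_buffer_wait, &wait);
}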

And I'm sure there are _waaay_ more. And the above examples are all in
relatively well maintained code afaik.