Message-ID: <53CE5494.3030708@suse.cz>
Date: Tue, 22 Jul 2014 14:09:56 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Hugh Dickins <hughd@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Konstantin Khlebnikov <koct9i@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Michel Lespinasse <walken@...gle.com>,
Lukas Czerner <lczerner@...hat.com>,
Dave Jones <davej@...hat.com>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...e.cz>
Subject: Re: [PATCH 0/2] shmem: fix faulting into a hole while it's punched,
take 3
On 07/22/2014 12:06 PM, Vlastimil Babka wrote:
> So if this is true, the change to TASK_UNINTERRUPTIBLE will avoid the
> problem, but it would be nicer to keep the KILLABLE state.
> I think it could be done by testing whether the wait queue still exists
> and is the same one, before attempting finish_wait. If it doesn't exist,
> the faulter can skip finish_wait altogether, because it must already be
> TASK_RUNNING.
>
> shmem_falloc = inode->i_private;
> if (shmem_falloc && shmem_falloc->waitq == shmem_falloc_waitq)
>         finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
>
> It might still be theoretically possible that although it has the same
> address, it's not the same wait queue, but that doesn't hurt
> correctness. I might be uselessly locking some other waitq's lock, but
> the inode->i_lock still protects me from other faulters that are in the
> same situation. The puncher is already gone.
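For context, the re-check above would sit at the tail of the
fault-into-hole path in shmem_fault(), after the faulter has slept on
the puncher's on-stack wait queue. A rough sketch, paraphrasing the
patch plus the proposed check (not a literal diff):

        shmem_falloc_waitq = shmem_falloc->waitq;
        prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
                        TASK_KILLABLE);
        spin_unlock(&inode->i_lock);
        schedule();

        /*
         * The puncher's wait queue lives on its stack and may be gone
         * by now; re-check under i_lock before touching it.
         */
        spin_lock(&inode->i_lock);
        shmem_falloc = inode->i_private;
        if (shmem_falloc && shmem_falloc->waitq == shmem_falloc_waitq)
                finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
        spin_unlock(&inode->i_lock);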
Actually, I was wrong: deleting our wait entry from a different queue
could corrupt that queue's head. I don't know if trinity would be able
to trigger this, but I wouldn't be comfortable knowing it's possible.
Calling fallocate() twice in quick succession from the same process
could easily put the on-stack wait queue head at the same address, no?
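Just to illustrate the stack reuse point, a userspace toy (obviously
not the kernel case itself, and the helper name is made up):

#include <stdio.h>
#include <stdint.h>

/* Stand-in for shmem_fallocate(): the wait queue head lives on the stack. */
static uintptr_t fake_fallocate(void)
{
        int on_stack_waitq = 0; /* stands in for DECLARE_WAIT_QUEUE_HEAD_ONSTACK() */
        return (uintptr_t)&on_stack_waitq; /* record the address, never dereference it */
}

int main(void)
{
        uintptr_t first = fake_fallocate();
        uintptr_t second = fake_fallocate();

        /*
         * Back-to-back calls from the same context typically reuse the
         * same stack slot, so the addresses compare equal even though
         * they named two different, now dead, objects.
         */
        printf("first=%#lx second=%#lx -> %s\n",
               (unsigned long)first, (unsigned long)second,
               first == second ? "same address" : "different address");
        return 0;
}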
Another, also somewhat ugly, possibility is to make sure that the wait
queue is empty before the puncher quits, regardless of the running
state of the processes in the queue. I think the conditions here
(serialization by i_lock) might allow us to do that without risking
that we e.g. leave anyone sleeping; a rough sketch follows below. But
it bends the wait queue design...
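Something like this at the end of shmem_fallocate(), assuming the
faulter also does its finish_wait under i_lock (hand-waving, and it
pokes at wait queue internals, which is exactly the ugly part):

        spin_lock(&inode->i_lock);
        inode->i_private = NULL;
        /* Wakes and (via autoremove) dequeues everyone still sleeping. */
        wake_up_all(&shmem_falloc_waitq);
        /*
         * Anyone woken by a signal instead is already running but may
         * still be linked on this on-stack queue; unhook them so nobody
         * touches the queue head after we return.  The waitq lock is
         * normally taken irq-safe, hence _irq here.
         */
        spin_lock_irq(&shmem_falloc_waitq.lock);
        while (!list_empty(&shmem_falloc_waitq.task_list)) {
                wait_queue_t *w;

                w = list_first_entry(&shmem_falloc_waitq.task_list,
                                     wait_queue_t, task_list);
                list_del_init(&w->task_list);
        }
        spin_unlock_irq(&shmem_falloc_waitq.lock);
        spin_unlock(&inode->i_lock);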
> However it's quite ugly, and if there is some wait queue debugging mode
> (I haven't checked) that e.g. checks whether wait queues and wait objects
> are empty before destruction, it wouldn't like this at all...