Message-ID: <20230622145620.hk3bdjxtlr64gtzl@quack3>
Date: Thu, 22 Jun 2023 16:56:20 +0200
From: Jan Kara <jack@...e.cz>
To: Baokun Li <libaokun1@...wei.com>
Cc: Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
yi.zhang@...wei.com, yangerkun@...wei.com, chengzhihao1@...wei.com,
yukuai3@...wei.com
Subject: Re: [PATCH] quota: fix race condition between dqput() and
dquot_mark_dquot_dirty()
Hello!
On Mon 19-06-23 14:44:03, Baokun Li wrote:
> On 2023/6/16 23:28, Jan Kara wrote:
> > Now calling synchronize_srcu() directly from dquot_transfer() is too
> > expensive (and mostly unnecessary) so what I would rather suggest is to
> > create another dquot list (use dq_free list_head inside struct dquot for
> > it) and add dquot whose last reference should be dropped there. We'd then
> > queue work item which would call synchronize_srcu() and after that perform
> > the final cleanup of all the dquots on the list.
> >
> > Now this also needs some modifications to dqget() and to quotaoff code to
> > handle various races with the new dqput() code so if you feel it is too
> > complex for your taste, I can implement this myself.
> >
> > Honza
> I see what you mean; what we are doing here is very similar to
> drop_dquot_ref(), and if we have to modify it this way, I am happy to
> implement it.
>
> But as you said, calling synchronize_srcu() is too expensive and it
> blocks almost all mark-dirty processes, so we currently only call it in
> performance-insensitive paths like dquot_disable(). And how do we
> control how often synchronize_srcu() is called? Do we wait until a
> certain number of dquots accumulate in releasing_dquots, or run it at
> regular intervals? It would also introduce various new races.
> Is it worthwhile to do this for a corner case like this one?
So the way this is handled (e.g. in the fsnotify subsystem) is that we just
queue a work item when we drop the last reference to the protected
structure. The scheduling latency before the work item gets executed is
enough to batch synchronize_srcu() calls, and once synchronize_srcu()
finishes, we add all items from the "staging list" to the free_dquots list.
> I think we can simply focus on the race between the DQ_ACTIVE_B flag and
> the DQ_MOD_B flag, which is the core problem, because the same quota
> should not have both flags. These two flags are protected by dq_list_lock
> and dquot->dq_lock respectively, so it makes sense to add a
> wait_on_dquot() to ensure the accuracy of DQ_ACTIVE_B.
But the fundamental problem is not only the race with DQ_MOD_B setting. The
dquot structure can be completely freed by the time
dquot_claim_space_nodirty() calls dquot_mark_dquot_dirty() on it. That's
why I think making __dquot_transfer() obey dquot_srcu rules is the right
solution.
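Roughly, I'd imagine the callers ending up looking something like the
sketch below. This is only meant to illustrate the direction
(dquot_transfer_sketch() is a made-up wrapper name): the dquot pointers
used by __dquot_transfer() stay under the dquot_srcu read lock, so the
deferred-free work above cannot free a dquot while
dquot_mark_dquot_dirty() may still run on it.

static int dquot_transfer_sketch(struct inode *inode,
				 struct dquot **transfer_to)
{
	int idx, ret;

	idx = srcu_read_lock(&dquot_srcu);
	ret = __dquot_transfer(inode, transfer_to);
	/* Marking the dquots dirty is covered by the same srcu section. */
	srcu_read_unlock(&dquot_srcu, idx);
	return ret;
}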
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR