Message-ID: <9ac4fdcf-f236-8a05-bb96-b0b85a63b54e@huawei.com>
Date:   Thu, 29 Jun 2023 19:47:08 +0800
From:   Baokun Li <libaokun1@...wei.com>
To:     Jan Kara <jack@...e.cz>
CC:     <linux-fsdevel@...r.kernel.org>, <linux-ext4@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <yi.zhang@...wei.com>,
        <yangerkun@...wei.com>, <chengzhihao1@...wei.com>,
        <yukuai3@...wei.com>, Baokun Li <libaokun1@...wei.com>
Subject: Re: [PATCH v2 5/7] quota: fix dqput() to follow the guarantees dquot_srcu should provide

On 2023/6/29 18:59, Jan Kara wrote:
> On Wed 28-06-23 21:21:53, Baokun Li wrote:
>> @@ -760,6 +771,8 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>>   	struct dquot *dquot;
>>   	unsigned long freed = 0;
>>   
>> +	flush_delayed_work(&quota_release_work);
>> +
> I would not flush the work here. Sure, it can make more dquots available
> for reclaim but I think it is more important for the shrinker to not wait
> on srcu period as shrinker can be called very frequently under memory
> pressure.
I added the flush because I wanted to reuse remove_free_dquot() directly;
without it, DQST_FREE_DQUOTS would no longer be accurate. Since waiting in
the shrinker is the bigger problem, I'll drop the flush here and instead
teach remove_free_dquot() to decide whether to update DQST_FREE_DQUOTS.
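Something like the following (untested; on_releasing_list() is just a
placeholder for however we end up telling the two lists apart), so that
only dquots that really sat on free_dquots touch the counter:

static inline void remove_free_dquot(struct dquot *dquot)
{
	if (list_empty(&dquot->dq_free))
		return;
	list_del_init(&dquot->dq_free);
	/* Entries on releasing_dquots were never counted as free. */
	if (!on_releasing_list(dquot))
		dqstats_dec(DQST_FREE_DQUOTS);
}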
>>   	spin_lock(&dq_list_lock);
>>   	while (!list_empty(&free_dquots) && sc->nr_to_scan) {
>>   		dquot = list_first_entry(&free_dquots, struct dquot, dq_free);
>> @@ -787,6 +800,60 @@ static struct shrinker dqcache_shrinker = {
>>   	.seeks = DEFAULT_SEEKS,
>>   };
>>   
>> +/*
>> + * Safely release dquot and put reference to dquot.
>> + */
>> +static void quota_release_workfn(struct work_struct *work)
>> +{
>> +	struct dquot *dquot;
>> +	struct list_head rls_head;
>> +
>> +	spin_lock(&dq_list_lock);
>> +	/* Exchange the list head to avoid livelock. */
>> +	list_replace_init(&releasing_dquots, &rls_head);
>> +	spin_unlock(&dq_list_lock);
>> +
>> +restart:
>> +	synchronize_srcu(&dquot_srcu);
>> +	spin_lock(&dq_list_lock);
>> +	while (!list_empty(&rls_head)) {
> I think the logic below needs a bit more work. Firstly, I think that
> dqget() should remove dquots from the releasing_dquots list - basically just
> replace the:
> 	if (!atomic_read(&dquot->dq_count))
> 		remove_free_dquot(dquot);
> with
> 	/* Dquot on releasing_dquots list? Drop ref kept by that list. */
> 	if (atomic_read(&dquot->dq_count) == 1 && !list_empty(&dquot->dq_free))
> 		atomic_dec(&dquot->dq_count);
> 	remove_free_dquot(dquot);
> 	atomic_inc(&dquot->dq_count);
>
> That way we are sure that while we are holding dq_list_lock, all dquots on
> rls_head list have dq_count == 1.
I wrote it that way at first, but it turned out to be problematic, so I
ended up dropping the dq_count == 1 constraint for dquots on
releasing_dquots. With the change you suggest, the following race hands
the caller a bad dquot directly:

quota_release_workfn
 spin_lock(&dq_list_lock)
 dquot = list_first_entry(&rls_head, struct dquot, dq_free)
 spin_unlock(&dq_list_lock)
 dquot->dq_sb->dq_op->release_dquot(dquot)
 release_dquot
                                        dqget
                                         atomic_dec(&dquot->dq_count)
                                         remove_free_dquot(dquot)
                                         atomic_inc(&dquot->dq_count)
                                         spin_unlock(&dq_list_lock)
                                         wait_on_dquot(dquot)
                                         if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
                                         // still active
  mutex_lock(&dquot->dq_lock)
  dquot_is_busy(dquot)
   atomic_read(&dquot->dq_count) > 1
  clear_bit(DQ_ACTIVE_B, &dquot->dq_flags)
  mutex_unlock(&dquot->dq_lock)

Removing the dquot from releasing_dquots in dqget() and dropping the
reference that list held means dquot_is_busy() in dquot_release() no
longer notices the racing user, and wait_on_dquot(dquot) in dqget() has
no effect, so the caller ends up holding a dquot whose DQ_ACTIVE_B is
about to be cleared. This is also the reason why I did not restart at
the dquot_active() check. Adding dquots to releasing_dquots only in
dqput() and removing them only in quota_release_workfn() is a simple
and effective way to keep things consistent.
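Roughly what I have in mind for the quota_release_workfn() loop (an
untested sketch; it assumes the entry on releasing_dquots keeps holding
its reference, so a racing dqget() only ever increments dq_count):

	while (!list_empty(&rls_head)) {
		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
		/* Dquot got grabbed again by a racing dqget()? */
		if (atomic_read(&dquot->dq_count) > 1) {
			/*
			 * Drop the ref held by releasing_dquots and move
			 * on; the new holder's dqput() will queue it again.
			 */
			remove_free_dquot(dquot);
			atomic_dec(&dquot->dq_count);
			continue;
		}
		/* ... dirty/active handling as in the patch ... */
	}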


>> +		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
>> +		if (dquot_dirty(dquot)) {
>> +			spin_unlock(&dq_list_lock);
>> +			/* Commit dquot before releasing */
>> +			dquot_write_dquot(dquot);
>> +			goto restart;
>> +		}
>> +		/* Always clear DQ_ACTIVE_B, unless racing with dqget() */
>> +		if (dquot_active(dquot)) {
>> +			spin_unlock(&dq_list_lock);
>> +			dquot->dq_sb->dq_op->release_dquot(dquot);
> I'd just go to restart here to make the logic simple. Forward progress is
> guaranteed anyway and it isn't really much less efficient.
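Sure, I'll do exactly that in the next version, i.e. (untested):

		/* Always clear DQ_ACTIVE_B, unless racing with dqget() */
		if (dquot_active(dquot)) {
			spin_unlock(&dq_list_lock);
			dquot->dq_sb->dq_op->release_dquot(dquot);
			goto restart;
		}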
>
>
> The rest looks good.
>
> 								Honza
Thanks!
-- 
With Best Regards,
Baokun Li
