Message-ID: <4D247D89.3090700@fusionio.com>
Date: Wed, 5 Jan 2011 15:17:45 +0100
From: Jens Axboe <jaxboe@...ionio.com>
To: Jerome Marchand <jmarchan@...hat.com>
CC: Vivek Goyal <vgoyal@...hat.com>,
Satoru Takeuchi <takeuchi_satoru@...fujitsu.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Greg Kroah-Hartman <greg@...ah.com>
Subject: Re: [PATCH 2/2] block: fix accounting bug on cross partition merges
On 2011-01-05 15:09, Jerome Marchand wrote:
> On 01/05/2011 03:00 PM, Jens Axboe wrote:
>> On 2011-01-04 16:55, Jerome Marchand wrote:
>>> +	} else {
>>> +		part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
>>> +		if (!kref_test_and_get(&part->ref)) {
>>> +			/*
>>> +			 * The partition is already being removed; the
>>> +			 * request will be accounted on the disk only.
>>> +			 *
>>> +			 * Take a reference on disk->part0 even though
>>> +			 * that partition will never be deleted, so that
>>> +			 * it can be treated like any other partition.
>>> +			 */
>>> +			part = &rq->rq_disk->part0;
>>> +			kref_get(&part->ref);
>>> +		}
>>
>> This still doesn't work. So you are inside the if {} block: you know
>> that someone has dropped the last reference and that the call_rcu() is
>> being scheduled. Adding a reference now will not prevent 'part' from
>> going away as soon as you do part_stat_unlock().
>
> And what is the problem with that, since we don't use 'part' (as
> returned by disk_map_sector_rcu()) but disk->part0 instead?
Ugh, I'm the one who's blind now. part0 is indeed fine; I didn't read
that carefully enough.
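
For anyone following along, the reason part0 is safe here is that it is
embedded in the gendisk itself rather than allocated per-partition, so
it lives exactly as long as the disk and a reference on it can never
race with partition removal. Trimmed and paraphrased from
include/linux/genhd.h:

	struct gendisk {
		...
		/*
		 * part0 is embedded, not a pointer: it goes away
		 * with the disk itself, never on its own.
		 */
		struct hd_struct part0;
		...
	};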
So I think your patch looks safe now; I don't see any holes in it.
Whether or not we move the kref get to inside the lookup, it doesn't
change anything wrt using atomic_inc_not_zero(). Can you resend 1/2
with the manual barriers removed?
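
To be explicit about what I mean: with atomic_inc_not_zero() the helper
from 1/2 can shrink to something like the below. The value-returning
atomic RMW should already imply the barriers, so the open-coded
smp_mb()s buy us nothing (untested sketch, assuming the helper keeps
the kref_test_and_get() name from your series):

	/*
	 * Pin @kref only if someone else still holds a reference.
	 * Returns non-zero on success, 0 if the count already hit zero.
	 */
	static inline int kref_test_and_get(struct kref *kref)
	{
		return atomic_inc_not_zero(&kref->refcount);
	}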
--
Jens Axboe