Message-Id: <B1B4CB0E-6A3C-4023-9911-D57B5F58D691@cumulusnetworks.com>
Date: Thu, 16 Jan 2014 10:04:31 -0800
From: Scott Feldman <sfeldma@...ulusnetworks.com>
To: Veaceslav Falico <vfalico@...hat.com>
Cc: Jay Vosburgh <fubar@...ibm.com>,
Andy Gospodarek <andy@...yhouse.net>,
Netdev <netdev@...r.kernel.org>,
Roopa Prabhu <roopa@...ulusnetworks.com>,
Shrijeet Mukherjee <shm@...ulusnetworks.com>
Subject: Re: [PATCH net-next 1/2] bonding: add sysfs /slave dir for bond slave devices.
On Jan 16, 2014, at 7:31 AM, Veaceslav Falico <vfalico@...hat.com> wrote:
> On Wed, Jan 15, 2014 at 09:54:34PM -0800, Scott Feldman wrote:
>> Add sub-directory under /sys/class/net/<interface>/slave with
>> read-only attributes for slave. Directory only appears when
>> <interface> is a slave.
>> +static ssize_t state_show(struct slave *slave, char *buf)
>> +{
>> +	switch (bond_slave_state(slave)) {
>> +	case BOND_STATE_ACTIVE:
>> +		return sprintf(buf, "active\n");
>> +	case BOND_STATE_BACKUP:
>> +		return sprintf(buf, "backup\n");
>> +	default:
>> +		return sprintf(buf, "UNKNOWN\n");
>> +	}
>> +}
>> +static SLAVE_ATTR_RO(state);
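[As a usage sketch of the above (the interface name eth1 is hypothetical, and reading these paths obviously requires a bond with this patch applied), the new read-only attribute would be consumed like so:]

```shell
# Hypothetical slave interface "eth1" enslaved to a bond; with this
# patch, per-slave attributes appear under the slave's own sysfs dir:
ls /sys/class/net/eth1/slave/
cat /sys/class/net/eth1/slave/state    # prints "active" or "backup"
```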
>
> Am I missing something, or does it really completely lack any locking?
>
> What prevents the slave from being freed in between?
Correct me if I’m wrong, but I think the equivalent question is: is there a race between sysfs_remove_file() and another CPU that has the file open and is trying to read/write it? I believe the answer is no, but I’ll defer to the experts.
The file removal call path is:

bond_release (ndo_del_slave)
  __bond_release_one
    bond_sysfs_slave_del
      sysfs_remove_file
    <...continue freeing slave...>
So the slave is freed only after sysfs_remove_file() returns. I would expect I/O on the sysfs file to fail once sysfs_remove_file() has run.
Does this sound OK? Am I missing anything else?
-scott