Date:	Tue, 07 May 2013 15:03:49 -0600
From:	Toshi Kani <toshi.kani@...com>
To:	"Rafael J. Wysocki" <rjw@...k.pl>
Cc:	Vasilis Liaskovitis <vasilis.liaskovitis@...fitbricks.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	isimatu.yasuaki@...fujitsu.com, Len Brown <lenb@...nel.org>,
	linux-mm@...ck.org, wency@...fujitsu.com
Subject: Re: [PATCH 2/2 v2, RFC] Driver core: Introduce offline/online
 callbacks for memory blocks

On Tue, 2013-05-07 at 14:11 +0200, Rafael J. Wysocki wrote:
> On Tuesday, May 07, 2013 12:59:45 PM Vasilis Liaskovitis wrote:

 :

> Updated patch is appended for completeness.

Yes, this updated patch solved the locking issue.

> > > > A more general issue is that there are now two memory offlining efforts:
> > > > 
> > > > 1) from acpi_bus_offline_companions during device offline
> > > > 2) from mm: remove_memory during device detach (offline_memory_block_cb)
> > > > 
> > > > The 2nd is only called if the device offline operation was already successful,
> > > > so it seems ineffective or redundant now, at least on an x86_64/acpi_memhotplug
> > > > machine (unless the blocks were re-onlined in between).
> > > 
> > > Sure, and that should be OK for now.  Changing the detach behavior is not
> > > essential from the patch [2/2] perspective, we can do it later.
> > 
> > yes, ok.
> > 
> > > 
> > > > On the other hand, the 2nd effort has some more intelligence in offlining, as it
> > > > tries to offline twice in the presence of memcg; see commits df3e1b91 or the
> > > > reworked 0baeab16.  Maybe we need to consolidate the logic.
> > > 
> > > Hmm.  Perhaps it would make sense to implement that logic in
> > > memory_subsys_offline(), then?
> > 
> > The logic tries to offline the memory blocks of the device twice, because the
> > first memory block might be storing information for the subsequent memblocks.
> > 
> > memory_subsys_offline() operates on one memory block at a time.  Perhaps we
> > could get the same effect by doing an acpi_walk of acpi_bus_offline_companions
> > twice in acpi_scan_hot_remove, but that is probably not a good idea, since it
> > would affect non-memory devices as well.
> > 
> > I am not sure how important this intelligence is in practice (I am not using
> > mem cgroups in my guest kernel tests yet).  Maybe Wen (original author) has
> > more details on 2-pass offlining effectiveness.
> 
> OK
> 
> It may be added in a separate patch in any case.

I had the same comment as Vasilis.  And I agree with you that we can
enhance it in separate patches.
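
For reference, the two-pass behavior being discussed boils down to
something like this -- a rough sketch only, with a hypothetical
offline_blocks_two_pass() wrapper around offline_memory_block(), not
the actual remove_memory() code:

/*
 * Sketch only: try to offline every block once, then retry the
 * failures.  The first block of a device may store management data
 * (e.g. for memcg) on behalf of the later blocks, so it can fail on
 * pass 1 and succeed on pass 2 once the later blocks are off-lined.
 */
static int offline_blocks_two_pass(struct memory_block **blocks, int count)
{
	int pass, i, failed;

	for (pass = 0; pass < 2; pass++) {
		failed = 0;
		for (i = 0; i < count; i++) {
			if (blocks[i]->state == MEM_OFFLINE)
				continue;
			if (offline_memory_block(blocks[i]))
				failed++;
		}
		if (!failed)
			return 0;
	}
	return -EBUSY;
}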

 :

> +static int memory_subsys_offline(struct device *dev)
> +{
> +	struct memory_block *mem = container_of(dev, struct memory_block, dev);
> +	int ret;
> +
> +	mutex_lock(&mem->state_mutex);
> +	ret = __memory_block_change_state(mem, MEM_OFFLINE, MEM_ONLINE, -1);

This function needs to check mem->state just like
offline_memory_block().  That is:

	int ret = 0;
		:
	if (mem->state != MEM_OFFLINE)
		ret = __memory_block_change_state(...);

Otherwise, memory hot-delete of an already off-lined memory block fails
in __memory_block_change_state() since mem->state is already set to
MEM_OFFLINE.
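
In other words, with the check folded into your patch (same locking,
just skipping the state change when the block is already off-lined):

static int memory_subsys_offline(struct device *dev)
{
	struct memory_block *mem = container_of(dev, struct memory_block, dev);
	int ret = 0;

	mutex_lock(&mem->state_mutex);
	/*
	 * Skip the state change if the block was already off-lined,
	 * e.g. via sysfs; __memory_block_change_state() returns an
	 * error when mem->state is already MEM_OFFLINE.
	 */
	if (mem->state != MEM_OFFLINE)
		ret = __memory_block_change_state(mem, MEM_OFFLINE,
						  MEM_ONLINE, -1);
	mutex_unlock(&mem->state_mutex);
	return ret;
}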

With that change, for the series:
Reviewed-by: Toshi Kani <toshi.kani@...com>

Thanks,
-Toshi

> +	mutex_unlock(&mem->state_mutex);
> +	return ret;
> +}
> +


