Message-ID: <4CA656AD.8020408@vlnb.net>
Date:	Sat, 02 Oct 2010 01:46:21 +0400
From:	Vladislav Bolkhovitin <vst@...b.net>
To:	linux-scsi@...r.kernel.org
CC:	linux-kernel@...r.kernel.org,
	scst-devel <scst-devel@...ts.sourceforge.net>,
	James Bottomley <James.Bottomley@...senPartnership.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	Mike Christie <michaelc@...wisc.edu>,
	Vu Pham <vuhuong@...lanox.com>,
	Bart Van Assche <bart.vanassche@...il.com>,
	James Smart <James.Smart@...lex.Com>,
	Joe Eykholt <jeykholt@...co.com>, Andy Yan <ayan@...vell.com>,
	Chetan Loke <generationgnu@...oo.com>,
	Dmitry Torokhov <dmitry.torokhov@...il.com>,
	Hannes Reinecke <hare@...e.de>,
	Richard Sharpe <realrichardsharpe@...il.com>,
	Daniel Henrique Debonzi <debonzi@...ux.vnet.ibm.com>
Subject: [PATCH 8/19]: SCST SYSFS interface implementation

This patch contains the SYSFS interface implementation.

This interface allows a user to configure an SCST server: add, delete and
manage target drivers, targets, dev handlers, virtual devices and access
control for them. It also shows the current SCST configuration together with
the relevant statistics and debug info (e.g. SGV cache statistics).

Processing of some management events is redirected to a dedicated thread for
two reasons:

1. It naturally serializes all SYSFS management operations, which simplifies
locking in target drivers and dev handlers. For instance, an add_target()
callback does not need to worry about del_target() or another add_target()
being called simultaneously for the same target name.

2. It moves the processing outside of the internal SYSFS locking.
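
For such events the store() callback just packages its arguments into a work
item and hands it to that thread, for example (condensed from
scst_tgt_enable_store() in the patch below, with error handling and tracing
trimmed):

	res = scst_alloc_sysfs_work(scst_tgt_enable_store_work_fn, false, &work);
	if (res != 0)
		goto out;

	work->tgt = tgt;
	work->enable = enable;

	/* Protect tgt until the work function has run */
	kobject_get(&tgt->tgt_kobj);

	/* Queue to the sysfs thread and wait; may return -EAGAIN */
	res = scst_sysfs_queue_wait_work(work);
	if (res == 0)
		res = count;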

For simplicity, all internal SCST management is done under a single
scst_mutex. It is simple, robust and has worked well for ages, even under the
highest load. But in 2.6.35 sysfs was improved to let lockdep check s_active
related deadlocks, and we discovered a potential circular locking dependency
between scst_mutex and s_active. On some management operations lockdep
triggered output like:

[ 2036.926891] =======================================================
[ 2036.927670] [ INFO: possible circular locking dependency detected ]
[ 2036.927670] 2.6.35-scst-dbg #15
[ 2036.927670] -------------------------------------------------------
[ 2036.927670] rmmod/4715 is trying to acquire lock:
[ 2036.927670]  (s_active#230){++++.+}, at: [<78240a24>] sysfs_hash_and_remove+0x63/0x67
[ 2036.927670] 
[ 2036.927670] but task is already holding lock:
[ 2036.927670]  (&scst_mutex){+.+.+.}, at: [<fefd7fe2>] scst_unregister_virtual_device+0x58/0x216 [scst]
[ 2036.927670] 
[ 2036.927670] which lock already depends on the new lock.
[ 2036.927670] 
[ 2036.927670] 
[ 2036.927670] the existing dependency chain (in reverse order) is:
[ 2036.927670] 
[ 2036.927670] -> #2 (&scst_mutex){+.+.+.}:
[ 2036.927670]        [<78168d67>] lock_acquire+0x76/0x129
[ 2036.927670]        [<78467619>] __mutex_lock_common+0x58/0x3fc
[ 2036.927670]        [<78467a6d>] mutex_lock_nested+0x36/0x3d
[ 2036.927670]        [<f8ecec91>] vcdrom_change+0x1b9/0x500 [scst_vdisk]
[ 2036.927670]        [<f8ecf030>] vcdrom_sysfs_filename_store+0x58/0xd8 [scst_vdisk]
[ 2036.927670]        [<feffd139>] scst_dev_attr_store+0x44/0x5d [scst]
[ 2036.927670]        [<7824104f>] sysfs_write_file+0x9e/0xe8
[ 2036.927670]        [<781ee836>] vfs_write+0x91/0x17e
[ 2036.927670]        [<781ef213>] sys_write+0x42/0x69
[ 2036.927670]        [<78102d13>] sysenter_do_call+0x12/0x32
[ 2036.927670] 
[ 2036.927670] -> #1 (&virt_dev->vdev_sysfs_mutex){+.+.+.}:
[ 2036.927670]        [<78168d67>] lock_acquire+0x76/0x129
[ 2036.927670]        [<78467619>] __mutex_lock_common+0x58/0x3fc
[ 2036.927670]        [<784679f3>] mutex_lock_interruptible_nested+0x36/0x3d
[ 2036.927670]        [<f8ecebd6>] vcdrom_change+0xfe/0x500 [scst_vdisk]
[ 2036.927670]        [<f8ecf030>] vcdrom_sysfs_filename_store+0x58/0xd8 [scst_vdisk]
[ 2036.927670]        [<feffd139>] scst_dev_attr_store+0x44/0x5d [scst]
[ 2036.927670]        [<7824104f>] sysfs_write_file+0x9e/0xe8
[ 2036.927670]        [<781ee836>] vfs_write+0x91/0x17e
[ 2036.927670]        [<781ef213>] sys_write+0x42/0x69
[ 2036.927670]        [<78102d13>] sysenter_do_call+0x12/0x32
[ 2036.927670] 
[ 2036.927670] -> #0 (s_active#230){++++.+}:
[ 2036.927670]        [<78168af4>] __lock_acquire+0x1013/0x1210
[ 2036.927670]        [<78168d67>] lock_acquire+0x76/0x129
[ 2036.927670]        [<78242417>] sysfs_addrm_finish+0x100/0x150
[ 2036.927670]        [<78240a24>] sysfs_hash_and_remove+0x63/0x67
[ 2036.927670]        [<782415b6>] sysfs_remove_file+0x14/0x16
[ 2036.927670]        [<feffdb29>] scst_devt_dev_sysfs_put+0x75/0x133 [scst]
[ 2036.927670]        [<fefd6410>] scst_assign_dev_handler+0x109/0x5b6 [scst]
[ 2036.927670]        [<fefd80ce>] scst_unregister_virtual_device+0x144/0x216 [scst]
[ 2036.927670]        [<f8ed06f3>] vdev_del_device+0x47/0xd4 [scst_vdisk]
[ 2036.927670]        [<f8ed6701>] exit_scst_vdisk+0x60/0xe6 [scst_vdisk]
[ 2036.927670]        [<f8ed67b1>] exit_scst_vdisk_driver+0x12/0x46 [scst_vdisk]
[ 2036.927670]        [<7817253a>] sys_delete_module+0x139/0x214
[ 2036.927670]        [<78102d13>] sysenter_do_call+0x12/0x32
[ 2036.927670] 
[ 2036.927670] other info that might help us debug this:
[ 2036.927670] 
[ 2036.927670] 2 locks held by rmmod/4715:
[ 2036.927670]  #0:  (scst_vdisk_mutex){+.+.+.}, at: [<f8ed66f0>] exit_scst_vdisk+0x4f/0xe6 [scst_vdisk]
[ 2036.927670]  #1:  (&scst_mutex){+.+.+.}, at: [<fefd7fe2>] scst_unregister_virtual_device+0x58/0x216 [scst]
[ 2036.927670] 
[ 2036.927670] stack backtrace:
[ 2036.927670] Pid: 4715, comm: rmmod Not tainted 2.6.35-scst-dbg #15
[ 2036.927670] Call Trace:
[ 2036.927670]  [<784660a3>] ? printk+0x2d/0x32
[ 2036.927670]  [<78166cbc>] print_circular_bug+0xb4/0xb9
[ 2036.927670]  [<78168af4>] __lock_acquire+0x1013/0x1210
[ 2036.927670]  [<78168d67>] lock_acquire+0x76/0x129
[ 2036.927670]  [<78240a24>] ? sysfs_hash_and_remove+0x63/0x67
[ 2036.927670]  [<78242417>] sysfs_addrm_finish+0x100/0x150
[ 2036.927670]  [<78240a24>] ? sysfs_hash_and_remove+0x63/0x67
[ 2036.927670]  [<78240a24>] sysfs_hash_and_remove+0x63/0x67
[ 2036.927670]  [<782415b6>] sysfs_remove_file+0x14/0x16
[ 2036.927670]  [<feffdb29>] scst_devt_dev_sysfs_put+0x75/0x133 [scst]
[ 2036.927670]  [<fefd5b20>] ? scst_stop_dev_threads+0x77/0x111 [scst]
[ 2036.927670]  [<f8ece3c2>] ? vdisk_detach+0x88/0x133 [scst_vdisk]
[ 2036.927670]  [<fefd6410>] scst_assign_dev_handler+0x109/0x5b6 [scst]
[ 2036.927670]  [<ff007368>] ? scst_pr_clear_dev+0x8e/0xfc [scst]
[ 2036.927670]  [<fefd80ce>] scst_unregister_virtual_device+0x144/0x216 [scst]
[ 2036.927670]  [<f8ed06f3>] vdev_del_device+0x47/0xd4 [scst_vdisk]
[ 2036.927670]  [<f8ed6701>] exit_scst_vdisk+0x60/0xe6 [scst_vdisk]
[ 2036.927670]  [<f8ed67b1>] exit_scst_vdisk_driver+0x12/0x46 [scst_vdisk]
[ 2036.927670]  [<7817253a>] sys_delete_module+0x139/0x214
[ 2036.927670]  [<7846c87e>] ? sub_preempt_count+0x7e/0xad
[ 2036.927670]  [<78102d42>] ? sysenter_exit+0xf/0x1a
[ 2036.927670]  [<7816789d>] ? trace_hardirqs_on_caller+0x10c/0x14d
[ 2036.927670]  [<78102d13>] sysenter_do_call+0x12/0x32
[ 2036.927670]  [<7846007b>] ? init_intel_cacheinfo+0x317/0x38b

It is caused by a chicken-and-egg problem: SCST objects, including their sysfs
hierarchy (kobjects), are created under scst_mutex, but for some of them (ACGs
and their name attributes, ACNs, are the most problematic objects) the creation
is triggered from inside SYSFS itself.

I spent a LOT of time trying to resolve this problem in an acceptable manner.
In particular, I analyzed splitting the creation of SCST objects and their
kobjects, so that the latter would be created outside of scst_mutex, and
replacing the single scst_mutex with fine-grained locking for the SCST
management, but all those options led to unacceptably complicated code. So I
chose to use the separate thread for all SYSFS management operations together
with scst_mutex/s_active deadlock detection (see scst_sysfs_queue_wait_work()):
if a deadlock possibility is detected, -EAGAIN is returned, asking user space
to poll for completion of the command via the last_sysfs_mgmt_res attribute.
This is documented in the README, and scstadmin already does it. It definitely
isn't a piece of beauty, but it's simple and it works, so I believe it is good
enough. User space is supposed to hide all the complexities of direct SYSFS
manipulation behind higher level management tools like scstadmin anyway.
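
For reference, a minimal user-space sketch of that EAGAIN handling (the sysfs
paths, the target driver name and the target name below are illustrative
assumptions only, and it assumes reading last_sysfs_mgmt_res fails with EAGAIN
while the deferred command is still running; the authoritative procedure is
what the README documents and scstadmin implements):

	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	/* Write a management command to a sysfs mgmt file */
	static int sysfs_write(const char *path, const char *val)
	{
		int fd = open(path, O_WRONLY), res = 0;

		if (fd < 0)
			return -errno;
		if (write(fd, val, strlen(val)) < 0)
			res = -errno;
		close(fd);
		return res;
	}

	int main(void)
	{
		int res = sysfs_write("/sys/kernel/scst_tgt/targets/iscsi/mgmt",
				      "add_target iqn.2006-10.net.vlnb:tgt\n");

		while (res == -EAGAIN) {
			/* Command deferred: poll last_sysfs_mgmt_res */
			char buf[32];
			int fd, n;

			sleep(1);
			fd = open("/sys/kernel/scst_tgt/last_sysfs_mgmt_res",
				  O_RDONLY);
			if (fd < 0)
				return 1;
			n = read(fd, buf, sizeof(buf) - 1);
			close(fd);
			if (n < 0 && errno == EAGAIN)
				continue;	/* still being processed */
			if (n <= 0)
				return 1;
			buf[n] = '\0';
			res = atoi(buf);	/* result of the deferred command */
		}

		printf("mgmt command result: %d\n", res);
		return res == 0 ? 0 : 1;
	}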

Signed-off-by: Daniel Henrique Debonzi <debonzi@...ux.vnet.ibm.com>
Signed-off-by: Vladislav Bolkhovitin <vst@...b.net>
---
 scst_sysfs.c | 5194 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 5194 insertions(+)

diff -uprN orig/linux-2.6.35/drivers/scst/scst_sysfs.c linux-2.6.35/drivers/scst/scst_sysfs.c
--- orig/linux-2.6.35/drivers/scst/scst_sysfs.c
+++ linux-2.6.35/drivers/scst/scst_sysfs.c
@@ -0,0 +1,5194 @@
+/*
+ *  scst_sysfs.c
+ *
+ *  Copyright (C) 2009 Daniel Henrique Debonzi <debonzi@...ux.vnet.ibm.com>
+ *  Copyright (C) 2009 - 2010 Vladislav Bolkhovitin <vst@...b.net>
+ *  Copyright (C) 2009 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/kobject.h>
+#include <linux/string.h>
+#include <linux/sysfs.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/ctype.h>
+#include <linux/slab.h>
+#include <linux/kthread.h>
+
+#include <scst/scst.h>
+#include "scst_priv.h"
+#include "scst_mem.h"
+#include "scst_pres.h"
+
+static DECLARE_COMPLETION(scst_sysfs_root_release_completion);
+
+static struct kobject scst_sysfs_root_kobj;
+static struct kobject *scst_targets_kobj;
+static struct kobject *scst_devices_kobj;
+static struct kobject *scst_sgv_kobj;
+static struct kobject *scst_handlers_kobj;
+
+static const char *scst_dev_handler_types[] = {
+    "Direct-access device (e.g., magnetic disk)",
+    "Sequential-access device (e.g., magnetic tape)",
+    "Printer device",
+    "Processor device",
+    "Write-once device (e.g., some optical disks)",
+    "CD-ROM device",
+    "Scanner device (obsolete)",
+    "Optical memory device (e.g., some optical disks)",
+    "Medium changer device (e.g., jukeboxes)",
+    "Communications device (obsolete)",
+    "Defined by ASC IT8 (Graphic arts pre-press devices)",
+    "Defined by ASC IT8 (Graphic arts pre-press devices)",
+    "Storage array controller device (e.g., RAID)",
+    "Enclosure services device",
+    "Simplified direct-access device (e.g., magnetic disk)",
+    "Optical card reader/writer device"
+};
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static DEFINE_MUTEX(scst_log_mutex);
+
+static struct scst_trace_log scst_trace_tbl[] = {
+    { TRACE_OUT_OF_MEM,		"out_of_mem" },
+    { TRACE_MINOR,		"minor" },
+    { TRACE_SG_OP,		"sg" },
+    { TRACE_MEMORY,		"mem" },
+    { TRACE_BUFF,		"buff" },
+    { TRACE_PID,		"pid" },
+    { TRACE_LINE,		"line" },
+    { TRACE_FUNCTION,		"function" },
+    { TRACE_DEBUG,		"debug" },
+    { TRACE_SPECIAL,		"special" },
+    { TRACE_SCSI,		"scsi" },
+    { TRACE_MGMT,		"mgmt" },
+    { TRACE_MGMT_DEBUG,		"mgmt_dbg" },
+    { TRACE_FLOW_CONTROL,	"flow_control" },
+    { TRACE_PRES,		"pr" },
+    { 0,			NULL }
+};
+
+static struct scst_trace_log scst_local_trace_tbl[] = {
+    { TRACE_RTRY,		"retry" },
+    { TRACE_SCSI_SERIALIZING,	"scsi_serializing" },
+    { TRACE_RCV_BOT,		"recv_bot" },
+    { TRACE_SND_BOT,		"send_bot" },
+    { TRACE_RCV_TOP,		"recv_top" },
+    { TRACE_SND_TOP,		"send_top" },
+    { 0,			NULL }
+};
+
+static ssize_t scst_trace_level_show(const struct scst_trace_log *local_tbl,
+	unsigned long log_level, char *buf, const char *help);
+static int scst_write_trace(const char *buf, size_t length,
+	unsigned long *log_level, unsigned long default_level,
+	const char *name, const struct scst_trace_log *tbl);
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_luns_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_tgt_addr_method_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_tgt_addr_method_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_tgt_io_grouping_type_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_tgt_io_grouping_type_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_tgt_cpu_mask_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_tgt_cpu_mask_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_ini_group_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_ini_group_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_rel_tgt_id_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_rel_tgt_id_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_ini_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_ini_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_addr_method_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_addr_method_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_io_grouping_type_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_io_grouping_type_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_cpu_mask_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_cpu_mask_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acn_file_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf);
+
+/**
+ ** Sysfs work
+ **/
+
+static DEFINE_SPINLOCK(sysfs_work_lock);
+static LIST_HEAD(sysfs_work_list);
+static DECLARE_WAIT_QUEUE_HEAD(sysfs_work_waitQ);
+static int active_sysfs_works;
+static int last_sysfs_work_res;
+static struct task_struct *sysfs_work_thread;
+
+/**
+ * scst_alloc_sysfs_work() - allocates a sysfs work
+ */
+int scst_alloc_sysfs_work(int (*sysfs_work_fn)(struct scst_sysfs_work_item *),
+	bool read_only_action, struct scst_sysfs_work_item **res_work)
+{
+	int res = 0;
+	struct scst_sysfs_work_item *work;
+
+	if (sysfs_work_fn == NULL) {
+		PRINT_ERROR("%s", "sysfs_work_fn is NULL");
+		res = -EINVAL;
+		goto out;
+	}
+
+	*res_work = NULL;
+
+	work = kzalloc(sizeof(*work), GFP_KERNEL);
+	if (work == NULL) {
+		PRINT_ERROR("Unable to alloc sysfs work (size %zd)",
+			sizeof(*work));
+		res = -ENOMEM;
+		goto out;
+	}
+
+	work->read_only_action = read_only_action;
+	kref_init(&work->sysfs_work_kref);
+	init_completion(&work->sysfs_work_done);
+	work->sysfs_work_fn = sysfs_work_fn;
+
+	*res_work = work;
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_alloc_sysfs_work);
+
+static void scst_sysfs_work_release(struct kref *kref)
+{
+	struct scst_sysfs_work_item *work;
+
+	work = container_of(kref, struct scst_sysfs_work_item,
+			sysfs_work_kref);
+
+	TRACE_DBG("Freeing sysfs work %p (buf %p)", work, work->buf);
+
+	kfree(work->buf);
+	kfree(work->res_buf);
+	kfree(work);
+	return;
+}
+
+/**
+ * scst_sysfs_work_get() - increases ref counter of the sysfs work
+ */
+void scst_sysfs_work_get(struct scst_sysfs_work_item *work)
+{
+	kref_get(&work->sysfs_work_kref);
+}
+EXPORT_SYMBOL(scst_sysfs_work_get);
+
+/**
+ * scst_sysfs_work_put() - decreases ref counter of the sysfs work
+ */
+void scst_sysfs_work_put(struct scst_sysfs_work_item *work)
+{
+	kref_put(&work->sysfs_work_kref, scst_sysfs_work_release);
+}
+EXPORT_SYMBOL(scst_sysfs_work_put);
+
+/**
+ * scst_sysfs_queue_wait_work() - queues a sysfs work and waits for it to complete
+ *
+ * Returns the status of the completed work or -EAGAIN if the work was not
+ * completed before the timeout. In the latter case a user should poll
+ * last_sysfs_mgmt_res until it returns the result of the processing.
+ */
+int scst_sysfs_queue_wait_work(struct scst_sysfs_work_item *work)
+{
+	int res = 0, rc;
+	unsigned long timeout = 15*HZ;
+
+	spin_lock(&sysfs_work_lock);
+
+	TRACE_DBG("Adding sysfs work %p to the list", work);
+	list_add_tail(&work->sysfs_work_list_entry, &sysfs_work_list);
+
+	active_sysfs_works++;
+
+	spin_unlock(&sysfs_work_lock);
+
+	kref_get(&work->sysfs_work_kref);
+
+	wake_up(&sysfs_work_waitQ);
+
+	while (1) {
+		rc = wait_for_completion_interruptible_timeout(
+			&work->sysfs_work_done, timeout);
+		if (rc == 0) {
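+			/*
+			 * Timed out. If scst_mutex is not held, the
+			 * scst_mutex/s_active inversion can't be what is
+			 * blocking the worker, so keep waiting. Otherwise
+			 * assume a possible deadlock and return -EAGAIN, so
+			 * this store() exits (releasing s_active) and user
+			 * space polls last_sysfs_mgmt_res for the result.
+			 */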
+			if (!mutex_is_locked(&scst_mutex)) {
+				TRACE_DBG("scst_mutex not locked, continue "
+					"waiting (work %p)", work);
+				timeout = 5*HZ;
+				continue;
+			}
+			TRACE_MGMT_DBG("Time out waiting for work %p",
+				work);
+			res = -EAGAIN;
+			goto out_put;
+		} else if (rc < 0) {
+			res = rc;
+			goto out_put;
+		}
+		break;
+	}
+
+	res = work->work_res;
+
+out_put:
+	kref_put(&work->sysfs_work_kref, scst_sysfs_work_release);
+	return res;
+}
+EXPORT_SYMBOL(scst_sysfs_queue_wait_work);
+
+/* Called under sysfs_work_lock; drops and reacquires it inside */
+static void scst_process_sysfs_works(void)
+{
+	struct scst_sysfs_work_item *work;
+
+	while (!list_empty(&sysfs_work_list)) {
+		work = list_entry(sysfs_work_list.next,
+			struct scst_sysfs_work_item, sysfs_work_list_entry);
+		list_del(&work->sysfs_work_list_entry);
+		spin_unlock(&sysfs_work_lock);
+
+		TRACE_DBG("Sysfs work %p", work);
+
+		work->work_res = work->sysfs_work_fn(work);
+
+		spin_lock(&sysfs_work_lock);
+		if (!work->read_only_action)
+			last_sysfs_work_res = work->work_res;
+		active_sysfs_works--;
+		spin_unlock(&sysfs_work_lock);
+
+		complete_all(&work->sysfs_work_done);
+		kref_put(&work->sysfs_work_kref, scst_sysfs_work_release);
+
+		spin_lock(&sysfs_work_lock);
+	}
+	return;
+}
+
+static inline int test_sysfs_work_list(void)
+{
+	int res = !list_empty(&sysfs_work_list) ||
+		  unlikely(kthread_should_stop());
+	return res;
+}
+
+static int sysfs_work_thread_fn(void *arg)
+{
+	PRINT_INFO("User interface thread started, PID %d", current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	set_user_nice(current, -10);
+
+	spin_lock(&sysfs_work_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_sysfs_work_list()) {
+			add_wait_queue_exclusive(&sysfs_work_waitQ, &wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_sysfs_work_list())
+					break;
+				spin_unlock(&sysfs_work_lock);
+				schedule();
+				spin_lock(&sysfs_work_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&sysfs_work_waitQ, &wait);
+		}
+
+		scst_process_sysfs_works();
+	}
+	spin_unlock(&sysfs_work_lock);
+
+	/*
+	 * If kthread_should_stop() is true, we are guaranteed to be in
+	 * the module unload path, so the work list must be empty.
+	 */
+	BUG_ON(!list_empty(&sysfs_work_list));
+
+	PRINT_INFO("User interface thread PID %d finished", current->pid);
+	return 0;
+}
+
+/* No locks */
+static int scst_check_grab_tgtt_ptr(struct scst_tgt_template *tgtt)
+{
+	int res = 0;
+	struct scst_tgt_template *tt;
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(tt, &scst_template_list, scst_template_list_entry) {
+		if (tt == tgtt) {
+			tgtt->tgtt_active_sysfs_works_count++;
+			goto out_unlock;
+		}
+	}
+
+	TRACE_DBG("Tgtt %p not found", tgtt);
+	res = -ENOENT;
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+	return res;
+}
+
+/* No locks */
+static void scst_ungrab_tgtt_ptr(struct scst_tgt_template *tgtt)
+{
+	mutex_lock(&scst_mutex);
+	tgtt->tgtt_active_sysfs_works_count--;
+	mutex_unlock(&scst_mutex);
+	return;
+}
+
+/* scst_mutex supposed to be locked */
+static int scst_check_tgt_acg_ptrs(struct scst_tgt *tgt, struct scst_acg *acg)
+{
+	int res = 0;
+	struct scst_tgt_template *tgtt;
+
+	list_for_each_entry(tgtt, &scst_template_list, scst_template_list_entry) {
+		struct scst_tgt *t;
+		list_for_each_entry(t, &tgtt->tgt_list, tgt_list_entry) {
+			if (t == tgt) {
+				struct scst_acg *a;
+				if (acg == NULL)
+					goto out;
+				if (acg == tgt->default_acg)
+					goto out;
+				list_for_each_entry(a, &tgt->tgt_acg_list,
+							acg_list_entry) {
+					if (a == acg)
+						goto out;
+				}
+			}
+		}
+	}
+
+	TRACE_DBG("Tgt %p/ACG %p not found", tgt, acg);
+	res = -ENOENT;
+
+out:
+	return res;
+}
+
+/* scst_mutex supposed to be locked */
+static int scst_check_devt_ptr(struct scst_dev_type *devt,
+	struct list_head *list)
+{
+	int res = 0;
+	struct scst_dev_type *dt;
+
+	list_for_each_entry(dt, list, dev_type_list_entry) {
+		if (dt == devt)
+			goto out;
+	}
+
+	TRACE_DBG("Devt %p not found", devt);
+	res = -ENOENT;
+
+out:
+	return res;
+}
+
+/* scst_mutex supposed to be locked */
+static int scst_check_dev_ptr(struct scst_device *dev)
+{
+	int res = 0;
+	struct scst_device *d;
+
+	list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+		if (d == dev)
+			goto out;
+	}
+
+	TRACE_DBG("Dev %p not found", dev);
+	res = -ENOENT;
+
+out:
+	return res;
+}
+
+/* No locks */
+static int scst_check_grab_devt_ptr(struct scst_dev_type *devt,
+	struct list_head *list)
+{
+	int res = 0;
+	struct scst_dev_type *dt;
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(dt, list, dev_type_list_entry) {
+		if (dt == devt) {
+			devt->devt_active_sysfs_works_count++;
+			goto out_unlock;
+		}
+	}
+
+	TRACE_DBG("Devt %p not found", devt);
+	res = -ENOENT;
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+	return res;
+}
+
+/* No locks */
+static void scst_ungrab_devt_ptr(struct scst_dev_type *devt)
+{
+	mutex_lock(&scst_mutex);
+	devt->devt_active_sysfs_works_count--;
+	mutex_unlock(&scst_mutex);
+	return;
+}
+
+/**
+ ** Regular SCST sysfs ops
+ **/
+static ssize_t scst_show(struct kobject *kobj, struct attribute *attr,
+			 char *buf)
+{
+	struct kobj_attribute *kobj_attr;
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	return kobj_attr->show(kobj, kobj_attr, buf);
+}
+
+static ssize_t scst_store(struct kobject *kobj, struct attribute *attr,
+			  const char *buf, size_t count)
+{
+	struct kobj_attribute *kobj_attr;
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	if (kobj_attr->store)
+		return kobj_attr->store(kobj, kobj_attr, buf, count);
+	else
+		return -EIO;
+}
+
+static const struct sysfs_ops scst_sysfs_ops = {
+	.show = scst_show,
+	.store = scst_store,
+};
+
+const struct sysfs_ops *scst_sysfs_get_sysfs_ops(void)
+{
+	return &scst_sysfs_ops;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_get_sysfs_ops);
+
+/**
+ ** Target Template
+ **/
+
+static void scst_tgtt_release(struct kobject *kobj)
+{
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+	complete_all(&tgtt->tgtt_kobj_release_cmpl);
+	return;
+}
+
+static struct kobj_type tgtt_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_tgtt_release,
+};
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_tgtt_trace_level_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	return scst_trace_level_show(tgtt->trace_tbl,
+		tgtt->trace_flags ? *tgtt->trace_flags : 0, buf,
+		tgtt->trace_tbl_help);
+}
+
+static ssize_t scst_tgtt_trace_level_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	res = scst_write_trace(buf, count, tgtt->trace_flags,
+		tgtt->default_trace_flags, tgtt->name, tgtt->trace_tbl);
+
+	mutex_unlock(&scst_log_mutex);
+
+out:
+	return res;
+}
+
+static struct kobj_attribute tgtt_trace_attr =
+	__ATTR(trace_level, S_IRUGO | S_IWUSR,
+	       scst_tgtt_trace_level_show, scst_tgtt_trace_level_store);
+
+#endif /* #if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_tgtt_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	char *help = "Usage: echo \"add_target target_name [parameters]\" "
+		     ">mgmt\n"
+		     "       echo \"del_target target_name\" >mgmt\n"
+		     "%s%s"
+		     "%s"
+		     "\n"
+		     "where parameters are one or more "
+		     "param_name=value pairs separated by ';'\n\n"
+		     "%s%s%s%s%s%s%s%s\n";
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, help,
+		(tgtt->tgtt_optional_attributes != NULL) ?
+			"       echo \"add_attribute <attribute> <value>\" >mgmt\n"
+			"       echo \"del_attribute <attribute> <value>\" >mgmt\n" : "",
+		(tgtt->tgt_optional_attributes != NULL) ?
+			"       echo \"add_target_attribute target_name <attribute> <value>\" >mgmt\n"
+			"       echo \"del_target_attribute target_name <attribute> <value>\" >mgmt\n" : "",
+		(tgtt->mgmt_cmd_help) ? tgtt->mgmt_cmd_help : "",
+		(tgtt->add_target_parameters != NULL) ?
+			"The following parameters are available: " : "",
+		(tgtt->add_target_parameters != NULL) ?
+			tgtt->add_target_parameters : "",
+		(tgtt->tgtt_optional_attributes != NULL) ?
+			"The following target driver attributes are available: " : "",
+		(tgtt->tgtt_optional_attributes != NULL) ?
+			tgtt->tgtt_optional_attributes : "",
+		(tgtt->tgtt_optional_attributes != NULL) ? "\n" : "",
+		(tgtt->tgt_optional_attributes != NULL) ?
+			"The following target attributes are available: " : "",
+		(tgtt->tgt_optional_attributes != NULL) ?
+			tgtt->tgt_optional_attributes : "",
+		(tgtt->tgt_optional_attributes != NULL) ? "\n" : "");
+}
+
+static int scst_process_tgtt_mgmt_store(char *buffer,
+	struct scst_tgt_template *tgtt)
+{
+	int res = 0;
+	char *p, *pp, *target_name;
+
+	TRACE_DBG("buffer %s", buffer);
+
+	/* Check if our pointer is still alive and, if yes, grab it */
+	if (scst_check_grab_tgtt_ptr(tgtt) != 0)
+		goto out;
+
+	pp = buffer;
+	if (pp[strlen(pp) - 1] == '\n')
+		pp[strlen(pp) - 1] = '\0';
+
+	p = scst_get_next_lexem(&pp);
+
+	if (strcasecmp("add_target", p) == 0) {
+		target_name = scst_get_next_lexem(&pp);
+		if (*target_name == '\0') {
+			PRINT_ERROR("%s", "Target name required");
+			res = -EINVAL;
+			goto out_ungrab;
+		}
+		res = tgtt->add_target(target_name, pp);
+	} else if (strcasecmp("del_target", p) == 0) {
+		target_name = scst_get_next_lexem(&pp);
+		if (*target_name == '\0') {
+			PRINT_ERROR("%s", "Target name required");
+			res = -EINVAL;
+			goto out_ungrab;
+		}
+
+		p = scst_get_next_lexem(&pp);
+		if (*p != '\0')
+			goto out_syntax_err;
+
+		res = tgtt->del_target(target_name);
+	} else if (tgtt->mgmt_cmd != NULL) {
+		scst_restore_token_str(p, pp);
+		res = tgtt->mgmt_cmd(buffer);
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_ungrab;
+	}
+
+out_ungrab:
+	scst_ungrab_tgtt_ptr(tgtt);
+
+out:
+	return res;
+
+out_syntax_err:
+	PRINT_ERROR("Syntax error on \"%s\"", p);
+	res = -EINVAL;
+	goto out_ungrab;
+}
+
+static int scst_tgtt_mgmt_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_process_tgtt_mgmt_store(work->buf, work->tgtt);
+}
+
+static ssize_t scst_tgtt_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	char *buffer;
+	struct scst_sysfs_work_item *work;
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	res = scst_alloc_sysfs_work(scst_tgtt_mgmt_store_work_fn, false, &work);
+	if (res != 0)
+		goto out_free;
+
+	work->buf = buffer;
+	work->tgtt = tgtt;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+
+out_free:
+	kfree(buffer);
+	goto out;
+}
+
+static struct kobj_attribute scst_tgtt_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_tgtt_mgmt_show,
+	       scst_tgtt_mgmt_store);
+
+int scst_tgtt_sysfs_create(struct scst_tgt_template *tgtt)
+{
+	int res = 0;
+	const struct attribute **pattr;
+
+	init_completion(&tgtt->tgtt_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&tgtt->tgtt_kobj, &tgtt_ktype,
+			scst_targets_kobj, tgtt->name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgtt %s to sysfs", tgtt->name);
+		goto out;
+	}
+
+	if (tgtt->add_target != NULL) {
+		res = sysfs_create_file(&tgtt->tgtt_kobj,
+				&scst_tgtt_mgmt.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't add mgmt attr for target driver %s",
+				tgtt->name);
+			goto out_del;
+		}
+	}
+
+	pattr = tgtt->tgtt_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			TRACE_DBG("Creating attr %s for target driver %s",
+				(*pattr)->name, tgtt->name);
+			res = sysfs_create_file(&tgtt->tgtt_kobj, *pattr);
+			if (res != 0) {
+				PRINT_ERROR("Can't add attr %s for target "
+					"driver %s", (*pattr)->name,
+					tgtt->name);
+				goto out_del;
+			}
+			pattr++;
+		}
+	}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	if (tgtt->trace_flags != NULL) {
+		res = sysfs_create_file(&tgtt->tgtt_kobj,
+				&tgtt_trace_attr.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't add trace_flag for target "
+				"driver %s", tgtt->name);
+			goto out_del;
+		}
+	}
+#endif
+
+out:
+	return res;
+
+out_del:
+	scst_tgtt_sysfs_del(tgtt);
+	goto out;
+}
+
+/*
+ * Must not be called under scst_mutex, due to possible deadlock with
+ * sysfs ref counting in sysfs works (it is waiting for the last put, but
+ * the last ref counter holder is waiting for scst_mutex)
+ */
+void scst_tgtt_sysfs_del(struct scst_tgt_template *tgtt)
+{
+	int rc;
+
+	kobject_del(&tgtt->tgtt_kobj);
+	kobject_put(&tgtt->tgtt_kobj);
+
+	rc = wait_for_completion_timeout(&tgtt->tgtt_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for releasing sysfs entry "
+			"for target template %s (%d refs)...", tgtt->name,
+			atomic_read(&tgtt->tgtt_kobj.kref.refcount));
+		wait_for_completion(&tgtt->tgtt_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for releasing sysfs "
+			"entry for target template %s", tgtt->name);
+	}
+	return;
+}
+
+/**
+ ** Target directory implementation
+ **/
+
+static void scst_tgt_release(struct kobject *kobj)
+{
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	complete_all(&tgt->tgt_kobj_release_cmpl);
+	return;
+}
+
+static struct kobj_type tgt_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_tgt_release,
+};
+
+static void scst_acg_release(struct kobject *kobj)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+	complete_all(&acg->acg_kobj_release_cmpl);
+	return;
+}
+
+static struct kobj_type acg_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_acg_release,
+};
+
+static struct kobj_attribute scst_luns_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_luns_mgmt_show,
+	       scst_luns_mgmt_store);
+
+static struct kobj_attribute scst_acg_luns_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_luns_mgmt_show,
+	       scst_acg_luns_mgmt_store);
+
+static struct kobj_attribute scst_acg_ini_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_acg_ini_mgmt_show,
+	       scst_acg_ini_mgmt_store);
+
+static struct kobj_attribute scst_ini_group_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_ini_group_mgmt_show,
+	       scst_ini_group_mgmt_store);
+
+static struct kobj_attribute scst_tgt_addr_method =
+	__ATTR(addr_method, S_IRUGO | S_IWUSR, scst_tgt_addr_method_show,
+	       scst_tgt_addr_method_store);
+
+static struct kobj_attribute scst_tgt_io_grouping_type =
+	__ATTR(io_grouping_type, S_IRUGO | S_IWUSR,
+	       scst_tgt_io_grouping_type_show,
+	       scst_tgt_io_grouping_type_store);
+
+static struct kobj_attribute scst_tgt_cpu_mask =
+	__ATTR(cpu_mask, S_IRUGO | S_IWUSR,
+	       scst_tgt_cpu_mask_show,
+	       scst_tgt_cpu_mask_store);
+
+static struct kobj_attribute scst_rel_tgt_id =
+	__ATTR(rel_tgt_id, S_IRUGO | S_IWUSR, scst_rel_tgt_id_show,
+	       scst_rel_tgt_id_store);
+
+static struct kobj_attribute scst_acg_addr_method =
+	__ATTR(addr_method, S_IRUGO | S_IWUSR, scst_acg_addr_method_show,
+		scst_acg_addr_method_store);
+
+static struct kobj_attribute scst_acg_io_grouping_type =
+	__ATTR(io_grouping_type, S_IRUGO | S_IWUSR,
+	       scst_acg_io_grouping_type_show,
+	       scst_acg_io_grouping_type_store);
+
+static struct kobj_attribute scst_acg_cpu_mask =
+	__ATTR(cpu_mask, S_IRUGO | S_IWUSR,
+	       scst_acg_cpu_mask_show,
+	       scst_acg_cpu_mask_store);
+
+static ssize_t scst_tgt_enable_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_tgt *tgt;
+	int res;
+	bool enabled;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	enabled = tgt->tgtt->is_target_enabled(tgt);
+
+	res = sprintf(buf, "%d\n", enabled ? 1 : 0);
+	return res;
+}
+
+static int scst_process_tgt_enable_store(struct scst_tgt *tgt, bool enable)
+{
+	int res;
+
+	/* Tgt protected by kobject reference */
+
+	TRACE_DBG("tgt %s, enable %d", tgt->tgt_name, enable);
+
+	if (enable) {
+		if (tgt->rel_tgt_id == 0) {
+			res = gen_relative_target_port_id(&tgt->rel_tgt_id);
+			if (res != 0)
+				goto out_put;
+			PRINT_INFO("Using autogenerated rel ID %d for target "
+				"%s", tgt->rel_tgt_id, tgt->tgt_name);
+		} else {
+			if (!scst_is_relative_target_port_id_unique(
+					    tgt->rel_tgt_id, tgt)) {
+				PRINT_ERROR("Relative port id %d is not unique",
+					tgt->rel_tgt_id);
+				res = -EBADSLT;
+				goto out_put;
+			}
+		}
+	}
+
+	res = tgt->tgtt->enable_target(tgt, enable);
+
+out_put:
+	kobject_put(&tgt->tgt_kobj);
+	return res;
+}
+
+static int scst_tgt_enable_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_process_tgt_enable_store(work->tgt, work->enable);
+}
+
+static ssize_t scst_tgt_enable_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_tgt *tgt;
+	bool enable;
+	struct scst_sysfs_work_item *work;
+
+	if (buf == NULL) {
+		PRINT_ERROR("%s: NULL buffer?", __func__);
+		res = -EINVAL;
+		goto out;
+	}
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	switch (buf[0]) {
+	case '0':
+		enable = false;
+		break;
+	case '1':
+		enable = true;
+		break;
+	default:
+		PRINT_ERROR("%s: Requested action not understood: %s",
+		       __func__, buf);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_alloc_sysfs_work(scst_tgt_enable_store_work_fn, false,
+					&work);
+	if (res != 0)
+		goto out;
+
+	work->tgt = tgt;
+	work->enable = enable;
+
+	kobject_get(&tgt->tgt_kobj);
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute tgt_enable_attr =
+	__ATTR(enabled, S_IRUGO | S_IWUSR,
+	       scst_tgt_enable_show, scst_tgt_enable_store);
+
+/*
+ * Supposed to be called under scst_mutex. In case of error it will drop
+ * and then reacquire scst_mutex.
+ */
+int scst_tgt_sysfs_create(struct scst_tgt *tgt)
+{
+	int res;
+	const struct attribute **pattr;
+
+	init_completion(&tgt->tgt_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&tgt->tgt_kobj, &tgt_ktype,
+			&tgt->tgtt->tgtt_kobj, tgt->tgt_name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt %s to sysfs", tgt->tgt_name);
+		goto out;
+	}
+
+	if ((tgt->tgtt->enable_target != NULL) &&
+	    (tgt->tgtt->is_target_enabled != NULL)) {
+		res = sysfs_create_file(&tgt->tgt_kobj,
+				&tgt_enable_attr.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't add attr %s to sysfs",
+				tgt_enable_attr.attr.name);
+			goto out_err;
+		}
+	}
+
+	tgt->tgt_sess_kobj = kobject_create_and_add("sessions", &tgt->tgt_kobj);
+	if (tgt->tgt_sess_kobj == NULL) {
+		PRINT_ERROR("Can't create sess kobj for tgt %s", tgt->tgt_name);
+		goto out_nomem;
+	}
+
+	tgt->tgt_luns_kobj = kobject_create_and_add("luns", &tgt->tgt_kobj);
+	if (tgt->tgt_luns_kobj == NULL) {
+		PRINT_ERROR("Can't create luns kobj for tgt %s", tgt->tgt_name);
+		goto out_nomem;
+	}
+
+	res = sysfs_create_file(tgt->tgt_luns_kobj, &scst_luns_mgmt.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_luns_mgmt.attr.name, tgt->tgt_name);
+		goto out_err;
+	}
+
+	tgt->tgt_ini_grp_kobj = kobject_create_and_add("ini_groups",
+					&tgt->tgt_kobj);
+	if (tgt->tgt_ini_grp_kobj == NULL) {
+		PRINT_ERROR("Can't create ini_grp kobj for tgt %s",
+			tgt->tgt_name);
+		goto out_nomem;
+	}
+
+	res = sysfs_create_file(tgt->tgt_ini_grp_kobj,
+			&scst_ini_group_mgmt.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_ini_group_mgmt.attr.name, tgt->tgt_name);
+		goto out_err;
+	}
+
+	res = sysfs_create_file(&tgt->tgt_kobj,
+			&scst_rel_tgt_id.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_rel_tgt_id.attr.name, tgt->tgt_name);
+		goto out_err;
+	}
+
+	res = sysfs_create_file(&tgt->tgt_kobj,
+			&scst_tgt_addr_method.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_tgt_addr_method.attr.name, tgt->tgt_name);
+		goto out_err;
+	}
+
+	res = sysfs_create_file(&tgt->tgt_kobj,
+			&scst_tgt_io_grouping_type.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_tgt_io_grouping_type.attr.name, tgt->tgt_name);
+		goto out_err;
+	}
+
+	res = sysfs_create_file(&tgt->tgt_kobj, &scst_tgt_cpu_mask.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_tgt_cpu_mask.attr.name, tgt->tgt_name);
+		goto out_err;
+	}
+
+	pattr = tgt->tgtt->tgt_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			TRACE_DBG("Creating attr %s for tgt %s", (*pattr)->name,
+				tgt->tgt_name);
+			res = sysfs_create_file(&tgt->tgt_kobj, *pattr);
+			if (res != 0) {
+				PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+					(*pattr)->name, tgt->tgt_name);
+				goto out_err;
+			}
+			pattr++;
+		}
+	}
+
+out:
+	return res;
+
+out_nomem:
+	res = -ENOMEM;
+
+out_err:
+	mutex_unlock(&scst_mutex);
+	scst_tgt_sysfs_del(tgt);
+	mutex_lock(&scst_mutex);
+	goto out;
+}
+
+/*
+ * Must not be called under scst_mutex, due to possible deadlock with
+ * sysfs ref counting in sysfs works (it is waiting for the last put, but
+ * the last ref counter holder is waiting for scst_mutex)
+ */
+void scst_tgt_sysfs_del(struct scst_tgt *tgt)
+{
+	int rc;
+
+	kobject_del(tgt->tgt_sess_kobj);
+	kobject_put(tgt->tgt_sess_kobj);
+
+	kobject_del(tgt->tgt_luns_kobj);
+	kobject_put(tgt->tgt_luns_kobj);
+
+	kobject_del(tgt->tgt_ini_grp_kobj);
+	kobject_put(tgt->tgt_ini_grp_kobj);
+
+	kobject_del(&tgt->tgt_kobj);
+	kobject_put(&tgt->tgt_kobj);
+
+	rc = wait_for_completion_timeout(&tgt->tgt_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for releasing sysfs entry "
+			"for target %s (%d refs)...", tgt->tgt_name,
+			atomic_read(&tgt->tgt_kobj.kref.refcount));
+		wait_for_completion(&tgt->tgt_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for releasing sysfs "
+			"entry for target %s", tgt->tgt_name);
+	}
+	return;
+}
+
+/**
+ ** Devices directory implementation
+ **/
+
+static ssize_t scst_dev_sysfs_type_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	pos = sprintf(buf, "%d - %s\n", dev->type,
+		(unsigned)dev->type >= ARRAY_SIZE(scst_dev_handler_types) ?
+		      "unknown" : scst_dev_handler_types[dev->type]);
+
+	return pos;
+}
+
+static struct kobj_attribute dev_type_attr =
+	__ATTR(type, S_IRUGO, scst_dev_sysfs_type_show, NULL);
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_dev_sysfs_dump_prs(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	scst_pr_dump_prs(dev, true);
+	return count;
+}
+
+static struct kobj_attribute dev_dump_prs_attr =
+	__ATTR(dump_prs, S_IWUSR, NULL, scst_dev_sysfs_dump_prs);
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static int scst_process_dev_sysfs_threads_data_store(
+	struct scst_device *dev, int threads_num,
+	enum scst_dev_type_threads_pool_type threads_pool_type)
+{
+	int res = 0;
+	int oldtn = dev->threads_num;
+	enum scst_dev_type_threads_pool_type oldtt = dev->threads_pool_type;
+
+	TRACE_DBG("dev %p, threads_num %d, threads_pool_type %d", dev,
+		threads_num, threads_pool_type);
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	/* Check if our pointer is still alive */
+	if (scst_check_dev_ptr(dev) != 0)
+		goto out_unlock;
+
+	scst_stop_dev_threads(dev);
+
+	dev->threads_num = threads_num;
+	dev->threads_pool_type = threads_pool_type;
+
+	res = scst_create_dev_threads(dev);
+	if (res != 0)
+		goto out_unlock;
+
+	if (oldtn != dev->threads_num)
+		PRINT_INFO("Changed cmd threads num to %d", dev->threads_num);
+	else if (oldtt != dev->threads_pool_type)
+		PRINT_INFO("Changed cmd threads pool type to %d",
+			dev->threads_pool_type);
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+}
+
+static int scst_dev_sysfs_threads_data_store_work_fn(
+	struct scst_sysfs_work_item *work)
+{
+	return scst_process_dev_sysfs_threads_data_store(work->dev,
+		work->new_threads_num, work->new_threads_pool_type);
+}
+
+static ssize_t scst_dev_sysfs_check_threads_data(
+	struct scst_device *dev, int threads_num,
+	enum scst_dev_type_threads_pool_type threads_pool_type, bool *stop)
+{
+	int res = 0;
+
+	*stop = false;
+
+	if (dev->threads_num < 0) {
+		PRINT_ERROR("Threads pool disabled for device %s",
+			dev->virt_name);
+		res = -EPERM;
+		goto out;
+	}
+
+	if ((threads_num == dev->threads_num) &&
+	    (threads_pool_type == dev->threads_pool_type)) {
+		*stop = true;
+		goto out;
+	}
+
+out:
+	return res;
+}
+
+static ssize_t scst_dev_sysfs_threads_num_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	pos = sprintf(buf, "%d\n%s", dev->threads_num,
+		(dev->threads_num != dev->handler->threads_num) ?
+			SCST_SYSFS_KEY_MARK "\n" : "");
+	return pos;
+}
+
+static ssize_t scst_dev_sysfs_threads_num_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_device *dev;
+	long newtn;
+	bool stop;
+	struct scst_sysfs_work_item *work;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	res = strict_strtol(buf, 0, &newtn);
+	if (res != 0) {
+		PRINT_ERROR("strict_strtol() for %s failed: %d ", buf, res);
+		goto out;
+	}
+	if (newtn < 0) {
+		PRINT_ERROR("Illegal threads num value %ld", newtn);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_dev_sysfs_check_threads_data(dev, newtn,
+		dev->threads_pool_type, &stop);
+	if ((res != 0) || stop)
+		goto out;
+
+	res = scst_alloc_sysfs_work(scst_dev_sysfs_threads_data_store_work_fn,
+					false, &work);
+	if (res != 0)
+		goto out;
+
+	work->dev = dev;
+	work->new_threads_num = newtn;
+	work->new_threads_pool_type = dev->threads_pool_type;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute dev_threads_num_attr =
+	__ATTR(threads_num, S_IRUGO | S_IWUSR,
+		scst_dev_sysfs_threads_num_show,
+		scst_dev_sysfs_threads_num_store);
+
+static ssize_t scst_dev_sysfs_threads_pool_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	if (dev->threads_num == 0) {
+		pos = sprintf(buf, "Async\n");
+		goto out;
+	} else if (dev->threads_num < 0) {
+		pos = sprintf(buf, "Not valid\n");
+		goto out;
+	}
+
+	switch (dev->threads_pool_type) {
+	case SCST_THREADS_POOL_PER_INITIATOR:
+		pos = sprintf(buf, "%s\n%s", SCST_THREADS_POOL_PER_INITIATOR_STR,
+			(dev->threads_pool_type != dev->handler->threads_pool_type) ?
+				SCST_SYSFS_KEY_MARK "\n" : "");
+		break;
+	case SCST_THREADS_POOL_SHARED:
+		pos = sprintf(buf, "%s\n%s", SCST_THREADS_POOL_SHARED_STR,
+			(dev->threads_pool_type != dev->handler->threads_pool_type) ?
+				SCST_SYSFS_KEY_MARK "\n" : "");
+		break;
+	default:
+		pos = sprintf(buf, "Unknown\n");
+		break;
+	}
+
+out:
+	return pos;
+}
+
+static ssize_t scst_dev_sysfs_threads_pool_type_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_device *dev;
+	enum scst_dev_type_threads_pool_type newtpt;
+	struct scst_sysfs_work_item *work;
+	bool stop;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	newtpt = scst_parse_threads_pool_type(buf, count);
+	if (newtpt == SCST_THREADS_POOL_TYPE_INVALID) {
+		PRINT_ERROR("Illegal threads pool type %s", buf);
+		res = -EINVAL;
+		goto out;
+	}
+
+	TRACE_DBG("buf %s, count %zd, newtpt %d", buf, count, newtpt);
+
+	res = scst_dev_sysfs_check_threads_data(dev, dev->threads_num,
+		newtpt, &stop);
+	if ((res != 0) || stop)
+		goto out;
+
+	res = scst_alloc_sysfs_work(scst_dev_sysfs_threads_data_store_work_fn,
+					false, &work);
+	if (res != 0)
+		goto out;
+
+	work->dev = dev;
+	work->new_threads_num = dev->threads_num;
+	work->new_threads_pool_type = newtpt;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute dev_threads_pool_type_attr =
+	__ATTR(threads_pool_type, S_IRUGO | S_IWUSR,
+		scst_dev_sysfs_threads_pool_type_show,
+		scst_dev_sysfs_threads_pool_type_store);
+
+static struct attribute *scst_dev_attrs[] = {
+	&dev_type_attr.attr,
+	NULL,
+};
+
+static void scst_sysfs_dev_release(struct kobject *kobj)
+{
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+	complete_all(&dev->dev_kobj_release_cmpl);
+	return;
+}
+
+int scst_devt_dev_sysfs_create(struct scst_device *dev)
+{
+	int res = 0;
+	const struct attribute **pattr;
+
+	if (dev->handler == &scst_null_devtype)
+		goto out;
+
+	res = sysfs_create_link(&dev->dev_kobj,
+			&dev->handler->devt_kobj, "handler");
+	if (res != 0) {
+		PRINT_ERROR("Can't create handler link for dev %s",
+			dev->virt_name);
+		goto out;
+	}
+
+	res = sysfs_create_link(&dev->handler->devt_kobj,
+			&dev->dev_kobj, dev->virt_name);
+	if (res != 0) {
+		PRINT_ERROR("Can't create handler link for dev %s",
+			dev->virt_name);
+		goto out_err;
+	}
+
+	if (dev->handler->threads_num >= 0) {
+		res = sysfs_create_file(&dev->dev_kobj,
+				&dev_threads_num_attr.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't add dev attr %s for dev %s",
+				dev_threads_num_attr.attr.name,
+				dev->virt_name);
+			goto out_err;
+		}
+		res = sysfs_create_file(&dev->dev_kobj,
+				&dev_threads_pool_type_attr.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't add dev attr %s for dev %s",
+				dev_threads_pool_type_attr.attr.name,
+				dev->virt_name);
+			goto out_err;
+		}
+	}
+
+	pattr = dev->handler->dev_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			res = sysfs_create_file(&dev->dev_kobj, *pattr);
+			if (res != 0) {
+				PRINT_ERROR("Can't add dev attr %s for dev %s",
+					(*pattr)->name, dev->virt_name);
+				goto out_err;
+			}
+			pattr++;
+		}
+	}
+
+out:
+	return res;
+
+out_err:
+	scst_devt_dev_sysfs_del(dev);
+	goto out;
+}
+
+void scst_devt_dev_sysfs_del(struct scst_device *dev)
+{
+	const struct attribute **pattr;
+
+	if (dev->handler == &scst_null_devtype)
+		goto out;
+
+	pattr = dev->handler->dev_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			sysfs_remove_file(&dev->dev_kobj, *pattr);
+			pattr++;
+		}
+	}
+
+	sysfs_remove_link(&dev->dev_kobj, "handler");
+	sysfs_remove_link(&dev->handler->devt_kobj, dev->virt_name);
+
+	if (dev->handler->threads_num >= 0) {
+		sysfs_remove_file(&dev->dev_kobj,
+			&dev_threads_num_attr.attr);
+		sysfs_remove_file(&dev->dev_kobj,
+			&dev_threads_pool_type_attr.attr);
+	}
+
+out:
+	return;
+}
+
+static struct kobj_type scst_dev_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_dev_release,
+	.default_attrs = scst_dev_attrs,
+};
+
+/*
+ * Must not be called under scst_mutex, because it can call
+ * scst_dev_sysfs_del()
+ */
+int scst_dev_sysfs_create(struct scst_device *dev)
+{
+	int res = 0;
+
+	init_completion(&dev->dev_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&dev->dev_kobj, &scst_dev_ktype,
+				      scst_devices_kobj, dev->virt_name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add device %s to sysfs", dev->virt_name);
+		goto out;
+	}
+
+	dev->dev_exp_kobj = kobject_create_and_add("exported",
+						   &dev->dev_kobj);
+	if (dev->dev_exp_kobj == NULL) {
+		PRINT_ERROR("Can't create exported link for device %s",
+			dev->virt_name);
+		res = -ENOMEM;
+		goto out_del;
+	}
+
+	if (dev->scsi_dev != NULL) {
+		res = sysfs_create_link(&dev->dev_kobj,
+			&dev->scsi_dev->sdev_dev.kobj, "scsi_device");
+		if (res != 0) {
+			PRINT_ERROR("Can't create scsi_device link for dev %s",
+				dev->virt_name);
+			goto out_del;
+		}
+	}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	if (dev->scsi_dev == NULL) {
+		res = sysfs_create_file(&dev->dev_kobj,
+				&dev_dump_prs_attr.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't create attr %s for dev %s",
+				dev_dump_prs_attr.attr.name, dev->virt_name);
+			goto out_del;
+		}
+	}
+#endif
+
+out:
+	return res;
+
+out_del:
+	scst_dev_sysfs_del(dev);
+	goto out;
+}
+
+/*
+ * Must not be called under scst_mutex, due to possible deadlock with
+ * sysfs ref counting in sysfs works (it is waiting for the last put, but
+ * the last ref counter holder is waiting for scst_mutex)
+ */
+void scst_dev_sysfs_del(struct scst_device *dev)
+{
+	int rc;
+
+	kobject_del(dev->dev_exp_kobj);
+	kobject_put(dev->dev_exp_kobj);
+
+	kobject_del(&dev->dev_kobj);
+	kobject_put(&dev->dev_kobj);
+
+	rc = wait_for_completion_timeout(&dev->dev_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for releasing sysfs entry "
+			"for device %s (%d refs)...", dev->virt_name,
+			atomic_read(&dev->dev_kobj.kref.refcount));
+		wait_for_completion(&dev->dev_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for releasing sysfs "
+			"entry for device %s", dev->virt_name);
+	}
+	return;
+}
+
+/**
+ ** Tgt_dev's directory implementation
+ **/
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+static char *scst_io_size_names[] = {
+	"<=8K  ",
+	"<=32K ",
+	"<=128K",
+	"<=512K",
+	">512K "
+};
+
+static ssize_t scst_tgt_dev_latency_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buffer)
+{
+	int res = 0, i;
+	char buf[50];
+	struct scst_tgt_dev *tgt_dev;
+
+	tgt_dev = container_of(kobj, struct scst_tgt_dev, tgt_dev_kobj);
+
+	for (i = 0; i < SCST_LATENCY_STATS_NUM; i++) {
+		uint64_t scst_time_wr, tgt_time_wr, dev_time_wr;
+		unsigned int processed_cmds_wr;
+		uint64_t scst_time_rd, tgt_time_rd, dev_time_rd;
+		unsigned int processed_cmds_rd;
+		struct scst_ext_latency_stat *latency_stat;
+
+		latency_stat = &tgt_dev->dev_latency_stat[i];
+		scst_time_wr = latency_stat->scst_time_wr;
+		scst_time_rd = latency_stat->scst_time_rd;
+		tgt_time_wr = latency_stat->tgt_time_wr;
+		tgt_time_rd = latency_stat->tgt_time_rd;
+		dev_time_wr = latency_stat->dev_time_wr;
+		dev_time_rd = latency_stat->dev_time_rd;
+		processed_cmds_wr = latency_stat->processed_cmds_wr;
+		processed_cmds_rd = latency_stat->processed_cmds_rd;
+
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			 "%-5s %-9s %-15lu ", "Write", scst_io_size_names[i],
+			(unsigned long)processed_cmds_wr);
+		if (processed_cmds_wr == 0)
+			processed_cmds_wr = 1;
+
+		do_div(scst_time_wr, processed_cmds_wr);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_scst_time_wr,
+			(unsigned long)scst_time_wr,
+			(unsigned long)latency_stat->max_scst_time_wr,
+			(unsigned long)latency_stat->scst_time_wr);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(tgt_time_wr, processed_cmds_wr);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_tgt_time_wr,
+			(unsigned long)tgt_time_wr,
+			(unsigned long)latency_stat->max_tgt_time_wr,
+			(unsigned long)latency_stat->tgt_time_wr);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(dev_time_wr, processed_cmds_wr);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_dev_time_wr,
+			(unsigned long)dev_time_wr,
+			(unsigned long)latency_stat->max_dev_time_wr,
+			(unsigned long)latency_stat->dev_time_wr);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s\n", buf);
+
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-5s %-9s %-15lu ", "Read", scst_io_size_names[i],
+			(unsigned long)processed_cmds_rd);
+		if (processed_cmds_rd == 0)
+			processed_cmds_rd = 1;
+
+		do_div(scst_time_rd, processed_cmds_rd);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_scst_time_rd,
+			(unsigned long)scst_time_rd,
+			(unsigned long)latency_stat->max_scst_time_rd,
+			(unsigned long)latency_stat->scst_time_rd);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(tgt_time_rd, processed_cmds_rd);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_tgt_time_rd,
+			(unsigned long)tgt_time_rd,
+			(unsigned long)latency_stat->max_tgt_time_rd,
+			(unsigned long)latency_stat->tgt_time_rd);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(dev_time_rd, processed_cmds_rd);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_dev_time_rd,
+			(unsigned long)dev_time_rd,
+			(unsigned long)latency_stat->max_dev_time_rd,
+			(unsigned long)latency_stat->dev_time_rd);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s\n", buf);
+	}
+	return res;
+}
+
+static struct kobj_attribute tgt_dev_latency_attr =
+	__ATTR(latency, S_IRUGO,
+		scst_tgt_dev_latency_show, NULL);
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+
+static ssize_t scst_tgt_dev_active_commands_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+	struct scst_tgt_dev *tgt_dev;
+
+	tgt_dev = container_of(kobj, struct scst_tgt_dev, tgt_dev_kobj);
+
+	pos = sprintf(buf, "%d\n", atomic_read(&tgt_dev->tgt_dev_cmd_count));
+
+	return pos;
+}
+
+static struct kobj_attribute tgt_dev_active_commands_attr =
+	__ATTR(active_commands, S_IRUGO,
+		scst_tgt_dev_active_commands_show, NULL);
+
+static struct attribute *scst_tgt_dev_attrs[] = {
+	&tgt_dev_active_commands_attr.attr,
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+	&tgt_dev_latency_attr.attr,
+#endif
+	NULL,
+};
+
+static void scst_sysfs_tgt_dev_release(struct kobject *kobj)
+{
+	struct scst_tgt_dev *tgt_dev;
+
+	tgt_dev = container_of(kobj, struct scst_tgt_dev, tgt_dev_kobj);
+	complete_all(&tgt_dev->tgt_dev_kobj_release_cmpl);
+	return;
+}
+
+static struct kobj_type scst_tgt_dev_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_tgt_dev_release,
+	.default_attrs = scst_tgt_dev_attrs,
+};
+
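+/*
+ * Creates the per-LUN sysfs directory ("lun<LUN>") for tgt_dev under its
+ * session's kobject and initializes the completion used by
+ * scst_tgt_dev_sysfs_del() to wait for the kobject release.
+ */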
+int scst_tgt_dev_sysfs_create(struct scst_tgt_dev *tgt_dev)
+{
+	int res = 0;
+
+	init_completion(&tgt_dev->tgt_dev_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&tgt_dev->tgt_dev_kobj, &scst_tgt_dev_ktype,
+			      &tgt_dev->sess->sess_kobj, "lun%lld",
+			      (unsigned long long)tgt_dev->lun);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt_dev %lld to sysfs",
+			(unsigned long long)tgt_dev->lun);
+		goto out;
+	}
+
+out:
+	return res;
+}
+
+/*
+ * Called with scst_mutex held.
+ *
+ * !! Sysfs works must not use kobject_get() to protect tgt_dev, because of a
+ * !! possible deadlock with scst_mutex (this function waits for the last put,
+ * !! while the last reference holder may be waiting for scst_mutex)
+ */
+void scst_tgt_dev_sysfs_del(struct scst_tgt_dev *tgt_dev)
+{
+	int rc;
+
+	kobject_del(&tgt_dev->tgt_dev_kobj);
+	kobject_put(&tgt_dev->tgt_dev_kobj);
+
+	rc = wait_for_completion_timeout(
+			&tgt_dev->tgt_dev_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for release of sysfs entry "
+			"for tgt_dev %lld (%d refs)...",
+			(unsigned long long)tgt_dev->lun,
+			atomic_read(&tgt_dev->tgt_dev_kobj.kref.refcount));
+		wait_for_completion(&tgt_dev->tgt_dev_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for release of sysfs entry for "
+			"tgt_dev %lld", (unsigned long long)tgt_dev->lun);
+	}
+	return;
+}
+
+/**
+ ** Sessions subdirectory implementation
+ **/
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+static ssize_t scst_sess_latency_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buffer)
+{
+	ssize_t res = 0;
+	struct scst_session *sess;
+	int i;
+	char buf[50];
+	uint64_t scst_time, tgt_time, dev_time;
+	unsigned int processed_cmds;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+		"%-15s %-15s %-46s %-46s %-46s\n",
+		"T-L names", "Total commands", "SCST latency",
+		"Target latency", "Dev latency (min/avg/max/all ns)");
+
+	spin_lock_bh(&sess->lat_lock);
+
+	for (i = 0; i < SCST_LATENCY_STATS_NUM ; i++) {
+		uint64_t scst_time_wr, tgt_time_wr, dev_time_wr;
+		unsigned int processed_cmds_wr;
+		uint64_t scst_time_rd, tgt_time_rd, dev_time_rd;
+		unsigned int processed_cmds_rd;
+		struct scst_ext_latency_stat *latency_stat;
+
+		latency_stat = &sess->sess_latency_stat[i];
+		scst_time_wr = latency_stat->scst_time_wr;
+		scst_time_rd = latency_stat->scst_time_rd;
+		tgt_time_wr = latency_stat->tgt_time_wr;
+		tgt_time_rd = latency_stat->tgt_time_rd;
+		dev_time_wr = latency_stat->dev_time_wr;
+		dev_time_rd = latency_stat->dev_time_rd;
+		processed_cmds_wr = latency_stat->processed_cmds_wr;
+		processed_cmds_rd = latency_stat->processed_cmds_rd;
+
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-5s %-9s %-15lu ",
+			"Write", scst_io_size_names[i],
+			(unsigned long)processed_cmds_wr);
+		if (processed_cmds_wr == 0)
+			processed_cmds_wr = 1;
+
+		do_div(scst_time_wr, processed_cmds_wr);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_scst_time_wr,
+			(unsigned long)scst_time_wr,
+			(unsigned long)latency_stat->max_scst_time_wr,
+			(unsigned long)latency_stat->scst_time_wr);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(tgt_time_wr, processed_cmds_wr);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_tgt_time_wr,
+			(unsigned long)tgt_time_wr,
+			(unsigned long)latency_stat->max_tgt_time_wr,
+			(unsigned long)latency_stat->tgt_time_wr);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(dev_time_wr, processed_cmds_wr);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_dev_time_wr,
+			(unsigned long)dev_time_wr,
+			(unsigned long)latency_stat->max_dev_time_wr,
+			(unsigned long)latency_stat->dev_time_wr);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s\n", buf);
+
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-5s %-9s %-15lu ",
+			"Read", scst_io_size_names[i],
+			(unsigned long)processed_cmds_rd);
+		if (processed_cmds_rd == 0)
+			processed_cmds_rd = 1;
+
+		do_div(scst_time_rd, processed_cmds_rd);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_scst_time_rd,
+			(unsigned long)scst_time_rd,
+			(unsigned long)latency_stat->max_scst_time_rd,
+			(unsigned long)latency_stat->scst_time_rd);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(tgt_time_rd, processed_cmds_rd);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_tgt_time_rd,
+			(unsigned long)tgt_time_rd,
+			(unsigned long)latency_stat->max_tgt_time_rd,
+			(unsigned long)latency_stat->tgt_time_rd);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s", buf);
+
+		do_div(dev_time_rd, processed_cmds_rd);
+		snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+			(unsigned long)latency_stat->min_dev_time_rd,
+			(unsigned long)dev_time_rd,
+			(unsigned long)latency_stat->max_dev_time_rd,
+			(unsigned long)latency_stat->dev_time_rd);
+		res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+			"%-47s\n", buf);
+	}
+
+	scst_time = sess->scst_time;
+	tgt_time = sess->tgt_time;
+	dev_time = sess->dev_time;
+	processed_cmds = sess->processed_cmds;
+
+	res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+		"\n%-15s %-16d", "Overall ", processed_cmds);
+
+	if (processed_cmds == 0)
+		processed_cmds = 1;
+
+	do_div(scst_time, processed_cmds);
+	snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+		(unsigned long)sess->min_scst_time,
+		(unsigned long)scst_time,
+		(unsigned long)sess->max_scst_time,
+		(unsigned long)sess->scst_time);
+	res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+		"%-47s", buf);
+
+	do_div(tgt_time, processed_cmds);
+	snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+		(unsigned long)sess->min_tgt_time,
+		(unsigned long)tgt_time,
+		(unsigned long)sess->max_tgt_time,
+		(unsigned long)sess->tgt_time);
+	res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+		"%-47s", buf);
+
+	do_div(dev_time, processed_cmds);
+	snprintf(buf, sizeof(buf), "%lu/%lu/%lu/%lu",
+		(unsigned long)sess->min_dev_time,
+		(unsigned long)dev_time,
+		(unsigned long)sess->max_dev_time,
+		(unsigned long)sess->dev_time);
+	res += scnprintf(&buffer[res], SCST_SYSFS_BLOCK_SIZE - res,
+		"%-47s\n\n", buf);
+
+	spin_unlock_bh(&sess->lat_lock);
+	return res;
+}
+
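+/*
+ * Sysfs work function: zeroes all latency statistics of the session and of
+ * each of its tgt_devs under scst_mutex and the session's lat_lock, then
+ * drops the session kobject reference taken by scst_sess_latency_store().
+ */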
+static int scst_sess_zero_latency(struct scst_sysfs_work_item *work)
+{
+	int res = 0, t;
+	struct scst_session *sess = work->sess;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_put;
+	}
+
+	PRINT_INFO("Zeroing latency statistics for initiator "
+		"%s", sess->initiator_name);
+
+	spin_lock_bh(&sess->lat_lock);
+
+	sess->scst_time = 0;
+	sess->tgt_time = 0;
+	sess->dev_time = 0;
+	sess->min_scst_time = 0;
+	sess->min_tgt_time = 0;
+	sess->min_dev_time = 0;
+	sess->max_scst_time = 0;
+	sess->max_tgt_time = 0;
+	sess->max_dev_time = 0;
+	sess->processed_cmds = 0;
+	memset(sess->sess_latency_stat, 0,
+		sizeof(sess->sess_latency_stat));
+
+	for (t = SESS_TGT_DEV_LIST_HASH_SIZE-1; t >= 0; t--) {
+		struct list_head *head = &sess->sess_tgt_dev_list[t];
+		struct scst_tgt_dev *tgt_dev;
+		list_for_each_entry(tgt_dev, head, sess_tgt_dev_list_entry) {
+			tgt_dev->scst_time = 0;
+			tgt_dev->tgt_time = 0;
+			tgt_dev->dev_time = 0;
+			tgt_dev->processed_cmds = 0;
+			memset(tgt_dev->dev_latency_stat, 0,
+				sizeof(tgt_dev->dev_latency_stat));
+		}
+	}
+
+	spin_unlock_bh(&sess->lat_lock);
+
+	mutex_unlock(&scst_mutex);
+
+out_put:
+	kobject_put(&sess->sess_kobj);
+	return res;
+}
+
+static ssize_t scst_sess_latency_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_session *sess;
+	struct scst_sysfs_work_item *work;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	res = scst_alloc_sysfs_work(scst_sess_zero_latency, false, &work);
+	if (res != 0)
+		goto out;
+
+	work->sess = sess;
+
+	kobject_get(&sess->sess_kobj);
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute session_latency_attr =
+	__ATTR(latency, S_IRUGO | S_IWUSR, scst_sess_latency_show,
+	       scst_sess_latency_store);
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+
+static ssize_t scst_sess_sysfs_commands_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	return sprintf(buf, "%i\n", atomic_read(&sess->sess_cmd_count));
+}
+
+static struct kobj_attribute session_commands_attr =
+	__ATTR(commands, S_IRUGO, scst_sess_sysfs_commands_show, NULL);
+
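+/*
+ * Sums the active command counts of all tgt_devs of the session under
+ * scst_mutex, drops the session kobject reference taken by the show handler
+ * and returns the sum (or -EINTR if taking scst_mutex was interrupted).
+ */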
+static int scst_sysfs_sess_get_active_commands(struct scst_session *sess)
+{
+	int res;
+	int active_cmds = 0, t;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_put;
+	}
+
+	for (t = SESS_TGT_DEV_LIST_HASH_SIZE-1; t >= 0; t--) {
+		struct list_head *head = &sess->sess_tgt_dev_list[t];
+		struct scst_tgt_dev *tgt_dev;
+		list_for_each_entry(tgt_dev, head, sess_tgt_dev_list_entry) {
+			active_cmds += atomic_read(&tgt_dev->tgt_dev_cmd_count);
+		}
+	}
+
+	mutex_unlock(&scst_mutex);
+
+	res = active_cmds;
+
+out_put:
+	kobject_put(&sess->sess_kobj);
+	return res;
+}
+
+static int scst_sysfs_sess_get_active_commands_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_sysfs_sess_get_active_commands(work->sess);
+}
+
+static ssize_t scst_sess_sysfs_active_commands_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	int res;
+	struct scst_session *sess;
+	struct scst_sysfs_work_item *work;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	res = scst_alloc_sysfs_work(scst_sysfs_sess_get_active_commands_work_fn,
+			true, &work);
+	if (res != 0)
+		goto out;
+
+	work->sess = sess;
+
+	kobject_get(&sess->sess_kobj);
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res != -EAGAIN)
+		res = sprintf(buf, "%i\n", res);
+
+out:
+	return res;
+}
+
+static struct kobj_attribute session_active_commands_attr =
+	__ATTR(active_commands, S_IRUGO, scst_sess_sysfs_active_commands_show,
+		NULL);
+
+static ssize_t scst_sess_sysfs_initiator_name_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n",
+		sess->initiator_name);
+}
+
+static struct kobj_attribute session_initiator_name_attr =
+	__ATTR(initiator_name, S_IRUGO, scst_sess_sysfs_initiator_name_show, NULL);
+
+static struct attribute *scst_session_attrs[] = {
+	&session_commands_attr.attr,
+	&session_active_commands_attr.attr,
+	&session_initiator_name_attr.attr,
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+	&session_latency_attr.attr,
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+	NULL,
+};
+
+static void scst_sysfs_session_release(struct kobject *kobj)
+{
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+	complete_all(&sess->sess_kobj_release_cmpl);
+	return;
+}
+
+static struct kobj_type scst_session_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_session_release,
+	.default_attrs = scst_session_attrs,
+};
+
+static int scst_create_sess_luns_link(struct scst_session *sess)
+{
+	int res;
+
+	/*
+	 * No locks are needed, because sess is supposed to be on
+	 * acg->acg_sess_list and tgt->sess_list, which prevents acg and tgt
+	 * from disappearing.
+	 */
+
+	if (sess->acg == sess->tgt->default_acg)
+		res = sysfs_create_link(&sess->sess_kobj,
+				sess->tgt->tgt_luns_kobj, "luns");
+	else
+		res = sysfs_create_link(&sess->sess_kobj,
+				sess->acg->luns_kobj, "luns");
+
+	if (res != 0)
+		PRINT_ERROR("Can't create luns link for initiator %s",
+			sess->initiator_name);
+
+	return res;
+}
+
+int scst_recreate_sess_luns_link(struct scst_session *sess)
+{
+	sysfs_remove_link(&sess->sess_kobj, "luns");
+	return scst_create_sess_luns_link(sess);
+}
+
+/* Supposed to be called under scst_mutex */
+int scst_sess_sysfs_create(struct scst_session *sess)
+{
+	int res = 0;
+	struct scst_session *s;
+	const struct attribute **pattr;
+	char *name = (char *)sess->initiator_name;
+	int len = strlen(name) + 1, n = 1;
+
+restart:
+	list_for_each_entry(s, &sess->tgt->sess_list, sess_list_entry) {
+		if (!s->sess_kobj_ready)
+			continue;
+
+		if (strcmp(name, kobject_name(&s->sess_kobj)) == 0) {
+			if (s == sess)
+				continue;
+
+			TRACE_DBG("Duplicate session from the same initiator "
+				"%s found", name);
+
+			if (name == sess->initiator_name) {
+				len = strlen(sess->initiator_name);
+				len += 20;
+				name = kmalloc(len, GFP_KERNEL);
+				if (name == NULL) {
+					PRINT_ERROR("Unable to allocate a "
+						"replacement name (size %d)",
+						len);
+					res = -ENOMEM;
+					goto out_free;
+				}
+			}
+
+			snprintf(name, len, "%s_%d", sess->initiator_name, n);
+			n++;
+			goto restart;
+		}
+	}
+
+	init_completion(&sess->sess_kobj_release_cmpl);
+
+	TRACE_DBG("Adding session %s to sysfs", name);
+
+	res = kobject_init_and_add(&sess->sess_kobj, &scst_session_ktype,
+			      sess->tgt->tgt_sess_kobj, name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add session %s to sysfs", name);
+		goto out_free;
+	}
+
+	sess->sess_kobj_ready = 1;
+
+	pattr = sess->tgt->tgtt->sess_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			res = sysfs_create_file(&sess->sess_kobj, *pattr);
+			if (res != 0) {
+				PRINT_ERROR("Can't add sess attr %s for session "
+					"of initiator %s", (*pattr)->name,
+					name);
+				goto out_free;
+			}
+			pattr++;
+		}
+	}
+
+	res = scst_create_sess_luns_link(sess);
+
+out_free:
+	if (name != sess->initiator_name)
+		kfree(name);
+	return res;
+}
+
+/*
+ * Must not be called under scst_mutex, because of a possible deadlock with
+ * sysfs reference counting in sysfs works (it waits for the last put, while
+ * the last reference holder may be waiting for scst_mutex)
+ */
+void scst_sess_sysfs_del(struct scst_session *sess)
+{
+	int rc;
+
+	if (!sess->sess_kobj_ready)
+		goto out;
+
+	TRACE_DBG("Deleting session %s from sysfs",
+		kobject_name(&sess->sess_kobj));
+
+	kobject_del(&sess->sess_kobj);
+	kobject_put(&sess->sess_kobj);
+
+	rc = wait_for_completion_timeout(&sess->sess_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for release of sysfs entry "
+			"for session from %s (%d refs)...", sess->initiator_name,
+			atomic_read(&sess->sess_kobj.kref.refcount));
+		wait_for_completion(&sess->sess_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for release of sysfs "
+			"entry for session %s", sess->initiator_name);
+	}
+
+out:
+	return;
+}
+
+/**
+ ** Target luns directory implementation
+ **/
+
+static void scst_acg_dev_release(struct kobject *kobj)
+{
+	struct scst_acg_dev *acg_dev;
+
+	acg_dev = container_of(kobj, struct scst_acg_dev, acg_dev_kobj);
+	complete_all(&acg_dev->acg_dev_kobj_release_cmpl);
+	return;
+}
+
+static ssize_t scst_lun_rd_only_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf)
+{
+	struct scst_acg_dev *acg_dev;
+
+	acg_dev = container_of(kobj, struct scst_acg_dev, acg_dev_kobj);
+
+	if (acg_dev->rd_only || acg_dev->dev->rd_only)
+		return sprintf(buf, "%d\n%s\n", 1, SCST_SYSFS_KEY_MARK);
+	else
+		return sprintf(buf, "%d\n", 0);
+}
+
+static struct kobj_attribute lun_options_attr =
+	__ATTR(read_only, S_IRUGO, scst_lun_rd_only_show, NULL);
+
+static struct attribute *lun_attrs[] = {
+	&lun_options_attr.attr,
+	NULL,
+};
+
+static struct kobj_type acg_dev_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_acg_dev_release,
+	.default_attrs = lun_attrs,
+};
+
+/*
+ * Called with scst_mutex held.
+ *
+ * !! Sysfs works must not use kobject_get() to protect acg_dev, because of a
+ * !! possible deadlock with scst_mutex (this function waits for the last put,
+ * !! while the last reference holder may be waiting for scst_mutex)
+ */
+void scst_acg_dev_sysfs_del(struct scst_acg_dev *acg_dev)
+{
+	int rc;
+
+	if (acg_dev->dev != NULL) {
+		sysfs_remove_link(acg_dev->dev->dev_exp_kobj,
+			acg_dev->acg_dev_link_name);
+		kobject_put(&acg_dev->dev->dev_kobj);
+	}
+
+	kobject_del(&acg_dev->acg_dev_kobj);
+	kobject_put(&acg_dev->acg_dev_kobj);
+
+	rc = wait_for_completion_timeout(&acg_dev->acg_dev_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for release of sysfs entry "
+			"for acg_dev %p (%d refs)...", acg_dev,
+			atomic_read(&acg_dev->acg_dev_kobj.kref.refcount));
+		wait_for_completion(&acg_dev->acg_dev_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for release of sysfs "
+			"entry for acg_dev %p", acg_dev);
+	}
+	return;
+}
+
+int scst_acg_dev_sysfs_create(struct scst_acg_dev *acg_dev,
+	struct kobject *parent)
+{
+	int res;
+
+	init_completion(&acg_dev->acg_dev_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&acg_dev->acg_dev_kobj, &acg_dev_ktype,
+				      parent, "%u", acg_dev->lun);
+	if (res != 0) {
+		PRINT_ERROR("Can't add acg_dev %p to sysfs", acg_dev);
+		goto out;
+	}
+
+	kobject_get(&acg_dev->dev->dev_kobj);
+
+	snprintf(acg_dev->acg_dev_link_name, sizeof(acg_dev->acg_dev_link_name),
+		"export%u", acg_dev->dev->dev_exported_lun_num++);
+
+	res = sysfs_create_link(acg_dev->dev->dev_exp_kobj,
+			   &acg_dev->acg_dev_kobj, acg_dev->acg_dev_link_name);
+	if (res != 0) {
+		PRINT_ERROR("Can't create acg %s LUN link",
+			acg_dev->acg->acg_name);
+		goto out_del;
+	}
+
+	res = sysfs_create_link(&acg_dev->acg_dev_kobj,
+			&acg_dev->dev->dev_kobj, "device");
+	if (res != 0) {
+		PRINT_ERROR("Can't create acg %s device link",
+			acg_dev->acg->acg_name);
+		goto out_del;
+	}
+
+out:
+	return res;
+
+out_del:
+	scst_acg_dev_sysfs_del(acg_dev);
+	goto out;
+}
+
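+/*
+ * Parses and executes one LUNs management command: "add", "del", "replace"
+ * or "clear" (see the help text in scst_luns_mgmt_show() below), e.g., for
+ * a hypothetical virtual device "disk1":
+ *
+ *	add disk1 0 read_only=1
+ *
+ * Suspends activity and takes scst_mutex for the duration of the change.
+ */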
+static int __scst_process_luns_mgmt_store(char *buffer,
+	struct scst_tgt *tgt, struct scst_acg *acg, bool tgt_kobj)
+{
+	int res, read_only = 0, action;
+	char *p, *e = NULL;
+	unsigned int virt_lun;
+	struct scst_acg_dev *acg_dev = NULL, *acg_dev_tmp;
+	struct scst_device *d, *dev = NULL;
+
+#define SCST_LUN_ACTION_ADD	1
+#define SCST_LUN_ACTION_DEL	2
+#define SCST_LUN_ACTION_REPLACE	3
+#define SCST_LUN_ACTION_CLEAR	4
+
+	TRACE_DBG("buffer %s", buffer);
+
+	p = buffer;
+	if (p[strlen(p) - 1] == '\n')
+		p[strlen(p) - 1] = '\0';
+	if (strncasecmp("add", p, 3) == 0) {
+		p += 3;
+		action = SCST_LUN_ACTION_ADD;
+	} else if (strncasecmp("del", p, 3) == 0) {
+		p += 3;
+		action = SCST_LUN_ACTION_DEL;
+	} else if (!strncasecmp("replace", p, 7)) {
+		p += 7;
+		action = SCST_LUN_ACTION_REPLACE;
+	} else if (!strncasecmp("clear", p, 5)) {
+		p += 5;
+		action = SCST_LUN_ACTION_CLEAR;
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	/* Check that tgt and acg were not freed while we were getting here */
+	if (scst_check_tgt_acg_ptrs(tgt, acg) != 0)
+		goto out_unlock;
+
+	if ((action != SCST_LUN_ACTION_CLEAR) &&
+	    (action != SCST_LUN_ACTION_DEL)) {
+		if (!isspace(*p)) {
+			PRINT_ERROR("%s", "Syntax error");
+			res = -EINVAL;
+			goto out_unlock;
+		}
+
+		while (isspace(*p) && *p != '\0')
+			p++;
+		e = p; /* save p */
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+
+		list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+			if (!strcmp(d->virt_name, p)) {
+				dev = d;
+				TRACE_DBG("Device %p (%s) found", dev, p);
+				break;
+			}
+		}
+		if (dev == NULL) {
+			PRINT_ERROR("Device '%s' not found", p);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+	}
+
+	switch (action) {
+	case SCST_LUN_ACTION_ADD:
+	case SCST_LUN_ACTION_REPLACE:
+	{
+		bool dev_replaced = false;
+
+		e++;
+		while (isspace(*e) && *e != '\0')
+			e++;
+		virt_lun = simple_strtoul(e, &e, 0);
+
+		while (isspace(*e) && *e != '\0')
+			e++;
+
+		while (1) {
+			char *pp;
+			unsigned long val;
+			char *param = scst_get_next_token_str(&e);
+			if (param == NULL)
+				break;
+
+			p = scst_get_next_lexem(&param);
+			if (*p == '\0') {
+				PRINT_ERROR("Syntax error at %s (device %s)",
+					param, dev->virt_name);
+				res = -EINVAL;
+				goto out_unlock;
+			}
+
+			pp = scst_get_next_lexem(&param);
+			if (*pp == '\0') {
+				PRINT_ERROR("Value for parameter %s is missing (device %s)",
+					p, dev->virt_name);
+				res = -EINVAL;
+				goto out_unlock;
+			}
+
+			if (scst_get_next_lexem(&param)[0] != '\0') {
+				PRINT_ERROR("Too many values for parameter %s (device %s)",
+					p, dev->virt_name);
+				res = -EINVAL;
+				goto out_unlock;
+			}
+
+			res = strict_strtoul(pp, 0, &val);
+			if (res != 0) {
+				PRINT_ERROR("strict_strtoul() for %s failed: %d "
+					"(device %s)", pp, res, dev->virt_name);
+				goto out_unlock;
+			}
+
+			if (!strcasecmp("read_only", p)) {
+				read_only = val;
+				TRACE_DBG("READ ONLY %d", read_only);
+			} else {
+				PRINT_ERROR("Unknown parameter %s (device %s)",
+					p, dev->virt_name);
+				res = -EINVAL;
+				goto out_unlock;
+			}
+		}
+
+		acg_dev = NULL;
+		list_for_each_entry(acg_dev_tmp, &acg->acg_dev_list,
+				    acg_dev_list_entry) {
+			if (acg_dev_tmp->lun == virt_lun) {
+				acg_dev = acg_dev_tmp;
+				break;
+			}
+		}
+
+		if (acg_dev != NULL) {
+			if (action == SCST_LUN_ACTION_ADD) {
+				PRINT_ERROR("virt lun %d already exists in "
+					"group %s", virt_lun, acg->acg_name);
+				res = -EEXIST;
+				goto out_unlock;
+			} else {
+				/* Replace */
+				res = scst_acg_del_lun(acg, acg_dev->lun,
+						false);
+				if (res != 0)
+					goto out_unlock;
+
+				dev_replaced = true;
+			}
+		}
+
+		res = scst_acg_add_lun(acg,
+			tgt_kobj ? tgt->tgt_luns_kobj : acg->luns_kobj,
+			dev, virt_lun, read_only, !dev_replaced, NULL);
+		if (res != 0)
+			goto out_unlock;
+
+		if (dev_replaced) {
+			struct scst_tgt_dev *tgt_dev;
+
+			list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+				if ((tgt_dev->acg_dev->acg == acg) &&
+				    (tgt_dev->lun == virt_lun)) {
+					TRACE_MGMT_DBG("INQUIRY DATA HAS CHANGED"
+						" on tgt_dev %p", tgt_dev);
+					scst_gen_aen_or_ua(tgt_dev,
+						SCST_LOAD_SENSE(scst_sense_inquery_data_changed));
+				}
+			}
+		}
+
+		break;
+	}
+	case SCST_LUN_ACTION_DEL:
+		while (isspace(*p) && *p != '\0')
+			p++;
+		virt_lun = simple_strtoul(p, &p, 0);
+
+		res = scst_acg_del_lun(acg, virt_lun, true);
+		if (res != 0)
+			goto out_unlock;
+		break;
+	case SCST_LUN_ACTION_CLEAR:
+		PRINT_INFO("Removing all devices from group %s",
+			acg->acg_name);
+		list_for_each_entry_safe(acg_dev, acg_dev_tmp,
+					 &acg->acg_dev_list,
+					 acg_dev_list_entry) {
+			res = scst_acg_del_lun(acg, acg_dev->lun,
+				list_is_last(&acg_dev->acg_dev_list_entry,
+					     &acg->acg_dev_list));
+			if (res)
+				goto out_unlock;
+		}
+		break;
+	}
+
+	res = 0;
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+
+#undef SCST_LUN_ACTION_ADD
+#undef SCST_LUN_ACTION_DEL
+#undef SCST_LUN_ACTION_REPLACE
+#undef SCST_LUN_ACTION_CLEAR
+}
+
+static int scst_luns_mgmt_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return __scst_process_luns_mgmt_store(work->buf, work->tgt, work->acg,
+			work->is_tgt_kobj);
+}
+
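+/*
+ * Common helper for the LUNs and initiators management "store" handlers:
+ * copies the user buffer and queues sysfs_work_fn() to the sysfs work
+ * thread, waiting for it to complete. Returns count on success.
+ */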
+static ssize_t __scst_acg_mgmt_store(struct scst_acg *acg,
+	const char *buf, size_t count, bool is_tgt_kobj,
+	int (*sysfs_work_fn)(struct scst_sysfs_work_item *))
+{
+	int res;
+	char *buffer;
+	struct scst_sysfs_work_item *work;
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	res = scst_alloc_sysfs_work(sysfs_work_fn, false, &work);
+	if (res != 0)
+		goto out_free;
+
+	work->buf = buffer;
+	work->tgt = acg->tgt;
+	work->acg = acg;
+	work->is_tgt_kobj = is_tgt_kobj;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+
+out_free:
+	kfree(buffer);
+	goto out;
+}
+
+static ssize_t __scst_luns_mgmt_store(struct scst_acg *acg,
+	bool tgt_kobj, const char *buf, size_t count)
+{
+	return __scst_acg_mgmt_store(acg, buf, count, tgt_kobj,
+			scst_luns_mgmt_store_work_fn);
+}
+
+static ssize_t scst_luns_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf)
+{
+	static char *help = "Usage: echo \"add|del H:C:I:L lun [parameters]\" >mgmt\n"
+			    "       echo \"add VNAME lun [parameters]\" >mgmt\n"
+			    "       echo \"del lun\" >mgmt\n"
+			    "       echo \"replace H:C:I:L lun [parameters]\" >mgmt\n"
+			    "       echo \"replace VNAME lun [parameters]\" >mgmt\n"
+			    "       echo \"clear\" >mgmt\n"
+			    "\n"
+			    "where parameters are one or more "
+			    "param_name=value pairs separated by ';'\n"
+			    "\nThe following parameters are available: read_only.";
+
+	return sprintf(buf, "%s", help);
+}
+
+static ssize_t scst_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj->parent, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_luns_mgmt_store(acg, true, buf, count);
+	return res;
+}
+
+static ssize_t __scst_acg_addr_method_show(struct scst_acg *acg, char *buf)
+{
+	int res;
+
+	switch (acg->addr_method) {
+	case SCST_LUN_ADDR_METHOD_FLAT:
+		res = sprintf(buf, "FLAT\n%s\n", SCST_SYSFS_KEY_MARK);
+		break;
+	case SCST_LUN_ADDR_METHOD_PERIPHERAL:
+		res = sprintf(buf, "PERIPHERAL\n");
+		break;
+	default:
+		res = sprintf(buf, "UNKNOWN\n");
+		break;
+	}
+
+	return res;
+}
+
+static ssize_t __scst_acg_addr_method_store(struct scst_acg *acg,
+	const char *buf, size_t count)
+{
+	int res = count;
+
+	if (strncasecmp(buf, "FLAT", min_t(int, 4, count)) == 0)
+		acg->addr_method = SCST_LUN_ADDR_METHOD_FLAT;
+	else if (strncasecmp(buf, "PERIPHERAL", min_t(int, 10, count)) == 0)
+		acg->addr_method = SCST_LUN_ADDR_METHOD_PERIPHERAL;
+	else {
+		PRINT_ERROR("Unknown address method %s", buf);
+		res = -EINVAL;
+	}
+
+	TRACE_DBG("acg %p, addr_method %d", acg, acg->addr_method);
+
+	return res;
+}
+
+static ssize_t scst_tgt_addr_method_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	return __scst_acg_addr_method_show(acg, buf);
+}
+
+static ssize_t scst_tgt_addr_method_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_acg_addr_method_store(acg, buf, count);
+	return res;
+}
+
+static ssize_t __scst_acg_io_grouping_type_show(struct scst_acg *acg, char *buf)
+{
+	int res;
+
+	switch (acg->acg_io_grouping_type) {
+	case SCST_IO_GROUPING_AUTO:
+		res = sprintf(buf, "%s\n", SCST_IO_GROUPING_AUTO_STR);
+		break;
+	case SCST_IO_GROUPING_THIS_GROUP_ONLY:
+		res = sprintf(buf, "%s\n%s\n",
+			SCST_IO_GROUPING_THIS_GROUP_ONLY_STR,
+			SCST_SYSFS_KEY_MARK);
+		break;
+	case SCST_IO_GROUPING_NEVER:
+		res = sprintf(buf, "%s\n%s\n", SCST_IO_GROUPING_NEVER_STR,
+			SCST_SYSFS_KEY_MARK);
+		break;
+	default:
+		res = sprintf(buf, "%d\n%s\n", acg->acg_io_grouping_type,
+			SCST_SYSFS_KEY_MARK);
+		break;
+	}
+
+	return res;
+}
+
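+/*
+ * Applies a new I/O grouping type to acg with activity suspended and
+ * scst_mutex held, then restarts the threads of every device in the group
+ * so that the new grouping takes effect.
+ */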
+static int __scst_acg_process_io_grouping_type_store(struct scst_tgt *tgt,
+	struct scst_acg *acg, int io_grouping_type)
+{
+	int res = 0;
+	struct scst_acg_dev *acg_dev;
+
+	TRACE_DBG("tgt %p, acg %p, io_grouping_type %d", tgt, acg,
+		io_grouping_type);
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	/* Check that tgt and acg were not freed while we were getting here */
+	if (scst_check_tgt_acg_ptrs(tgt, acg) != 0)
+		goto out_unlock;
+
+	acg->acg_io_grouping_type = io_grouping_type;
+
+	list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+		int rc;
+
+		scst_stop_dev_threads(acg_dev->dev);
+
+		rc = scst_create_dev_threads(acg_dev->dev);
+		if (rc != 0)
+			res = rc;
+	}
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+}
+
+static int __scst_acg_io_grouping_type_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return __scst_acg_process_io_grouping_type_store(work->tgt, work->acg,
+			work->io_grouping_type);
+}
+
+static ssize_t __scst_acg_io_grouping_type_store(struct scst_acg *acg,
+	const char *buf, size_t count)
+{
+	int res = 0;
+	int prev = acg->acg_io_grouping_type;
+	long io_grouping_type;
+	struct scst_sysfs_work_item *work;
+
+	if (strncasecmp(buf, SCST_IO_GROUPING_AUTO_STR,
+			min_t(int, strlen(SCST_IO_GROUPING_AUTO_STR), count)) == 0)
+		io_grouping_type = SCST_IO_GROUPING_AUTO;
+	else if (strncasecmp(buf, SCST_IO_GROUPING_THIS_GROUP_ONLY_STR,
+			min_t(int, strlen(SCST_IO_GROUPING_THIS_GROUP_ONLY_STR), count)) == 0)
+		io_grouping_type = SCST_IO_GROUPING_THIS_GROUP_ONLY;
+	else if (strncasecmp(buf, SCST_IO_GROUPING_NEVER_STR,
+			min_t(int, strlen(SCST_IO_GROUPING_NEVER_STR), count)) == 0)
+		io_grouping_type = SCST_IO_GROUPING_NEVER;
+	else {
+		res = strict_strtol(buf, 0, &io_grouping_type);
+		if ((res != 0) || (io_grouping_type <= 0)) {
+			PRINT_ERROR("Unknown or not allowed I/O grouping type "
+				"%s", buf);
+			res = -EINVAL;
+			goto out;
+		}
+	}
+
+	if (prev == io_grouping_type)
+		goto out;
+
+	res = scst_alloc_sysfs_work(__scst_acg_io_grouping_type_store_work_fn,
+					false, &work);
+	if (res != 0)
+		goto out;
+
+	work->tgt = acg->tgt;
+	work->acg = acg;
+	work->io_grouping_type = io_grouping_type;
+
+	res = scst_sysfs_queue_wait_work(work);
+
+out:
+	return res;
+}
+
+static ssize_t scst_tgt_io_grouping_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	return __scst_acg_io_grouping_type_show(acg, buf);
+}
+
+static ssize_t scst_tgt_io_grouping_type_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_acg_io_grouping_type_store(acg, buf, count);
+	if (res != 0)
+		goto out;
+
+	res = count;
+
+out:
+	return res;
+}
+
+static ssize_t __scst_acg_cpu_mask_show(struct scst_acg *acg, char *buf)
+{
+	int res;
+
+	res = cpumask_scnprintf(buf, SCST_SYSFS_BLOCK_SIZE,
+		&acg->acg_cpu_mask);
+	if (!cpus_equal(acg->acg_cpu_mask, default_cpu_mask))
+		res += sprintf(&buf[res], "\n%s\n", SCST_SYSFS_KEY_MARK);
+
+	return res;
+}
+
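+/*
+ * Stores the new CPU mask in acg under scst_mutex, updates the affinity of
+ * the dedicated per-tgt_dev command threads of every session in the group
+ * and notifies target drivers implementing report_aen() via an
+ * SCST_AEN_CPU_MASK_CHANGED AEN.
+ */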
+static int __scst_acg_process_cpu_mask_store(struct scst_tgt *tgt,
+	struct scst_acg *acg, cpumask_t *cpu_mask)
+{
+	int res = 0;
+	struct scst_session *sess;
+
+	TRACE_DBG("tgt %p, acg %p", tgt, acg);
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	/* Check that tgt and acg were not freed while we were getting here */
+	if (scst_check_tgt_acg_ptrs(tgt, acg) != 0)
+		goto out_unlock;
+
+	cpumask_copy(&acg->acg_cpu_mask, cpu_mask);
+
+	list_for_each_entry(sess, &acg->acg_sess_list, acg_sess_list_entry) {
+		int i;
+		for (i = 0; i < SESS_TGT_DEV_LIST_HASH_SIZE; i++) {
+			struct scst_tgt_dev *tgt_dev;
+			struct list_head *head = &sess->sess_tgt_dev_list[i];
+			list_for_each_entry(tgt_dev, head,
+						sess_tgt_dev_list_entry) {
+				struct scst_cmd_thread_t *thr;
+				if (tgt_dev->active_cmd_threads != &tgt_dev->tgt_dev_cmd_threads)
+					continue;
+				list_for_each_entry(thr,
+						&tgt_dev->active_cmd_threads->threads_list,
+						thread_list_entry) {
+					int rc;
+					rc = set_cpus_allowed_ptr(thr->cmd_thread, cpu_mask);
+					if (rc != 0)
+						PRINT_ERROR("Setting CPU "
+							"affinity failed: %d", rc);
+				}
+			}
+		}
+		if (tgt->tgtt->report_aen != NULL) {
+			struct scst_aen *aen;
+			int rc;
+
+			aen = scst_alloc_aen(sess, 0);
+			if (aen == NULL) {
+				PRINT_ERROR("Unable to notify target driver %s "
+					"about cpu_mask change", tgt->tgt_name);
+				continue;
+			}
+
+			aen->event_fn = SCST_AEN_CPU_MASK_CHANGED;
+
+			TRACE_DBG("Calling target's %s report_aen(%p)",
+				tgt->tgtt->name, aen);
+			rc = tgt->tgtt->report_aen(aen);
+			TRACE_DBG("Target's %s report_aen(%p) returned %d",
+				tgt->tgtt->name, aen, rc);
+			if (rc != SCST_AEN_RES_SUCCESS)
+				scst_free_aen(aen);
+		}
+	}
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out:
+	return res;
+}
+
+static int __scst_acg_cpu_mask_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return __scst_acg_process_cpu_mask_store(work->tgt, work->acg,
+			&work->cpu_mask);
+}
+
+static ssize_t __scst_acg_cpu_mask_store(struct scst_acg *acg,
+	const char *buf, size_t count)
+{
+	int res;
+	struct scst_sysfs_work_item *work;
+
+	/* cpumask might be too big for stack */
+
+	res = scst_alloc_sysfs_work(__scst_acg_cpu_mask_store_work_fn,
+					false, &work);
+	if (res != 0)
+		goto out;
+
+	/*
+	 * We can't use cpumask_parse_user() here, because it expects
+	 * the buffer to be in user space.
+	 */
+	res = __bitmap_parse(buf, count, 0, cpumask_bits(&work->cpu_mask),
+				nr_cpumask_bits);
+	if (res != 0) {
+		PRINT_ERROR("__bitmap_parse() failed: %d", res);
+		goto out_release;
+	}
+
+	if (cpus_equal(acg->acg_cpu_mask, work->cpu_mask))
+		goto out_release;
+
+	work->tgt = acg->tgt;
+	work->acg = acg;
+
+	res = scst_sysfs_queue_wait_work(work);
+
+out:
+	return res;
+
+out_release:
+	scst_sysfs_work_release(&work->sysfs_work_kref);
+	goto out;
+}
+
+static ssize_t scst_tgt_cpu_mask_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	return __scst_acg_cpu_mask_show(acg, buf);
+}
+
+static ssize_t scst_tgt_cpu_mask_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_acg_cpu_mask_store(acg, buf, count);
+	if (res != 0)
+		goto out;
+
+	res = count;
+
+out:
+	return res;
+}
+
+/*
+ * Called with scst_mutex held.
+ *
+ * !! Sysfs works must not use kobject_get() to protect acg, because of a
+ * !! possible deadlock with scst_mutex (this function waits for the last put,
+ * !! while the last reference holder may be waiting for scst_mutex)
+ */
+void scst_acg_sysfs_del(struct scst_acg *acg)
+{
+	int rc;
+
+	kobject_del(acg->luns_kobj);
+	kobject_put(acg->luns_kobj);
+
+	kobject_del(acg->initiators_kobj);
+	kobject_put(acg->initiators_kobj);
+
+	kobject_del(&acg->acg_kobj);
+	kobject_put(&acg->acg_kobj);
+
+	rc = wait_for_completion_timeout(&acg->acg_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for release of sysfs entry "
+			"for acg %s (%d refs)...", acg->acg_name,
+			atomic_read(&acg->acg_kobj.kref.refcount));
+		wait_for_completion(&acg->acg_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for release of sysfs "
+			"entry for acg %s", acg->acg_name);
+	}
+	return;
+}
+
+int scst_acg_sysfs_create(struct scst_tgt *tgt,
+	struct scst_acg *acg)
+{
+	int res = 0;
+
+	init_completion(&acg->acg_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&acg->acg_kobj, &acg_ktype,
+		tgt->tgt_ini_grp_kobj, acg->acg_name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add acg '%s' to sysfs", acg->acg_name);
+		goto out;
+	}
+
+	acg->luns_kobj = kobject_create_and_add("luns", &acg->acg_kobj);
+	if (acg->luns_kobj == NULL) {
+		PRINT_ERROR("Can't create luns kobj for tgt %s",
+			tgt->tgt_name);
+		res = -ENOMEM;
+		goto out_del;
+	}
+
+	res = sysfs_create_file(acg->luns_kobj, &scst_acg_luns_mgmt.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_luns_mgmt.attr.name, tgt->tgt_name);
+		goto out_del;
+	}
+
+	acg->initiators_kobj = kobject_create_and_add("initiators",
+					&acg->acg_kobj);
+	if (acg->initiators_kobj == NULL) {
+		PRINT_ERROR("Can't create initiators kobj for tgt %s",
+			tgt->tgt_name);
+		res = -ENOMEM;
+		goto out_del;
+	}
+
+	res = sysfs_create_file(acg->initiators_kobj,
+			&scst_acg_ini_mgmt.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_ini_mgmt.attr.name, tgt->tgt_name);
+		goto out_del;
+	}
+
+	res = sysfs_create_file(&acg->acg_kobj, &scst_acg_addr_method.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_addr_method.attr.name, tgt->tgt_name);
+		goto out_del;
+	}
+
+	res = sysfs_create_file(&acg->acg_kobj, &scst_acg_io_grouping_type.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_io_grouping_type.attr.name, tgt->tgt_name);
+		goto out_del;
+	}
+
+	res = sysfs_create_file(&acg->acg_kobj, &scst_acg_cpu_mask.attr);
+	if (res != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_cpu_mask.attr.name, tgt->tgt_name);
+		goto out_del;
+	}
+
+out:
+	return res;
+
+out_del:
+	scst_acg_sysfs_del(acg);
+	goto out;
+}
+
+static ssize_t scst_acg_addr_method_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	return __scst_acg_addr_method_show(acg, buf);
+}
+
+static ssize_t scst_acg_addr_method_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	res = __scst_acg_addr_method_store(acg, buf, count);
+	return res;
+}
+
+static ssize_t scst_acg_io_grouping_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	return __scst_acg_io_grouping_type_show(acg, buf);
+}
+
+static ssize_t scst_acg_io_grouping_type_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	res = __scst_acg_io_grouping_type_store(acg, buf, count);
+	if (res != 0)
+		goto out;
+
+	res = count;
+
+out:
+	return res;
+}
+
+static ssize_t scst_acg_cpu_mask_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	return __scst_acg_cpu_mask_show(acg, buf);
+}
+
+static ssize_t scst_acg_cpu_mask_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	res = __scst_acg_cpu_mask_store(acg, buf, count);
+	if (res != 0)
+		goto out;
+
+	res = count;
+
+out:
+	return res;
+}
+
+static ssize_t scst_ini_group_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	static char *help = "Usage: echo \"create GROUP_NAME\" >mgmt\n"
+			    "       echo \"del GROUP_NAME\" >mgmt\n";
+
+	return sprintf(buf, "%s", help);
+}
+
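+/*
+ * Handles "create GROUP_NAME" and "del GROUP_NAME" initiator groups
+ * management commands. Suspends activity and takes scst_mutex for the
+ * duration of the change.
+ */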
+static int scst_process_ini_group_mgmt_store(char *buffer,
+	struct scst_tgt *tgt)
+{
+	int res, action;
+	int len;
+	char *name;
+	char *p, *e = NULL;
+	struct scst_acg *a, *acg = NULL;
+
+#define SCST_INI_GROUP_ACTION_CREATE	1
+#define SCST_INI_GROUP_ACTION_DEL	2
+
+	TRACE_DBG("tgt %p, buffer %s", tgt, buffer);
+
+	p = buffer;
+	if (p[strlen(p) - 1] == '\n')
+		p[strlen(p) - 1] = '\0';
+	if (strncasecmp("create ", p, 7) == 0) {
+		p += 7;
+		action = SCST_INI_GROUP_ACTION_CREATE;
+	} else if (strncasecmp("del ", p, 4) == 0) {
+		p += 4;
+		action = SCST_INI_GROUP_ACTION_DEL;
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	/* Check that our tgt pointer is still alive */
+	if (scst_check_tgt_acg_ptrs(tgt, NULL) != 0)
+		goto out_unlock;
+
+	while (isspace(*p) && *p != '\0')
+		p++;
+	e = p;
+	while (!isspace(*e) && *e != '\0')
+		e++;
+	*e = '\0';
+
+	if (p[0] == '\0') {
+		PRINT_ERROR("%s", "Group name required");
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+	list_for_each_entry(a, &tgt->tgt_acg_list, acg_list_entry) {
+		if (strcmp(a->acg_name, p) == 0) {
+			TRACE_DBG("group (acg) %p %s found",
+				  a, a->acg_name);
+			acg = a;
+			break;
+		}
+	}
+
+	switch (action) {
+	case SCST_INI_GROUP_ACTION_CREATE:
+		TRACE_DBG("Creating group '%s'", p);
+		if (acg != NULL) {
+			PRINT_ERROR("acg name %s already exists", p);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+
+		len = strlen(p) + 1;
+		name = kmalloc(len, GFP_KERNEL);
+		if (name == NULL) {
+			PRINT_ERROR("%s", "Allocation of name failed");
+			res = -ENOMEM;
+			goto out_unlock;
+		}
+		strlcpy(name, p, len);
+
+		acg = scst_alloc_add_acg(tgt, name, true);
+		kfree(name);
+		if (acg == NULL)
+			goto out_unlock;
+		break;
+	case SCST_INI_GROUP_ACTION_DEL:
+		TRACE_DBG("Deleting group '%s'", p);
+		if (acg == NULL) {
+			PRINT_ERROR("Group %s not found", p);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		if (!scst_acg_sess_is_empty(acg)) {
+			PRINT_ERROR("Group %s is not empty", acg->acg_name);
+			res = -EBUSY;
+			goto out_unlock;
+		}
+		scst_del_free_acg(acg);
+		break;
+	}
+
+	res = 0;
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+
+#undef SCST_INI_GROUP_ACTION_CREATE
+#undef SCST_INI_GROUP_ACTION_DEL
+}
+
+static int scst_ini_group_mgmt_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_process_ini_group_mgmt_store(work->buf, work->tgt);
+}
+
+static ssize_t scst_ini_group_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	char *buffer;
+	struct scst_tgt *tgt;
+	struct scst_sysfs_work_item *work;
+
+	tgt = container_of(kobj->parent, struct scst_tgt, tgt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	res = scst_alloc_sysfs_work(scst_ini_group_mgmt_store_work_fn, false,
+					&work);
+	if (res != 0)
+		goto out_free;
+
+	work->buf = buffer;
+	work->tgt = tgt;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+
+out_free:
+	kfree(buffer);
+	goto out;
+}
+
+static ssize_t scst_rel_tgt_id_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_tgt *tgt;
+	int res;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	res = sprintf(buf, "%d\n%s", tgt->rel_tgt_id,
+		(tgt->rel_tgt_id != 0) ? SCST_SYSFS_KEY_MARK "\n" : "");
+	return res;
+}
+
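+/*
+ * Sysfs work function: checks that the new relative target port id is
+ * unique if the target is enabled and that it is within
+ * [SCST_MIN_REL_TGT_ID, SCST_MAX_REL_TGT_ID] (0 is allowed only for a
+ * disabled target), sets tgt->rel_tgt_id and drops the tgt kobject
+ * reference taken by scst_rel_tgt_id_store().
+ */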
+static int scst_process_rel_tgt_id_store(struct scst_sysfs_work_item *work)
+{
+	int res = 0;
+	struct scst_tgt *tgt = work->tgt;
+	unsigned long rel_tgt_id = work->l;
+
+	/* tgt protected by kobject_get() */
+
+	TRACE_DBG("Trying to set relative target port id %d",
+		(uint16_t)rel_tgt_id);
+
+	if (tgt->tgtt->is_target_enabled(tgt) &&
+	    rel_tgt_id != tgt->rel_tgt_id) {
+		if (!scst_is_relative_target_port_id_unique(rel_tgt_id, tgt)) {
+			PRINT_ERROR("Relative port id %d is not unique",
+				(uint16_t)rel_tgt_id);
+			res = -EBADSLT;
+			goto out_put;
+		}
+	}
+
+	if (rel_tgt_id < SCST_MIN_REL_TGT_ID ||
+	    rel_tgt_id > SCST_MAX_REL_TGT_ID) {
+		if ((rel_tgt_id == 0) && !tgt->tgtt->is_target_enabled(tgt))
+			goto set;
+
+		PRINT_ERROR("Invalid relative port id %d",
+			(uint16_t)rel_tgt_id);
+		res = -EINVAL;
+		goto out_put;
+	}
+
+set:
+	tgt->rel_tgt_id = (uint16_t)rel_tgt_id;
+
+out_put:
+	kobject_put(&tgt->tgt_kobj);
+	return res;
+}
+
+static ssize_t scst_rel_tgt_id_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res = 0;
+	struct scst_tgt *tgt;
+	unsigned long rel_tgt_id;
+	struct scst_sysfs_work_item *work;
+
+	if (buf == NULL)
+		goto out;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	res = strict_strtoul(buf, 0, &rel_tgt_id);
+	if (res != 0) {
+		PRINT_ERROR("%s", "Wrong rel_tgt_id");
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_alloc_sysfs_work(scst_process_rel_tgt_id_store, false,
+					&work);
+	if (res != 0)
+		goto out;
+
+	work->tgt = tgt;
+	work->l = rel_tgt_id;
+
+	kobject_get(&tgt->tgt_kobj);
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
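+/*
+ * Creates a read-only sysfs attribute named after the initiator (acn) in
+ * the group's "initiators" directory and stores it in acn->acn_attr, so
+ * that scst_acn_sysfs_del() can remove and free it later.
+ */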
+int scst_acn_sysfs_create(struct scst_acn *acn)
+{
+	int res = 0;
+	int len;
+	struct scst_acg *acg = acn->acg;
+	struct kobj_attribute *attr = NULL;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	static struct lock_class_key __key;
+#endif
+
+	acn->acn_attr = NULL;
+
+	attr = kzalloc(sizeof(struct kobj_attribute), GFP_KERNEL);
+	if (attr == NULL) {
+		PRINT_ERROR("Unable to allocate attributes for initiator '%s'",
+			acn->name);
+		res = -ENOMEM;
+		goto out;
+	}
+
+	len = strlen(acn->name) + 1;
+	attr->attr.name = kzalloc(len, GFP_KERNEL);
+	if (attr->attr.name == NULL) {
+		PRINT_ERROR("Unable to allocate attributes for initiator '%s'",
+			acn->name);
+		res = -ENOMEM;
+		goto out_free;
+	}
+	strlcpy((char *)attr->attr.name, acn->name, len);
+
+	attr->attr.owner = THIS_MODULE;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	attr->attr.key = &__key;
+#endif
+
+	attr->attr.mode = S_IRUGO;
+	attr->show = scst_acn_file_show;
+	attr->store = NULL;
+
+	res = sysfs_create_file(acg->initiators_kobj, &attr->attr);
+	if (res != 0) {
+		PRINT_ERROR("Unable to create acn '%s' for group '%s'",
+			acn->name, acg->acg_name);
+		kfree(attr->attr.name);
+		goto out_free;
+	}
+
+	acn->acn_attr = attr;
+
+out:
+	return res;
+
+out_free:
+	kfree(attr);
+	goto out;
+}
+
+void scst_acn_sysfs_del(struct scst_acn *acn)
+{
+	struct scst_acg *acg = acn->acg;
+
+	if (acn->acn_attr != NULL) {
+		sysfs_remove_file(acg->initiators_kobj,
+			&acn->acn_attr->attr);
+		kfree(acn->acn_attr->attr.name);
+		kfree(acn->acn_attr);
+	}
+	return;
+}
+
+static ssize_t scst_acn_file_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n",
+		attr->attr.name);
+}
+
+static ssize_t scst_acg_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj->parent, struct scst_acg, acg_kobj);
+	res = __scst_luns_mgmt_store(acg, false, buf, count);
+	return res;
+}
+
+static ssize_t scst_acg_ini_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	static char *help = "Usage: echo \"add INITIATOR_NAME\" "
+					">mgmt\n"
+			    "       echo \"del INITIATOR_NAME\" "
+					">mgmt\n"
+			    "       echo \"move INITIATOR_NAME DEST_GROUP_NAME\" "
+					">mgmt\n"
+			    "       echo \"clear\" "
+					">mgmt\n";
+
+	return sprintf(buf, "%s", help);
+}
+
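+/*
+ * Handles "add", "del", "move" and "clear" initiators management commands
+ * for an initiator group (see the help text in scst_acg_ini_mgmt_show()
+ * above). Suspends activity and takes scst_mutex for the duration of the
+ * change.
+ */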
+static int scst_process_acg_ini_mgmt_store(char *buffer,
+	struct scst_tgt *tgt, struct scst_acg *acg)
+{
+	int res, action;
+	char *p, *e = NULL;
+	char *name = NULL, *group = NULL;
+	struct scst_acg *acg_dest = NULL;
+	struct scst_acn *acn = NULL, *acn_tmp;
+
+#define SCST_ACG_ACTION_INI_ADD		1
+#define SCST_ACG_ACTION_INI_DEL		2
+#define SCST_ACG_ACTION_INI_CLEAR	3
+#define SCST_ACG_ACTION_INI_MOVE	4
+
+	TRACE_DBG("tgt %p, acg %p, buffer %s", tgt, acg, buffer);
+
+	p = buffer;
+	if (p[strlen(p) - 1] == '\n')
+		p[strlen(p) - 1] = '\0';
+
+	if (strncasecmp("add", p, 3) == 0) {
+		p += 3;
+		action = SCST_ACG_ACTION_INI_ADD;
+	} else if (strncasecmp("del", p, 3) == 0) {
+		p += 3;
+		action = SCST_ACG_ACTION_INI_DEL;
+	} else if (strncasecmp("clear", p, 5) == 0) {
+		p += 5;
+		action = SCST_ACG_ACTION_INI_CLEAR;
+	} else if (strncasecmp("move", p, 4) == 0) {
+		p += 4;
+		action = SCST_ACG_ACTION_INI_MOVE;
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (action != SCST_ACG_ACTION_INI_CLEAR)
+		if (!isspace(*p)) {
+			PRINT_ERROR("%s", "Syntax error");
+			res = -EINVAL;
+			goto out;
+		}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	/* Check that tgt and acg were not freed while we were getting here */
+	if (scst_check_tgt_acg_ptrs(tgt, acg) != 0)
+		goto out_unlock;
+
+	if (action != SCST_ACG_ACTION_INI_CLEAR)
+		while (isspace(*p) && *p != '\0')
+			p++;
+
+	switch (action) {
+	case SCST_ACG_ACTION_INI_ADD:
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		name = p;
+
+		if (name[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid initiator name");
+			res = -EINVAL;
+			goto out_unlock;
+		}
+
+		res = scst_acg_add_acn(acg, name);
+		if (res != 0)
+			goto out_unlock;
+		break;
+	case SCST_ACG_ACTION_INI_DEL:
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		name = p;
+
+		if (name[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid initiator name");
+			res = -EINVAL;
+			goto out_unlock;
+		}
+
+		acn = scst_find_acn(acg, name);
+		if (acn == NULL) {
+			PRINT_ERROR("Unable to find "
+				"initiator '%s' in group '%s'",
+				name, acg->acg_name);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		scst_del_free_acn(acn, true);
+		break;
+	case SCST_ACG_ACTION_INI_CLEAR:
+		list_for_each_entry_safe(acn, acn_tmp, &acg->acn_list,
+				acn_list_entry) {
+			scst_del_free_acn(acn, false);
+		}
+		scst_check_reassign_sessions();
+		break;
+	case SCST_ACG_ACTION_INI_MOVE:
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		if (*e == '\0') {
+			PRINT_ERROR("%s", "Too few parameters");
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		*e = '\0';
+		name = p;
+
+		if (name[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid initiator name");
+			res = -EINVAL;
+			goto out_unlock;
+		}
+
+		e++;
+		p = e;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		group = p;
+
+		if (group[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid group name");
+			res = -EINVAL;
+			goto out_unlock;
+		}
+
+		TRACE_DBG("Move initiator '%s' to group '%s'",
+			name, group);
+
+		acn = scst_find_acn(acg, name);
+		if (acn == NULL) {
+			PRINT_ERROR("Unable to find "
+				"initiator '%s' in group '%s'",
+				name, acg->acg_name);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		acg_dest = scst_tgt_find_acg(tgt, group);
+		if (acg_dest == NULL) {
+			PRINT_ERROR("Unable to find group '%s' in target '%s'",
+				group, tgt->tgt_name);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		if (scst_find_acn(acg_dest, name) != NULL) {
+			PRINT_ERROR("Initiator '%s' already exists in group '%s'",
+				name, acg_dest->acg_name);
+			res = -EEXIST;
+			goto out_unlock;
+		}
+		scst_del_free_acn(acn, false);
+
+		res = scst_acg_add_acn(acg_dest, name);
+		if (res != 0)
+			goto out_unlock;
+		break;
+	}
+
+	res = 0;
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+
+#undef SCST_ACG_ACTION_INI_ADD
+#undef SCST_ACG_ACTION_INI_DEL
+#undef SCST_ACG_ACTION_INI_CLEAR
+#undef SCST_ACG_ACTION_INI_MOVE
+}
+
+static int scst_acg_ini_mgmt_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_process_acg_ini_mgmt_store(work->buf, work->tgt, work->acg);
+}
+
+static ssize_t scst_acg_ini_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj->parent, struct scst_acg, acg_kobj);
+
+	return __scst_acg_mgmt_store(acg, buf, count, false,
+		scst_acg_ini_mgmt_store_work_fn);
+}
+
+/**
+ ** SGV directory implementation
+ **/
+
+static struct kobj_attribute sgv_stat_attr =
+	__ATTR(stats, S_IRUGO | S_IWUSR, sgv_sysfs_stat_show,
+		sgv_sysfs_stat_reset);
+
+static struct attribute *sgv_attrs[] = {
+	&sgv_stat_attr.attr,
+	NULL,
+};
+
+static void sgv_kobj_release(struct kobject *kobj)
+{
+	struct sgv_pool *pool;
+
+	pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+	complete_all(&pool->sgv_kobj_release_cmpl);
+	return;
+}
+
+static struct kobj_type sgv_pool_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = sgv_kobj_release,
+	.default_attrs = sgv_attrs,
+};
+
+int scst_sgv_sysfs_create(struct sgv_pool *pool)
+{
+	int res;
+
+	init_completion(&pool->sgv_kobj_release_cmpl);
+
+	res = kobject_init_and_add(&pool->sgv_kobj, &sgv_pool_ktype,
+			scst_sgv_kobj, pool->name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add sgv pool %s to sysfs", pool->name);
+		goto out;
+	}
+
+out:
+	return res;
+}
+
+void scst_sgv_sysfs_del(struct sgv_pool *pool)
+{
+	int rc;
+
+	kobject_del(&pool->sgv_kobj);
+	kobject_put(&pool->sgv_kobj);
+
+	rc = wait_for_completion_timeout(&pool->sgv_kobj_release_cmpl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for release of sysfs entry "
+			"for SGV pool %s (%d refs)...", pool->name,
+			atomic_read(&pool->sgv_kobj.kref.refcount));
+		wait_for_completion(&pool->sgv_kobj_release_cmpl);
+		PRINT_INFO("Done waiting for release of sysfs "
+			"entry for SGV pool %s", pool->name);
+	}
+	return;
+}
+
+static struct kobj_attribute sgv_global_stat_attr =
+	__ATTR(global_stats, S_IRUGO | S_IWUSR, sgv_sysfs_global_stat_show,
+		sgv_sysfs_global_stat_reset);
+
+static struct attribute *sgv_default_attrs[] = {
+	&sgv_global_stat_attr.attr,
+	NULL,
+};
+
+static void scst_sysfs_release(struct kobject *kobj)
+{
+	kfree(kobj);
+}
+
+static struct kobj_type sgv_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_release,
+	.default_attrs = sgv_default_attrs,
+};
+
+/**
+ ** SCST sysfs root directory implementation
+ **/
+
+static ssize_t scst_threads_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int count;
+
+	count = sprintf(buf, "%d\n%s", scst_main_cmd_threads.nr_threads,
+		(scst_main_cmd_threads.nr_threads != scst_threads) ?
+			SCST_SYSFS_KEY_MARK "\n" : "");
+	return count;
+}
+
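+/*
+ * Adjusts the number of threads in the main command threads pool to newtn
+ * by adding or deleting the difference under scst_mutex.
+ */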
+static int scst_process_threads_store(int newtn)
+{
+	int res = 0;
+	long oldtn, delta;
+
+	TRACE_DBG("newtn %d", newtn);
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	oldtn = scst_main_cmd_threads.nr_threads;
+
+	delta = newtn - oldtn;
+	if (delta < 0)
+		scst_del_threads(&scst_main_cmd_threads, -delta);
+	else {
+		res = scst_add_threads(&scst_main_cmd_threads, NULL, NULL, delta);
+		if (res != 0)
+			goto out_up;
+	}
+
+	PRINT_INFO("Changed cmd threads num: old %ld, new %d", oldtn, newtn);
+
+out_up:
+	mutex_unlock(&scst_mutex);
+
+out:
+	return res;
+}
+
+static int scst_threads_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_process_threads_store(work->new_threads_num);
+}
+
+static ssize_t scst_threads_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	long newtn;
+	struct scst_sysfs_work_item *work;
+
+	res = strict_strtol(buf, 0, &newtn);
+	if (res != 0) {
+		PRINT_ERROR("strict_strtol() for %s failed: %d ", buf, res);
+		goto out;
+	}
+	if (newtn <= 0) {
+		PRINT_ERROR("Illegal threads num value %ld", newtn);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_alloc_sysfs_work(scst_threads_store_work_fn, false, &work);
+	if (res != 0)
+		goto out;
+
+	work->new_threads_num = newtn;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+static ssize_t scst_setup_id_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int count;
+
+	count = sprintf(buf, "0x%x\n%s\n", scst_setup_id,
+		(scst_setup_id == 0) ? "" : SCST_SYSFS_KEY_MARK);
+	return count;
+}
+
+static ssize_t scst_setup_id_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	unsigned long val;
+
+	res = strict_strtoul(buf, 0, &val);
+	if (res != 0) {
+		PRINT_ERROR("strict_strtoul() for %s failed: %d ", buf, res);
+		goto out;
+	}
+
+	scst_setup_id = val;
+	PRINT_INFO("Changed scst_setup_id to %x", scst_setup_id);
+
+	res = count;
+
+out:
+	return res;
+}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static void scst_read_trace_tlb(const struct scst_trace_log *tbl, char *buf,
+	unsigned long log_level, int *pos)
+{
+	const struct scst_trace_log *t = tbl;
+
+	if (t == NULL)
+		goto out;
+
+	while (t->token) {
+		if (log_level & t->val) {
+			*pos += sprintf(&buf[*pos], "%s%s",
+					(*pos == 0) ? "" : " | ",
+					t->token);
+		}
+		t++;
+	}
+out:
+	return;
+}
+
+static ssize_t scst_trace_level_show(const struct scst_trace_log *local_tbl,
+	unsigned long log_level, char *buf, const char *help)
+{
+	int pos = 0;
+
+	scst_read_trace_tlb(scst_trace_tbl, buf, log_level, &pos);
+	scst_read_trace_tlb(local_tbl, buf, log_level, &pos);
+
+	pos += sprintf(&buf[pos], "\n\n\nUsage:\n"
+		"	echo \"all|none|default\" >trace_level\n"
+		"	echo \"value DEC|0xHEX|0OCT\" >trace_level\n"
+		"	echo \"add|del TOKEN\" >trace_level\n"
+		"\nwhere TOKEN is one of [debug, function, line, pid,\n"
+		"		       buff, mem, sg, out_of_mem,\n"
+		"		       special, scsi, mgmt, minor,\n"
+		"		       mgmt_dbg, scsi_serializing,\n"
+		"		       retry, recv_bot, send_bot, recv_top, pr,\n"
+		"		       send_top%s]", help != NULL ? help : "");
+
+	return pos;
+}
+
+static ssize_t scst_main_trace_level_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	return scst_trace_level_show(scst_local_trace_tbl, trace_flag,
+			buf, NULL);
+}
+
+static int scst_write_trace(const char *buf, size_t length,
+	unsigned long *log_level, unsigned long default_level,
+	const char *name, const struct scst_trace_log *tbl)
+{
+	int res = length;
+	int action;
+	unsigned long level = 0, oldlevel;
+	char *buffer, *p, *e;
+	const struct scst_trace_log *t;
+
+#define SCST_TRACE_ACTION_ALL		1
+#define SCST_TRACE_ACTION_NONE		2
+#define SCST_TRACE_ACTION_DEFAULT	3
+#define SCST_TRACE_ACTION_ADD		4
+#define SCST_TRACE_ACTION_DEL		5
+#define SCST_TRACE_ACTION_VALUE		6
+
+	if ((buf == NULL) || (length == 0)) {
+		res = -EINVAL;
+		goto out;
+	}
+
+	buffer = kmalloc(length+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		PRINT_ERROR("Unable to alloc intermediate buffer (size %zd)",
+			length+1);
+		res = -ENOMEM;
+		goto out;
+	}
+	memcpy(buffer, buf, length);
+	buffer[length] = '\0';
+
+	TRACE_DBG("buffer %s", buffer);
+
+	p = buffer;
+	if (!strncasecmp("all", p, 3)) {
+		action = SCST_TRACE_ACTION_ALL;
+	} else if (!strncasecmp("none", p, 4) || !strncasecmp("null", p, 4)) {
+		action = SCST_TRACE_ACTION_NONE;
+	} else if (!strncasecmp("default", p, 7)) {
+		action = SCST_TRACE_ACTION_DEFAULT;
+	} else if (!strncasecmp("add", p, 3)) {
+		p += 3;
+		action = SCST_TRACE_ACTION_ADD;
+	} else if (!strncasecmp("del", p, 3)) {
+		p += 3;
+		action = SCST_TRACE_ACTION_DEL;
+	} else if (!strncasecmp("value", p, 5)) {
+		p += 5;
+		action = SCST_TRACE_ACTION_VALUE;
+	} else {
+		if (p[strlen(p) - 1] == '\n')
+			p[strlen(p) - 1] = '\0';
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	switch (action) {
+	case SCST_TRACE_ACTION_ADD:
+	case SCST_TRACE_ACTION_DEL:
+	case SCST_TRACE_ACTION_VALUE:
+		if (!isspace(*p)) {
+			PRINT_ERROR("%s", "Syntax error");
+			res = -EINVAL;
+			goto out_free;
+		}
+	}
+
+	switch (action) {
+	case SCST_TRACE_ACTION_ALL:
+		level = TRACE_ALL;
+		break;
+	case SCST_TRACE_ACTION_DEFAULT:
+		level = default_level;
+		break;
+	case SCST_TRACE_ACTION_NONE:
+		level = TRACE_NULL;
+		break;
+	case SCST_TRACE_ACTION_ADD:
+	case SCST_TRACE_ACTION_DEL:
+		while (isspace(*p) && *p != '\0')
+			p++;
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = 0;
+		if (tbl) {
+			t = tbl;
+			while (t->token) {
+				if (!strcasecmp(p, t->token)) {
+					level = t->val;
+					break;
+				}
+				t++;
+			}
+		}
+		if (level == 0) {
+			t = scst_trace_tbl;
+			while (t->token) {
+				if (!strcasecmp(p, t->token)) {
+					level = t->val;
+					break;
+				}
+				t++;
+			}
+		}
+		if (level == 0) {
+			PRINT_ERROR("Unknown token \"%s\"", p);
+			res = -EINVAL;
+			goto out_free;
+		}
+		break;
+	case SCST_TRACE_ACTION_VALUE:
+		while (isspace(*p) && *p != '\0')
+			p++;
+		/* don't overwrite res (== count to return) on success */
+		if (strict_strtoul(p, 0, &level) != 0) {
+			PRINT_ERROR("Invalid trace value \"%s\"", p);
+			res = -EINVAL;
+			goto out_free;
+		}
+		break;
+	}
+
+	oldlevel = *log_level;
+
+	switch (action) {
+	case SCST_TRACE_ACTION_ADD:
+		*log_level |= level;
+		break;
+	case SCST_TRACE_ACTION_DEL:
+		*log_level &= ~level;
+		break;
+	default:
+		*log_level = level;
+		break;
+	}
+
+	PRINT_INFO("Changed trace level for \"%s\": old 0x%08lx, new 0x%08lx",
+		name, oldlevel, *log_level);
+
+out_free:
+	kfree(buffer);
+out:
+	return res;
+
+#undef SCST_TRACE_ACTION_ALL
+#undef SCST_TRACE_ACTION_NONE
+#undef SCST_TRACE_ACTION_DEFAULT
+#undef SCST_TRACE_ACTION_ADD
+#undef SCST_TRACE_ACTION_DEL
+#undef SCST_TRACE_ACTION_VALUE
+}
+
+static ssize_t scst_main_trace_level_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+
+	if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	res = scst_write_trace(buf, count, &trace_flag,
+		SCST_DEFAULT_LOG_FLAGS, "scst", scst_local_trace_tbl);
+
+	mutex_unlock(&scst_log_mutex);
+
+out:
+	return res;
+}
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_version_show(struct kobject *kobj,
+				 struct kobj_attribute *attr,
+				 char *buf)
+{
+
+	sprintf(buf, "%s\n", SCST_VERSION_STRING);
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+	strcat(buf, "STRICT_SERIALIZING\n");
+#endif
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	strcat(buf, "EXTRACHECKS\n");
+#endif
+
+#ifdef CONFIG_SCST_TRACING
+	strcat(buf, "TRACING\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+	strcat(buf, "DEBUG\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_TM
+	strcat(buf, "DEBUG_TM\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+	strcat(buf, "DEBUG_RETRY\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_OOM
+	strcat(buf, "DEBUG_OOM\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_SN
+	strcat(buf, "DEBUG_SN\n");
+#endif
+
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+	strcat(buf, "USE_EXPECTED_VALUES\n");
+#endif
+
+#ifdef CONFIG_SCST_TEST_IO_IN_SIRQ
+	strcat(buf, "TEST_IO_IN_SIRQ\n");
+#endif
+
+#ifdef CONFIG_SCST_STRICT_SECURITY
+	strcat(buf, "STRICT_SECURITY\n");
+#endif
+	return strlen(buf);
+}
+
+static ssize_t scst_last_sysfs_mgmt_res_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int res;
+
+	spin_lock(&sysfs_work_lock);
+	TRACE_DBG("active_sysfs_works %d", active_sysfs_works);
+	if (active_sysfs_works > 0)
+		res = -EAGAIN;
+	else
+		res = sprintf(buf, "%d\n", last_sysfs_work_res);
+	spin_unlock(&sysfs_work_lock);
+	return res;
+}
+
+static struct kobj_attribute scst_threads_attr =
+	__ATTR(threads, S_IRUGO | S_IWUSR, scst_threads_show,
+	       scst_threads_store);
+
+static struct kobj_attribute scst_setup_id_attr =
+	__ATTR(setup_id, S_IRUGO | S_IWUSR, scst_setup_id_show,
+	       scst_setup_id_store);
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+static struct kobj_attribute scst_trace_level_attr =
+	__ATTR(trace_level, S_IRUGO | S_IWUSR, scst_main_trace_level_show,
+	       scst_main_trace_level_store);
+#endif
+
+static struct kobj_attribute scst_version_attr =
+	__ATTR(version, S_IRUGO, scst_version_show, NULL);
+
+static struct kobj_attribute scst_last_sysfs_mgmt_res_attr =
+	__ATTR(last_sysfs_mgmt_res, S_IRUGO,
+		scst_last_sysfs_mgmt_res_show, NULL);
+
+static struct attribute *scst_sysfs_root_default_attrs[] = {
+	&scst_threads_attr.attr,
+	&scst_setup_id_attr.attr,
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	&scst_trace_level_attr.attr,
+#endif
+	&scst_version_attr.attr,
+	&scst_last_sysfs_mgmt_res_attr.attr,
+	NULL,
+};
+
+static void scst_sysfs_root_release(struct kobject *kobj)
+{
+	complete_all(&scst_sysfs_root_release_completion);
+}
+
+static struct kobj_type scst_sysfs_root_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_root_release,
+	.default_attrs = scst_sysfs_root_default_attrs,
+};
+
+/**
+ ** Dev handlers
+ **/
+
+static void scst_devt_release(struct kobject *kobj)
+{
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+	complete_all(&devt->devt_kobj_release_compl);
+	return;
+}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_devt_trace_level_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	return scst_trace_level_show(devt->trace_tbl,
+		devt->trace_flags ? *devt->trace_flags : 0, buf,
+		devt->trace_tbl_help);
+}
+
+static ssize_t scst_devt_trace_level_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	res = scst_write_trace(buf, count, devt->trace_flags,
+		devt->default_trace_flags, devt->name, devt->trace_tbl);
+
+	mutex_unlock(&scst_log_mutex);
+
+out:
+	return res;
+}
+
+static struct kobj_attribute devt_trace_attr =
+	__ATTR(trace_level, S_IRUGO | S_IWUSR,
+	       scst_devt_trace_level_show, scst_devt_trace_level_store);
+
+#endif /* #if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_devt_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	pos = sprintf(buf, "%d - %s\n", devt->type,
+		(unsigned)devt->type >= ARRAY_SIZE(scst_dev_handler_types) ?
+			"unknown" : scst_dev_handler_types[devt->type]);
+
+	return pos;
+}
+
+static struct kobj_attribute scst_devt_type_attr =
+	__ATTR(type, S_IRUGO, scst_devt_type_show, NULL);
+
+static struct attribute *scst_devt_default_attrs[] = {
+	&scst_devt_type_attr.attr,
+	NULL,
+};
+
+static struct kobj_type scst_devt_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_devt_release,
+	.default_attrs = scst_devt_default_attrs,
+};
+
+static ssize_t scst_devt_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	char *help = "Usage: echo \"add_device device_name [parameters]\" "
+				">mgmt\n"
+		     "       echo \"del_device device_name\" >mgmt\n"
+		     "%s%s"
+		     "%s"
+		     "\n"
+		     "where parameters are one or more "
+		     "param_name=value pairs separated by ';'\n\n"
+		     "%s%s%s%s%s%s%s%s\n";
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, help,
+		(devt->devt_optional_attributes != NULL) ?
+			"       echo \"add_attribute <attribute> <value>\" >mgmt\n"
+			"       echo \"del_attribute <attribute> <value>\" >mgmt\n" : "",
+		(devt->dev_optional_attributes != NULL) ?
+			"       echo \"add_device_attribute device_name <attribute> <value>\" >mgmt\n"
+			"       echo \"del_device_attribute device_name <attribute> <value>\" >mgmt\n" : "",
+		(devt->mgmt_cmd_help) ? devt->mgmt_cmd_help : "",
+		(devt->add_device_parameters != NULL) ?
+			"The following parameters are available: " : "",
+		(devt->add_device_parameters != NULL) ?
+			devt->add_device_parameters : "",
+		(devt->devt_optional_attributes != NULL) ?
+			"The following dev handler attributes are available: " : "",
+		(devt->devt_optional_attributes != NULL) ?
+			devt->devt_optional_attributes : "",
+		(devt->devt_optional_attributes != NULL) ? "\n" : "",
+		(devt->dev_optional_attributes != NULL) ?
+			"The following device attributes are available: " : "",
+		(devt->dev_optional_attributes != NULL) ?
+			devt->dev_optional_attributes : "",
+		(devt->dev_optional_attributes != NULL) ? "\n" : "");
+}
+
+static int scst_process_devt_mgmt_store(char *buffer,
+	struct scst_dev_type *devt)
+{
+	int res = 0;
+	char *p, *pp, *dev_name;
+
+	/* Check if our pointer is still alive and, if yes, grab it */
+	if (scst_check_grab_devt_ptr(devt, &scst_virtual_dev_type_list) != 0)
+		goto out;
+
+	TRACE_DBG("devt %p, buffer %s", devt, buffer);
+
+	pp = buffer;
+	if (pp[strlen(pp) - 1] == '\n')
+		pp[strlen(pp) - 1] = '\0';
+
+	p = scst_get_next_lexem(&pp);
+
+	if (strcasecmp("add_device", p) == 0) {
+		dev_name = scst_get_next_lexem(&pp);
+		if (*dev_name == '\0') {
+			PRINT_ERROR("%s", "Device name required");
+			res = -EINVAL;
+			goto out_ungrab;
+		}
+		res = devt->add_device(dev_name, pp);
+	} else if (strcasecmp("del_device", p) == 0) {
+		dev_name = scst_get_next_lexem(&pp);
+		if (*dev_name == '\0') {
+			PRINT_ERROR("%s", "Device name required");
+			res = -EINVAL;
+			goto out_ungrab;
+		}
+
+		p = scst_get_next_lexem(&pp);
+		if (*p != '\0')
+			goto out_syntax_err;
+
+		res = devt->del_device(dev_name);
+	} else if (devt->mgmt_cmd != NULL) {
+		scst_restore_token_str(p, pp);
+		res = devt->mgmt_cmd(buffer);
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_ungrab;
+	}
+
+out_ungrab:
+	scst_ungrab_devt_ptr(devt);
+
+out:
+	return res;
+
+out_syntax_err:
+	PRINT_ERROR("Syntax error on \"%s\"", p);
+	res = -EINVAL;
+	goto out_ungrab;
+}
+
+static int scst_devt_mgmt_store_work_fn(struct scst_sysfs_work_item *work)
+{
+	return scst_process_devt_mgmt_store(work->buf, work->devt);
+}
+
+static ssize_t __scst_devt_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count,
+	int (*sysfs_work_fn)(struct scst_sysfs_work_item *work))
+{
+	int res;
+	char *buffer;
+	struct scst_dev_type *devt;
+	struct scst_sysfs_work_item *work;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	res = scst_alloc_sysfs_work(sysfs_work_fn, false, &work);
+	if (res != 0)
+		goto out_free;
+
+	work->buf = buffer;
+	work->devt = devt;
+
+	res = scst_sysfs_queue_wait_work(work);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+
+out_free:
+	kfree(buffer);
+	goto out;
+}
+
+static ssize_t scst_devt_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	return __scst_devt_mgmt_store(kobj, attr, buf, count,
+		scst_devt_mgmt_store_work_fn);
+}
+
+static struct kobj_attribute scst_devt_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_devt_mgmt_show,
+	       scst_devt_mgmt_store);
+
+static ssize_t scst_devt_pass_through_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	char *help = "Usage: echo \"add_device H:C:I:L\" >mgmt\n"
+		     "       echo \"del_device H:C:I:L\" >mgmt\n";
+	return sprintf(buf, "%s", help);
+}
+
+static int scst_process_devt_pass_through_mgmt_store(char *buffer,
+	struct scst_dev_type *devt)
+{
+	int res = 0;
+	char *p, *pp, *action;
+	unsigned long host, channel, id, lun;
+	struct scst_device *d, *dev = NULL;
+
+	TRACE_DBG("devt %p, buffer %s", devt, buffer);
+
+	pp = buffer;
+	if (pp[strlen(pp) - 1] == '\n')
+		pp[strlen(pp) - 1] = '\0';
+
+	action = scst_get_next_lexem(&pp);
+	p = scst_get_next_lexem(&pp);
+	if (*p == '\0') {
+		PRINT_ERROR("%s", "Device required");
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (*scst_get_next_lexem(&pp) != '\0') {
+		PRINT_ERROR("%s", "Too many parameters");
+		res = -EINVAL;
+		goto out_syntax_err;
+	}
+
+	host = simple_strtoul(p, &p, 0);
+	if ((host == ULONG_MAX) || (*p != ':'))
+		goto out_syntax_err;
+	p++;
+	channel = simple_strtoul(p, &p, 0);
+	if ((channel == ULONG_MAX) || (*p != ':'))
+		goto out_syntax_err;
+	p++;
+	id = simple_strtoul(p, &p, 0);
+	if ((id == ULONG_MAX) || (*p != ':'))
+		goto out_syntax_err;
+	p++;
+	lun = simple_strtoul(p, &p, 0);
+	if (lun == ULONG_MAX)
+		goto out_syntax_err;
+
+	TRACE_DBG("Dev %ld:%ld:%ld:%ld", host, channel, id, lun);
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	/* Check that devt hasn't been freed while we were getting here */
+	if (scst_check_devt_ptr(devt, &scst_dev_type_list) != 0)
+		goto out_unlock;
+
+	list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+		if ((d->virt_id == 0) &&
+		    d->scsi_dev->host->host_no == host &&
+		    d->scsi_dev->channel == channel &&
+		    d->scsi_dev->id == id &&
+		    d->scsi_dev->lun == lun) {
+			dev = d;
+			TRACE_DBG("Dev %p (%ld:%ld:%ld:%ld) found",
+				  dev, host, channel, id, lun);
+			break;
+		}
+	}
+	if (dev == NULL) {
+		PRINT_ERROR("Device %ld:%ld:%ld:%ld not found",
+			       host, channel, id, lun);
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (dev->scsi_dev->type != devt->type) {
+		PRINT_ERROR("Type %d of device %s differs from type "
+			"%d of dev handler %s", dev->type,
+			dev->virt_name, devt->type, devt->name);
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (strcasecmp("add_device", action) == 0) {
+		res = scst_assign_dev_handler(dev, devt);
+		if (res == 0)
+			PRINT_INFO("Device %s assigned to dev handler %s",
+				dev->virt_name, devt->name);
+	} else if (strcasecmp("del_device", action) == 0) {
+		if (dev->handler != devt) {
+			PRINT_ERROR("Device %s is not assigned to handler %s",
+				dev->virt_name, devt->name);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		res = scst_assign_dev_handler(dev, &scst_null_devtype);
+		if (res == 0)
+			PRINT_INFO("Device %s unassigned from dev handler %s",
+				dev->virt_name, devt->name);
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", action);
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out:
+	return res;
+
+out_syntax_err:
+	PRINT_ERROR("Syntax error on \"%s\"", p);
+	res = -EINVAL;
+	goto out;
+}
+
+static int scst_devt_pass_through_mgmt_store_work_fn(
+	struct scst_sysfs_work_item *work)
+{
+	return scst_process_devt_pass_through_mgmt_store(work->buf, work->devt);
+}
+
+static ssize_t scst_devt_pass_through_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	return __scst_devt_mgmt_store(kobj, attr, buf, count,
+		scst_devt_pass_through_mgmt_store_work_fn);
+}
+
+static struct kobj_attribute scst_devt_pass_through_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_devt_pass_through_mgmt_show,
+	       scst_devt_pass_through_mgmt_store);
+
+int scst_devt_sysfs_create(struct scst_dev_type *devt)
+{
+	int res;
+	struct kobject *parent;
+	const struct attribute **pattr;
+
+	init_completion(&devt->devt_kobj_release_compl);
+
+	if (devt->parent != NULL)
+		parent = &devt->parent->devt_kobj;
+	else
+		parent = scst_handlers_kobj;
+
+	res = kobject_init_and_add(&devt->devt_kobj, &scst_devt_ktype,
+			parent, devt->name);
+	if (res != 0) {
+		PRINT_ERROR("Can't add devt %s to sysfs", devt->name);
+		goto out;
+	}
+
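+	/*
+	 * Virtual dev handlers implement add_device()/del_device() and get the
+	 * regular mgmt interface; pass-through handlers instead get the
+	 * "add_device H:C:I:L" variant, which (un)assigns the handler to an
+	 * already existing SCSI device.
+	 */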
+	if (devt->add_device != NULL) {
+		res = sysfs_create_file(&devt->devt_kobj,
+				&scst_devt_mgmt.attr);
+	} else {
+		res = sysfs_create_file(&devt->devt_kobj,
+				&scst_devt_pass_through_mgmt.attr);
+	}
+	if (res != 0) {
+		PRINT_ERROR("Can't add mgmt attr for dev handler %s",
+			devt->name);
+		goto out_err;
+	}
+
+	pattr = devt->devt_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			res = sysfs_create_file(&devt->devt_kobj, *pattr);
+			if (res != 0) {
+				PRINT_ERROR("Can't add devt attr %s for dev "
+					"handler %s", (*pattr)->name,
+					devt->name);
+				goto out_err;
+			}
+			pattr++;
+		}
+	}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	if (devt->trace_flags != NULL) {
+		res = sysfs_create_file(&devt->devt_kobj,
+				&devt_trace_attr.attr);
+		if (res != 0) {
+			PRINT_ERROR("Can't add devt trace_flag for dev "
+				"handler %s", devt->name);
+			goto out_err;
+		}
+	}
+#endif
+
+out:
+	return res;
+
+out_err:
+	scst_devt_sysfs_del(devt);
+	goto out;
+}
+
+void scst_devt_sysfs_del(struct scst_dev_type *devt)
+{
+	int rc;
+
+	kobject_del(&devt->devt_kobj);
+	kobject_put(&devt->devt_kobj);
+
+	rc = wait_for_completion_timeout(&devt->devt_kobj_release_compl, HZ);
+	if (rc == 0) {
+		PRINT_INFO("Waiting for release of sysfs entry "
+			"for dev handler template %s (%d refs)...", devt->name,
+			atomic_read(&devt->devt_kobj.kref.refcount));
+		wait_for_completion(&devt->devt_kobj_release_compl);
+		PRINT_INFO("Done waiting for release of sysfs entry "
+			"for dev handler template %s", devt->name);
+	}
+	return;
+}
+
+/**
+ ** Sysfs user info
+ **/
+
+static DEFINE_MUTEX(scst_sysfs_user_info_mutex);
+
+/* All protected by scst_sysfs_user_info_mutex */
+static LIST_HEAD(scst_sysfs_user_info_list);
+static uint32_t scst_sysfs_info_cur_cookie;
+
+/* scst_sysfs_user_info_mutex supposed to be held */
+static struct scst_sysfs_user_info *scst_sysfs_user_find_info(uint32_t cookie)
+{
+	struct scst_sysfs_user_info *info, *res = NULL;
+
+	list_for_each_entry(info, &scst_sysfs_user_info_list,
+			info_list_entry) {
+		if (info->info_cookie == cookie) {
+			res = info;
+			break;
+		}
+	}
+	return res;
+}
+
+/**
+ * scst_sysfs_user_get_info() - get user_info
+ *
+ * Finds the user_info entry by its cookie and marks it as having received
+ * the reply by setting its info_being_executed flag.
+ *
+ * Returns found entry or NULL.
+ */
+struct scst_sysfs_user_info *scst_sysfs_user_get_info(uint32_t cookie)
+{
+	struct scst_sysfs_user_info *res = NULL;
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	res = scst_sysfs_user_find_info(cookie);
+	if (res != NULL) {
+		if (!res->info_being_executed)
+			res->info_being_executed = 1;
+	}
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_get_info);
+
+/**
+ ** Helper functionality for target drivers and dev handlers to send events
+ ** to user space and wait for their completion in a safe manner. See
+ ** iscsi-scst or scst_user for real usage examples; an illustrative sketch
+ ** of the calling sequence follows scst_wait_info_completion() below.
+ **/
+
+/**
+ * scst_sysfs_user_add_info() - create and add user_info in the global list
+ *
+ * Creates an info structure and adds it to the global info list.
+ * On success returns 0 and sets *out_info; returns an error code otherwise.
+ */
+int scst_sysfs_user_add_info(struct scst_sysfs_user_info **out_info)
+{
+	int res = 0;
+	struct scst_sysfs_user_info *info;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (info == NULL) {
+		PRINT_ERROR("Unable to allocate sysfs user info (size %zd)",
+			sizeof(*info));
+		res = -ENOMEM;
+		goto out;
+	}
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
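+	/* Never hand out cookie 0 or a cookie that is still in use */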
+	while ((info->info_cookie == 0) ||
+	       (scst_sysfs_user_find_info(info->info_cookie) != NULL))
+		info->info_cookie = scst_sysfs_info_cur_cookie++;
+
+	init_completion(&info->info_completion);
+
+	list_add_tail(&info->info_list_entry, &scst_sysfs_user_info_list);
+	info->info_in_list = 1;
+
+	*out_info = info;
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_add_info);
+
+/**
+ * scst_sysfs_user_del_info() - delete and free user_info
+ */
+void scst_sysfs_user_del_info(struct scst_sysfs_user_info *info)
+{
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	if (info->info_in_list)
+		list_del(&info->info_list_entry);
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+
+	kfree(info);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_del_info);
+
+/*
+ * Returns true if the reply has been received and is being processed by
+ * another part of the kernel, false otherwise. Also removes the user_info
+ * from the list, so a late reply from user space will not find it and user
+ * space can detect that it missed the timeout.
+ */
+static bool scst_sysfs_user_info_executing(struct scst_sysfs_user_info *info)
+{
+	bool res;
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	res = info->info_being_executed;
+
+	if (info->info_in_list) {
+		list_del(&info->info_list_entry);
+		info->info_in_list = 0;
+	}
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+	return res;
+}
+
+/**
+ * scst_wait_info_completion() - wait for a user space event's completion
+ *
+ * Waits at most timeout jiffies for the info request to be completed by user
+ * space. If the reply was received before the timeout and is being processed
+ * by another part of the kernel, i.e. scst_sysfs_user_info_executing()
+ * returned true, waits indefinitely for it to complete.
+ *
+ * Returns status of the request completion.
+ */
+int scst_wait_info_completion(struct scst_sysfs_user_info *info,
+	unsigned long timeout)
+{
+	int res, rc;
+
+	TRACE_DBG("Waiting for info %p completion", info);
+
+	while (1) {
+		rc = wait_for_completion_interruptible_timeout(
+			&info->info_completion, timeout);
+		if (rc > 0) {
+			TRACE_DBG("Waiting for info %p finished with %d",
+				info, rc);
+			break;
+		} else if (rc == 0) {
+			if (!scst_sysfs_user_info_executing(info)) {
+				PRINT_ERROR("Timeout waiting for user "
+					"space event %p", info);
+				res = -EBUSY;
+				goto out;
+			} else {
+				/* Req is being executed in the kernel */
+				TRACE_DBG("Keep waiting for info %p completion",
+					info);
+				wait_for_completion(&info->info_completion);
+				break;
+			}
+		} else if (rc != -ERESTARTSYS) {
+			res = rc;
+			PRINT_ERROR("wait_for_completion_interruptible_timeout() "
+				"failed: %d", res);
+			goto out;
+		} else {
+			TRACE_DBG("Waiting for info %p finished with %d, "
+				"retrying", info, rc);
+		}
+	}
+
+	TRACE_DBG("info %p, status %d", info, info->info_status);
+	res = info->info_status;
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_wait_info_completion);
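+
+/*
+ * Illustrative sketch only (not part of this patch's interface changes):
+ * roughly how a target driver might use the helpers above. The function
+ * my_send_event_to_user_space() is a hypothetical, driver-specific transport
+ * (e.g. netlink or a char device) that delivers the event together with
+ * info->info_cookie. The user space reply path is expected to hand the cookie
+ * back to the driver, which then looks the entry up with
+ * scst_sysfs_user_get_info(cookie), fills info->info_status and calls
+ * complete(&info->info_completion).
+ *
+ *	static int my_mgmt_event(void *payload)
+ *	{
+ *		int res;
+ *		struct scst_sysfs_user_info *info;
+ *
+ *		res = scst_sysfs_user_add_info(&info);
+ *		if (res != 0)
+ *			goto out;
+ *
+ *		res = my_send_event_to_user_space(payload, info->info_cookie);
+ *		if (res != 0)
+ *			goto out_del;
+ *
+ *		res = scst_wait_info_completion(info, 5 * HZ);
+ *
+ *	out_del:
+ *		scst_sysfs_user_del_info(info);
+ *	out:
+ *		return res;
+ *	}
+ */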
+
+int __init scst_sysfs_init(void)
+{
+	int res = 0;
+
+	sysfs_work_thread = kthread_run(sysfs_work_thread_fn,
+		NULL, "scst_uid");
+	if (IS_ERR(sysfs_work_thread)) {
+		res = PTR_ERR(sysfs_work_thread);
+		PRINT_ERROR("kthread_run() for user interface thread "
+			"failed: %d", res);
+		sysfs_work_thread = NULL;
+		goto out;
+	}
+
+	res = kobject_init_and_add(&scst_sysfs_root_kobj,
+			&scst_sysfs_root_ktype, kernel_kobj, "%s", "scst_tgt");
+	if (res != 0)
+		goto sysfs_root_add_error;
+
+	scst_targets_kobj = kobject_create_and_add("targets",
+				&scst_sysfs_root_kobj);
+	if (scst_targets_kobj == NULL)
+		goto targets_kobj_error;
+
+	scst_devices_kobj = kobject_create_and_add("devices",
+				&scst_sysfs_root_kobj);
+	if (scst_devices_kobj == NULL)
+		goto devices_kobj_error;
+
+	scst_sgv_kobj = kzalloc(sizeof(*scst_sgv_kobj), GFP_KERNEL);
+	if (scst_sgv_kobj == NULL)
+		goto sgv_kobj_error;
+
+	res = kobject_init_and_add(scst_sgv_kobj, &sgv_ktype,
+			&scst_sysfs_root_kobj, "%s", "sgv");
+	if (res != 0)
+		goto sgv_kobj_add_error;
+
+	scst_handlers_kobj = kobject_create_and_add("handlers",
+					&scst_sysfs_root_kobj);
+	if (scst_handlers_kobj == NULL)
+		goto handlers_kobj_error;
+
+out:
+	return res;
+
+handlers_kobj_error:
+	kobject_del(scst_sgv_kobj);
+
+sgv_kobj_add_error:
+	kobject_put(scst_sgv_kobj);
+
+sgv_kobj_error:
+	kobject_del(scst_devices_kobj);
+	kobject_put(scst_devices_kobj);
+
+devices_kobj_error:
+	kobject_del(scst_targets_kobj);
+	kobject_put(scst_targets_kobj);
+
+targets_kobj_error:
+	kobject_del(&scst_sysfs_root_kobj);
+
+sysfs_root_add_error:
+	kobject_put(&scst_sysfs_root_kobj);
+
+	kthread_stop(sysfs_work_thread);
+
+	if (res == 0)
+		res = -EINVAL;
+
+	goto out;
+}
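+
+/*
+ * For orientation only (derived from the code above): after a successful
+ * scst_sysfs_init() the hierarchy looks roughly like
+ *
+ *	/sys/kernel/scst_tgt/
+ *	|-- threads
+ *	|-- setup_id
+ *	|-- version
+ *	|-- last_sysfs_mgmt_res
+ *	|-- trace_level		(CONFIG_SCST_DEBUG/CONFIG_SCST_TRACING only)
+ *	|-- targets/
+ *	|-- devices/
+ *	|-- sgv/
+ *	|	`-- global_stats
+ *	`-- handlers/
+ *
+ * with per-target, per-device and per-dev-handler entries added later by the
+ * corresponding *_sysfs_create() functions.
+ */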
+
+void scst_sysfs_cleanup(void)
+{
+
+	PRINT_INFO("%s", "Exiting SCST sysfs hierarchy...");
+
+	kobject_del(scst_sgv_kobj);
+	kobject_put(scst_sgv_kobj);
+
+	kobject_del(scst_devices_kobj);
+	kobject_put(scst_devices_kobj);
+
+	kobject_del(scst_targets_kobj);
+	kobject_put(scst_targets_kobj);
+
+	kobject_del(scst_handlers_kobj);
+	kobject_put(scst_handlers_kobj);
+
+	kobject_del(&scst_sysfs_root_kobj);
+	kobject_put(&scst_sysfs_root_kobj);
+
+	wait_for_completion(&scst_sysfs_root_release_completion);
+	/*
+	 * There is a race: if the release() callback gets rescheduled just
+	 * after it calls complete() and we then return and unload the scst
+	 * module immediately, the still-running release() will oops. So give
+	 * it a chance to finish gracefully. Unfortunately, the current kobject
+	 * implementation doesn't offer a better way to handle this.
+	 */
+	msleep(3000);
+
+	if (sysfs_work_thread)
+		kthread_stop(sysfs_work_thread);
+
+	PRINT_INFO("%s", "Exiting SCST sysfs hierarchy done");
+	return;
+}



