Message-Id: <1466398515-1005-5-git-send-email-byungchul.park@lge.com>
Date: Mon, 20 Jun 2016 13:55:14 +0900
From: Byungchul Park <byungchul.park@....com>
To: peterz@...radead.org, mingo@...nel.org
Cc: linux-kernel@...r.kernel.org, npiggin@...e.de,
sergey.senozhatsky@...il.com, gregkh@...uxfoundation.org,
minchan@...nel.org
Subject: [PATCH 4/5] fs/buffer.c: Remove trailing white space

Trailing white space is not accepted in the kernel coding style. Remove
it.
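
Such whitespace is flagged by the tree's own style checker; as a
reference (assuming a kernel source checkout), something like:

    ./scripts/checkpatch.pl --file fs/buffer.c

reports each occurrence as "ERROR: trailing whitespace", and
"git diff --check" highlights whitespace errors in a pending diff.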
Signed-off-by: Byungchul Park <byungchul.park@....com>
---
fs/buffer.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index e1632ab..a75ca74 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -439,7 +439,7 @@ EXPORT_SYMBOL(mark_buffer_async_write);
* try_to_free_buffers() will be operating against the *blockdev* mapping
* at the time, not against the S_ISREG file which depends on those buffers.
* So the locking for private_list is via the private_lock in the address_space
- * which backs the buffers. Which is different from the address_space
+ * which backs the buffers. Which is different from the address_space
* against which the buffers are listed. So for a particular address_space,
* mapping->private_lock does *not* protect mapping->private_list! In fact,
* mapping->private_list will always be protected by the backing blockdev's
@@ -713,7 +713,7 @@ EXPORT_SYMBOL(__set_page_dirty_buffers);
* Do this in two main stages: first we copy dirty buffers to a
* temporary inode list, queueing the writes as we go. Then we clean
* up, waiting for those writes to complete.
- *
+ *
* During this second stage, any subsequent updates to the file may end
* up refiling the buffer on the original inode's dirty list again, so
* there is a chance we will end up with a buffer queued for write but
@@ -791,7 +791,7 @@ static int fsync_buffers_list(spinlock_t *lock, struct list_head *list)
brelse(bh);
spin_lock(lock);
}
-
+
spin_unlock(lock);
err2 = osync_buffers_list(lock, list);
if (err)
@@ -901,7 +901,7 @@ no_grow:
/*
* Return failure for non-async IO requests. Async IO requests
* are not allowed to fail, so we have to wait until buffer heads
- * become available. But we don't want tasks sleeping with
+ * become available. But we don't want tasks sleeping with
* partially complete buffers, so all were released above.
*/
if (!retry)
@@ -910,7 +910,7 @@ no_grow:
/* We're _really_ low on memory. Now we just
* wait for old buffer heads to become free due to
* finishing IO. Since this is an async request and
- * the reserve list is empty, we're sure there are
+ * the reserve list is empty, we're sure there are
* async buffer heads in use.
*/
free_more_memory();
@@ -946,7 +946,7 @@ static sector_t blkdev_max_block(struct block_device *bdev, unsigned int size)

/*
* Initialise the state of a blockdev page's buffers.
- */
+ */
static sector_t
init_page_buffers(struct page *page, struct block_device *bdev,
sector_t block, int size)
@@ -1448,7 +1448,7 @@ static bool has_bh_in_lru(int cpu, void *dummy)
{
struct bh_lru *b = per_cpu_ptr(&bh_lrus, cpu);
int i;
-
+
for (i = 0; i < BH_LRU_SIZE; i++) {
if (b->bhs[i])
return 1;
@@ -1952,7 +1952,7 @@ int __block_write_begin(struct page *page, loff_t pos, unsigned len,
if (PageUptodate(page)) {
if (!buffer_uptodate(bh))
set_buffer_uptodate(bh);
- continue;
+ continue;
}
if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
!buffer_unwritten(bh) &&
@@ -2258,7 +2258,7 @@ EXPORT_SYMBOL(block_read_full_page);

/* utility function for filesystems that need to do work on expanding
* truncates. Uses filesystem pagecache writes to allow the filesystem to
- * deal with the hole.
+ * deal with the hole.
*/
int generic_cont_expand_simple(struct inode *inode, loff_t size)
{
@@ -2819,7 +2819,7 @@ int block_truncate_page(struct address_space *mapping,

length = blocksize - length;
iblock = (sector_t)index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
-
+
page = grab_cache_page(mapping, index);
err = -ENOMEM;
if (!page)
@@ -3069,7 +3069,7 @@ EXPORT_SYMBOL(submit_bh);
*
* ll_rw_block sets b_end_io to simple completion handler that marks
* the buffer up-to-date (if appropriate), unlocks the buffer and wakes
- * any waiters.
+ * any waiters.
*
* All of the buffers must be for the same device, and must also be a
* multiple of the current approved size for the device.
--
1.9.1