I have set up a single-node cluster (initially) and am trying to write a file from a client outside the cluster. The write call returns, but the close call hangs for a very long time before eventually returning, and the resulting file in HDFS is 0 bytes long. The namenode log says:
2016-10-03 22:01:41,367 INFO BlockStateChange: chooseUnderReplicatedBlocks selected 1 blocks at priority level 0; Total=1 Reset bookmarks? true
2016-10-03 22:01:41,367 INFO BlockStateChange: BLOCK* neededReplications = 1, pendingReplications = 0.
2016-10-03 22:01:41,367 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Blocks chosen but could not be replicated = 1; of which 1 have no target, 0 have no source, 0 are UC, 0 are abandoned, 0 already have enough replicas.
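
For context, the client is doing a plain HDFS write through the standard Java FileSystem API; a minimal sketch of the pattern is below (the hostname, port, and path are placeholders, not my real values):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteTest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Connect to the remote namenode from outside the cluster
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf);

            FSDataOutputStream out = fs.create(new Path("/tmp/test.txt"));
            out.write("hello hdfs".getBytes("UTF-8")); // this returns normally
            out.close();                               // this is the call that hangs; the file ends up 0 bytes
            fs.close();
        }
    }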
Why is the block not written to the single datanode (which runs on the same machine as the namenode)? What does it mean to "have no target"? The replication factor is 1, so I would have expected a single copy of the file to be stored on the single cluster node.
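
If it helps with the "have no target" part, I can also ask the namenode how many datanodes it actually sees; a minimal check along these lines (assuming the DistributedFileSystem API, same placeholder hostname as above):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class LiveDatanodeCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // For an hdfs:// URI this is a DistributedFileSystem instance
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf);

            // If this prints 0, "have no target" would at least be consistent:
            // the namenode has no registered datanode to pick as a replication target.
            DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
            System.out.println("Datanodes reported by the namenode: " + nodes.length);
            for (DatanodeInfo node : nodes) {
                System.out.println(node.getHostName() + " " + node.getDatanodeReport());
            }
            fs.close();
        }
    }

I can attach the output of that (or of hdfs dfsadmin -report) if it would be useful.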