Interface ClientProtocol

All Known Implementing Classes:
ClientNamenodeProtocolTranslatorPB

@Private @Evolving public interface ClientProtocol
ClientProtocol is used by user code via the DistributedFileSystem class to communicate with the NameNode. User code can manipulate the directory namespace, as well as open/close file streams, etc.
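      As a hedged illustration (the NameNode address and path below are placeholders), user code reaches this protocol only indirectly through the FileSystem API:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Minimal sketch: user code never calls ClientProtocol directly; it goes
      // through FileSystem/DistributedFileSystem, which translate each call
      // into ClientProtocol RPCs. The NameNode address is a placeholder.
      public class ClientProtocolEntryPoint {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          conf.set("fs.defaultFS", "hdfs://namenode:8020"); // hypothetical address
          FileSystem fs = FileSystem.get(conf); // a DistributedFileSystem for hdfs://
          fs.mkdirs(new Path("/tmp/example")); // issues the mkdirs RPC described below
          fs.close();
        }
      }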
  • Field Details

    • versionID

      static final long versionID
      Until version 69, this class ClientProtocol served as both the client interface to the NN AND the RPC protocol used to communicate with the NN. This class is used by both the DFSClient and the NN server side to insulate them from protocol serialization. If you are adding/changing this interface then you need to change both this class and ALSO the related protocol buffer wire protocol definition in ClientNamenodeProtocol.proto. For more details on the protocol buffer wire protocol, please see .../org/apache/hadoop/hdfs/protocolPB/overview.html. The log of historical changes can be retrieved from svn. 69: Eliminate overloaded method names. 69L is the last version id when this class was used for protocol serialization. Do not update this version any further.
    • GET_STATS_CAPACITY_IDX

      static final int GET_STATS_CAPACITY_IDX
      Constants to index the array of aggregated stats returned by getStats().
    • GET_STATS_USED_IDX

      static final int GET_STATS_USED_IDX
    • GET_STATS_REMAINING_IDX

      static final int GET_STATS_REMAINING_IDX
    • GET_STATS_UNDER_REPLICATED_IDX

      @Deprecated static final int GET_STATS_UNDER_REPLICATED_IDX
      Deprecated.
    • GET_STATS_LOW_REDUNDANCY_IDX

      static final int GET_STATS_LOW_REDUNDANCY_IDX
    • GET_STATS_CORRUPT_BLOCKS_IDX

      static final int GET_STATS_CORRUPT_BLOCKS_IDX
    • GET_STATS_MISSING_BLOCKS_IDX

      static final int GET_STATS_MISSING_BLOCKS_IDX
    • GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX

      static final int GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX
    • GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX

      static final int GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX
    • GET_STATS_PENDING_DELETION_BLOCKS_IDX

      static final int GET_STATS_PENDING_DELETION_BLOCKS_IDX
    • STATS_ARRAY_LENGTH

      static final int STATS_ARRAY_LENGTH
  • Method Details

    • getBlockLocations

      LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
      Get locations of the blocks of the specified file within the specified range.

      Returns LocatedBlocks, which contains the file length, the blocks, and their locations. DataNode locations for each block are sorted by proximity to the client's address.

      The client will then have to contact one of the indicated DataNodes to obtain the actual data.

      Parameters:
      src - file name
      offset - range start offset
      length - range length
      Returns:
      file length and array of blocks with their locations
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src does not exist
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
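      A hedged sketch of the read path from the user-code side (the file path is illustrative); opening and reading a file drives getBlockLocations() under the hood:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Sketch of the read path: open() fetches block locations from the
      // NameNode via getBlockLocations(); read() then contacts one of the
      // returned DataNodes for the actual bytes. The path is illustrative.
      public class ReadPathSketch {
        public static void main(String[] args) throws Exception {
          try (FileSystem fs = FileSystem.get(new Configuration());
               FSDataInputStream in = fs.open(new Path("/tmp/demo.txt"))) {
            byte[] buf = new byte[4096];
            int n = in.read(buf); // bytes come from a DataNode, not the NameNode
            System.out.println("read " + n + " bytes");
          }
        }
      }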
    • getServerDefaults

      org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
      Get server default values for a number of configuration params.
      Returns:
      a set of server default configuration values
      Throws:
      IOException
    • create

      HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
      Create a new file entry in the namespace.

      This will create an empty file specified by the source path. The path should be a full path originating at the root. The name-node does not have a notion of a "current" directory for a client.

      Once created, the file is visible and available for read to other clients. However, other clients cannot delete(String, boolean), re-create or rename(String, String) it until the file is completed or the lease expires.

      Blocks have a maximum size. Clients that intend to create multi-block files must also use addBlock(java.lang.String, java.lang.String, org.apache.hadoop.hdfs.protocol.ExtendedBlock, org.apache.hadoop.hdfs.protocol.DatanodeInfo[], long, java.lang.String[], java.util.EnumSet<org.apache.hadoop.hdfs.AddBlockFlag>).

      Parameters:
      src - path of the file being created.
      masked - masked permission.
      clientName - name of the current client.
      flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to; and whether the file should be a replicated file regardless of its ancestor's replication or erasure coding policy.
      createParent - create missing parent directory if true
      replication - block replication factor.
      blockSize - maximum block size.
      supportedVersions - CryptoProtocolVersions supported by the client
      ecPolicyName - the name of erasure coding policy. A null value means this file will inherit its parent directory's policy, either traditional replication or erasure coding policy. ecPolicyName and SHOULD_REPLICATE CreateFlag are mutually exclusive. It's invalid to set both SHOULD_REPLICATE flag and a non-null ecPolicyName.
      storagePolicy - the name of the storage policy.
      Returns:
      the status of the created file; it could be null if the server does not support returning the file status
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      AlreadyBeingCreatedException - If the file is already being created by another client
      DSQuotaExceededException - If file creation violates disk space quota restriction
      org.apache.hadoop.fs.FileAlreadyExistsException - If file src already exists
      FileNotFoundException - If parent of src does not exist and createParent is false
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory.
      NSQuotaExceededException - If file creation violates name space quota restriction
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
      RuntimeExceptions:
      org.apache.hadoop.fs.InvalidPathException - Path src is invalid

      Note that create with CreateFlag.OVERWRITE is idempotent.

    • append

      LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
      Append to the end of the file.
      Parameters:
      src - path of the file being appended to.
      clientName - name of the current client.
      flag - indicates whether the data is appended to a new block.
      Returns:
      wrapper with information about the last partial block and file status if any
      Throws:
      org.apache.hadoop.security.AccessControlException - If permission to append to the file is denied. On the client side the exception is usually wrapped in a RemoteException. Appending is allowed only if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.
      FileNotFoundException - If file src is not found
      DSQuotaExceededException - If append violates disk space quota restriction
      SafeModeException - append not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred.
      RuntimeExceptions:
      UnsupportedOperationException - if append is not supported
    • setReplication

      boolean setReplication(String src, short replication) throws IOException
      Set replication for an existing file.

      The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.

      Parameters:
      src - file name
      replication - new replication
      Returns:
      true if successful; false if file does not exist or is a directory
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      DSQuotaExceededException - If replication violates disk space quota restriction
      FileNotFoundException - If file src is not found
      SafeModeException - not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • getStoragePolicies

      BlockStoragePolicy[] getStoragePolicies() throws IOException
      Get all the available block storage policies.
      Returns:
      All the block storage policies currently in use.
      Throws:
      IOException
    • setStoragePolicy

      void setStoragePolicy(String src, String policyName) throws IOException
      Set the storage policy for a file/directory.
      Parameters:
      src - Path of an existing file/directory.
      policyName - The name of the storage policy
      Throws:
      SnapshotAccessControlException - if path is in RO snapshot
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      FileNotFoundException - If file/dir src is not found
      QuotaExceededException - If changes violate the quota restriction
      IOException
    • unsetStoragePolicy

      void unsetStoragePolicy(String src) throws IOException
      Unset the storage policy set for a given file or directory.
      Parameters:
      src - Path of an existing file/directory.
      Throws:
      SnapshotAccessControlException - if path is in RO snapshot
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      FileNotFoundException - If file/dir src is not found
      QuotaExceededException - If changes violate the quota restriction
      IOException
    • getStoragePolicy

      BlockStoragePolicy getStoragePolicy(String path) throws IOException
      Get the storage policy for a file/directory.
      Parameters:
      path - Path of an existing file/directory.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink
      FileNotFoundException - If file/dir path is not found
      IOException
    • setPermission

      void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
      Set permissions for an existing file/directory.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • setOwner

      void setOwner(String src, String username, String groupname) throws IOException
      Set owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.
      Parameters:
      src - file path
      username - If it is null, the original username remains unchanged.
      groupname - If it is null, the original groupname remains unchanged.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • abandonBlock

      void abandonBlock(ExtendedBlock b, long fileId, String src, String holder) throws IOException
      The client can give up on a block by calling abandonBlock(). The client can then either obtain a new block, or complete or abandon the file. Any partial writes to the block will be discarded.
      Parameters:
      b - Block to abandon
      fileId - The id of the file where the block resides. Older clients will pass GRANDFATHER_INODE_ID here.
      src - The path of the file where the block resides.
      holder - Lease holder.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
    • addBlock

      LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags) throws IOException
      A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() allocates a new block and selects the DataNodes the block data should be replicated to. addBlock() also commits the previous block by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes.
      Parameters:
      src - the file being created
      clientName - the name of the client that adds the block
      previous - previous block
      excludeNodes - a list of nodes that should not be allocated for the current block
      fileId - the id uniquely identifying a file
      favoredNodes - the list of nodes where the client wants the blocks. Nodes are identified by either host name or address.
      addBlockFlags - flags to advise the behavior of allocating and placing a new block.
      Returns:
      LocatedBlock allocated block information.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      NotReplicatedYetException - previous blocks of the file are not replicated yet. Blocks cannot be added until replication completes.
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
    • getAdditionalDatanode

      LocatedBlock getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
      Get a datanode for an existing pipeline.
      Parameters:
      src - the file being written
      fileId - the ID of the file being written
      blk - the block being written
      existings - the existing nodes in the pipeline
      existingStorageIDs - the storage IDs of the existing nodes
      excludes - the excluded nodes
      numAdditionalNodes - number of additional datanodes
      clientName - the name of the client
      Returns:
      the located block.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - operation not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
    • complete

      boolean complete(String src, String clientName, ExtendedBlock last, long fileId) throws IOException
      The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. complete() also commits the last block of the file by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.
      Parameters:
      src - the file being created
      clientName - the name of the client that adds the block
      last - the last block info
      fileId - the id uniquely identifying a file
      Returns:
      true if all file blocks are minimally replicated or false otherwise
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - complete not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
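      Taken together, create, addBlock, and complete form the client write path. A minimal sketch from the user-code side (the file path is illustrative):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Sketch of the write path: creating the stream maps to the create RPC,
      // each filled block triggers addBlock(), and close() drives complete()
      // (retried until it returns true). The path is illustrative.
      public class WritePathSketch {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          try (FileSystem fs = FileSystem.get(conf);
               FSDataOutputStream out = fs.create(new Path("/tmp/demo.txt"))) {
            out.writeBytes("hello hdfs\n"); // buffered; blocks come from addBlock()
          } // close() calls complete() until all blocks are minimally replicated
        }
      }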
    • reportBadBlocks

      void reportBadBlocks(LocatedBlock[] blocks) throws IOException
      The client wants to report corrupted blocks (blocks with specified locations on datanodes).
      Parameters:
      blocks - Array of located blocks to report
      Throws:
      IOException
    • rename

      boolean rename(String src, String dst) throws IOException
      Rename an item in the file system namespace.
      Parameters:
      src - existing file or directory name.
      dst - new name.
      Returns:
      true if successful, or false if the old name does not exist or if the new name already belongs to the namespace.
      Throws:
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - an I/O error occurred
    • concat

      void concat(String trg, String[] srcs) throws IOException
      Moves the blocks from srcs to trg and deletes srcs.
      Parameters:
      trg - existing file
      srcs - list of existing files (same block size, same replication)
      Throws:
      IOException - if some arguments are invalid
      org.apache.hadoop.fs.UnresolvedLinkException - if trg or srcs contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
    • rename2

      void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
      Rename src to dst.
      • Fails if src is a file and dst is a directory.
      • Fails if src is a directory and dst is a file.
      • Fails if the parent of dst does not exist or is a file.

      Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.

      This implementation of rename is atomic.

      Parameters:
      src - existing file or directory name.
      dst - new name.
      options - Rename options
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      DSQuotaExceededException - If rename violates disk space quota restriction
      org.apache.hadoop.fs.FileAlreadyExistsException - If dst already exists and options has Options.Rename.OVERWRITE option false.
      FileNotFoundException - If src does not exist
      NSQuotaExceededException - If rename violates namespace quota restriction
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of dst is not a directory
      SafeModeException - rename not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src or dst contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
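      From user code, FileContext.rename accepts the same Options.Rename flags and maps to this RPC; a hedged sketch (paths illustrative):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileContext;
      import org.apache.hadoop.fs.Options;
      import org.apache.hadoop.fs.Path;

      // Sketch: renaming with OVERWRITE through FileContext. If /tmp/b exists
      // as a file or an empty directory it is replaced; a non-empty directory
      // makes the rename fail, as described above.
      public class Rename2Sketch {
        public static void main(String[] args) throws Exception {
          FileContext fc = FileContext.getFileContext(new Configuration());
          fc.rename(new Path("/tmp/a"), new Path("/tmp/b"), Options.Rename.OVERWRITE);
        }
      }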
    • truncate

      boolean truncate(String src, long newLength, String clientName) throws IOException
      Truncate file src to new size.
      • Fails if src is a directory.
      • Fails if src does not exist.
      • Fails if src is not closed.
      • Fails if new size is greater than current size.

      This implementation of truncate is purely a namespace operation if truncate occurs at a block boundary. Requires DataNode block recovery otherwise.

      Parameters:
      src - existing file
      newLength - the target size
      clientName - name of the current client
      Returns:
      true if client does not need to wait for block recovery, false if client needs to wait for block recovery.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - truncate not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
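      From user code, FileSystem.truncate maps to this RPC; a sketch showing the wait-for-recovery contract (path and length illustrative):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Sketch: truncate returns false when the new length falls inside a
      // block, meaning DataNode block recovery must finish before the file
      // can be reopened for writing.
      public class TruncateSketch {
        public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          boolean done = fs.truncate(new Path("/tmp/demo.txt"), 1024L);
          System.out.println(done ? "done" : "waiting for block recovery");
        }
      }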
    • delete

      boolean delete(String src, boolean recursive) throws IOException
      Delete the given file or directory from the file system.

      Same as an unconditional delete, but provides a way to avoid accidentally deleting non-empty directories programmatically.

      Parameters:
      src - existing name
      recursive - if true, deletes a non-empty directory recursively; otherwise throws an exception.
      Returns:
      true only if the existing file or directory was actually removed from the file system.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - delete not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      org.apache.hadoop.fs.PathIsNotEmptyDirectoryException - if path is a non-empty directory and recursive is set to false
      IOException - If an I/O error occurred
    • mkdirs

      boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
      Create a directory (or hierarchy of directories) with the given name and permission.
      Parameters:
      src - The path of the directory being created
      masked - The masked permission of the directory being created
      createParent - create missing parent directory if true
      Returns:
      True if the operation succeeds.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      org.apache.hadoop.fs.FileAlreadyExistsException - If src already exists
      FileNotFoundException - If parent of src does not exist and createParent is false
      NSQuotaExceededException - If file creation violates quota restriction
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred.
      RuntimeExceptions:
      org.apache.hadoop.fs.InvalidPathException - If src is invalid
    • getListing

      DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
      Get a partial listing of the indicated directory.
      Parameters:
      src - the directory name
      startAfter - the name to start listing after, encoded in Java UTF-8
      needLocation - if the FileStatus should contain block locations
      Returns:
      a partial listing starting after startAfter
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
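      User code typically consumes partial listings through a RemoteIterator, which re-issues getListing with the last returned name as startAfter; a sketch (directory illustrative):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.fs.RemoteIterator;

      // Sketch: the iterator pages through the directory, issuing getListing
      // repeatedly with the previous batch's last name as startAfter.
      public class ListingSketch {
        public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/user"));
          while (it.hasNext()) {
            System.out.println(it.next().getPath());
          }
        }
      }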
    • getBatchedListing

      BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
      Get a partial listing of the input directories.
      Parameters:
      srcs - the input directories
      startAfter - the name to start listing after, encoded in Java UTF-8
      needLocation - if the FileStatus should contain block locations
      Returns:
      a partial listing starting after startAfter. null if the input is empty
      Throws:
      IOException - if an I/O error occurred
    • getSnapshottableDirListing

      SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
      Get the list of snapshottable directories that are owned by the current user. Return all the snapshottable directories if the current user is a super user.
      Returns:
      The list of all the current snapshottable directories.
      Throws:
      IOException - If an I/O error occurred.
    • getSnapshotListing

      SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException
      Get listing of all the snapshots for a snapshottable directory.
      Returns:
      Information about all the snapshots for a snapshottable directory
      Throws:
      IOException - If an I/O error occurred
    • renewLease

      void renewLease(String clientName, List<String> namespaces) throws IOException
      Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly.

      So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.

      Parameters:
      clientName - string representation of the client.
      namespaces - the list of namespaces to which the renewLease RPC should be forwarded by Router-Based Federation (RBF). On the NameNode side this value should be null. On the RBF side, if this value is null the RPC is forwarded to all available namespaces; otherwise it is forwarded only to the specified namespaces.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      IOException - If an I/O error occurred
    • recoverLease

      boolean recoverLease(String src, String clientName) throws IOException
      Start lease recovery. This is a lightweight NameNode operation that triggers lease recovery.
      Parameters:
      src - path of the file to start lease recovery
      clientName - name of the current client
      Returns:
      true if the file is already closed
      Throws:
      IOException
    • getStats

      long[] getStats() throws IOException
      Get an array of aggregated statistics combining blocks of both type BlockType.CONTIGUOUS and BlockType.STRIPED in the filesystem. Use public constants like GET_STATS_CAPACITY_IDX in place of actual numbers to index into the array.
      • [0] contains the total storage capacity of the system, in bytes.
      • [1] contains the total used space of the system, in bytes.
      • [2] contains the available storage of the system, in bytes.
      • [3] contains number of low redundancy blocks in the system.
      • [4] contains number of corrupt blocks.
      • [5] contains number of blocks without any good replicas left.
      • [6] contains number of blocks which have replication factor 1 and have lost the only replica.
      • [7] contains number of bytes that are at risk for deletion.
      • [8] contains number of pending deletion blocks.
      Throws:
      IOException
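      A hedged sketch of indexing the returned array with the constants above, assuming a ClientProtocol proxy obtained elsewhere (ordinary user code reaches these numbers through higher-level APIs instead):

      import java.io.IOException;
      import org.apache.hadoop.hdfs.protocol.ClientProtocol;

      // Sketch: index the aggregated stats array with the public constants
      // rather than bare numbers. The proxy is assumed to be obtained
      // elsewhere; ordinary user code does not hold a ClientProtocol.
      public class StatsSketch {
        static void printStats(ClientProtocol namenode) throws IOException {
          long[] stats = namenode.getStats();
          System.out.println("capacity:  " + stats[ClientProtocol.GET_STATS_CAPACITY_IDX]);
          System.out.println("used:      " + stats[ClientProtocol.GET_STATS_USED_IDX]);
          System.out.println("remaining: " + stats[ClientProtocol.GET_STATS_REMAINING_IDX]);
          System.out.println("missing:   " + stats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
        }
      }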
    • getReplicatedBlockStats

      ReplicatedBlockStats getReplicatedBlockStats() throws IOException
      Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
      Throws:
      IOException
    • getECBlockGroupStats

      ECBlockGroupStats getECBlockGroupStats() throws IOException
      Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
      Throws:
      IOException
    • getDatanodeReport

      DatanodeInfo[] getDatanodeReport(HdfsConstants.DatanodeReportType type) throws IOException
      Get a report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode. Return live datanodes if type is LIVE; dead datanodes if type is DEAD; otherwise all datanodes if type is ALL.
      Throws:
      IOException
    • getDatanodeStorageReport

      DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type) throws IOException
      Get a report on the current datanode storages.
      Throws:
      IOException
    • getPreferredBlockSize

      long getPreferredBlockSize(String filename) throws IOException
      Get the block size for the given file.
      Parameters:
      filename - The name of the file
      Returns:
      The number of bytes in each block
      Throws:
      IOException
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
    • setSafeMode

      boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
      Enter, leave or get safe mode.

      Safe mode is a name node state when it

      1. does not accept changes to name space (read-only), and
      2. does not replicate or delete blocks.

      Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER,false).

      At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.

      If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER,false) then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE,false). The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET,false).

      Configuration parameters:

      dfs.safemode.threshold.pct is the threshold parameter.
      dfs.safemode.extension is the safe mode extension parameter.
      dfs.namenode.replication.min is the minimal replication parameter.

      Special cases:

      The name node does not enter safe mode at startup if the threshold is set to 0 or if the name space is empty.
      If the threshold is set to 1 then all blocks need to have at least minimal replication.
      If the threshold value is greater than 1 then the name node will not be able to turn off safe mode automatically.
      Safe mode can always be turned off manually.
      Parameters:
      action -
      • 0 leave safe mode;
      • 1 enter safe mode;
      • 2 get safe mode state.
      isChecked - If true then action will be done only in ActiveNN.
      Returns:
      • 0 if the safe mode is OFF or
      • 1 if the safe mode is ON.
      Throws:
      IOException
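      A sketch of querying safe mode through DistributedFileSystem, which forwards to this RPC (the cast assumes an HDFS default filesystem; in some releases this overload is deprecated in favor of org.apache.hadoop.fs.SafeModeAction):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.hdfs.DistributedFileSystem;
      import org.apache.hadoop.hdfs.protocol.HdfsConstants;

      // Sketch: SAFEMODE_GET reads the current state without changing it;
      // SAFEMODE_ENTER/SAFEMODE_LEAVE toggle it manually.
      public class SafeModeSketch {
        public static void main(String[] args) throws Exception {
          DistributedFileSystem dfs =
              (DistributedFileSystem) FileSystem.get(new Configuration());
          boolean on = dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET);
          System.out.println("safe mode is " + (on ? "ON" : "OFF"));
        }
      }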
    • saveNamespace

      boolean saveNamespace(long timeWindow, long txGap) throws IOException
      Save namespace image.

      Saves the current namespace into storage directories and resets the edits log. Requires superuser privilege and safe mode.

      Parameters:
      timeWindow - NameNode does a checkpoint if the latest checkpoint was done more than the given time period (in seconds) ago.
      txGap - NameNode does a checkpoint if the gap between the latest checkpoint and the latest transaction id is greater than this gap.
      Returns:
      whether an extra checkpoint has been done
      Throws:
      IOException - if image creation failed.
    • rollEdits

      long rollEdits() throws IOException
      Roll the edit log. Requires superuser privileges.
      Returns:
      the txid of the new segment
      Throws:
      org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated
      IOException - if log roll fails
    • restoreFailedStorage

      boolean restoreFailedStorage(String arg) throws IOException
      Enable/disable restore failed storage.

      Sets a flag to enable restore of failed storage replicas.

      Throws:
      org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
      IOException
    • refreshNodes

      void refreshNodes() throws IOException
      Tells the namenode to reread the hosts and exclude files.
      Throws:
      IOException
    • finalizeUpgrade

      void finalizeUpgrade() throws IOException
      Finalize previous upgrade. Remove file system state saved during the upgrade. The upgrade will become irreversible.
      Throws:
      IOException
    • upgradeStatus

      boolean upgradeStatus() throws IOException
      Get status of upgrade - finalized or not.
      Returns:
      true if upgrade is finalized or if no upgrade is in progress and false otherwise.
      Throws:
      IOException
    • rollingUpgrade

      RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) throws IOException
      Rolling upgrade operations.
      Parameters:
      action - either query, prepare or finalize.
      Returns:
      rolling upgrade information. On query, if no upgrade is in progress, returns null.
      Throws:
      IOException
    • listCorruptFileBlocks

      CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException
      Each call returns a subset of the corrupt files in the system. To obtain all corrupt files, call this method repeatedly, each time passing in the cookie returned from the previous call.
      Returns:
      CorruptFileBlocks, containing a list of corrupt files (with duplicates if there is more than one corrupt block in a file) and a cookie
      Throws:
      IOException
    • metaSave

      void metaSave(String filename) throws IOException
      Dumps namenode data structures into the specified file. If the file already exists, it is appended to.
      Throws:
      IOException
    • setBalancerBandwidth

      void setBalancerBandwidth(long bandwidth) throws IOException
      Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
      Parameters:
      bandwidth - Balancer bandwidth in bytes per second for each datanode.
      Throws:
      IOException
    • getFileInfo

      HdfsFileStatus getFileInfo(String src) throws IOException
      Get the file info for a specific file or directory.
      Parameters:
      src - The string representation of the path to the file
      Returns:
      object containing information regarding the file or null if file not found
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
      IOException - If an I/O error occurred
    • isFileClosed

      boolean isFileClosed(String src) throws IOException
      Get the close status of a file.
      Parameters:
      src - The string representation of the path to the file
      Returns:
      true if the file is closed
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
      IOException - If an I/O error occurred
    • getFileLinkInfo

      HdfsFileStatus getFileLinkInfo(String src) throws IOException
      Get the file info for a specific file or directory. If the path refers to a symlink then the FileStatus of the symlink is returned.
      Parameters:
      src - The string representation of the path to the file
      Returns:
      object containing information regarding the file or null if file not found
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      IOException - If an I/O error occurred
    • getLocatedFileInfo

      HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
      Get the file info for a specific file or directory with LocatedBlocks.
      Parameters:
      src - The string representation of the path to the file
      needBlockToken - Generate block tokens for LocatedBlocks
      Returns:
      object containing information regarding the file or null if file not found
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      IOException - If an I/O error occurred
    • getContentSummary

      org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
      Get ContentSummary rooted at the specified directory.
      Parameters:
      path - The string representation of the path
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file path is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
      IOException - If an I/O error occurred
    • setQuota

      void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
      Set the quota for a directory.
      Parameters:
      path - The string representation of the path to the directory
      namespaceQuota - Limit on the number of names in the tree rooted at the directory
      storagespaceQuota - Limit on storage space occupied by all the files under this directory.
      type - StorageType that the space quota is intended to be set on. It may be null when setting the traditional space/namespace quota. When type is not null, the storagespaceQuota parameter applies to the specified type and namespaceQuota must be HdfsConstants.QUOTA_DONT_SET.

      The quota can have three types of values: (1) 0 or more will set the quota to that value, (2) HdfsConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) HdfsConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file path is not found
      QuotaExceededException - if the directory size is greater than the given quota
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • fsync

      void fsync(String src, long inodeId, String client, long lastBlockLength) throws IOException
      Write all metadata for this file into persistent storage. The file must be currently open for writing.
      Parameters:
      src - The string representation of the path
      inodeId - The inode ID, or GRANDFATHER_INODE_ID if the client is too old to support fsync with inode IDs.
      client - The string representation of the client
      lastBlockLength - The length of the last block (under construction) to be reported to NameNode
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
      IOException - If an I/O error occurred
    • setTimes

      void setTimes(String src, long mtime, long atime) throws IOException
      Sets the modification and access time of the file to the specified time.
      Parameters:
      src - The string representation of the path
      mtime - The number of milliseconds since Jan 1, 1970. Setting negative mtime means that modification time should not be set by this call.
      atime - The number of milliseconds since Jan 1, 1970. Setting negative atime means that access time should not be set by this call.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • createSymlink

      void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent) throws IOException
      Create symlink to a file or directory.
      Parameters:
      target - The path of the destination that the link points to.
      link - The path of the link being created.
      dirPerm - permissions to use when creating parent directories
      createParent - if true then missing parent dirs are created; if false then the parent must exist
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      org.apache.hadoop.fs.FileAlreadyExistsException - If file link already exists
      FileNotFoundException - If parent of link does not exist and createParent is false
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of link is not a directory.
      org.apache.hadoop.fs.UnresolvedLinkException - if link contains a symlink.
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • getLinkTarget

      String getLinkTarget(String path) throws IOException
      Return the target of the given symlink. If there is an intermediate symlink in the path (i.e. a symlink leading up to the final path component) then the given path is returned with this symlink resolved.
      Parameters:
      path - The path with a link that needs resolution.
      Returns:
      The path after resolving the first symbolic link in the path.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - If path does not exist
      IOException - If the given path does not refer to a symlink or an I/O error occurred
    • updateBlockForPipeline

      LocatedBlock updateBlockForPipeline(ExtendedBlock block, String clientName) throws IOException
      Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
      Parameters:
      block - a block
      clientName - the name of the client
      Returns:
      a located block with a new generation stamp and an access token
      Throws:
      IOException - if any error occurs
    • updatePipeline

      void updatePipeline(String clientName, ExtendedBlock oldBlock, ExtendedBlock newBlock, DatanodeID[] newNodes, String[] newStorageIDs) throws IOException
      Update a pipeline for a block under construction.
      Parameters:
      clientName - the name of the client
      oldBlock - the old block
      newBlock - the new block containing new generation stamp and length
      newNodes - datanodes in the pipeline
      newStorageIDs - storage IDs of the datanodes in the new pipeline
      Throws:
      IOException - if any error occurs
    • getDelegationToken

      org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
      Get a valid Delegation Token.
      Parameters:
      renewer - the designated renewer for the token
      Throws:
      IOException
    • renewDelegationToken

      long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
      Renew an existing delegation token.
      Parameters:
      token - delegation token obtained earlier
      Returns:
      the new expiration time
      Throws:
      IOException
    • cancelDelegationToken

      void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
      Cancel an existing delegation token.
      Parameters:
      token - delegation token
      Throws:
      IOException
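      The three token RPCs above are usually exercised through FileSystem and Token; a hedged sketch (the renewer name is a placeholder and a secure cluster is assumed):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.security.token.Token;

      // Sketch: obtain, renew, and cancel a delegation token. The "renewer"
      // principal below is a placeholder.
      public class TokenSketch {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);
          Token<?> token = fs.getDelegationToken("renewer"); // getDelegationToken RPC
          long newExpiry = token.renew(conf);                // renewDelegationToken RPC
          System.out.println("token renewed until " + newExpiry);
          token.cancel(conf);                                // cancelDelegationToken RPC
        }
      }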
    • getDataEncryptionKey

      DataEncryptionKey getDataEncryptionKey() throws IOException
      Returns:
      encryption key so a client can encrypt data sent via the DataTransferProtocol to/from DataNodes.
      Throws:
      IOException
    • createSnapshot

      String createSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Create a snapshot.
      Parameters:
      snapshotRoot - the path that is being snapshotted
      snapshotName - name of the snapshot created
      Returns:
      the snapshot path.
      Throws:
      IOException
    • deleteSnapshot

      void deleteSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Delete a specific snapshot of a snapshottable directory.
      Parameters:
      snapshotRoot - The snapshottable directory
      snapshotName - Name of the snapshot for the snapshottable directory
      Throws:
      IOException
    • renameSnapshot

      void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
      Rename a snapshot.
      Parameters:
      snapshotRoot - the directory path where the snapshot was taken
      snapshotOldName - old name of the snapshot
      snapshotNewName - new name of the snapshot
      Throws:
      IOException
    • allowSnapshot

      void allowSnapshot(String snapshotRoot) throws IOException
      Allow snapshot on a directory.
      Parameters:
      snapshotRoot - the directory on which snapshots are to be allowed
      Throws:
      IOException - on error
    • disallowSnapshot

      void disallowSnapshot(String snapshotRoot) throws IOException
      Disallow snapshot on a directory.
      Parameters:
      snapshotRoot - the directory to disallow snapshot
      Throws:
      IOException - on error
    • getSnapshotDiffReport

      SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String fromSnapshot, String toSnapshot) throws IOException
      Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
      Parameters:
      snapshotRoot - full path of the directory where snapshots are taken
      fromSnapshot - snapshot name of the from point. Null indicates the current tree
      toSnapshot - snapshot name of the to point. Null indicates the current tree.
      Returns:
      The difference report represented as a SnapshotDiffReport.
      Throws:
      IOException - on error
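      A sketch tying the snapshot RPCs together via DistributedFileSystem (directory and snapshot names illustrative; allowSnapshot requires superuser privilege):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;
      import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;

      // Sketch: allow snapshots on a directory, take two snapshots, and diff
      // them. Directory and snapshot names are illustrative.
      public class SnapshotSketch {
        public static void main(String[] args) throws Exception {
          DistributedFileSystem dfs =
              (DistributedFileSystem) FileSystem.get(new Configuration());
          Path dir = new Path("/data");
          dfs.allowSnapshot(dir);        // allowSnapshot RPC (superuser)
          dfs.createSnapshot(dir, "s1"); // createSnapshot RPC
          // ... modify files under /data ...
          dfs.createSnapshot(dir, "s2");
          SnapshotDiffReport diff = dfs.getSnapshotDiffReport(dir, "s1", "s2");
          System.out.println(diff);
        }
      }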
    • getSnapshotDiffReportListing

      SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String fromSnapshot, String toSnapshot, byte[] startPath, int index) throws IOException
      Get the difference between two snapshots of a directory iteratively.
      Parameters:
      snapshotRoot - full path of the directory where snapshots are taken
      fromSnapshot - snapshot name of the from point. Null indicates the current tree
      toSnapshot - snapshot name of the to point. Null indicates the current tree.
      startPath - path relative to the snapshottable root directory from which the snapshotdiff computation needs to start across multiple RPC calls
      index - index in the created or deleted list of the directory at which the snapshotdiff computation stopped during the last RPC call because the number of entries exceeded the snapshotdiff entry limit. -1 indicates that the snapshotdiff computation needs to start right from the startPath provided.
      Returns:
      The difference report represented as a SnapshotDiffReportListing.
      Throws:
      IOException - on error
    • addCacheDirective

      long addCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
      Add a CacheDirective to the CacheManager.
      Parameters:
      directive - A CacheDirectiveInfo to be added
      flags - CacheFlags to use for this operation.
      Returns:
      A CacheDirectiveInfo associated with the added directive
      Throws:
      IOException - if the directive could not be added
    • modifyCacheDirective

      void modifyCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
      Modify a CacheDirective in the CacheManager.
      Parameters:
      directive - information about the directive to modify. The ID field must be set to indicate which cache directive to modify.
      flags - CacheFlags to use for this operation.
      Throws:
      IOException - if the directive could not be modified
    • removeCacheDirective

      void removeCacheDirective(long id) throws IOException
      Remove a CacheDirectiveInfo from the CacheManager.
      Parameters:
      id - the ID of a CacheDirectiveInfo
      Throws:
      IOException - if the cache directive could not be removed
    • listCacheDirectives

      org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId, CacheDirectiveInfo filter) throws IOException
      List the set of cached paths of a cache pool. Incrementally fetches results from the server.
      Parameters:
      prevId - The last listed entry ID, or -1 if this is the first call to listCacheDirectives.
      filter - Parameters to use to filter the list results, or null to display all directives visible to us.
      Returns:
      A batch of CacheDirectiveEntry objects.
      Throws:
      IOException
    • addCachePool

      void addCachePool(CachePoolInfo info) throws IOException
      Add a new cache pool.
      Parameters:
      info - Description of the new cache pool
      Throws:
      IOException - If the request could not be completed.
    • modifyCachePool

      void modifyCachePool(CachePoolInfo req) throws IOException
      Modify an existing cache pool.
      Parameters:
      req - The request to modify a cache pool.
      Throws:
      IOException - If the request could not be completed.
    • removeCachePool

      void removeCachePool(String pool) throws IOException
      Remove a cache pool.
      Parameters:
      pool - name of the cache pool to remove.
      Throws:
      IOException - if the cache pool did not exist, or could not be removed.
    • listCachePools

      org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevPool) throws IOException
      List the set of cache pools. Incrementally fetches results from the server.
      Parameters:
      prevPool - name of the last pool listed, or the empty string if this is the first invocation of listCachePools
      Returns:
      A batch of CachePoolEntry objects.
      Throws:
      IOException
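      A hedged sketch of the cache pool and cache directive RPCs above, through DistributedFileSystem (pool and path names illustrative; adding pools requires superuser privilege):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;
      import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
      import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

      // Sketch: create a pool, then a directive caching a path in that pool.
      public class CacheSketch {
        public static void main(String[] args) throws Exception {
          DistributedFileSystem dfs =
              (DistributedFileSystem) FileSystem.get(new Configuration());
          dfs.addCachePool(new CachePoolInfo("hotdata")); // addCachePool RPC
          long id = dfs.addCacheDirective(                // addCacheDirective RPC
              new CacheDirectiveInfo.Builder()
                  .setPath(new Path("/data/hot"))
                  .setPool("hotdata")
                  .build());
          System.out.println("directive id " + id);
        }
      }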
    • modifyAclEntries

      void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)
      Throws:
      IOException
    • removeAclEntries

      void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Removes ACL entries from files and directories. Other ACL entries are retained.
      Throws:
      IOException
    • removeDefaultAcl

      void removeDefaultAcl(String src) throws IOException
      Removes all default ACL entries from files and directories.
      Throws:
      IOException
    • removeAcl

      void removeAcl(String src) throws IOException
      Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.
      Throws:
      IOException
    • setAcl

      void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Fully replaces ACL of files and directories, discarding all existing entries.
      Throws:
      IOException
    • getAclStatus

      org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src) throws IOException
      Gets the ACLs of files and directories.
      Throws:
      IOException
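      A sketch of the ACL RPCs above through the FileSystem API (user name and path illustrative):

      import java.util.Collections;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.fs.permission.AclEntry;
      import org.apache.hadoop.fs.permission.AclEntryScope;
      import org.apache.hadoop.fs.permission.AclEntryType;
      import org.apache.hadoop.fs.permission.FsAction;

      // Sketch: grant user "alice" read/execute via modifyAclEntries, then
      // read the ACL back with getAclStatus. Names are illustrative.
      public class AclSketch {
        public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          AclEntry entry = new AclEntry.Builder()
              .setScope(AclEntryScope.ACCESS)
              .setType(AclEntryType.USER)
              .setName("alice")
              .setPermission(FsAction.READ_EXECUTE)
              .build();
          Path dir = new Path("/projects");
          fs.modifyAclEntries(dir, Collections.singletonList(entry));
          System.out.println(fs.getAclStatus(dir));
        }
      }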
    • createEncryptionZone

      void createEncryptionZone(String src, String keyName) throws IOException
      Create an encryption zone.
      Throws:
      IOException
    • getEZForPath

      EncryptionZone getEZForPath(String src) throws IOException
      Get the encryption zone for a path.
      Throws:
      IOException
    • listEncryptionZones

      org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long prevId) throws IOException
      Used to implement cursor-based batched listing of EncryptionZones.
      Parameters:
      prevId - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
      Returns:
      Batch of encryption zones.
      Throws:
      IOException
    • reencryptEncryptionZone

      void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action) throws IOException
      Used to implement re-encryption of encryption zones.
      Parameters:
      zone - the encryption zone to re-encrypt.
      action - the action for the re-encryption.
      Throws:
      IOException
    • listReencryptionStatus

      org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException
      Used to implement cursor-based batched listing of ZoneReencryptionStatus objects.
      Parameters:
      prevId - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
      Returns:
      Batch of zone re-encryption statuses.
      Throws:
      IOException
    • setXAttr

      void setXAttr(String src, XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
      Set xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      src - file or directory
      xAttr - XAttr to set
      flag - set flag
      Throws:
      IOException
    • getXAttrs

      List<XAttr> getXAttrs(String src, List<XAttr> xAttrs) throws IOException
      Get xattrs of a file or directory. Values in xAttrs parameter are ignored. If xAttrs is null or empty, this is the same as getting all xattrs of the file or directory. Only those xattrs for which the logged-in user has permissions to view are returned.

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      src - file or directory
      xAttrs - xAttrs to get
      Returns:
      XAttr list
      Throws:
      IOException
    • listXAttrs

      List<XAttr> listXAttrs(String src) throws IOException
      List the xattr names for a file or directory. Only the xattr names for which the logged-in user has permission to access will be returned.

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      src - file or directory
      Returns:
      XAttr list
      Throws:
      IOException
    • removeXAttr

      void removeXAttr(String src, XAttr xAttr) throws IOException
      Remove xattr of a file or directory. The value in the xAttr parameter is ignored. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      src - file or directory
      xAttr - XAttr to remove
      Throws:
      IOException
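      A sketch of the xattr RPCs above through the FileSystem API (attribute name and value illustrative; note the mandatory namespace prefix):

      import java.nio.charset.StandardCharsets;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      // Sketch: set, read, and remove a user-namespace xattr. The name must
      // carry its namespace prefix ("user." here). Values are illustrative.
      public class XAttrSketch {
        public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          Path file = new Path("/tmp/demo.txt");
          fs.setXAttr(file, "user.checksum",
              "abc123".getBytes(StandardCharsets.UTF_8));    // setXAttr RPC
          byte[] value = fs.getXAttr(file, "user.checksum"); // getXAttrs RPC
          System.out.println(new String(value, StandardCharsets.UTF_8));
          fs.removeXAttr(file, "user.checksum");             // removeXAttr RPC
        }
      }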
    • checkAccess

      void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
      Checks if the user can access a path. The mode specifies which access checks to perform. If the requested permissions are granted, then the method returns normally. If access is denied, then the method throws an AccessControlException. In general, applications should avoid using this method, due to the risk of time-of-check/time-of-use race conditions. The permissions on a file may change immediately after the access call returns.
      Parameters:
      path - Path to check
      mode - type of access to check
      Throws:
      org.apache.hadoop.security.AccessControlException - if access is denied
      FileNotFoundException - if the path does not exist
      IOException - see specific implementation
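
      Applications reach this through FileSystem.access(). A sketch of the usual try/catch pattern (the path is hypothetical), bearing in mind the time-of-check/time-of-use caveat above:

          import java.io.FileNotFoundException;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.permission.FsAction;
          import org.apache.hadoop.security.AccessControlException;

          public class AccessCheck {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              try {
                // Returns normally if READ_WRITE would be granted right now;
                // permissions may still change before any subsequent open().
                fs.access(new Path("/data/report"), FsAction.READ_WRITE);
                System.out.println("access granted (at check time)");
              } catch (AccessControlException e) {
                System.out.println("access denied: " + e.getMessage());
              } catch (FileNotFoundException e) {
                System.out.println("no such path");
              }
            }
          }
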
    • getCurrentEditLogTxid

      long getCurrentEditLogTxid() throws IOException
      Get the highest txid the NameNode knows has been written to the edit log, or -1 if the NameNode's edit log is not yet open for write. Used as the starting point for the inotify event stream.
      Throws:
      IOException
    • getEditsFromTxid

      EventBatchList getEditsFromTxid(long txid) throws IOException
      Get an ordered list of batches of events corresponding to the edit log transactions for txids equal to or greater than txid.
      Throws:
      IOException
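
      These two calls back the HDFS inotify feature; user code normally consumes them through HdfsAdmin.getInotifyEventStream() rather than invoking them directly. A minimal tailing loop, assuming a hypothetical NameNode URI:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
          import org.apache.hadoop.hdfs.client.HdfsAdmin;
          import org.apache.hadoop.hdfs.inotify.Event;
          import org.apache.hadoop.hdfs.inotify.EventBatch;

          public class TailEdits {
            public static void main(String[] args) throws Exception {
              HdfsAdmin admin =
                  new HdfsAdmin(URI.create("hdfs://nn:8020"), new Configuration());
              // The stream is seeded from getCurrentEditLogTxid(); batches are
              // then fetched via getEditsFromTxid() as the edit log grows.
              DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
              while (true) {
                EventBatch batch = stream.take();  // blocks for the next batch
                System.out.println("txid " + batch.getTxid());
                for (Event e : batch.getEvents()) {
                  System.out.println("  " + e.getEventType());
                }
              }
            }
          }
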
    • setErasureCodingPolicy

      void setErasureCodingPolicy(String src, String ecPolicyName) throws IOException
      Set an erasure coding policy on a specified path.
      Parameters:
      src - The path to set policy on.
      ecPolicyName - The erasure coding policy name.
      Throws:
      IOException
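
      Through the public API this is DistributedFileSystem.setErasureCodingPolicy(). A sketch using the built-in RS-6-3-1024k policy on a hypothetical directory; getErasureCodingPolicy and unsetErasureCodingPolicy, described below, complete the round trip:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class SetEcPolicy {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs =
                  (DistributedFileSystem) FileSystem.get(new Configuration());
              Path dir = new Path("/cold-data");  // hypothetical directory
              // Attach a built-in policy; new files under dir are EC-encoded.
              dfs.setErasureCodingPolicy(dir, "RS-6-3-1024k");
              System.out.println(dfs.getErasureCodingPolicy(dir).getName());
              // Revert the directory to inheriting/replication.
              dfs.unsetErasureCodingPolicy(dir);
            }
          }
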
    • addErasureCodingPolicies

      AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException
      Add erasure coding policies to HDFS. For each input policy, schema and cellSize are required, while name and id are ignored: they are created and assigned by the NameNode once the policy is successfully added, and are returned in the response.
      Parameters:
      policies - The user defined ec policy list to add.
      Returns:
      Return the response list of adding operations.
      Throws:
      IOException
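
      A hedged sketch via DistributedFileSystem.addErasureCodingPolicies(): only the schema and cell size are supplied, and the NameNode fills in name and id. The 10+4 Reed-Solomon layout here is only an example:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.hdfs.DistributedFileSystem;
          import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse;
          import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
          import org.apache.hadoop.io.erasurecode.ECSchema;

          public class AddEcPolicy {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs =
                  (DistributedFileSystem) FileSystem.get(new Configuration());
              // Name and id are intentionally left out; the NameNode assigns them.
              ECSchema schema = new ECSchema("rs", 10, 4);  // 10 data, 4 parity
              ErasureCodingPolicy policy = new ErasureCodingPolicy(schema, 1024 * 1024);
              AddErasureCodingPolicyResponse[] responses =
                  dfs.addErasureCodingPolicies(new ErasureCodingPolicy[] {policy});
              for (AddErasureCodingPolicyResponse r : responses) {
                System.out.println(r.isSucceed()
                    ? "added " + r.getPolicy().getName()
                    : "failed: " + r.getErrorMsg());
              }
            }
          }
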
    • removeErasureCodingPolicy

      void removeErasureCodingPolicy(String ecPolicyName) throws IOException
      Remove erasure coding policy.
      Parameters:
      ecPolicyName - The name of the policy to be removed.
      Throws:
      IOException
    • enableErasureCodingPolicy

      void enableErasureCodingPolicy(String ecPolicyName) throws IOException
      Enable erasure coding policy.
      Parameters:
      ecPolicyName - The name of the policy to be enabled.
      Throws:
      IOException
    • disableErasureCodingPolicy

      void disableErasureCodingPolicy(String ecPolicyName) throws IOException
      Disable erasure coding policy.
      Parameters:
      ecPolicyName - The name of the policy to be disabled.
      Throws:
      IOException
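
      The remove/enable/disable triple above is exposed on DistributedFileSystem; all three are administrative operations. A sketch, where RS-6-4-2048k is a placeholder for a previously added user-defined policy:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.hdfs.DistributedFileSystem;

          public class EcPolicyLifecycle {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs =
                  (DistributedFileSystem) FileSystem.get(new Configuration());
              // A policy must be enabled before it can be set on a path.
              dfs.enableErasureCodingPolicy("RS-10-4-1024k");
              // Disabling blocks new uses; already-written data is unaffected.
              dfs.disableErasureCodingPolicy("RS-10-4-1024k");
              // Removal applies to user-added policies; this name stands in
              // for one returned by addErasureCodingPolicies.
              dfs.removeErasureCodingPolicy("RS-6-4-2048k");
            }
          }
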
    • getErasureCodingPolicies

      ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
      Get the erasure coding policies loaded in Namenode, excluding REPLICATION policy.
      Throws:
      IOException
    • getErasureCodingCodecs

      Map<String,String> getErasureCodingCodecs() throws IOException
      Get the erasure coding codecs loaded in Namenode.
      Throws:
      IOException
    • getErasureCodingPolicy

      ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException
      Get the information about the EC policy for the path. Null will be returned if the directory or file has the REPLICATION policy.
      Parameters:
      src - path to get the info for
      Throws:
      IOException
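
      The three read-only EC queries above map to DistributedFileSystem methods. A combined sketch (the path /cold-data is hypothetical):

          import java.util.Map;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.DistributedFileSystem;
          import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
          import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo;

          public class EcQueries {
            public static void main(String[] args) throws Exception {
              DistributedFileSystem dfs =
                  (DistributedFileSystem) FileSystem.get(new Configuration());
              // All loaded policies (REPLICATION excluded) with their states.
              for (ErasureCodingPolicyInfo info : dfs.getAllErasureCodingPolicies()) {
                System.out.println(info.getPolicy().getName() + " " + info.getState());
              }
              // Codec name -> available coder implementations.
              for (Map.Entry<String, String> e :
                  dfs.getAllErasureCodingCodecs().entrySet()) {
                System.out.println(e.getKey() + " = " + e.getValue());
              }
              // Effective policy for one path; null means plain replication.
              ErasureCodingPolicy p = dfs.getErasureCodingPolicy(new Path("/cold-data"));
              System.out.println(p == null ? "replicated" : p.getName());
            }
          }
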
    • unsetErasureCodingPolicy

      void unsetErasureCodingPolicy(String src) throws IOException
      Unset erasure coding policy from a specified path.
      Parameters:
      src - The path from which to unset the policy.
      Throws:
      IOException
    • getECTopologyResultForPolicies

      ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
      Verifies whether the given policies are supported in the given cluster setup. If no policy is specified, all enabled policies are checked.
      Parameters:
      policyNames - name of policies.
      Returns:
      the result if the given policies are supported in the cluster setup
      Throws:
      IOException
    • getQuotaUsage

      org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path) throws IOException
      Get QuotaUsage rooted at the specified directory. Note: due to HDFS-6763, standby/observer doesn't keep up-to-date info about quota usage, and thus even though this is ReadOnly, it can only be directed to the active namenode.
      Parameters:
      path - The string representation of the path
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file path is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
      IOException - If an I/O error occurred
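
      Applications call this through FileSystem.getQuotaUsage(), which is cheaper than getContentSummary() when only quota information is needed. A sketch on a hypothetical directory:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.QuotaUsage;

          public class QuotaReport {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              QuotaUsage q = fs.getQuotaUsage(new Path("/user/alice"));
              System.out.println("namespace: " + q.getFileAndDirectoryCount()
                  + " / quota " + q.getQuota());
              System.out.println("space: " + q.getSpaceConsumed()
                  + " / quota " + q.getSpaceQuota());
            }
          }
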
    • listOpenFiles

      @Deprecated org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId) throws IOException
      Deprecated.
      List open files in the system in batches. INode id is the cursor, and the open files returned in a batch will have INode ids greater than the cursor id. Open files can only be requested by the superuser, and the list across batches is not atomic.
      Parameters:
      prevId - the cursor INode id.
      Throws:
      IOException
    • listOpenFiles

      org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId, EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
      List open files in the system in batches. INode id is the cursor, and the open files returned in a batch will have INode ids greater than the cursor id. Open files can only be requested by the superuser, and the list across batches is not atomic.
      Parameters:
      prevId - the cursor INode id.
      openFilesTypes - types to filter the open files.
      path - path to filter the open files.
      Throws:
      IOException
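
      Both overloads are wrapped by HdfsAdmin.listOpenFiles(), which hides the INode-id cursor behind a RemoteIterator. A sketch, assuming superuser credentials and a hypothetical NameNode URI:

          import java.net.URI;
          import java.util.EnumSet;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.RemoteIterator;
          import org.apache.hadoop.hdfs.client.HdfsAdmin;
          import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
          import org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType;

          public class OpenFiles {
            public static void main(String[] args) throws Exception {
              HdfsAdmin admin =
                  new HdfsAdmin(URI.create("hdfs://nn:8020"), new Configuration());
              // The iterator re-invokes the batched RPC with the last INode id
              // seen as the cursor.
              RemoteIterator<OpenFileEntry> it = admin.listOpenFiles(
                  EnumSet.of(OpenFilesType.ALL_OPEN_FILES), "/");
              while (it.hasNext()) {
                OpenFileEntry e = it.next();
                System.out.println(e.getFilePath() + " held by " + e.getClientName());
              }
            }
          }
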
    • getHAServiceState

      org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() throws IOException
      Get HA service state of the server.
      Returns:
      server HA state
      Throws:
      IOException
    • msync

      void msync() throws IOException
      Called by the client to wait until the server has reached the state id of the client. The client and server state ids are provided by the client-side and server-side alignment contexts, respectively. This can be a blocking call.
      Throws:
      IOException
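
      Through FileSystem.msync() this underpins consistent reads from Observer NameNodes. A sketch, assuming a client configured for observer reads (e.g. via ObserverReadProxyProvider) and a hypothetical path:

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class ConsistentRead {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              // Bring this client's state id up to the active NameNode's before
              // reading through an Observer, so the read sees earlier writes.
              fs.msync();
              FileStatus st = fs.getFileStatus(new Path("/user/alice/file"));
              System.out.println(st.getLen());
            }
          }
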
    • satisfyStoragePolicy

      void satisfyStoragePolicy(String path) throws IOException
      Satisfy the storage policy for a file/directory.
      Parameters:
      path - Path of an existing file/directory.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied.
      org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
      FileNotFoundException - If the file/directory at path is not found.
      SafeModeException - if the operation is not allowed in safe mode.
      IOException
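
      User code reaches this via HdfsAdmin.satisfyStoragePolicy() (the hdfs storagepolicies CLI wraps the same call). A sketch pairing it with a policy change on a hypothetical path; it assumes the storage policy satisfier is enabled on the cluster:

          import java.net.URI;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.client.HdfsAdmin;

          public class MoveToCold {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);
              Path p = new Path("/archive/logs");  // hypothetical path
              // Changing the policy only affects future block placement ...
              fs.setStoragePolicy(p, "COLD");
              // ... satisfyStoragePolicy schedules existing blocks to be moved.
              new HdfsAdmin(URI.create("hdfs://nn:8020"), conf)
                  .satisfyStoragePolicy(p);
            }
          }
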
    • getSlowDatanodeReport

      DatanodeInfo[] getSlowDatanodeReport() throws IOException
      Get a report on all of the slow DataNodes. Slow-running DataNodes are identified by an outlier-detection algorithm, provided slow-peer tracking is enabled for the DFS cluster.
      Returns:
      Datanode report for slow running datanodes.
      Throws:
      IOException - If an I/O error occurs.
    • getEnclosingRoot

      org.apache.hadoop.fs.Path getEnclosingRoot(String src) throws IOException
      Get the enclosing root for a path.
      Throws:
      IOException