Class ClientNamenodeProtocolTranslatorPB

java.lang.Object
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB
All Implemented Interfaces:
Closeable, AutoCloseable, ClientProtocol, org.apache.hadoop.ipc.ProtocolMetaInterface, org.apache.hadoop.ipc.ProtocolTranslator

@Private @Stable public class ClientNamenodeProtocolTranslatorPB extends Object implements org.apache.hadoop.ipc.ProtocolMetaInterface, ClientProtocol, Closeable, org.apache.hadoop.ipc.ProtocolTranslator
This class forwards NN's ClientProtocol calls as RPC calls to the NN server while translating from the parameter types used in ClientProtocol to the new PB types.
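
As an illustration of that pattern, here is a simplified sketch of how a void method such as refreshNodes() is forwarded, assuming the translator's rpcProxy field; the protobuf ServiceException package and the exception-unwrapping helper vary across Hadoop versions, so treat the imports as illustrative:

    import java.io.IOException;
    import org.apache.hadoop.ipc.ProtobufHelper;
    import org.apache.hadoop.thirdparty.protobuf.ServiceException;

    @Override
    public void refreshNodes() throws IOException {
      try {
        // Forward the ClientProtocol call as an RPC carrying the PB request type.
        rpcProxy.refreshNodes(null, VOID_REFRESH_NODES_REQUEST);
      } catch (ServiceException e) {
        // Unwrap the RPC-layer exception back into an IOException.
        throw ProtobufHelper.getRemoteException(e);
      }
    }
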
  • Field Details

    • VOID_GET_SERVER_DEFAULT_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetServerDefaultsRequestProto VOID_GET_SERVER_DEFAULT_REQUEST
    • VOID_GET_FSSTATUS_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsStatusRequestProto VOID_GET_FSSTATUS_REQUEST
    • VOID_GET_FS_REPLICATED_BLOCK_STATS_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsReplicatedBlockStatsRequestProto VOID_GET_FS_REPLICATED_BLOCK_STATS_REQUEST
    • VOID_GET_FS_ECBLOCKGROUP_STATS_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsECBlockGroupStatsRequestProto VOID_GET_FS_ECBLOCKGROUP_STATS_REQUEST
    • VOID_ROLLEDITS_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RollEditsRequestProto VOID_ROLLEDITS_REQUEST
    • VOID_REFRESH_NODES_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RefreshNodesRequestProto VOID_REFRESH_NODES_REQUEST
    • VOID_FINALIZE_UPGRADE_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.FinalizeUpgradeRequestProto VOID_FINALIZE_UPGRADE_REQUEST
    • VOID_UPGRADE_STATUS_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.UpgradeStatusRequestProto VOID_UPGRADE_STATUS_REQUEST
    • VOID_GET_DATA_ENCRYPTIONKEY_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetDataEncryptionKeyRequestProto VOID_GET_DATA_ENCRYPTIONKEY_REQUEST
    • VOID_GET_STORAGE_POLICIES_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetStoragePoliciesRequestProto VOID_GET_STORAGE_POLICIES_REQUEST
    • VOID_GET_EC_POLICIES_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.GetErasureCodingPoliciesRequestProto VOID_GET_EC_POLICIES_REQUEST
    • VOID_GET_EC_CODEC_REQUEST

      protected static final org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.GetErasureCodingCodecsRequestProto VOID_GET_EC_CODEC_REQUEST
  • Constructor Details

  • Method Details

    • close

      public void close()
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
    • getBlockLocations

      public LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
      Description copied from interface: ClientProtocol
      Get locations of the blocks of the specified file within the specified range.

      Return LocatedBlocks, which contains the file length and the blocks with their locations. The DataNode locations for each block are sorted by proximity to the client's address.

      The client will then have to contact one of the indicated DataNodes to obtain the actual data.

      Specified by:
      getBlockLocations in interface ClientProtocol
      Parameters:
      src - file name
      offset - range start offset
      length - range length
      Returns:
      file length and array of blocks with their locations
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src does not exist
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
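
      A minimal usage sketch, assuming a ClientProtocol handle named namenode (such as this translator) and the usual org.apache.hadoop.hdfs.protocol imports; the path and 1 MB range are illustrative:

          LocatedBlocks blocks = namenode.getBlockLocations("/user/alice/data.bin", 0, 1024 * 1024);
          for (LocatedBlock blk : blocks.getLocatedBlocks()) {
            for (DatanodeInfo dn : blk.getLocations()) {
              // Locations arrive already sorted by proximity to the client.
              System.out.println(blk.getBlock() + " @ " + dn.getHostName());
            }
          }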
    • getServerDefaults

      public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
      Description copied from interface: ClientProtocol
      Get server default values for a number of configuration params.
      Specified by:
      getServerDefaults in interface ClientProtocol
      Returns:
      a set of server default configuration values
      Throws:
      IOException
    • create

      public HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
      Description copied from interface: ClientProtocol
      Create a new file entry in the namespace.

      This will create an empty file specified by the source path. The path should reflect a full path originated at the root. The name-node does not have a notion of "current" directory for a client.

      Once created, the file is visible and available for read to other clients. However, other clients cannot ClientProtocol.delete(String, boolean), re-create, or ClientProtocol.rename(String, String) it until the file is completed or its lease expires.

      Blocks have a maximum size. Clients that intend to create multi-block files must also use ClientProtocol.addBlock(java.lang.String, java.lang.String, org.apache.hadoop.hdfs.protocol.ExtendedBlock, org.apache.hadoop.hdfs.protocol.DatanodeInfo[], long, java.lang.String[], java.util.EnumSet<org.apache.hadoop.hdfs.AddBlockFlag>).

      Specified by:
      create in interface ClientProtocol
      Parameters:
      src - path of the file being created.
      masked - masked permission.
      clientName - name of the current client.
      flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to if it does; it may also indicate that the file should be a replicated file regardless of its ancestors' replication or erasure coding policy.
      createParent - create missing parent directory if true
      replication - block replication factor.
      blockSize - maximum block size.
      supportedVersions - CryptoProtocolVersions supported by the client
      ecPolicyName - the name of erasure coding policy. A null value means this file will inherit its parent directory's policy, either traditional replication or erasure coding policy. ecPolicyName and SHOULD_REPLICATE CreateFlag are mutually exclusive. It's invalid to set both SHOULD_REPLICATE flag and a non-null ecPolicyName.
      storagePolicy - the name of the storage policy.
      Returns:
      the status of the created file, it could be null if the server doesn't support returning the file status
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      AlreadyBeingCreatedException - If src is already being created by another client
      DSQuotaExceededException - If file creation violates disk space quota restriction
      org.apache.hadoop.fs.FileAlreadyExistsException - If file src already exists
      FileNotFoundException - If parent of src does not exist and createParent is false
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory.
      NSQuotaExceededException - If file creation violates name space quota restriction
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
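
      A hedged sketch of a typical invocation, again assuming a ClientProtocol handle named namenode; the permission, replication factor, and block size are illustrative, and null for ecPolicyName and storagePolicy inherits the parent directory's settings:

          HdfsFileStatus stat = namenode.create(
              "/tmp/example.txt",
              new FsPermission((short) 0644),        // masked permission
              clientName,
              new EnumSetWritable<>(EnumSet.of(CreateFlag.CREATE)),
              true,                                  // createParent
              (short) 3,                             // replication
              128L * 1024 * 1024,                    // blockSize: 128 MB
              CryptoProtocolVersion.supported(),     // supportedVersions
              null,                                  // ecPolicyName: inherit
              null);                                 // storagePolicy: inherit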
    • truncate

      public boolean truncate(String src, long newLength, String clientName) throws IOException
      Description copied from interface: ClientProtocol
      Truncate file src to new size.
      • Fails if src is a directory.
      • Fails if src does not exist.
      • Fails if src is not closed.
      • Fails if new size is greater than current size.

      This implementation of truncate is purely a namespace operation if truncate occurs at a block boundary. Requires DataNode block recovery otherwise.

      Specified by:
      truncate in interface ClientProtocol
      Parameters:
      src - existing file
      newLength - the target size
      clientName - name of the current client
      Returns:
      true if client does not need to wait for block recovery, false if client needs to wait for block recovery.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - truncate not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
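
      A sketch of how a caller might honor the return value, polling isFileClosed() while DataNode block recovery runs (the 1-second interval is illustrative, and InterruptedException handling is elided):

          boolean done = namenode.truncate(src, newLength, clientName);
          while (!done && !namenode.isFileClosed(src)) {
            Thread.sleep(1000);  // block recovery in progress; poll until the file closes
          }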
    • append

      public LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
      Description copied from interface: ClientProtocol
      Append to the end of the file.
      Specified by:
      append in interface ClientProtocol
      Parameters:
      src - path of the file being created.
      clientName - name of the current client.
      flag - indicates whether the data is appended to a new block.
      Returns:
      wrapper with information about the last partial block and file status if any
      Throws:
      org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. As usual, on the client side the exception will be wrapped into a RemoteException. Appending to an existing file is allowed only if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.
      FileNotFoundException - If file src is not found
      DSQuotaExceededException - If append violates disk space quota restriction
      SafeModeException - append not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred.
    • setReplication

      public boolean setReplication(String src, short replication) throws IOException
      Description copied from interface: ClientProtocol
      Set replication for an existing file.

      The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.

      Specified by:
      setReplication in interface ClientProtocol
      Parameters:
      src - file name
      replication - new replication
      Returns:
      true if successful; false if file does not exist or is a directory
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      DSQuotaExceededException - If replication violates disk space quota restriction
      FileNotFoundException - If file src is not found
      SafeModeException - not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • setPermission

      public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
      Description copied from interface: ClientProtocol
      Set permissions for an existing file/directory.
      Specified by:
      setPermission in interface ClientProtocol
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • setOwner

      public void setOwner(String src, String username, String groupname) throws IOException
      Description copied from interface: ClientProtocol
      Set Owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.
      Specified by:
      setOwner in interface ClientProtocol
      Parameters:
      src - file path
      username - If it is null, the original username remains unchanged.
      groupname - If it is null, the original groupname remains unchanged.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • abandonBlock

      public void abandonBlock(ExtendedBlock b, long fileId, String src, String holder) throws IOException
      Description copied from interface: ClientProtocol
      The client can give up on a block by calling abandonBlock(). The client can then either obtain a new block, or complete or abandon the file. Any partial writes to the block will be discarded.
      Specified by:
      abandonBlock in interface ClientProtocol
      Parameters:
      b - Block to abandon
      fileId - The id of the file where the block resides. Older clients will pass GRANDFATHER_INODE_ID here.
      src - The path of the file where the block resides.
      holder - Lease holder.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
    • addBlock

      public LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags) throws IOException
      Description copied from interface: ClientProtocol
      A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() allocates a new block and a set of DataNodes that the block data should be replicated to. addBlock() also commits the previous block by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes.
      Specified by:
      addBlock in interface ClientProtocol
      Parameters:
      src - the file being created
      clientName - the name of the client that adds the block
      previous - previous block
      excludeNodes - a list of nodes that should not be allocated for the current block
      fileId - the id uniquely identifying a file
      favoredNodes - the list of nodes where the client wants the blocks. Nodes are identified by either host name or address.
      addBlockFlags - flags to advise the behavior of allocating and placing a new block.
      Returns:
      LocatedBlock allocated block information.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      NotReplicatedYetException - previous blocks of the file are not replicated yet. Blocks cannot be added until replication completes.
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
    • getAdditionalDatanode

      public LocatedBlock getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
      Description copied from interface: ClientProtocol
      Get a datanode for an existing pipeline.
      Specified by:
      getAdditionalDatanode in interface ClientProtocol
      Parameters:
      src - the file being written
      fileId - the ID of the file being written
      blk - the block being written
      existings - the existing nodes in the pipeline
      excludes - the excluded nodes
      numAdditionalNodes - number of additional datanodes
      clientName - the name of the client
      Returns:
      the located block.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
    • complete

      public boolean complete(String src, String clientName, ExtendedBlock last, long fileId) throws IOException
      Description copied from interface: ClientProtocol
      The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. complete() also commits the last block of the file by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.
      Specified by:
      complete in interface ClientProtocol
      Parameters:
      src - the file being created
      clientName - the name of the client that adds the block
      last - the last block info
      fileId - the id uniquely identifying a file
      Returns:
      true if all file blocks are minimally replicated or false otherwise
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
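
      The description implies a retry idiom along these lines (the fixed 500 ms back-off is illustrative; a production client would bound the retries):

          boolean closed = namenode.complete(src, clientName, lastBlock, fileId);
          while (!closed) {
            Thread.sleep(500);  // wait for minimal replication, then ask again
            closed = namenode.complete(src, clientName, lastBlock, fileId);
          }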
    • reportBadBlocks

      public void reportBadBlocks(LocatedBlock[] blocks) throws IOException
      Description copied from interface: ClientProtocol
      The client wants to report corrupted blocks (blocks with specified locations on datanodes).
      Specified by:
      reportBadBlocks in interface ClientProtocol
      Parameters:
      blocks - Array of located blocks to report
      Throws:
      IOException
    • rename

      public boolean rename(String src, String dst) throws IOException
      Description copied from interface: ClientProtocol
      Rename an item in the file system namespace.
      Specified by:
      rename in interface ClientProtocol
      Parameters:
      src - existing file or directory name.
      dst - new name.
      Returns:
      true if successful, or false if the old name does not exist or if the new name already belongs to the namespace.
      Throws:
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - an I/O error occurred
    • rename2

      public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
      Description copied from interface: ClientProtocol
      Rename src to dst.
      • Fails if src is a file and dst is a directory.
      • Fails if src is a directory and dst is a file.
      • Fails if the parent of dst does not exist or is a file.

      Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.

      This implementation of rename is atomic.

      Specified by:
      rename2 in interface ClientProtocol
      Parameters:
      src - existing file or directory name.
      dst - new name.
      options - Rename options
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      DSQuotaExceededException - If rename violates disk space quota restriction
      org.apache.hadoop.fs.FileAlreadyExistsException - If dst already exists and options does not include Options.Rename.OVERWRITE.
      FileNotFoundException - If src does not exist
      NSQuotaExceededException - If rename violates namespace quota restriction
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of dst is not a directory
      SafeModeException - rename not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src or dst contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • concat

      public void concat(String trg, String[] srcs) throws IOException
      Description copied from interface: ClientProtocol
      Moves blocks from srcs to trg and deletes srcs.
      Specified by:
      concat in interface ClientProtocol
      Parameters:
      trg - existing file
      srcs - list of existing files (same block size, same replication)
      Throws:
      IOException - if some arguments are invalid
      org.apache.hadoop.fs.UnresolvedLinkException - if trg or srcs contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
    • delete

      public boolean delete(String src, boolean recursive) throws IOException
      Description copied from interface: ClientProtocol
      Delete the given file or directory from the file system.

      Same as delete, but provides a way to avoid accidentally deleting non-empty directories programmatically.

      Specified by:
      delete in interface ClientProtocol
      Parameters:
      src - existing name
      recursive - if true, deletes a non-empty directory recursively; otherwise throws an exception.
      Returns:
      true only if the existing file or directory was actually removed from the file system.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      FileNotFoundException - If file src is not found
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      org.apache.hadoop.fs.PathIsNotEmptyDirectoryException - if path is a non-empty directory and recursive is set to false
      IOException - If an I/O error occurred
    • mkdirs

      public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
      Description copied from interface: ClientProtocol
      Create a directory (or hierarchy of directories) with the given name and permission.
      Specified by:
      mkdirs in interface ClientProtocol
      Parameters:
      src - The path of the directory being created
      masked - The masked permission of the directory being created
      createParent - create missing parent directory if true
      Returns:
      True if the operation succeeds.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      org.apache.hadoop.fs.FileAlreadyExistsException - If src already exists
      FileNotFoundException - If parent of src does not exist and createParent is false
      NSQuotaExceededException - If file creation violates quota restriction
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory
      SafeModeException - create not allowed in safemode
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred. RunTimeExceptions:
    • getListing

      public DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
      Description copied from interface: ClientProtocol
      Get a partial listing of the indicated directory.
      Specified by:
      getListing in interface ClientProtocol
      Parameters:
      src - the directory name
      startAfter - the name to start listing after, encoded in Java UTF-8
      needLocation - if the FileStatus should contain block locations
      Returns:
      a partial listing starting after startAfter
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
      IOException - If an I/O error occurred
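
      A sketch of the paging idiom, resuming each batch from the last name returned by the previous one (HdfsFileStatus.EMPTY_NAME starts at the beginning; the null check for a missing directory is elided):

          byte[] startAfter = HdfsFileStatus.EMPTY_NAME;
          DirectoryListing batch;
          do {
            batch = namenode.getListing("/user/alice", startAfter, false);
            for (HdfsFileStatus st : batch.getPartialListing()) {
              System.out.println(st.getLocalName());
            }
            startAfter = batch.getLastName();  // cursor for the next batch
          } while (batch.hasMore());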
    • getBatchedListing

      public BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
      Description copied from interface: ClientProtocol
      Get a partial listing of the input directories.
      Specified by:
      getBatchedListing in interface ClientProtocol
      Parameters:
      srcs - the input directories
      startAfter - the name to start listing after, encoded in Java UTF-8
      needLocation - if the FileStatus should contain block locations
      Returns:
      a partial listing starting after startAfter. null if the input is empty
      Throws:
      IOException - if an I/O error occurred
    • renewLease

      public void renewLease(String clientName, List<String> namespaces) throws IOException
      Description copied from interface: ClientProtocol
      Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly.

      So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.

      Specified by:
      renewLease in interface ClientProtocol
      Parameters:
      clientName - name of the client renewing the lease
      namespaces - the list of namespaces that the renewLease RPC should be forwarded to by RBF. On the NN side this value should be null. On the RBF side, a null value forwards the RPC to all available namespaces; otherwise it is forwarded only to the specified namespaces.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      IOException - If an I/O error occurred
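
      A sketch of a periodic renewal loop; real clients delegate this to the LeaseRenewer thread, and the 30-second interval here is illustrative (imports from java.util.concurrent assumed):

          ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
          renewer.scheduleAtFixedRate(() -> {
            try {
              // Pass null namespaces when talking directly to a NameNode.
              namenode.renewLease(clientName, null);
            } catch (IOException e) {
              // A production client would retry and eventually abort open streams.
            }
          }, 30, 30, TimeUnit.SECONDS);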
    • recoverLease

      public boolean recoverLease(String src, String clientName) throws IOException
      Description copied from interface: ClientProtocol
      Start lease recovery. Lightweight NameNode operation to trigger lease recovery.
      Specified by:
      recoverLease in interface ClientProtocol
      Parameters:
      src - path of the file to start lease recovery
      clientName - name of the current client
      Returns:
      true if the file is already closed
      Throws:
      IOException
    • getStats

      public long[] getStats() throws IOException
      Description copied from interface: ClientProtocol
      Get an array of aggregated statistics combining blocks of both type BlockType.CONTIGUOUS and BlockType.STRIPED in the filesystem. Use public constants like ClientProtocol.GET_STATS_CAPACITY_IDX in place of actual numbers to index into the array.
      • [0] contains the total storage capacity of the system, in bytes.
      • [1] contains the total used space of the system, in bytes.
      • [2] contains the available storage of the system, in bytes.
      • [3] contains number of low redundancy blocks in the system.
      • [4] contains number of corrupt blocks.
      • [5] contains number of blocks without any good replicas left.
      • [6] contains number of blocks which have replication factor 1 and have lost the only replica.
      • [7] contains number of bytes that are at risk for deletion.
      • [8] contains number of pending deletion blocks.
      Specified by:
      getStats in interface ClientProtocol
      Throws:
      IOException
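
      For example, indexing the array with the documented constants rather than raw numbers:

          long[] stats = namenode.getStats();
          long capacity  = stats[ClientProtocol.GET_STATS_CAPACITY_IDX];   // [0]
          long used      = stats[ClientProtocol.GET_STATS_USED_IDX];       // [1]
          long remaining = stats[ClientProtocol.GET_STATS_REMAINING_IDX];  // [2]
          System.out.printf("capacity=%d used=%d remaining=%d%n", capacity, used, remaining);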
    • getReplicatedBlockStats

      public ReplicatedBlockStats getReplicatedBlockStats() throws IOException
      Description copied from interface: ClientProtocol
      Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
      Specified by:
      getReplicatedBlockStats in interface ClientProtocol
      Throws:
      IOException
    • getECBlockGroupStats

      public ECBlockGroupStats getECBlockGroupStats() throws IOException
      Description copied from interface: ClientProtocol
      Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
      Specified by:
      getECBlockGroupStats in interface ClientProtocol
      Throws:
      IOException
    • getDatanodeReport

      public DatanodeInfo[] getDatanodeReport(HdfsConstants.DatanodeReportType type) throws IOException
      Description copied from interface: ClientProtocol
      Get a report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode. Return live datanodes if type is LIVE; dead datanodes if type is DEAD; otherwise all datanodes if type is ALL.
      Specified by:
      getDatanodeReport in interface ClientProtocol
      Throws:
      IOException
    • getDatanodeStorageReport

      public DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type) throws IOException
      Description copied from interface: ClientProtocol
      Get a report on the current datanode storages.
      Specified by:
      getDatanodeStorageReport in interface ClientProtocol
      Throws:
      IOException
    • getPreferredBlockSize

      public long getPreferredBlockSize(String filename) throws IOException
      Description copied from interface: ClientProtocol
      Get the block size for the given file.
      Specified by:
      getPreferredBlockSize in interface ClientProtocol
      Parameters:
      filename - The name of the file
      Returns:
      The number of bytes in each block
      Throws:
      IOException
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
    • setSafeMode

      public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
      Description copied from interface: ClientProtocol
      Enter, leave or get safe mode.

      Safe mode is a name node state when it

      1. does not accept changes to name space (read-only), and
      2. does not replicate or delete blocks.

      Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER,false).

      At startup the name node accepts data node reports collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage of blocks, called the threshold, which satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.

      If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER,false) then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE,false). The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET,false).

      Configuration parameters:

      dfs.safemode.threshold.pct is the threshold parameter.
      dfs.safemode.extension is the safe mode extension parameter.
      dfs.namenode.replication.min is the minimal replication parameter.

      Special cases:

      The name node does not enter safe mode at startup if the threshold is set to 0 or if the name space is empty.
      If the threshold is set to 1 then all blocks need to have at least minimal replication.
      If the threshold value is greater than 1 then the name node will not be able to turn off safe mode automatically.
      Safe mode can always be turned off manually.
      Specified by:
      setSafeMode in interface ClientProtocol
      Parameters:
      action -
      • 0 leave safe mode;
      • 1 enter safe mode;
      • 2 get safe mode state.
      isChecked - If true then action will be done only in ActiveNN.
      Returns:
      • 0 if the safe mode is OFF or
      • 1 if the safe mode is ON.
      Throws:
      IOException
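
      A sketch of the query-then-leave pattern described above (isChecked=false acts on the NameNode that receives the call):

          // For SAFEMODE_GET the return value is the current safe mode state.
          boolean on = namenode.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET, false);
          if (on) {
            // Manually entered safe mode stays on until explicitly left.
            namenode.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE, false);
          }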
    • saveNamespace

      public boolean saveNamespace(long timeWindow, long txGap) throws IOException
      Description copied from interface: ClientProtocol
      Save namespace image.

      Saves the current namespace into the storage directories and resets the edit log. Requires superuser privilege and safe mode.

      Specified by:
      saveNamespace in interface ClientProtocol
      Parameters:
      timeWindow - NameNode does a checkpoint if the latest checkpoint was done more than the given time period (in seconds) ago.
      txGap - NameNode does a checkpoint if the gap between the latest checkpoint and the latest transaction ID is greater than this gap.
      Returns:
      whether an extra checkpoint has been done
      Throws:
      IOException - if image creation failed.
    • rollEdits

      public long rollEdits() throws IOException
      Description copied from interface: ClientProtocol
      Roll the edit log. Requires superuser privileges.
      Specified by:
      rollEdits in interface ClientProtocol
      Returns:
      the txid of the new segment
      Throws:
      org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated
      IOException - if log roll fails
    • restoreFailedStorage

      public boolean restoreFailedStorage(String arg) throws IOException
      Description copied from interface: ClientProtocol
      Enable/disable restore of failed storage.

      Sets a flag to enable restore of failed storage replicas.

      Specified by:
      restoreFailedStorage in interface ClientProtocol
      Throws:
      org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
      IOException
    • refreshNodes

      public void refreshNodes() throws IOException
      Description copied from interface: ClientProtocol
      Tells the namenode to reread the hosts and exclude files.
      Specified by:
      refreshNodes in interface ClientProtocol
      Throws:
      IOException
    • finalizeUpgrade

      public void finalizeUpgrade() throws IOException
      Description copied from interface: ClientProtocol
      Finalize previous upgrade. Remove file system state saved during the upgrade. The upgrade will become irreversible.
      Specified by:
      finalizeUpgrade in interface ClientProtocol
      Throws:
      IOException
    • upgradeStatus

      public boolean upgradeStatus() throws IOException
      Description copied from interface: ClientProtocol
      Get status of upgrade - finalized or not.
      Specified by:
      upgradeStatus in interface ClientProtocol
      Returns:
      true if upgrade is finalized or if no upgrade is in progress and false otherwise.
      Throws:
      IOException
    • rollingUpgrade

      public RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) throws IOException
      Description copied from interface: ClientProtocol
      Rolling upgrade operations.
      Specified by:
      rollingUpgrade in interface ClientProtocol
      Parameters:
      action - either query, prepare or finalize.
      Returns:
      rolling upgrade information. On query, if no upgrade is in progress, returns null.
      Throws:
      IOException
    • listCorruptFileBlocks

      public CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException
      Each call returns a subset of the corrupt files in the system. To obtain all corrupt files, call this method repeatedly and each time pass in the cookie returned from the previous call.
      Specified by:
      listCorruptFileBlocks in interface ClientProtocol
      Returns:
      CorruptFileBlocks, containing a list of corrupt files (with duplicates if there is more than one corrupt block in a file) and a cookie
      Throws:
      IOException
    • metaSave

      public void metaSave(String filename) throws IOException
      Description copied from interface: ClientProtocol
      Dumps namenode data structures into the specified file. If the file already exists, it is appended to.
      Specified by:
      metaSave in interface ClientProtocol
      Throws:
      IOException
    • getFileInfo

      public HdfsFileStatus getFileInfo(String src) throws IOException
      Description copied from interface: ClientProtocol
      Get the file info for a specific file or directory.
      Specified by:
      getFileInfo in interface ClientProtocol
      Parameters:
      src - The string representation of the path to the file
      Returns:
      object containing information regarding the file or null if file not found
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
      IOException - If an I/O error occurred
    • getLocatedFileInfo

      public HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
      Description copied from interface: ClientProtocol
      Get the file info for a specific file or directory with LocatedBlocks.
      Specified by:
      getLocatedFileInfo in interface ClientProtocol
      Parameters:
      src - The string representation of the path to the file
      needBlockToken - Generate block tokens for LocatedBlocks
      Returns:
      object containing information regarding the file or null if file not found
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      IOException - If an I/O error occurred
    • getFileLinkInfo

      public HdfsFileStatus getFileLinkInfo(String src) throws IOException
      Description copied from interface: ClientProtocol
      Get the file info for a specific file or directory. If the path refers to a symlink then the FileStatus of the symlink is returned.
      Specified by:
      getFileLinkInfo in interface ClientProtocol
      Parameters:
      src - The string representation of the path to the file
      Returns:
      object containing information regarding the file or null if file not found
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      IOException - If an I/O error occurred
    • getContentSummary

      public org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
      Description copied from interface: ClientProtocol
      Get ContentSummary rooted at the specified directory.
      Specified by:
      getContentSummary in interface ClientProtocol
      Parameters:
      path - The string representation of the path
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file path is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
      IOException - If an I/O error occurred
    • setQuota

      public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
      Description copied from interface: ClientProtocol
      Set the quota for a directory.
      Specified by:
      setQuota in interface ClientProtocol
      Parameters:
      path - The string representation of the path to the directory
      namespaceQuota - Limit on the number of names in the tree rooted at the directory
      storagespaceQuota - Limit on storage space occupied all the files under this directory.
      type - StorageType that the space quota is intended to be set on. It may be null for a traditional space/namespace quota. When type is not null, the storagespaceQuota parameter is for the specified type and namespaceQuota must be HdfsConstants.QUOTA_DONT_SET.

      The quota can have three types of values: (1) 0 or more will set the quota to that value, (2) HdfsConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) HdfsConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file path is not found
      QuotaExceededException - if the directory size is greater than the given quota
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
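
      An illustrative sketch of the three value kinds (the path and limits are hypothetical):

          // Set a namespace quota of 10,000 names and a 1 TB storage-space quota.
          namenode.setQuota("/projects/ml", 10_000L, 1L << 40, null);

          // Later: clear only the storage-space quota, leaving the namespace quota alone.
          namenode.setQuota("/projects/ml",
              HdfsConstants.QUOTA_DONT_SET,  // namespace quota unchanged
              HdfsConstants.QUOTA_RESET,     // storage-space quota reset
              null);                         // null type: traditional space quota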
    • fsync

      public void fsync(String src, long fileId, String client, long lastBlockLength) throws IOException
      Description copied from interface: ClientProtocol
      Write all metadata for this file into persistent storage. The file must be currently open for writing.
      Specified by:
      fsync in interface ClientProtocol
      Parameters:
      src - The string representation of the path
      fileId - The inode ID, or GRANDFATHER_INODE_ID if the client is too old to support fsync with inode IDs.
      client - The string representation of the client
      lastBlockLength - The length of the last block (under construction) to be reported to NameNode
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
      IOException - If an I/O error occurred
    • setTimes

      public void setTimes(String src, long mtime, long atime) throws IOException
      Description copied from interface: ClientProtocol
      Sets the modification and access time of the file to the specified time.
      Specified by:
      setTimes in interface ClientProtocol
      Parameters:
      src - The string representation of the path
      mtime - The number of milliseconds since Jan 1, 1970. Setting negative mtime means that modification time should not be set by this call.
      atime - The number of milliseconds since Jan 1, 1970. Setting negative atime means that access time should not be set by this call.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • createSymlink

      public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent) throws IOException
      Description copied from interface: ClientProtocol
      Create symlink to a file or directory.
      Specified by:
      createSymlink in interface ClientProtocol
      Parameters:
      target - The path of the destination that the link points to.
      link - The path of the link being created.
      dirPerm - permissions to use when creating parent directories
      createParent - if true, missing parent directories are created; if false, the parent must exist
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      org.apache.hadoop.fs.FileAlreadyExistsException - If file link already exists
      FileNotFoundException - If parent of link does not exist and createParent is false
      org.apache.hadoop.fs.ParentNotDirectoryException - If parent of link is not a directory.
      org.apache.hadoop.fs.UnresolvedLinkException - if link contains a symlink.
      SnapshotAccessControlException - if path is in RO snapshot
      IOException - If an I/O error occurred
    • getLinkTarget

      public String getLinkTarget(String path) throws IOException
      Description copied from interface: ClientProtocol
      Return the target of the given symlink. If there is an intermediate symlink in the path (i.e. a symlink leading up to the final path component) then the given path is returned with this symlink resolved.
      Specified by:
      getLinkTarget in interface ClientProtocol
      Parameters:
      path - The path with a link that needs resolution.
      Returns:
      The path after resolving the first symbolic link in the path.
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - If path does not exist
      IOException - If the given path does not refer to a symlink or an I/O error occurred
    • updateBlockForPipeline

      public LocatedBlock updateBlockForPipeline(ExtendedBlock block, String clientName) throws IOException
      Description copied from interface: ClientProtocol
      Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
      Specified by:
      updateBlockForPipeline in interface ClientProtocol
      Parameters:
      block - a block
      clientName - the name of the client
      Returns:
      a located block with a new generation stamp and an access token
      Throws:
      IOException - if any error occurs
    • updatePipeline

      public void updatePipeline(String clientName, ExtendedBlock oldBlock, ExtendedBlock newBlock, DatanodeID[] newNodes, String[] storageIDs) throws IOException
      Description copied from interface: ClientProtocol
      Update a pipeline for a block under construction.
      Specified by:
      updatePipeline in interface ClientProtocol
      Parameters:
      clientName - the name of the client
      oldBlock - the old block
      newBlock - the new block containing new generation stamp and length
      newNodes - datanodes in the pipeline
      storageIDs - storage IDs of the datanodes in the pipeline
      Throws:
      IOException - if any error occurs
    • getDelegationToken

      public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
      Description copied from interface: ClientProtocol
      Get a valid Delegation Token.
      Specified by:
      getDelegationToken in interface ClientProtocol
      Parameters:
      renewer - the designated renewer for the token
      Throws:
      IOException
    • renewDelegationToken

      public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
      Description copied from interface: ClientProtocol
      Renew an existing delegation token.
      Specified by:
      renewDelegationToken in interface ClientProtocol
      Parameters:
      token - delegation token obtained earlier
      Returns:
      the new expiration time
      Throws:
      IOException
    • cancelDelegationToken

      public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
      Description copied from interface: ClientProtocol
      Cancel an existing delegation token.
      Specified by:
      cancelDelegationToken in interface ClientProtocol
      Parameters:
      token - delegation token
      Throws:
      IOException
    • setBalancerBandwidth

      public void setBalancerBandwidth(long bandwidth) throws IOException
      Description copied from interface: ClientProtocol
      Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
      Specified by:
      setBalancerBandwidth in interface ClientProtocol
      Parameters:
      bandwidth - Balancer bandwidth in bytes per second for this datanode.
      Throws:
      IOException
    • isMethodSupported

      public boolean isMethodSupported(String methodName) throws IOException
      Specified by:
      isMethodSupported in interface org.apache.hadoop.ipc.ProtocolMetaInterface
      Throws:
      IOException
    • getDataEncryptionKey

      public DataEncryptionKey getDataEncryptionKey() throws IOException
      Specified by:
      getDataEncryptionKey in interface ClientProtocol
      Returns:
      encryption key so a client can encrypt data sent via the DataTransferProtocol to/from DataNodes.
      Throws:
      IOException
    • isFileClosed

      public boolean isFileClosed(String src) throws IOException
      Description copied from interface: ClientProtocol
      Get the close status of a file.
      Specified by:
      isFileClosed in interface ClientProtocol
      Parameters:
      src - The string representation of the path to the file
      Returns:
      true if the file is closed
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file src is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
      IOException - If an I/O error occurred
    • getUnderlyingProxyObject

      public Object getUnderlyingProxyObject()
      Specified by:
      getUnderlyingProxyObject in interface org.apache.hadoop.ipc.ProtocolTranslator
    • createSnapshot

      public String createSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Description copied from interface: ClientProtocol
      Create a snapshot.
      Specified by:
      createSnapshot in interface ClientProtocol
      Parameters:
      snapshotRoot - the path that is being snapshotted
      snapshotName - name of the snapshot created
      Returns:
      the snapshot path.
      Throws:
      IOException
    • deleteSnapshot

      public void deleteSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Description copied from interface: ClientProtocol
      Delete a specific snapshot of a snapshottable directory.
      Specified by:
      deleteSnapshot in interface ClientProtocol
      Parameters:
      snapshotRoot - The snapshottable directory
      snapshotName - Name of the snapshot for the snapshottable directory
      Throws:
      IOException
    • allowSnapshot

      public void allowSnapshot(String snapshotRoot) throws IOException
      Description copied from interface: ClientProtocol
      Allow snapshot on a directory.
      Specified by:
      allowSnapshot in interface ClientProtocol
      Parameters:
      snapshotRoot - the directory to be snapped
      Throws:
      IOException - on error
    • disallowSnapshot

      public void disallowSnapshot(String snapshotRoot) throws IOException
      Description copied from interface: ClientProtocol
      Disallow snapshot on a directory.
      Specified by:
      disallowSnapshot in interface ClientProtocol
      Parameters:
      snapshotRoot - the directory to disallow snapshot
      Throws:
      IOException - on error
    • renameSnapshot

      public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
      Description copied from interface: ClientProtocol
      Rename a snapshot.
      Specified by:
      renameSnapshot in interface ClientProtocol
      Parameters:
      snapshotRoot - the directory path where the snapshot was taken
      snapshotOldName - old name of the snapshot
      snapshotNewName - new name of the snapshot
      Throws:
      IOException
    • getSnapshottableDirListing

      public SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
      Description copied from interface: ClientProtocol
      Get the list of snapshottable directories that are owned by the current user. Return all the snapshottable directories if the current user is a super user.
      Specified by:
      getSnapshottableDirListing in interface ClientProtocol
      Returns:
      The list of all the current snapshottable directories.
      Throws:
      IOException - If an I/O error occurred.
    • getSnapshotListing

      public SnapshotStatus[] getSnapshotListing(String path) throws IOException
      Description copied from interface: ClientProtocol
      Get listing of all the snapshots for a snapshottable directory.
      Specified by:
      getSnapshotListing in interface ClientProtocol
      Returns:
      Information about all the snapshots for a snapshottable directory
      Throws:
      IOException - If an I/O error occurred
    • getSnapshotDiffReport

      public SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String fromSnapshot, String toSnapshot) throws IOException
      Description copied from interface: ClientProtocol
      Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
      Specified by:
      getSnapshotDiffReport in interface ClientProtocol
      Parameters:
      snapshotRoot - full path of the directory where snapshots are taken
      fromSnapshot - snapshot name of the from point. Null indicates the current tree.
      toSnapshot - snapshot name of the to point. Null indicates the current tree.
      Returns:
      The difference report represented as a SnapshotDiffReport.
      Throws:
      IOException - on error
    • getSnapshotDiffReportListing

      public SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String fromSnapshot, String toSnapshot, byte[] startPath, int index) throws IOException
      Description copied from interface: ClientProtocol
      Get the difference between two snapshots of a directory iteratively.
      Specified by:
      getSnapshotDiffReportListing in interface ClientProtocol
      Parameters:
      snapshotRoot - full path of the directory where snapshots are taken
      fromSnapshot - snapshot name of the from point. Null indicates the current tree.
      toSnapshot - snapshot name of the to point. Null indicates the current tree.
      startPath - path relative to the snapshottable root directory from which the snapshotdiff computation needs to start across multiple RPC calls
      index - index in the created or deleted list of the directory at which the snapshotdiff computation stopped during the last RPC call because the number of entries exceeded the snapshotdiff entry limit. -1 indicates that the snapshotdiff computation needs to start right from the startPath provided.
      Returns:
      The difference report represented as a SnapshotDiffReportListing.
      Throws:
      IOException - on error
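
      A sketch of the iteration contract described above, assuming DFSUtilClient.EMPTY_BYTES as the initial startPath; an empty lastPath together with a lastIndex of -1 signals the final batch:

          byte[] startPath = DFSUtilClient.EMPTY_BYTES;
          int index = -1;
          SnapshotDiffReportListing partial;
          do {
            partial = namenode.getSnapshotDiffReportListing(
                "/data", "s1", "s2", startPath, index);
            // ... accumulate entries from the partial report here ...
            startPath = partial.getLastPath();
            index = partial.getLastIndex();
          } while (!(startPath.length == 0 && index == -1));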
    • addCacheDirective

      public long addCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
      Description copied from interface: ClientProtocol
      Add a CacheDirective to the CacheManager.
      Specified by:
      addCacheDirective in interface ClientProtocol
      Parameters:
      directive - A CacheDirectiveInfo to be added
      flags - CacheFlags to use for this operation.
      Returns:
      The ID of the directive that was added.
      Throws:
      IOException - if the directive could not be added
    • modifyCacheDirective

      public void modifyCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
      Description copied from interface: ClientProtocol
      Modify a CacheDirective in the CacheManager.
      Specified by:
      modifyCacheDirective in interface ClientProtocol
      Parameters:
      directive - the CacheDirectiveInfo to modify
      flags - CacheFlags to use for this operation.
      Throws:
      IOException - if the directive could not be modified
    • removeCacheDirective

      public void removeCacheDirective(long id) throws IOException
      Description copied from interface: ClientProtocol
      Remove a CacheDirectiveInfo from the CacheManager.
      Specified by:
      removeCacheDirective in interface ClientProtocol
      Parameters:
      id - of a CacheDirectiveInfo
      Throws:
      IOException - if the cache directive could not be removed
    • listCacheDirectives

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId, CacheDirectiveInfo filter) throws IOException
      Description copied from interface: ClientProtocol
      List the set of cached paths of a cache pool. Incrementally fetches results from the server.
      Specified by:
      listCacheDirectives in interface ClientProtocol
      Parameters:
      prevId - The last listed entry ID, or -1 if this is the first call to listCacheDirectives.
      filter - Parameters to use to filter the list results, or null to display all directives visible to us.
      Returns:
      A batch of CacheDirectiveEntry objects.
      Throws:
      IOException
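
      The prevId cursor pairs with the size()/get()/hasMore() methods of BatchedEntries to page through all matching directives, roughly as in this sketch (hot-pool is a hypothetical pool name):

        // Page through every directive in pool "hot-pool".
        CacheDirectiveInfo filter =
            new CacheDirectiveInfo.Builder().setPool("hot-pool").build();
        long prevId = -1;   // -1: first call
        org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> batch;
        do {
          batch = namenode.listCacheDirectives(prevId, filter);
          for (int i = 0; i < batch.size(); i++) {
            CacheDirectiveEntry entry = batch.get(i);
            prevId = entry.getInfo().getId();   // advance the cursor
            // ... process entry ...
          }
        } while (batch.hasMore());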
    • addCachePool

      public void addCachePool(CachePoolInfo info) throws IOException
      Description copied from interface: ClientProtocol
      Add a new cache pool.
      Specified by:
      addCachePool in interface ClientProtocol
      Parameters:
      info - Description of the new cache pool
      Throws:
      IOException - If the request could not be completed.
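
      A CachePoolInfo carries the pool's name plus optional owner, group, mode, and limit, so creating a pool might look like this sketch (owner, group, and limit values are hypothetical):

        // Create a pool owned by "etl" with a 1 GiB aggregate byte limit.
        CachePoolInfo pool = new CachePoolInfo("hot-pool")
            .setOwnerName("etl")
            .setGroupName("analytics")
            .setMode(new FsPermission((short) 0755))
            .setLimit(1024L * 1024 * 1024);
        namenode.addCachePool(pool);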
    • modifyCachePool

      public void modifyCachePool(CachePoolInfo req) throws IOException
      Description copied from interface: ClientProtocol
      Modify an existing cache pool.
      Specified by:
      modifyCachePool in interface ClientProtocol
      Parameters:
      req - The request to modify a cache pool.
      Throws:
      IOException - If the request could not be completed.
    • removeCachePool

      public void removeCachePool(String cachePoolName) throws IOException
      Description copied from interface: ClientProtocol
      Remove a cache pool.
      Specified by:
      removeCachePool in interface ClientProtocol
      Parameters:
      cachePoolName - name of the cache pool to remove.
      Throws:
      IOException - if the cache pool did not exist, or could not be removed.
    • listCachePools

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevKey) throws IOException
      Description copied from interface: ClientProtocol
      List the set of cache pools. Incrementally fetches results from the server.
      Specified by:
      listCachePools in interface ClientProtocol
      Parameters:
      prevKey - name of the last pool listed, or the empty string if this is the first invocation of listCachePools
      Returns:
      A batch of CachePoolEntry objects.
      Throws:
      IOException
    • modifyAclEntries

      public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Description copied from interface: ClientProtocol
      Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)
      Specified by:
      modifyAclEntries in interface ClientProtocol
      Throws:
      IOException
    • removeAclEntries

      public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Description copied from interface: ClientProtocol
      Removes ACL entries from files and directories. Other ACL entries are retained.
      Specified by:
      removeAclEntries in interface ClientProtocol
      Throws:
      IOException
    • removeDefaultAcl

      public void removeDefaultAcl(String src) throws IOException
      Description copied from interface: ClientProtocol
      Removes all default ACL entries from files and directories.
      Specified by:
      removeDefaultAcl in interface ClientProtocol
      Throws:
      IOException
    • removeAcl

      public void removeAcl(String src) throws IOException
      Description copied from interface: ClientProtocol
      Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.
      Specified by:
      removeAcl in interface ClientProtocol
      Throws:
      IOException
    • setAcl

      public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Description copied from interface: ClientProtocol
      Fully replaces ACL of files and directories, discarding all existing entries.
      Specified by:
      setAcl in interface ClientProtocol
      Throws:
      IOException
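
      Because setAcl discards existing entries, the aclSpec must carry the base user, group, and other entries alongside any named entries (the mask is derived). A sketch, with a hypothetical path and user name:

        // Replace the ACL: owner rwx, named user "alice" r-x, group r-x, other ---.
        List<AclEntry> acl = Arrays.asList(
            new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER).setPermission(FsAction.ALL).build(),
            new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER).setName("alice")
                .setPermission(FsAction.READ_EXECUTE).build(),
            new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.GROUP).setPermission(FsAction.READ_EXECUTE).build(),
            new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.OTHER).setPermission(FsAction.NONE).build());
        namenode.setAcl("/projects/shared", acl);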
    • getAclStatus

      public org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src) throws IOException
      Description copied from interface: ClientProtocol
      Gets the ACLs of files and directories.
      Specified by:
      getAclStatus in interface ClientProtocol
      Throws:
      IOException
    • createEncryptionZone

      public void createEncryptionZone(String src, String keyName) throws IOException
      Description copied from interface: ClientProtocol
      Create an encryption zone.
      Specified by:
      createEncryptionZone in interface ClientProtocol
      Throws:
      IOException
    • getEZForPath

      public EncryptionZone getEZForPath(String src) throws IOException
      Description copied from interface: ClientProtocol
      Get the encryption zone for a path.
      Specified by:
      getEZForPath in interface ClientProtocol
      Throws:
      IOException
    • listEncryptionZones

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long id) throws IOException
      Description copied from interface: ClientProtocol
      Used to implement cursor-based batched listing of EncryptionZones.
      Specified by:
      listEncryptionZones in interface ClientProtocol
      Parameters:
      id - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
      Returns:
      Batch of encryption zones.
      Throws:
      IOException
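
      Cursor-based paging over zones follows the same BatchedEntries pattern as the cache listings above (a sketch):

        // Enumerate all encryption zones; the zone id is the cursor.
        long lastId = -1;   // negative: no previous batch
        org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> zones;
        do {
          zones = namenode.listEncryptionZones(lastId);
          for (int i = 0; i < zones.size(); i++) {
            EncryptionZone zone = zones.get(i);
            lastId = zone.getId();   // advance the cursor
            System.out.println(zone.getPath() + " -> " + zone.getKeyName());
          }
        } while (zones.hasMore());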
    • setErasureCodingPolicy

      public void setErasureCodingPolicy(String src, String ecPolicyName) throws IOException
      Description copied from interface: ClientProtocol
      Set an erasure coding policy on a specified path.
      Specified by:
      setErasureCodingPolicy in interface ClientProtocol
      Parameters:
      src - The path to set policy on.
      ecPolicyName - The erasure coding policy name.
      Throws:
      IOException
    • unsetErasureCodingPolicy

      public void unsetErasureCodingPolicy(String src) throws IOException
      Description copied from interface: ClientProtocol
      Unset erasure coding policy from a specified path.
      Specified by:
      unsetErasureCodingPolicy in interface ClientProtocol
      Parameters:
      src - The path to unset policy.
      Throws:
      IOException
    • getECTopologyResultForPolicies

      public ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
      Description copied from interface: ClientProtocol
Verifies whether the given policies are supported in the current cluster setup. If no policy is specified, all enabled policies are checked.
      Specified by:
      getECTopologyResultForPolicies in interface ClientProtocol
      Parameters:
      policyNames - name of policies.
      Returns:
the verification result for the given policies in the current cluster setup
      Throws:
      IOException
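
      A typical use is to verify a policy against the topology before enabling it, as in this sketch (the ECTopologyVerifierResult accessors isSupported() and getResultMessage() should be verified against your Hadoop version):

        // Only enable RS-6-3-1024k if the cluster has enough racks and datanodes.
        ECTopologyVerifierResult result =
            namenode.getECTopologyResultForPolicies("RS-6-3-1024k");
        if (result.isSupported()) {
          namenode.enableErasureCodingPolicy("RS-6-3-1024k");
        } else {
          System.err.println(result.getResultMessage());
        }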
    • reencryptEncryptionZone

      public void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action) throws IOException
      Description copied from interface: ClientProtocol
      Used to implement re-encryption of encryption zones.
      Specified by:
      reencryptEncryptionZone in interface ClientProtocol
      Parameters:
      zone - the encryption zone to re-encrypt.
      action - the action for the re-encryption.
      Throws:
      IOException
    • listReencryptionStatus

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(long id) throws IOException
      Description copied from interface: ClientProtocol
Used to implement cursor-based batched listing of ZoneReencryptionStatus entries.
      Specified by:
      listReencryptionStatus in interface ClientProtocol
      Parameters:
      id - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
      Returns:
Batch of zone re-encryption statuses.
      Throws:
      IOException
    • setXAttr

      public void setXAttr(String src, XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
      Description copied from interface: ClientProtocol
Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Specified by:
      setXAttr in interface ClientProtocol
      Parameters:
      src - file or directory
      xAttr - XAttr to set
      flag - set flag
      Throws:
      IOException
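
      At this protocol level the namespace travels as an enum on the XAttr rather than as a name prefix, so setting user.checksum looks like the following sketch (the attribute name and value are hypothetical):

        // Set user.checksum, creating it if absent and replacing it if present.
        XAttr attr = new XAttr.Builder()
            .setNameSpace(XAttr.NameSpace.USER)
            .setName("checksum")   // the raw name, without the "user." prefix
            .setValue("9a0364b9".getBytes(StandardCharsets.UTF_8))
            .build();
        namenode.setXAttr("/data/file.bin", attr,
            EnumSet.of(XAttrSetFlag.CREATE, XAttrSetFlag.REPLACE));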
    • getXAttrs

      public List<XAttr> getXAttrs(String src, List<XAttr> xAttrs) throws IOException
      Description copied from interface: ClientProtocol
      Get xattrs of a file or directory. Values in xAttrs parameter are ignored. If xAttrs is null or empty, this is the same as getting all xattrs of the file or directory. Only those xattrs for which the logged-in user has permissions to view are returned.

      Refer to the HDFS extended attributes user documentation for details.

      Specified by:
      getXAttrs in interface ClientProtocol
      Parameters:
      src - file or directory
      xAttrs - xAttrs to get
      Returns:
      XAttr list
      Throws:
      IOException
    • listXAttrs

      public List<XAttr> listXAttrs(String src) throws IOException
      Description copied from interface: ClientProtocol
List the xattr names for a file or directory. Only the xattr names that the logged-in user has permission to access will be returned.

      Refer to the HDFS extended attributes user documentation for details.

      Specified by:
      listXAttrs in interface ClientProtocol
      Parameters:
      src - file or directory
      Returns:
      XAttr list
      Throws:
      IOException
    • removeXAttr

      public void removeXAttr(String src, XAttr xAttr) throws IOException
      Description copied from interface: ClientProtocol
Remove an xattr of a file or directory. The value in the xAttr parameter is ignored. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Specified by:
      removeXAttr in interface ClientProtocol
      Parameters:
      src - file or directory
      xAttr - XAttr to remove
      Throws:
      IOException
    • checkAccess

      public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
      Description copied from interface: ClientProtocol
      Checks if the user can access a path. The mode specifies which access checks to perform. If the requested permissions are granted, then the method returns normally. If access is denied, then the method throws an AccessControlException. In general, applications should avoid using this method, due to the risk of time-of-check/time-of-use race conditions. The permissions on a file may change immediately after the access call returns.
      Specified by:
      checkAccess in interface ClientProtocol
      Parameters:
      path - Path to check
      mode - type of access to check
      Throws:
      org.apache.hadoop.security.AccessControlException - if access is denied
      FileNotFoundException - if the path does not exist
      IOException - see specific implementation
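
      Given the time-of-check/time-of-use caveat, the call is best treated as an advisory probe, as in this sketch:

        // Advisory probe: does the caller currently have read+write access?
        try {
          namenode.checkAccess("/data/file.bin", FsAction.READ_WRITE);
          // granted now, though permissions may change immediately afterwards
        } catch (org.apache.hadoop.security.AccessControlException e) {
          // denied
        }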
    • setStoragePolicy

      public void setStoragePolicy(String src, String policyName) throws IOException
      Description copied from interface: ClientProtocol
      Set the storage policy for a file/directory.
      Specified by:
      setStoragePolicy in interface ClientProtocol
      Parameters:
      src - Path of an existing file/directory.
      policyName - The name of the storage policy
      Throws:
      SnapshotAccessControlException - If access is denied
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      FileNotFoundException - If file/dir src is not found
      QuotaExceededException - If changes violate the quota restriction
      IOException
    • unsetStoragePolicy

      public void unsetStoragePolicy(String src) throws IOException
      Description copied from interface: ClientProtocol
      Unset the storage policy set for a given file or directory.
      Specified by:
      unsetStoragePolicy in interface ClientProtocol
      Parameters:
      src - Path of an existing file/directory.
      Throws:
      SnapshotAccessControlException - If access is denied
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      FileNotFoundException - If file/dir src is not found
      QuotaExceededException - If changes violate the quota restriction
      IOException
    • getStoragePolicy

      public BlockStoragePolicy getStoragePolicy(String path) throws IOException
      Description copied from interface: ClientProtocol
      Get the storage policy for a file/directory.
      Specified by:
      getStoragePolicy in interface ClientProtocol
      Parameters:
      path - Path of an existing file/directory.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
      FileNotFoundException - If file/dir src is not found
      IOException
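
      Setting a policy and reading it back pair naturally, for example with the built-in COLD policy (a sketch; the path is hypothetical):

        // Pin archived data to the COLD policy, then confirm it took effect.
        namenode.setStoragePolicy("/archive/2023", "COLD");
        BlockStoragePolicy policy = namenode.getStoragePolicy("/archive/2023");
        System.out.println(policy.getName());   // "COLD"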
    • getStoragePolicies

      public BlockStoragePolicy[] getStoragePolicies() throws IOException
      Description copied from interface: ClientProtocol
      Get all the available block storage policies.
      Specified by:
      getStoragePolicies in interface ClientProtocol
      Returns:
All the block storage policies currently in use.
      Throws:
      IOException
    • getCurrentEditLogTxid

      public long getCurrentEditLogTxid() throws IOException
      Description copied from interface: ClientProtocol
      Get the highest txid the NameNode knows has been written to the edit log, or -1 if the NameNode's edit log is not yet open for write. Used as the starting point for the inotify event stream.
      Specified by:
      getCurrentEditLogTxid in interface ClientProtocol
      Throws:
      IOException
    • getEditsFromTxid

      public EventBatchList getEditsFromTxid(long txid) throws IOException
      Description copied from interface: ClientProtocol
      Get an ordered list of batches of events corresponding to the edit log transactions for txids equal to or greater than txid.
      Specified by:
      getEditsFromTxid in interface ClientProtocol
      Throws:
      IOException
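
      Together these two calls form the polling loop behind the inotify event stream; a sketch follows, with the EventBatchList and EventBatch accessors (getBatches(), getTxid(), getEvents()) to be verified against your Hadoop version:

        // Tail the edit log from the current transaction id.
        long txid = namenode.getCurrentEditLogTxid();
        EventBatchList list = namenode.getEditsFromTxid(txid);
        for (org.apache.hadoop.hdfs.inotify.EventBatch batch : list.getBatches()) {
          txid = batch.getTxid();   // the next poll resumes from here
          for (org.apache.hadoop.hdfs.inotify.Event event : batch.getEvents()) {
            // ... react to CREATE / CLOSE / RENAME / ... events
          }
        }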
    • addErasureCodingPolicies

      public AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException
      Description copied from interface: ClientProtocol
Add erasure coding policies to HDFS. For each input policy, schema and cellSize are required, while name and id are ignored: they are automatically created and assigned by the Namenode once the policy is successfully added, and are returned in the response.
      Specified by:
      addErasureCodingPolicies in interface ClientProtocol
      Parameters:
      policies - The user defined ec policy list to add.
      Returns:
The response list for the add operations.
      Throws:
      IOException
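
      Registering a user-defined policy therefore means supplying only a schema and cell size. This sketch assumes the ECSchema(codec, dataUnits, parityUnits) and ErasureCodingPolicy(schema, cellSize) constructors and the isSucceed()/getErrorMsg() accessors, all of which should be verified against your Hadoop version:

        // Ask the Namenode to register an RS(6,3) policy with 512 KiB cells;
        // the Namenode assigns the name and id.
        ECSchema schema = new ECSchema("rs", 6, 3);
        ErasureCodingPolicy policy = new ErasureCodingPolicy(schema, 512 * 1024);
        AddErasureCodingPolicyResponse[] responses =
            namenode.addErasureCodingPolicies(new ErasureCodingPolicy[] {policy});
        for (AddErasureCodingPolicyResponse response : responses) {
          if (!response.isSucceed()) {
            System.err.println(response.getErrorMsg());
          }
        }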
    • removeErasureCodingPolicy

      public void removeErasureCodingPolicy(String ecPolicyName) throws IOException
      Description copied from interface: ClientProtocol
      Remove erasure coding policy.
      Specified by:
      removeErasureCodingPolicy in interface ClientProtocol
      Parameters:
      ecPolicyName - The name of the policy to be removed.
      Throws:
      IOException
    • enableErasureCodingPolicy

      public void enableErasureCodingPolicy(String ecPolicyName) throws IOException
      Description copied from interface: ClientProtocol
      Enable erasure coding policy.
      Specified by:
      enableErasureCodingPolicy in interface ClientProtocol
      Parameters:
      ecPolicyName - The name of the policy to be enabled.
      Throws:
      IOException
    • disableErasureCodingPolicy

      public void disableErasureCodingPolicy(String ecPolicyName) throws IOException
      Description copied from interface: ClientProtocol
      Disable erasure coding policy.
      Specified by:
      disableErasureCodingPolicy in interface ClientProtocol
      Parameters:
      ecPolicyName - The name of the policy to be disabled.
      Throws:
      IOException
    • getErasureCodingPolicies

      public ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
      Description copied from interface: ClientProtocol
      Get the erasure coding policies loaded in Namenode, excluding REPLICATION policy.
      Specified by:
      getErasureCodingPolicies in interface ClientProtocol
      Throws:
      IOException
    • getErasureCodingCodecs

      public Map<String,String> getErasureCodingCodecs() throws IOException
      Description copied from interface: ClientProtocol
      Get the erasure coding codecs loaded in Namenode.
      Specified by:
      getErasureCodingCodecs in interface ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicy

      public ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException
      Description copied from interface: ClientProtocol
Get the information about the EC policy for the path. Null is returned if the directory or file has the REPLICATION policy.
      Specified by:
      getErasureCodingPolicy in interface ClientProtocol
      Parameters:
      src - path to get the info for
      Throws:
      IOException
    • getQuotaUsage

      public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path) throws IOException
      Description copied from interface: ClientProtocol
      Get QuotaUsage rooted at the specified directory. Note: due to HDFS-6763, standby/observer doesn't keep up-to-date info about quota usage, and thus even though this is ReadOnly, it can only be directed to the active namenode.
      Specified by:
      getQuotaUsage in interface ClientProtocol
      Parameters:
      path - The string representation of the path
      Throws:
      org.apache.hadoop.security.AccessControlException - permission denied
      FileNotFoundException - file path is not found
      org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
      IOException - If an I/O error occurred
    • listOpenFiles

      @Deprecated public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId) throws IOException
      Deprecated.
      Description copied from interface: ClientProtocol
List open files in the system in batches. The INode id is the cursor, and the open files returned in a batch will have INode ids greater than the cursor INode id. Open files can only be requested by the super user, and the list across batches is not atomic.
      Specified by:
      listOpenFiles in interface ClientProtocol
      Parameters:
      prevId - the cursor INode id.
      Throws:
      IOException
    • listOpenFiles

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId, EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
      Description copied from interface: ClientProtocol
List open files in the system in batches. The INode id is the cursor, and the open files returned in a batch will have INode ids greater than the cursor INode id. Open files can only be requested by the super user, and the list across batches is not atomic.
      Specified by:
      listOpenFiles in interface ClientProtocol
      Parameters:
      prevId - the cursor INode id.
      openFilesTypes - types to filter the open files.
      path - path to filter the open files.
      Throws:
      IOException
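
      Paging works like the other batched listings, with the INode id as the cursor, as in this sketch (the OpenFileEntry accessors getId(), getFilePath(), and getClientName() are assumptions to verify against your Hadoop version):

        // List every file open for write under /data (super user only).
        long prevId = 0;   // cursor: INode id of the last entry seen
        org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> batch;
        do {
          batch = namenode.listOpenFiles(prevId,
              EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES), "/data");
          for (int i = 0; i < batch.size(); i++) {
            OpenFileEntry entry = batch.get(i);
            prevId = entry.getId();   // advance the cursor
            System.out.println(entry.getFilePath() + " held by " + entry.getClientName());
          }
        } while (batch.hasMore());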
    • msync

      public void msync() throws IOException
      Description copied from interface: ClientProtocol
Called by a client to wait until the server has reached the state id of the client. The client and server state ids are given by the client-side and server-side alignment contexts, respectively. This can be a blocking call.
      Specified by:
      msync in interface ClientProtocol
      Throws:
      IOException
    • satisfyStoragePolicy

      public void satisfyStoragePolicy(String src) throws IOException
      Description copied from interface: ClientProtocol
      Satisfy the storage policy for a file/directory.
      Specified by:
      satisfyStoragePolicy in interface ClientProtocol
      Parameters:
      src - Path of an existing file/directory.
      Throws:
      org.apache.hadoop.security.AccessControlException - If access is denied.
      org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
      FileNotFoundException - If file/dir src is not found.
SafeModeException - the operation is not allowed in safemode.
      IOException
    • getSlowDatanodeReport

      public DatanodeInfo[] getSlowDatanodeReport() throws IOException
      Description copied from interface: ClientProtocol
Get a report on all of the slow Datanodes. Slow-running datanodes are identified by the outlier detection algorithm, if slow peer tracking is enabled for the DFS cluster.
      Specified by:
      getSlowDatanodeReport in interface ClientProtocol
      Returns:
      Datanode report for slow running datanodes.
      Throws:
      IOException - If an I/O error occurs.
    • getHAServiceState

      public org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() throws IOException
      Description copied from interface: ClientProtocol
      Get HA service state of the server.
      Specified by:
      getHAServiceState in interface ClientProtocol
      Returns:
      server HA state
      Throws:
      IOException
    • getEnclosingRoot

      public org.apache.hadoop.fs.Path getEnclosingRoot(String filename) throws IOException
      Description copied from interface: ClientProtocol
      Get the enclosing root for a path.
      Specified by:
      getEnclosingRoot in interface ClientProtocol
      Throws:
      IOException