java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.hdfs.server.diskbalancer.command.Command
All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable
Direct Known Subclasses:
CancelCommand, ExecuteCommand, HelpCommand, PlanCommand, QueryCommand, ReportCommand

public abstract class Command extends org.apache.hadoop.conf.Configured implements Closeable
Common base class for command handling.
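The lifecycle described by this class (construct with a configuration, call execute, then close to release resources) can be sketched with a minimal, self-contained analog. All names below (MiniCommand, ReportLikeCommand, CommandLifecycleDemo) are hypothetical illustrations, not part of the Hadoop API:

```java
// Illustrative analog of the Command lifecycle: construct with configuration,
// execute(), then close() to clean up. Not Hadoop code; names are invented.
import java.io.Closeable;
import java.io.IOException;

abstract class MiniCommand implements Closeable {
    private final String conf;
    protected MiniCommand(String conf) { this.conf = conf; }
    protected String getConf() { return conf; }

    /** Executes the client calls (mirrors Command#execute). */
    public abstract String execute(String[] args) throws Exception;

    /** Gets extended help for this command (mirrors Command#printHelp). */
    public abstract void printHelp();

    /** Cleans any resources held by this command (mirrors Command#close). */
    @Override
    public void close() throws IOException { /* e.g. delete a marker file */ }
}

public class CommandLifecycleDemo {
    static class ReportLikeCommand extends MiniCommand {
        ReportLikeCommand(String conf) { super(conf); }
        @Override public String execute(String[] args) { return "report for " + args[0]; }
        @Override public void printHelp() { System.out.println("usage: report <node>"); }
    }

    public static void main(String[] args) throws Exception {
        // try-with-resources works because Command implements Closeable,
        // which is what lets close() run after every command invocation.
        try (MiniCommand cmd = new ReportLikeCommand("hdfs-conf")) {
            System.out.println(cmd.execute(new String[] {"datanode-1"}));
        }
    }
}
```

Because `Command` implements `Closeable`, concrete subclasses such as `PlanCommand` or `ReportCommand` can be driven inside try-with-resources the same way, guaranteeing cleanup between successive commands.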
  • Constructor Details

    • Command

      public Command(org.apache.hadoop.conf.Configuration conf)
      Constructs a command.
    • Command

      public Command(org.apache.hadoop.conf.Configuration conf, PrintStream ps)
      Constructs a command.
  • Method Details

    • close

      public void close() throws IOException
      Cleans any resources held by this command.

      The main goal is to delete the ID file created by NameNodeConnector#checkAndMarkRunning; otherwise it is not possible to run multiple commands in a row.

      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
      Throws:
      IOException
    • execute

      public abstract void execute(org.apache.commons.cli.CommandLine cmd) throws Exception
      Executes the client calls.
      Parameters:
      cmd - parsed CommandLine
      Throws:
      Exception
    • printHelp

      public abstract void printHelp()
      Gets extended help for this command.
    • readClusterInfo

      protected DiskBalancerCluster readClusterInfo(org.apache.commons.cli.CommandLine cmd) throws Exception
      Processes the URI and returns the cluster with its nodes set up. This is used by all commands.
      Parameters:
      cmd - parsed CommandLine
      Returns:
      DiskBalancerCluster
      Throws:
      Exception
    • setOutputPath

      protected void setOutputPath(String path) throws IOException
      Sets up the output path.
      Parameters:
      path - output path, or null to use the default path.
      Throws:
      IOException
    • setNodesToProcess

      protected void setNodesToProcess(DiskBalancerDataNode node)
      Sets a single node to process.
      Parameters:
      node - node to process.
    • setNodesToProcess

      protected void setNodesToProcess(List<DiskBalancerDataNode> nodes)
      Sets the list of Nodes to process.
      Parameters:
      nodes - list of nodes to process.
    • getNodeList

      protected Set<String> getNodeList(String listArg) throws IOException
      Gets the node set from a file or a string.
      Parameters:
      listArg - a file URL or a comma-separated list of node names.
      Returns:
      Set of node names
      Throws:
      IOException
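The comma-separated branch of getNodeList can be sketched in plain Java: split the argument on commas, trim each entry, and collect the non-empty names into a set. The helper parseNodeList below is a hypothetical illustration, not Hadoop's implementation:

```java
// Hedged sketch of parsing a comma-separated node list into a Set of names.
// parseNodeList is an invented helper, not the actual Hadoop method.
import java.util.LinkedHashSet;
import java.util.Set;

public class NodeListSketch {
    static Set<String> parseNodeList(String listArg) {
        Set<String> nodes = new LinkedHashSet<>(); // keeps input order, drops duplicates
        if (listArg == null || listArg.isEmpty()) {
            return nodes;
        }
        for (String name : listArg.split(",")) {
            String trimmed = name.trim();
            if (!trimmed.isEmpty()) {              // skip empty entries from "a,,b"
                nodes.add(trimmed);
            }
        }
        return nodes;
    }

    public static void main(String[] args) {
        System.out.println(parseNodeList("dn1, dn2 ,dn1,,dn3")); // [dn1, dn2, dn3]
    }
}
```

The real method additionally accepts a file URL and reads the node names from that file; only the in-line list case is shown here.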
    • getNodes

      protected List<DiskBalancerDataNode> getNodes(String listArg) throws IOException
      Returns a list of DiskBalancer nodes from the cluster, or null if none are found.
      Parameters:
      listArg - a file URL or a comma-separated list of node names.
      Returns:
      List of DiskBalancer Node
      Throws:
      IOException
    • verifyCommandOptions

      protected void verifyCommandOptions(String commandName, org.apache.commons.cli.CommandLine cmd)
      Verifies that the command-line options are sane.
      Parameters:
      commandName - name of the command
      cmd - parsed CommandLine
    • getClusterURI

      public URI getClusterURI()
      Gets the cluster URI.
      Returns:
      cluster URI
    • setClusterURI

      public void setClusterURI(URI clusterURI)
      Sets the cluster URI.
      Parameters:
      clusterURI - cluster URI
    • getDataNodeProxy

      public org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol getDataNodeProxy(String datanode) throws IOException
      Copied from DFSAdmin.java. Creates a connection to a DataNode.
      Parameters:
      datanode - the DataNode to connect to.
      Returns:
      ClientDatanodeProtocol
      Throws:
      IOException
    • create

      protected org.apache.hadoop.fs.FSDataOutputStream create(String fileName) throws IOException
      Returns an output stream for a file created in the cluster.
      Parameters:
      fileName - name of the file to create.
      Returns:
      FSDataOutputStream.
      Throws:
      IOException
    • open

      protected org.apache.hadoop.fs.FSDataInputStream open(String fileName) throws IOException
      Returns an InputStream for reading data from the given file.
      Parameters:
      fileName - name of the file to open.
      Throws:
      IOException
    • getOutputPath

      protected org.apache.hadoop.fs.Path getOutputPath()
      Returns the output path where the plan and snapshot get written.
      Returns:
      Path
    • addValidCommandParameters

      protected void addValidCommandParameters(String key, String desc)
      Adds a valid parameter to the valid-arguments table.
      Parameters:
      key - parameter name
      desc - description of the parameter
    • getDefaultTop

      protected int getDefaultTop()
      Returns the default top number of nodes.
      Returns:
      default top number of nodes.
    • recordOutput

      protected void recordOutput(org.apache.commons.text.TextStringBuilder result, String outputLine)
      Writes an output line to the log and to the string buffer.
    • parseTopNodes

      protected int parseTopNodes(org.apache.commons.cli.CommandLine cmd, org.apache.commons.text.TextStringBuilder result) throws IllegalArgumentException
      Parses the top number of nodes to be processed.
      Returns:
      top number of nodes to be processed.
      Throws:
      IllegalArgumentException
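The parse-with-default behavior implied by parseTopNodes and getDefaultTop can be sketched as follows. The class, method, and default value below are assumptions for illustration, not Hadoop's actual code:

```java
// Hedged sketch of parseTopNodes-style parsing: read an optional "top" value,
// fall back to a default, and reject bad input with IllegalArgumentException.
// TopNodesSketch and DEFAULT_TOP are invented for this example.
public class TopNodesSketch {
    static final int DEFAULT_TOP = 100; // assumed default; the real value comes from getDefaultTop()

    static int parseTopNodes(String topValue) {
        if (topValue == null || topValue.isEmpty()) {
            return DEFAULT_TOP;          // option absent: use the default
        }
        final int top;
        try {
            top = Integer.parseInt(topValue.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("top must be a number: " + topValue, e);
        }
        if (top <= 0) {
            throw new IllegalArgumentException("top must be positive: " + top);
        }
        return top;
    }

    public static void main(String[] args) {
        System.out.println(parseTopNodes("5"));   // 5
        System.out.println(parseTopNodes(null));  // 100
    }
}
```

The real method reads the option from the parsed org.apache.commons.cli.CommandLine and records the result in the TextStringBuilder; only the numeric validation is shown here.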
    • populatePathNames

      protected void populatePathNames(DiskBalancerDataNode node) throws IOException
      Reads the physical paths of the disks being balanced. This information makes the disk balancer output human friendly; it is not used in balancing.
      Parameters:
      node - disk balancer node.
      Throws:
      IOException
    • setTopNodes

      public void setTopNodes(int topNodes)
      Sets the top number of nodes to be processed.
    • getTopNodes

      public int getTopNodes()
      Gets the top number of nodes to be processed.
      Returns:
      top number of nodes to be processed.
    • setCluster

      @VisibleForTesting public void setCluster(DiskBalancerCluster newCluster)
      Sets the DiskBalancer cluster.