
Deprecated API

Contents

  • Interfaces
  • Classes
  • Enum Classes
  • Exceptions
  • Fields
  • Methods
  • Constructors
  • Enum Constants
  • Deprecated Interfaces
    Interface
    Description
    org.apache.hadoop.fs.impl.FunctionsRaisingIOE.BiFunctionRaisingIOE
    use org.apache.hadoop.util.functional.BiFunctionRaisingIOE
    org.apache.hadoop.fs.impl.FunctionsRaisingIOE.CallableRaisingIOE
    use org.apache.hadoop.util.functional.CallableRaisingIOE
    org.apache.hadoop.fs.impl.FunctionsRaisingIOE.FunctionRaisingIOE
    use org.apache.hadoop.util.functional.FunctionRaisingIOE
    org.apache.hadoop.io.Closeable
    use java.io.Closeable
    org.apache.hadoop.ipc.ProtobufRpcEngineCallback
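The org.apache.hadoop.io.Closeable migration is mechanical: switch the implements clause to java.io.Closeable and callers gain try-with-resources for free. A minimal JDK-only sketch (the DummyResource class is hypothetical, purely for illustration):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseableMigration {
    // Hypothetical resource: implement java.io.Closeable directly
    // instead of the deprecated org.apache.hadoop.io.Closeable.
    static class DummyResource implements Closeable {
        boolean closed = false;

        @Override
        public void close() {   // narrowing the throws clause is allowed
            closed = true;
        }
    }

    public static void main(String[] args) throws IOException {
        DummyResource r = new DummyResource();
        // try-with-resources works with any java.io.Closeable
        try (Closeable c = r) {
            // use the resource here
        }
        System.out.println(r.closed); // prints "true"
    }
}
```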
  • Deprecated Classes
    Class
    Description
    org.apache.hadoop.fs.FileUtil.HardLink
    Use org.apache.hadoop.fs.HardLink instead.
    org.apache.hadoop.fs.impl.FunctionsRaisingIOE
    use org.apache.hadoop.util.functional
    org.apache.hadoop.fs.impl.FutureIOSupport
    org.apache.hadoop.io.UTF8
    replaced by Text
    org.apache.hadoop.ipc.ProtobufHelper
    Hadoop code MUST use ShadedProtobufHelper.
    org.apache.hadoop.ipc.ProtobufRpcEngine
    org.apache.hadoop.ipc.WritableRpcEngine
    org.apache.hadoop.ipc.WritableRpcEngine.Server
  • Deprecated Enum Classes
    Enum Class
    Description
    org.apache.hadoop.fs.StreamCapabilities.StreamCapability
  • Deprecated Exceptions
    Exception
    Description
    org.apache.hadoop.fs.impl.WrappedIOException
    use java.io.UncheckedIOException directly.
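The WrappedIOException replacement is a standard JDK pattern: wrap the checked IOException in java.io.UncheckedIOException where a checked exception cannot propagate, and recover it later via getCause(). A small sketch (readOrWrap and its message are invented for the demo):

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class UncheckedWrap {
    // Wrap a checked IOException so it can cross interfaces
    // (e.g. java.util.function) that do not declare it.
    static String readOrWrap() {
        try {
            throw new IOException("disk failure");  // simulated I/O error
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        try {
            readOrWrap();
        } catch (UncheckedIOException e) {
            // getCause() recovers the original checked exception
            System.out.println(e.getCause().getMessage()); // prints "disk failure"
        }
    }
}
```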
  • Deprecated Fields
    Field
    Description
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_CUSTOM_TAGS
    Please use CommonConfigurationKeysPublic.HADOOP_TAGS_CUSTOM instead. See https://issues.apache.org/jira/browse/HADOOP-15474.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS
    use CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY instead.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT
    use CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT instead.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SYSTEM_TAGS
    Please use CommonConfigurationKeysPublic.HADOOP_TAGS_SYSTEM instead. See https://issues.apache.org/jira/browse/HADOOP-15474.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_SORT_FACTOR_KEY
    Moved to MapReduce; see mapreduce.task.io.sort.factor in mapred-default.xml and https://issues.apache.org/jira/browse/HADOOP-6801. For SequenceFile.Sorter control, see CommonConfigurationKeysPublic.SEQ_IO_SORT_FACTOR_KEY instead.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_SORT_MB_KEY
    Moved to MapReduce; see mapreduce.task.io.sort.mb in mapred-default.xml and https://issues.apache.org/jira/browse/HADOOP-6801. For SequenceFile.Sorter control, see CommonConfigurationKeysPublic.SEQ_IO_SORT_MB_KEY instead.
    org.apache.hadoop.fs.StreamCapabilities.HFLUSH
    org.apache.hadoop.ipc.DecayRpcScheduler.IPC_FCQ_DECAYSCHEDULER_FACTOR_KEY
    org.apache.hadoop.ipc.DecayRpcScheduler.IPC_FCQ_DECAYSCHEDULER_PERIOD_KEY
    org.apache.hadoop.ipc.DecayRpcScheduler.IPC_FCQ_DECAYSCHEDULER_THRESHOLDS_KEY
    org.apache.hadoop.ipc.FairCallQueue.IPC_CALLQUEUE_PRIORITY_LEVELS_DEFAULT
    org.apache.hadoop.ipc.FairCallQueue.IPC_CALLQUEUE_PRIORITY_LEVELS_KEY
    org.apache.hadoop.security.authorize.ServiceAuthorizationManager.SERVICE_AUTHORIZATION_CONFIG
    Use CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION instead.
    org.apache.hadoop.util.Shell.WINDOWS_MAX_SHELL_LENGHT
    use the correctly spelled constant.
    org.apache.hadoop.util.Shell.WINUTILS
    use one of the exception-raising getter methods, specifically Shell.getWinUtilsPath() or Shell.getWinUtilsFile()
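The tag-key renames from HADOOP-15474 are config-side as well as code-side. The core-site.xml fragment below is a sketch: the property strings (hadoop.tags.custom, hadoop.tags.system) and sample values are inferred from the constant names above, so verify them against your Hadoop version before relying on them.

```xml
<!-- core-site.xml: migrate the renamed tag keys from HADOOP-15474.
     Property names are inferred from the constants above; verify
     against your Hadoop version. -->
<property>
  <!-- replaces the deprecated hadoop.custom.tags -->
  <name>hadoop.tags.custom</name>
  <value>security,performance</value>
</property>
<property>
  <!-- replaces the deprecated hadoop.system.tags -->
  <name>hadoop.tags.system</name>
  <value>YARN,HDFS</value>
</property>
```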
  • Deprecated Methods
    Method
    Description
    org.apache.hadoop.conf.Configuration.addDeprecation(String, String[])
    use Configuration.addDeprecation(String key, String newKey) instead
    org.apache.hadoop.conf.Configuration.addDeprecation(String, String[], String)
    use Configuration.addDeprecation(String key, String newKey, String customMessage) instead
    org.apache.hadoop.fs.AbstractFileSystem.getServerDefaults()
    use AbstractFileSystem.getServerDefaults(Path) instead
    org.apache.hadoop.fs.DelegateToFileSystem.getServerDefaults()
    org.apache.hadoop.fs.FileStatus.isDir()
    Use FileStatus.isFile(), FileStatus.isDirectory(), and FileStatus.isSymlink() instead.
    org.apache.hadoop.fs.FileStatus.readFields(DataInput)
    Use the PBHelper and protobuf serialization directly.
    org.apache.hadoop.fs.FileStatus.write(DataOutput)
    Use the PBHelper and protobuf serialization directly.
    org.apache.hadoop.fs.FileSystem.delete(Path)
    Use FileSystem.delete(Path, boolean) instead.
    org.apache.hadoop.fs.FileSystem.getAllStatistics()
    use FileSystem.getGlobalStorageStatistics()
    org.apache.hadoop.fs.FileSystem.getBlockSize(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.getDefaultBlockSize()
    use FileSystem.getDefaultBlockSize(Path) instead
    org.apache.hadoop.fs.FileSystem.getDefaultReplication()
    use FileSystem.getDefaultReplication(Path) instead
    org.apache.hadoop.fs.FileSystem.getLength(Path)
    Use FileSystem.getFileStatus(Path) instead.
    org.apache.hadoop.fs.FileSystem.getName()
    call FileSystem.getUri() instead.
    org.apache.hadoop.fs.FileSystem.getNamed(String, Configuration)
    call FileSystem.get(URI, Configuration) instead.
    org.apache.hadoop.fs.FileSystem.getReplication(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.getServerDefaults()
    use FileSystem.getServerDefaults(Path) instead
    org.apache.hadoop.fs.FileSystem.getStatistics()
    use FileSystem.getGlobalStorageStatistics()
    org.apache.hadoop.fs.FileSystem.getStatistics(String, Class<? extends FileSystem>)
    use FileSystem.getGlobalStorageStatistics()
    org.apache.hadoop.fs.FileSystem.isDirectory(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.isFile(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.primitiveCreate(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable, Options.ChecksumOpt)
    org.apache.hadoop.fs.FileSystem.primitiveMkdir(Path, FsPermission)
    org.apache.hadoop.fs.FileSystem.primitiveMkdir(Path, FsPermission, boolean)
    org.apache.hadoop.fs.FileSystem.rename(Path, Path, Options.Rename...)
    org.apache.hadoop.fs.FileUtil.fullyDelete(FileSystem, Path)
    Use FileSystem.delete(Path, boolean)
    org.apache.hadoop.fs.FilterFs.getServerDefaults()
    org.apache.hadoop.fs.FSBuilder.must(String, double)
    org.apache.hadoop.fs.FSBuilder.must(String, float)
    use FSBuilder.mustDouble(String, double) to set floating point.
    org.apache.hadoop.fs.FSBuilder.must(String, long)
    org.apache.hadoop.fs.FSBuilder.opt(String, double)
    use FSBuilder.optDouble(String, double)
    org.apache.hadoop.fs.FSBuilder.opt(String, float)
    use FSBuilder.optDouble(String, double)
    org.apache.hadoop.fs.FSBuilder.opt(String, long)
    use FSBuilder.optLong(String, long) where possible.
    org.apache.hadoop.fs.FSInputChecker.checksum2long(byte[])
    org.apache.hadoop.fs.ftp.FtpFs.getServerDefaults()
    org.apache.hadoop.fs.impl.FutureIOSupport.awaitFuture(Future<T>)
    org.apache.hadoop.fs.impl.FutureIOSupport.awaitFuture(Future<T>, long, TimeUnit)
    org.apache.hadoop.fs.impl.FutureIOSupport.propagateOptions(FSBuilder<?, ?>, Configuration, String, boolean)
    org.apache.hadoop.fs.impl.FutureIOSupport.propagateOptions(FSBuilder<T, U>, Configuration, String, String)
    org.apache.hadoop.fs.impl.FutureIOSupport.raiseInnerCause(CompletionException)
    org.apache.hadoop.fs.impl.FutureIOSupport.raiseInnerCause(ExecutionException)
    org.apache.hadoop.fs.local.RawLocalFs.getServerDefaults()
    org.apache.hadoop.fs.LocalDirAllocator.removeContext(String)
    org.apache.hadoop.fs.Path.makeQualified(FileSystem)
    use Path.makeQualified(URI, Path)
    org.apache.hadoop.fs.permission.FsPermission.getAclBit()
    Get acl bit from the FileStatus object.
    org.apache.hadoop.fs.permission.FsPermission.getEncryptedBit()
    Get encryption bit from the FileStatus object.
    org.apache.hadoop.fs.permission.FsPermission.getErasureCodedBit()
    Get ec bit from the FileStatus object.
    org.apache.hadoop.fs.permission.FsPermission.readFields(DataInput)
    org.apache.hadoop.fs.permission.FsPermission.toExtendedShort()
    org.apache.hadoop.fs.permission.FsPermission.write(DataOutput)
    org.apache.hadoop.fs.shell.FsCommand.runAll()
    use Command.run(String... argv)
    org.apache.hadoop.fs.TrashPolicy.getInstance(Configuration, FileSystem, Path)
    Use TrashPolicy.getInstance(Configuration, FileSystem) instead.
    org.apache.hadoop.fs.TrashPolicy.initialize(Configuration, FileSystem, Path)
    Use TrashPolicy.initialize(Configuration, FileSystem) instead.
    org.apache.hadoop.fs.TrashPolicyDefault.initialize(Configuration, FileSystem, Path)
    Use TrashPolicyDefault.initialize(Configuration, FileSystem) instead.
    org.apache.hadoop.fs.viewfs.ViewFs.getServerDefaults()
    org.apache.hadoop.http.HttpServer2.getPort()
    org.apache.hadoop.io.BytesWritable.get()
    Use BytesWritable.getBytes() instead.
    org.apache.hadoop.io.BytesWritable.getSize()
    Use BytesWritable.getLength() instead.
    org.apache.hadoop.io.file.tfile.TFile.Reader.createScanner(byte[], byte[])
    Use TFile.Reader.createScannerByKey(byte[], byte[]) instead.
    org.apache.hadoop.io.file.tfile.TFile.Reader.createScanner(RawComparable, RawComparable)
    Use TFile.Reader.createScannerByKey(RawComparable, RawComparable) instead.
    org.apache.hadoop.io.nativeio.NativeIO.link(File, File)
    org.apache.hadoop.io.SequenceFile.createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, boolean, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata)
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.Writer.syncFs()
    Use SequenceFile.Writer.hsync() or SequenceFile.Writer.hflush() instead
    org.apache.hadoop.io.WritableUtils.cloneInto(Writable, Writable)
    use ReflectionUtils.cloneInto instead.
    org.apache.hadoop.ipc.Client.getTimeout(Configuration)
    use Client.getRpcTimeout(Configuration) instead
    org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ServiceException)
    org.apache.hadoop.ipc.RPC.Builder.setnumReaders(int)
    call RPC.Builder.setNumReaders(int value) instead.
    org.apache.hadoop.ipc.RpcScheduler.addResponseTime(String, int, int, int)
    Use RpcScheduler.addResponseTime(String, Schedulable, ProcessingDetails) instead.
    org.apache.hadoop.ipc.Server.call(Writable, long)
    Use Server.call(RPC.RpcKind, String, Writable, long) instead
    org.apache.hadoop.metrics2.util.MetricsCache.Record.metrics()
    use metricsEntrySet() instead
    org.apache.hadoop.security.authorize.ProxyUsers.authorize(UserGroupInformation, String, Configuration)
    use ProxyUsers.authorize(UserGroupInformation, String) instead.
    org.apache.hadoop.security.Groups.getGroups(String)
    Use Groups.getGroupsSet(String user) instead.
    org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.setUseQueryStringForDelegationToken(boolean)
    org.apache.hadoop.security.UserGroupInformation.getGroups()
    Use UserGroupInformation.getGroupsSet() instead.
    org.apache.hadoop.service.ServiceOperations.stopQuietly(Log, Service)
    to be removed with 3.4.0. Use ServiceOperations.stopQuietly(Logger, Service) instead.
    org.apache.hadoop.util.HostsFileReader.getHostDetails(Set<String>, Map<String, Integer>)
    use HostsFileReader.getHostDetails() instead
    org.apache.hadoop.util.HostsFileReader.getHostDetails(Set<String>, Set<String>)
    use HostsFileReader.getHostDetails() instead
    org.apache.hadoop.util.ReflectionUtils.cloneWritableInto(Writable, Writable)
    org.apache.hadoop.util.ReflectionUtils.logThreadInfo(Log, String, long)
    to be removed with 3.4.0. Use ReflectionUtils.logThreadInfo(Logger, String, long) instead.
    org.apache.hadoop.util.RunJar.unJarAndSave(InputStream, File, String, Pattern)
    org.apache.hadoop.util.Shell.isJava7OrAbove()
    This call isn't needed any more: please remove uses of it.
    org.apache.hadoop.util.StringUtils.humanReadableInt(long)
    use StringUtils.TraditionalBinaryPrefix.long2String(long, String, int).
    org.apache.hadoop.util.StringUtils.limitDecimalTo2(double)
    use StringUtils.format("%.2f", d).
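The StringUtils.limitDecimalTo2 replacement is plain format-string work; the suggested StringUtils.format call behaves like the JDK's String.format, so the migration can be sketched with the JDK alone (the helper name limitDecimalTo2 below is just a stand-in for the deprecated method, not Hadoop code):

```java
import java.util.Locale;

public class DecimalFormatDemo {
    // JDK equivalent of the deprecated StringUtils.limitDecimalTo2(double):
    // format with two digits after the decimal point.
    static String limitDecimalTo2(double d) {
        // Locale.ROOT keeps the separator stable ("3.14", never "3,14")
        return String.format(Locale.ROOT, "%.2f", d);
    }

    public static void main(String[] args) {
        System.out.println(limitDecimalTo2(3.14159)); // prints "3.14"
    }
}
```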
  • Deprecated Constructors
    Constructor
    Description
    org.apache.hadoop.fs.ContentSummary()
    org.apache.hadoop.fs.ContentSummary(long, long, long)
    org.apache.hadoop.fs.ContentSummary(long, long, long, long, long, long)
    org.apache.hadoop.fs.LocatedFileStatus(long, boolean, int, long, long, long, FsPermission, String, String, Path, Path, BlockLocation[])
    org.apache.hadoop.fs.shell.CommandFormat(String, int, int, String...)
    use the replacement constructor; the name parameter is unused
    org.apache.hadoop.fs.shell.Count(String[], int, Configuration)
    invoke via FsShell
    org.apache.hadoop.io.BloomMapFile.Reader(FileSystem, String, Configuration)
    org.apache.hadoop.io.BloomMapFile.Reader(FileSystem, String, WritableComparator, Configuration)
    org.apache.hadoop.io.BloomMapFile.Reader(FileSystem, String, WritableComparator, Configuration, boolean)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class<? extends Writable>, SequenceFile.CompressionType, CompressionCodec, Progressable)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, Progressable)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
    org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable)
    org.apache.hadoop.io.MapFile.Reader(FileSystem, String, Configuration)
    org.apache.hadoop.io.MapFile.Reader(FileSystem, String, WritableComparator, Configuration)
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, Progressable)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable)
    Use Writer(Configuration, Path, Option...) instead.
    org.apache.hadoop.io.SequenceFile.Reader(FileSystem, Path, Configuration)
    Use Reader(Configuration, Option...) instead.
    org.apache.hadoop.io.SequenceFile.Reader(FSDataInputStream, int, long, long, Configuration)
    Use Reader(Configuration, Reader.Option...) instead.
    org.apache.hadoop.io.SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, int, short, long, Progressable, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, Progressable, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SetFile.Writer(FileSystem, String, Class<? extends WritableComparable>)
    pass a Configuration too
    org.apache.hadoop.ipc.WritableRpcEngine.Server(Class<?>, Object, Configuration, String, int, int, int, int, boolean, SecretManager<? extends TokenIdentifier>, String)
    use Server#Server(Class, Object, Configuration, String, int, int, int, int, boolean, SecretManager)
    org.apache.hadoop.ipc.WritableRpcEngine.Server(Object, Configuration, String, int)
    use Server#Server(Class, Object, Configuration, String, int)
    org.apache.hadoop.ipc.WritableRpcEngine.Server(Object, Configuration, String, int, int, int, int, boolean, SecretManager<? extends TokenIdentifier>)
    use Server#Server(Class, Object, Configuration, String, int, int, int, int, boolean, SecretManager)
  • Deprecated Enum Constants
    Enum Constant
    Description
    org.apache.hadoop.security.SaslRpcServer.AuthMethod.DIGEST

Copyright © 2008–2026 Apache Software Foundation. All rights reserved.