[wilian@localhost hadoop-0.20.2]$ bin/hadoop fs -help
hadoop fs is the command to execute fs commands. The full syntax is:
hadoop fs [-fs <local | file system URI>] [-conf <configuration file>]
[-D <property=value>] [-ls <path>] [-lsr <path>] [-du <path>]
[-dus <path>] [-mv <src> <dst>] [-cp <src> <dst>] [-rm [-skipTrash] <src>]
[-rmr [-skipTrash] <src>] [-put <localsrc> ... <dst>] [-copyFromLocal <localsrc> ... <dst>]
[-moveFromLocal <localsrc> ... <dst>] [-get [-ignoreCrc] [-crc] <src> <localdst>]
[-getmerge <src> <localdst> [addnl]] [-cat <src>]
[-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>] [-moveToLocal <src> <localdst>]
[-mkdir <path>] [-report] [-setrep [-R] [-w] <rep> <path/file>]
[-touchz <path>] [-test -[ezd] <path>] [-stat [format] <path>]
[-tail [-f] <path>] [-text <path>]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-chgrp [-R] GROUP PATH...]
[-count[-q] <path>]
[-help [cmd]]
-fs [local | <file system URI>]: Specify the file system to use.
If not specified, the current configuration is used,
taken from the following, in increasing precedence:
core-default.xml inside the hadoop jar file
core-site.xml in $HADOOP_CONF_DIR
'local' means use the local file system as your DFS.
<file system URI> specifies a particular file system to
contact. This argument is optional but if used must
appear first on the command line. Exactly one additional
argument must be specified.
-ls <path>: List the contents that match the specified file pattern. If
path is not specified, the contents of /user/<currentUser>
will be listed. Directory entries are of the form
dirName (full path) <dir>
and file entries are of the form
fileName(full path) <r n> size
where n is the number of replicas specified for the file
and size is the size of the file, in bytes.
-lsr <path>: Recursively list the contents that match the specified
file pattern. Behaves very similarly to hadoop fs -ls,
except that the data is shown for all the entries in the
subtree.
-du <path>: Show the amount of space, in bytes, used by the files that
match the specified file pattern. Equivalent to the unix
command "du -sb <path>/*" in case of a directory,
and to "du -b <path>" in case of a file.
The output is in the form
name(full path) size (in bytes)
-dus <path>: Show the amount of space, in bytes, used by the files that
match the specified file pattern. Equivalent to the unix
command "du -sb". The output is in the form
name(full path) size (in bytes)
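The help text above equates -du to the GNU "du -b" command. A minimal local sketch of that equivalence, using a hypothetical temp file (no Hadoop cluster needed):

```shell
# Local analogue of hadoop fs -du: GNU "du -b" reports the apparent
# size of a file in bytes. Paths here are throwaway temp paths.
dir=$(mktemp -d)
printf 'hello' > "$dir/a.txt"          # a 5-byte file
du -b "$dir/a.txt" | awk '{print $1}'  # prints the size in bytes: 5
rm -rf "$dir"
```

On HDFS, hadoop fs -du prints the same per-entry byte counts for paths matching the pattern.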
-mv <src> <dst>: Move files that match the specified file pattern <src>
to a destination <dst>. When moving multiple files, the
destination must be a directory.
-cp <src> <dst>: Copy files that match the file pattern <src> to a
destination. When copying multiple files, the destination
must be a directory.
-rm [-skipTrash] <src>: Delete all files that match the specified file pattern.
Equivalent to the Unix command "rm <src>"
-skipTrash option bypasses trash, if enabled, and immediately
deletes <src>
-rmr [-skipTrash] <src>: Remove all directories which match the specified file
pattern. Equivalent to the Unix command "rm -rf <src>"
-skipTrash option bypasses trash, if enabled, and immediately
deletes <src>
-put <localsrc> ... <dst>: Copy files from the local file system
into fs.
-copyFromLocal <localsrc> ... <dst>: Identical to the -put command.
-moveFromLocal <localsrc> ... <dst>: Same as -put, except that the source is
deleted after it's copied.
-get [-ignoreCrc] [-crc] <src> <localdst>: Copy files that match the file pattern <src>
to the local name. <src> is kept. When copying multiple
files, the destination must be a directory.
-getmerge <src> <localdst>: Get all the files in the directories that
match the source file pattern and merge and sort them to only
one file on local fs. <src> is kept.
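What -getmerge does can be sketched locally: concatenate the files under a directory, in sorted filename order, into a single local file. This is only an analogue with made-up temp paths, not the HDFS command itself:

```shell
# Local sketch of -getmerge: concatenate part files in sorted name
# order into one output file. Paths are hypothetical temp paths.
src=$(mktemp -d)
out=$(mktemp)
printf 'first\n'  > "$src/part-00000"
printf 'second\n' > "$src/part-00001"
cat "$src"/part-* > "$out"   # glob expansion is already name-sorted
cat "$out"
rm -rf "$src" "$out"
```

With a real cluster, the equivalent would be something like `hadoop fs -getmerge /user/<name>/output merged.txt`.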
-cat <src>: Fetch all files that match the file pattern <src>
and display their content on stdout.
-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>: Identical to the -get command.
-moveToLocal <src> <localdst>: Not implemented yet
-mkdir <path>: Create a directory in specified location.
-setrep [-R] [-w] <rep> <path/file>: Set the replication level of a file.
The -R flag requests a recursive change of replication level
for an entire tree.
-tail [-f] <file>: Show the last 1KB of the file.
The -f option shows appended data as the file grows.
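The "last 1KB" behavior corresponds to the local tail's byte-count option. A small local sketch (temp path is hypothetical; files under 1 KB are shown whole):

```shell
# Local analogue of hadoop fs -tail: tail -c 1024 prints the last
# 1 KB of a file. This file is smaller, so it is printed in full.
f=$(mktemp)
printf 'line one\nline two\n' > "$f"
tail -c 1024 "$f"
rm -f "$f"
```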
-touchz <path>: Write a timestamp in yyyy-MM-dd HH:mm:ss format
in a file at <path>. An error is returned if the file exists with non-zero length
-test -[ezd] <path>: If file { exists, has zero length, is a directory
then return 0, else return 1.
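Since -test reports through its exit status rather than stdout, scripts branch on $?. The same pattern with the local shell test builtin, which behaves analogously (swap in `hadoop fs -test -e <path>` on a real cluster):

```shell
# Branching on exit status, shown with the local test builtin as a
# stand-in for hadoop fs -test. The temp file exists with zero length.
f=$(mktemp)
if [ -e "$f" ]; then echo "exists"; fi
if [ -s "$f" ]; then echo "non-empty"; else echo "zero length"; fi
rm -f "$f"
```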
-text <src>: Takes a source file and outputs the file in text format.
The allowed formats are zip and TextRecordInputStream.
-stat [format] <path>: Print statistics about the file/directory at <path>
in the specified format. Format accepts filesize in blocks (%b), filename (%n),
block size (%o), replication (%r), modification date (%y, %Y)
-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...
Changes permissions of a file.
This works similar to shell's chmod with a few exceptions.
-R modifies the files recursively. This is the only option
currently supported.
MODE Mode is same as mode used for chmod shell command.
Only letters recognized are 'rwxX'. E.g. a+r,g-w,+rwx,o=r
OCTALMODE Mode specified in 3 digits. Unlike shell command,
this requires all three digits.
E.g. 754 is same as u=rwx,g=rx,o=r
If none of 'augo' is specified, 'a' is assumed and unlike
shell command, no umask is applied.
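The octal and symbolic forms accepted by -chmod mirror the local chmod, so their equivalence can be checked on an ordinary file. A sketch assuming GNU stat (the -c %a format is a GNU coreutils extension):

```shell
# 754 and u=rwx,g=rx,o=r are the same mode; GNU stat -c %a prints
# the octal permissions so the two settings can be compared.
f=$(mktemp)
chmod 754 "$f"
stat -c %a "$f"            # prints 754
chmod u=rwx,g=rx,o=r "$f"
stat -c %a "$f"            # prints 754 again
rm -f "$f"
```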
-chown [-R] [OWNER][:[GROUP]] PATH...
Changes owner and group of a file.
This is similar to shell's chown with a few exceptions.
-R modifies the files recursively. This is the only option
currently supported.
If only owner or group is specified then only owner or
group is modified.
The owner and group names may only consist of digits, letters,
and any of '-_.@/' i.e. [-_.@/a-zA-Z0-9]. The names are case
sensitive.
WARNING: Avoid using '.' to separate user name and group though
Linux allows it. If user names have dots in them and you are
using local file system, you might see surprising results since
shell command 'chown' is used for local files.
-chgrp [-R] GROUP PATH...
This is equivalent to -chown ... :GROUP ...
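The same shorthand holds for the local commands: chown with a leading colon changes only the group. A sketch using the caller's primary group, so the change is permitted without root:

```shell
# The :GROUP form of chown changes only the group, which is what
# -chgrp abbreviates. id -gn gives the caller's primary group.
f=$(mktemp)
grp=$(id -gn)
chown ":$grp" "$f"
stat -c %G "$f"    # prints the group name (GNU stat assumed)
rm -f "$f"
```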
-count[-q] <path>: Count the number of directories, files and bytes under the paths
that match the specified file pattern. The output columns are:
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
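A rough local analogue of the DIR_COUNT and FILE_COUNT columns can be built with find; this is only a sketch on hypothetical temp paths, not the HDFS command:

```shell
# Count directories and regular files under a tree, mimicking the
# first two -count columns. Note the root directory counts itself.
root=$(mktemp -d)
mkdir "$root/sub"
printf 'abc' > "$root/sub/f.txt"
find "$root" -type d | wc -l   # 2 directories (root + sub)
find "$root" -type f | wc -l   # 1 file
rm -rf "$root"
```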
-help [cmd]: Displays help for given command or all commands if none
is specified.