Official documentation: http://hadoop.apache.org/docs/r1.2.1/file_system_shell.html
1. Log in to the master node and switch to the hdfs user:
[root@cdhm1 ~]# su - hdfs
2. List the subdirectories and files in the root directory:
[hdfs@cdhm1 ~]$ hadoop fs -ls /
Found 2 items
drwxrwxrwt   - hdfs supergroup          0 2017-05-23 16:39 /tmp
drwxr-xr-x   - hdfs supergroup          0 2017-05-24 15:45 /user
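Each output line of -ls has eight columns: permissions, replication factor ('-' for directories), owner, group, size in bytes, modification date, modification time, and path. As a rough sketch (the sample line is copied from the output above), the path column can be pulled out with awk:

```shell
# One line of hadoop fs -ls output, copied from the listing above.
line='drwxrwxrwt   - hdfs supergroup          0 2017-05-23 16:39 /tmp'

# Columns: 1 permissions, 2 replication, 3 owner, 4 group,
#          5 size, 6 date, 7 time, 8 path.
path=$(echo "$line" | awk '{print $8}')
echo "$path"   # → /tmp
```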
3. Create a test directory in the Hadoop file system:
[hdfs@cdhm1 ~]$ hadoop fs -mkdir /user/hdfs/test
List the directory to confirm that test was created successfully:
[hdfs@cdhm1 ~]$ hadoop fs -ls /user/hdfs/
Found 4 items
drwx------   - hdfs supergroup          0 2017-05-26 08:00 /user/hdfs/.Trash
drwxr-xr-x   - hdfs supergroup          0 2017-06-05 15:19 /user/hdfs/.sparkStaging
drwx------   - hdfs supergroup          0 2017-05-24 15:46 /user/hdfs/.staging
drwxr-xr-x   - hdfs supergroup          0 2017-07-03 10:19 /user/hdfs/test
4. Delete the test directory. Because HDFS trash is enabled on this cluster, the directory is moved into .Trash rather than deleted immediately:
[hdfs@cdhm1 ~]$ hadoop fs -rmr /user/hdfs/test
17/07/03 10:46:06 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://cdhm1:8020/user/hdfs/test' to trash at: hdfs://cdhm1:8020/user/hdfs/.Trash/Current
List the directory again to confirm that test is gone:
[hdfs@cdhm1 ~]$ hadoop fs -ls /user/hdfs/
Found 3 items
drwx------   - hdfs supergroup          0 2017-07-03 10:46 /user/hdfs/.Trash
drwxr-xr-x   - hdfs supergroup          0 2017-06-05 15:19 /user/hdfs/.sparkStaging
drwx------   - hdfs supergroup          0 2017-05-24 15:46 /user/hdfs/.staging
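Because trash is enabled, a deleted path can still be recovered before the deletion interval expires (1440 minutes, i.e. one day, in the output above). As a sketch, assuming the default trash layout where a deleted path lands under /user/&lt;user&gt;/.Trash/Current/&lt;original path&gt;, the following hypothetical helpers only build the trash location and the restore command as strings, so no running cluster is needed to try them:

```shell
# trash_path and restore_cmd are hypothetical helper names, not Hadoop
# commands. Assumes the default trash layout:
# /user/<user>/.Trash/Current/<original absolute path>.
trash_path() {   # $1 = user, $2 = original absolute HDFS path
  echo "/user/$1/.Trash/Current$2"
}

restore_cmd() {  # prints the hadoop fs -mv command that would restore $2
  echo "hadoop fs -mv $(trash_path "$1" "$2") $2"
}

restore_cmd hdfs /user/hdfs/test
# → hadoop fs -mv /user/hdfs/.Trash/Current/user/hdfs/test /user/hdfs/test
```

To delete permanently and bypass the trash, hadoop fs -rm -r -skipTrash &lt;path&gt; can be used instead.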
5. Download from the Hadoop file system to the local machine. Here the test2 directory is fetched into the current local directory:
[hdfs@cdhm1 ~]$ hadoop fs -get /user/hdfs/test2
Check locally that test2 was downloaded successfully:
[hdfs@cdhm1 ~]$ ls test2
test.txt
Upload a local file to the Hadoop file system:
[hdfs@cdhm1 ~]$ hadoop fs -put test.txt /user/hdfs/
List the directory to confirm that test.txt was uploaded successfully:
[hdfs@cdhm1 ~]$ hadoop fs -ls /user/hdfs/
Found 5 items
drwx------   - hdfs supergroup          0 2017-07-03 10:46 /user/hdfs/.Trash
drwxr-xr-x   - hdfs supergroup          0 2017-06-05 15:19 /user/hdfs/.sparkStaging
drwx------   - hdfs supergroup          0 2017-05-24 15:46 /user/hdfs/.staging
-rw-r--r--   3 hdfs supergroup          0 2017-07-03 11:13 /user/hdfs/test.txt
drwxr-xr-x   - hdfs supergroup          0 2017-07-03 11:04 /user/hdfs/test2
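The put/get pair above can be illustrated without a cluster by letting a local temporary directory stand in for the HDFS namespace. FAKE_HDFS, fput, and fget below are hypothetical names for this local simulation, not Hadoop commands:

```shell
# A local stand-in for the HDFS namespace; purely illustrative.
FAKE_HDFS=$(mktemp -d)
mkdir -p "$FAKE_HDFS/user/hdfs"

fput() { cp "$1" "$FAKE_HDFS$2"; }   # mimics: hadoop fs -put <local> <hdfs path>
fget() { cp "$FAKE_HDFS$1" "$2"; }   # mimics: hadoop fs -get <hdfs path> <local>

echo 'hello hdfs' > test.txt
fput test.txt /user/hdfs/            # "upload"
fget /user/hdfs/test.txt copy.txt    # "download" it back
cat copy.txt   # → hello hdfs
```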
Original article by ItWorker. If reprinting, please credit the source: https://blog.ytso.com/197144.html