ol7.7: installing and deploying a 4-node Hadoop 3.2.1 distributed cluster learning environment (detailed tutorial)

Source: 脚本之家 | Editor: 小易


Prepare four virtual machines with ol7.7 installed and assign them the static IPs 192.168.168.11 through 192.168.168.14. 192.168.168.11 serves as the master and the other three as slaves. The master runs the NameNode and also acts as a DataNode; 192.168.168.14 acts as a DataNode and also hosts the Secondary NameNode.


First, edit /etc/hostname on each machine and set the hostnames to master, slave1, slave2 and slave3 respectively.


Then add the following entries to /etc/hosts on every node:

192.168.168.11 master
192.168.168.12 slave1
192.168.168.13 slave2
192.168.168.14 slave3
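For a bigger cluster these entries are easier to generate than to type. A minimal sketch, using the same hostnames and 192.168.168.x addressing plan as this tutorial (adjust both for your own network):

```shell
# Print the /etc/hosts lines for the cluster plan above.
hosts=(master slave1 slave2 slave3)
base=192.168.168
for i in "${!hosts[@]}"; do
    printf '%s.%d %s\n' "$base" $((11 + i)) "${hosts[$i]}"
done
```

Redirect the output into /etc/hosts (with sudo) once it looks right.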

Then uninstall the bundled OpenJDK and install the Sun (Oracle) JDK instead; for reference see https://www.cnblogs.com/yongestcat/p/13222963.html

Configure passwordless SSH login to the local machine:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

Set up mutual trust between the nodes.

On master, copy the public key to each slave:

scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave3:/home/hadoop/

On each slave, append master's public key to that node's authorized keys:

cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
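The copy-then-append steps can also be done in one pass from master. A dry-run sketch that only prints the combined command for each slave (remove the outer `echo` to execute; assumes the hadoop user and the host aliases from /etc/hosts):

```shell
# Dry run: print one command per slave that would append master's
# public key to that slave's authorized_keys over SSH.
for h in slave1 slave2 slave3; do
    echo "cat ~/.ssh/id_rsa.pub | ssh hadoop@${h} 'cat >> ~/.ssh/authorized_keys'"
done
```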

Install Hadoop on master:

sudo tar -xzvf ~/hadoop-3.2.1.tar.gz -C /usr/local
cd /usr/local
sudo mv ./hadoop-3.2.1/ ./hadoop
sudo chown -R hadoop: ./hadoop

Add the following to ~/.bashrc and source it to take effect:

export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
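A quick sanity check after sourcing ~/.bashrc: confirm the variables expand to the paths you expect. This reproduces the three exports above and prints the derived config directory:

```shell
# Re-create the tutorial's environment and verify the derived path.
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
echo "$HADOOP_CONF_DIR"   # prints /usr/local/hadoop/etc/hadoop
```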

Cluster configuration: the configuration files live in /usr/local/hadoop/etc/hadoop.

Edit core-site.xml:

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/data/nameNode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/data/dataNode</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave3:50090</value>
    </property>
</configuration>

Edit mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
</configuration>

Edit yarn-site.xml:

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Edit hadoop-env.sh: find the JAVA_HOME setting and point it at the JDK directory:

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_191

Edit the workers file:

[hadoop@master /usr/local/hadoop/etc/hadoop]$ vim workers
master
slave1
slave2
slave3

Finally, copy the configured /usr/local/hadoop folder to the other nodes:

sudo scp -r /usr/local/hadoop/ slave1:/usr/local/
sudo scp -r /usr/local/hadoop/ slave2:/usr/local/
sudo scp -r /usr/local/hadoop/ slave3:/usr/local/
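The three copies, plus the ownership fix described next, can be expressed as one loop. A dry-run sketch that only prints the commands (remove the `echo`s to execute; assumes the same host aliases and paths as above):

```shell
# Dry run: print the copy + ownership-fix commands for each slave.
for h in slave1 slave2 slave3; do
    echo "sudo scp -r /usr/local/hadoop/ ${h}:/usr/local/"
    echo "ssh ${h} sudo chown -R hadoop: /usr/local/hadoop"
done
```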

On each slave, change the owner of the copied folder back to hadoop (sudo chown -R hadoop: /usr/local/hadoop, as on master).

Disable the firewall on every node:

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Format HDFS. Run this once before the first start (never again afterwards), on the master (NameNode) node: /usr/local/hadoop/bin/hdfs namenode -format

Seeing "successfully formatted" in the output means the format succeeded.

Run start-dfs.sh to start HDFS across the cluster.

Use the jps command to check which daemons are running on each node.

You can monitor the cluster in a browser via port 9870 on master: http://192.168.168.11:9870/

You can also check the cluster status from the command line with hadoop dfsadmin -report (which now forwards to hdfs dfsadmin -report):

[hadoop@master ~]$ hadoop dfsadmin -report
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

Configured Capacity: 201731358720 (187.88 GB)
Present Capacity: 162921230336 (151.73 GB)
DFS Remaining: 162921181184 (151.73 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (4):

Name: 192.168.168.11:9866 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9796546560 (9.12 GB)
DFS Remaining: 40636280832 (37.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.58%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.12:9866 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9710411776 (9.04 GB)
DFS Remaining: 40722415616 (37.93 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.75%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.13:9866 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9657286656 (8.99 GB)
DFS Remaining: 40775540736 (37.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.14:9866 (slave3)
Hostname: slave3
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9645883392 (8.98 GB)
DFS Remaining: 40786944000 (37.99 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

[hadoop@master ~]$

start-yarn.sh starts YARN, which can be monitored via port 8088 on master.

To start the whole cluster (HDFS and YARN together): /usr/local/hadoop/sbin/start-all.sh

To stop the cluster: /usr/local/hadoop/sbin/stop-all.sh

That's it; this records the process for future reference.



