Spark Study Notes (II): Step-by-Step Distributed Installation of a Spark 2.3 HA Cluster


This article walks through the distributed installation of a Spark 2.3 HA (high-availability) cluster, shared here for reference. The details are as follows.

一. Download the Spark installation package

1. Download from the official website

http://spark.apache.org/downloads.html

2. Download from the HUST mirror site

http://mirrors.hust.edu.cn/apache/

3. Download from the Tsinghua mirror site

https://mirrors.tuna.tsinghua.edu.cn/apache/

二. Prerequisites

1. Java 8 installed successfully

2. ZooKeeper installed successfully

3. Hadoop 2.7.5 HA installed successfully

4. Scala installed successfully (optional — the Spark processes can start without it)

三. Spark installation steps

1. Upload and extract the archive

[hadoop@hadoop1 ~]$ ls
apps     data      exam        inithive.conf  movie     spark-2.3.0-bin-hadoop2.7.tgz  udf.jar
cookies  data.txt  executions  json.txt       projects  student                        zookeeper.out
course   emp       hive.sql    log            sougou    temp
[hadoop@hadoop1 ~]$ tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz -C apps/

2. Create a symlink to the installation directory

[hadoop@hadoop1 ~]$ cd apps/
[hadoop@hadoop1 apps]$ ls
hadoop-2.7.5  hbase-1.2.6  spark-2.3.0-bin-hadoop2.7  zookeeper-3.4.10  zookeeper.out
[hadoop@hadoop1 apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[hadoop@hadoop1 apps]$ ll
total 36
drwxr-xr-x. 10 hadoop hadoop  4096 Mar 23 20:29 hadoop-2.7.5
drwxrwxr-x.  7 hadoop hadoop  4096 Mar 29 13:15 hbase-1.2.6
lrwxrwxrwx.  1 hadoop hadoop    26 Apr 20 13:48 spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x. 13 hadoop hadoop  4096 Feb 23 03:42 spark-2.3.0-bin-hadoop2.7
drwxr-xr-x. 10 hadoop hadoop  4096 Mar 23 2017 zookeeper-3.4.10
-rw-rw-r--.  1 hadoop hadoop 17559 Mar 29 13:37 zookeeper.out
[hadoop@hadoop1 apps]$

3. Edit the configuration files under spark/conf

(1) Go to the directory containing the configuration files

[hadoop@hadoop1 ~]$ cd apps/spark/conf/
[hadoop@hadoop1 conf]$ ll
total 36
-rw-r--r--. 1 hadoop hadoop  996 Feb 23 03:42 docker.properties.template
-rw-r--r--. 1 hadoop hadoop 1105 Feb 23 03:42 fairscheduler.xml.template
-rw-r--r--. 1 hadoop hadoop 2025 Feb 23 03:42 log4j.properties.template
-rw-r--r--. 1 hadoop hadoop 7801 Feb 23 03:42 metrics.properties.template
-rw-r--r--. 1 hadoop hadoop  865 Feb 23 03:42 slaves.template
-rw-r--r--. 1 hadoop hadoop 1292 Feb 23 03:42 spark-defaults.conf.template
-rwxr-xr-x. 1 hadoop hadoop 4221 Feb 23 03:42 spark-env.sh.template
[hadoop@hadoop1 conf]$

(2) Copy spark-env.sh.template to spark-env.sh and append the following at the end of the file

[hadoop@hadoop1 conf]$ cp spark-env.sh.template spark-env.sh
[hadoop@hadoop1 conf]$ vi spark-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_73
#export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.5
export HADOOP_CONF_DIR=/home/hadoop/apps/hadoop-2.7.5/etc/hadoop
export SPARK_WORKER_MEMORY=500m
export SPARK_WORKER_CORES=1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 -Dspark.deploy.zookeeper.dir=/spark"

Notes:

The line #export SPARK_MASTER_IP=hadoop1 must remain commented out: with ZooKeeper-based HA there is no single fixed Master.

The Spark settings used when building this cluster may differ from yours; they are deliberately kept small to suit a personal machine. If the memory settings are too large, the machine runs very slowly.

Explanation:

-Dspark.deploy.recoveryMode=ZOOKEEPER — the state of the whole cluster is maintained, and recovered, through ZooKeeper; in other words, ZooKeeper provides Spark's HA. When the active Master dies, a standby Master must read the full cluster state from ZooKeeper and restore the state of all Workers, Drivers, and Applications before it can become the new active Master.

-Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 — list the ZooKeeper endpoints on every machine that might become the active Master (four machines are used here, so four are listed).

-Dspark.deploy.zookeeper.dir=/spark — the ZooKeeper path under which Spark stores its recovery metadata, i.e. the running state of the cluster.

What is the difference between this dir and dataDir in ZooKeeper's zoo.cfg? dataDir is a local filesystem directory where the ZooKeeper server itself persists its data, whereas spark.deploy.zookeeper.dir is a znode path inside the ZooKeeper namespace. Under that path ZooKeeper holds all of the Spark cluster's state — every Worker, every Application, and every Driver — so that if the active Master fails, a standby Master can rebuild the cluster from it.
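To make the recovery flow concrete, here is a purely illustrative sketch in shell: a temporary directory stands in for the ZooKeeper namespace under /spark, an "active master" persists state into it, and a "standby" reads everything back before declaring itself ALIVE. None of the paths or names below are Spark's real API — this only mimics the persist/recover idea described above.

```shell
# Illustrative only: a temp directory plays the role of ZooKeeper's znode tree.
ZK=$(mktemp -d)
mkdir -p "$ZK/spark/worker" "$ZK/spark/app"

# The active Master (hadoop1) persists each Worker/Application as it registers.
echo "cores=1,mem=500m" > "$ZK/spark/worker/hadoop3-7078"
echo "SparkPi"          > "$ZK/spark/app/app-0001"

# hadoop1 dies; the standby (hadoop2) recovers the persisted state, then goes ALIVE.
recovered=$(find "$ZK/spark" -type f | wc -l)
echo "hadoop2 recovered $recovered state entries and is now ALIVE"
```

This is why the standby Masters need nothing beyond the same spark-env.sh: all the state they need to take over lives in ZooKeeper, not on the failed node.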

(3) Copy slaves.template to slaves

[hadoop@hadoop1 conf]$ cp slaves.template slaves
[hadoop@hadoop1 conf]$ vi slaves

Add the following content:

hadoop1
hadoop2
hadoop3
hadoop4

(4) Distribute the installation to the other nodes

[hadoop@hadoop1 ~]$ cd apps/
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop2:$PWD
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop3:$PWD
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop4:$PWD

Create the symlink on each of those nodes as well (shown here on hadoop2):

[hadoop@hadoop2 ~]$ cd apps/
[hadoop@hadoop2 apps]$ ls
hadoop-2.7.5  hbase-1.2.6  spark-2.3.0-bin-hadoop2.7  zookeeper-3.4.10
[hadoop@hadoop2 apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[hadoop@hadoop2 apps]$ ll
total 16
drwxr-xr-x 10 hadoop hadoop 4096 Mar 23 20:29 hadoop-2.7.5
drwxrwxr-x  7 hadoop hadoop 4096 Mar 29 13:15 hbase-1.2.6
lrwxrwxrwx  1 hadoop hadoop   26 Apr 20 19:26 spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x 13 hadoop hadoop 4096 Apr 20 19:24 spark-2.3.0-bin-hadoop2.7
drwxr-xr-x 10 hadoop hadoop 4096 Mar 21 19:31 zookeeper-3.4.10
[hadoop@hadoop2 apps]$

4. Configure environment variables

This must be done on every node.

[hadoop@hadoop1 spark]$ vi ~/.bashrc

#Spark
export SPARK_HOME=/home/hadoop/apps/spark
export PATH=$PATH:$SPARK_HOME/bin

Save the file and make it take effect immediately:

[hadoop@hadoop1 spark]$ source ~/.bashrc

四. Startup

1. Start the ZooKeeper cluster first

Run on every node:

[hadoop@hadoop1 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop1 ~]$

2. Then start the HDFS cluster

This can be run from any one node:

[hadoop@hadoop1 ~]$ start-dfs.sh

3. Then start the Spark cluster

Run on one node:

[hadoop@hadoop1 ~]$ cd apps/spark/sbin/
[hadoop@hadoop1 sbin]$ start-all.sh

4、查看进程

5. A problem

Checking the processes shows that only hadoop1 started a Master process; the other three nodes did not. The standby Masters must be started by hand: go to /home/hadoop/apps/spark/sbin and run the following command on each of the three remaining nodes.

[hadoop@hadoop2 ~]$ cd ~/apps/spark/sbin/
[hadoop@hadoop2 sbin]$ start-master.sh

6. Check the processes again

The Master and Worker processes have now all started successfully.

五. Verification

1. Check the Master status in the web UI

hadoop1 is in the ALIVE state; hadoop2, hadoop3, and hadoop4 are all in the STANDBY state.

hadoop1 node

hadoop2 node

hadoop3 node

hadoop4 node

2. Verify HA failover

Kill the Master process on hadoop1 by hand and watch whether failover happens automatically.

After killing the Master on hadoop1, check the web UI again.

hadoop1 node: its Master process has been killed, so its web UI is no longer reachable

hadoop2 node: after the old Master died, the Master on hadoop2 successfully took over and is now in the ALIVE state

hadoop3 node

hadoop4 node

六. Running Spark programs on standalone

1. Run a first Spark program

[hadoop@hadoop3 ~]$ /home/hadoop/apps/spark/bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://hadoop1:7077 \
> --executor-memory 500m \
> --total-executor-cores 1 \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
> 100

The address spark://hadoop1:7077 is the Master URL shown in the Master web UI.

The run result:
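SparkPi estimates π by Monte Carlo sampling: it scatters random points and counts how many land inside the unit circle. The same idea in miniature, as a single-process awk sketch (illustrative only — the real SparkPi distributes this sampling across executors):

```shell
# Monte Carlo pi, mirroring what SparkPi computes (single process, awk only).
awk 'BEGIN {
  srand(42); n = 100000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()            # a random point in the unit square
    if (x*x + y*y <= 1) inside++      # does it fall inside the quarter circle?
  }
  printf "Pi is roughly %f\n", 4 * inside / n
}'
```

With 100,000 samples the estimate lands near 3.14, which is the same "Pi is roughly ..." line SparkPi prints.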

2. Start the Spark shell

[hadoop@hadoop1 ~]$ /home/hadoop/apps/spark/bin/spark-shell \
> --master spark://hadoop1:7077 \
> --executor-memory 500m \
> --total-executor-cores 1

Parameter notes:

--master spark://hadoop1:7077 — the Master address

--executor-memory 500m — the memory available to each executor (500 MB)

--total-executor-cores 1 — the total number of CPU cores the application may use across the whole cluster

Note:

If the Spark shell is started without a master address, it still starts and runs programs normally — but it is running in Spark's local mode: a single process on the local machine, with no connection to the cluster.

The Spark shell initializes a SparkContext as the object sc by default; user code can use sc directly.

The Spark shell also initializes a SparkSession as the object spark by default; user code can use spark directly.

3. Writing a WordCount program in the Spark shell

(1) Create a hello.txt file and upload it to the /spark directory on HDFS

[hadoop@hadoop1 ~]$ vi hello.txt
[hadoop@hadoop1 ~]$ hadoop fs -mkdir -p /spark
[hadoop@hadoop1 ~]$ hadoop fs -put hello.txt /spark

The content of hello.txt is:

you,jump
i,jump
you,jump
i,jump
jump

(2) Write the Spark program in Scala in the Spark shell

scala> sc.textFile("/spark/hello.txt").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).saveAsTextFile("/spark/out")

Explanation:

sc is the SparkContext object, the entry point for submitting Spark programs

textFile("/spark/hello.txt") reads the data from HDFS

flatMap(_.split(",")) splits each line on commas and flattens the results

map((_,1)) turns each word into a (word, 1) tuple

reduceByKey(_+_) groups by key and sums the values

saveAsTextFile("/spark/out") writes the result back to HDFS
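The same transformation chain can be mirrored with ordinary shell tools, which makes each step's effect easy to see: tr plays the role of flatMap(_.split(",")), and sort | uniq -c plays the role of map((_,1)) followed by reduceByKey(_+_). This is a local illustration only — it does not touch HDFS.

```shell
# WordCount over the same hello.txt data, as a plain shell pipeline.
printf 'you,jump\ni,jump\nyou,jump\ni,jump\njump\n' \
  | tr ',' '\n' \
  | sort | uniq -c | sort -rn
```

The counts match what the Spark job writes to /spark/out: jump appears 5 times, you 2 times, and i 2 times.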

(3) Check the result with an HDFS command

[hadoop@hadoop2 ~]$ hadoop fs -cat /spark/out/p*
(jump,5)
(you,2)
(i,2)
[hadoop@hadoop2 ~]$

七. Running Spark programs on YARN

1. Prerequisites

The ZooKeeper cluster, HDFS cluster, and YARN cluster are all up and running.

2. Start Spark on YARN

[hadoop@hadoop1 bin]$ spark-shell --master yarn --deploy-mode client

This reports an error.

Cause: the memory given to the container is too small; YARN kills the process outright, which then surfaces as RPC connection failures, ClosedChannelException, and similar errors.

Solution:

Stop the YARN service first, then add the following to yarn-site.xml:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>

Distribute the new yarn-site.xml to the corresponding directory on every other Hadoop node, then restart YARN.
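Why these two properties help: the NodeManager kills a container whenever its virtual memory use exceeds yarn.nodemanager.vmem-pmem-ratio times its physical allocation (the ratio defaults to 2.1), and a JVM easily maps far more virtual memory than the heap it was given. A small model of the check — the 884 MB and 2200 MB figures below are illustrative, not measured on this cluster:

```shell
# Model of YARN's vmem check: a container dies when vmem > ratio * pmem.
pmem=884    # MB physically allocated to the container (illustrative)
vmem=2200   # MB of virtual memory the JVM maps (illustrative)

awk -v v="$vmem" -v p="$pmem" 'BEGIN {
  printf "ratio 2.1: limit %.0f MB -> %s\n", 2.1 * p, (v > 2.1 * p) ? "killed" : "ok"
  printf "ratio 4  : limit %.0f MB -> %s\n", 4   * p, (v > 4   * p) ? "killed" : "ok"
}'
```

With the default ratio the limit is about 1856 MB and the container is killed; raising the ratio to 4 lifts the limit to 3536 MB, and vmem-check-enabled=false switches the check off entirely.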

Run the command again to start Spark on YARN:

[hadoop@hadoop1 hadoop]$ spark-shell --master yarn --deploy-mode client

This time it starts successfully.

3. Open the YARN web UI

Open the YARN web page at http://hadoop4:8088

The Spark shell application can be seen running.

Click the application's ID link to see its details.

Click the "ApplicationMaster" link.

4. Run a program

scala> val array = Array(1,2,3,4,5)
array: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.makeRDD(array)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:26

scala> rdd.count
res0: Long = 5

scala>

Check the YARN web UI again.

Check the executors.

5. Run Spark's bundled SparkPi example

[hadoop@hadoop1 ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
> --master yarn \
> --deploy-mode cluster \
> --driver-memory 500m \
> --executor-memory 500m \
> --executor-cores 1 \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
> 10

The execution log:

[hadoop@hadoop1 ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
> --master yarn \
> --deploy-mode cluster \
> --driver-memory 500m \
> --executor-memory 500m \
> --executor-cores 1 \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
> 10
2018-04-21 17:57:32 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-04-21 17:57:34 INFO  ConfiguredRMFailoverProxyProvider:100 - Failing over to rm2
2018-04-21 17:57:34 INFO  Client:54 - Requesting a new application from cluster with 4 NodeManagers
2018-04-21 17:57:34 INFO  Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2018-04-21 17:57:34 INFO  Client:54 - Will allocate AM container, with 884 MB memory including 384 MB overhead
2018-04-21 17:57:34 INFO  Client:54 - Setting up container launch context for our AM
2018-04-21 17:57:34 INFO  Client:54 - Setting up the launch environment for our AM container
2018-04-21 17:57:34 INFO  Client:54 - Preparing resources for our AM container
2018-04-21 17:57:36 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2018-04-21 17:57:39 INFO  Client:54 - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_libs__8262081479435245591.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_libs__8262081479435245591.zip
2018-04-21 17:57:44 INFO  Client:54 - Uploading resource file:/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/spark-examples_2.11-2.3.0.jar
2018-04-21 17:57:44 INFO  Client:54 - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_conf__2498510663663992254.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_conf__.zip
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing view acls to: hadoop
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing modify acls to: hadoop
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing view acls groups to:
2018-04-21 17:57:44 INFO  SecurityManager:54 - Changing modify acls groups to:
2018-04-21 17:57:44 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
2018-04-21 17:57:44 INFO  Client:54 - Submitting application application_1524303370510_0005 to ResourceManager
2018-04-21 17:57:44 INFO  YarnClientImpl:273 - Submitted application application_1524303370510_0005
2018-04-21 17:57:45 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
2018-04-21 17:57:45 INFO  Client:54 -
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1524304664749
	 final status: UNDEFINED
	 tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
	 user: hadoop
2018-04-21 17:57:46 INFO  Client:54 - Application report for application_1524303370510_0005 (state: ACCEPTED)
(the ACCEPTED report repeats once per second until 17:57:53)
2018-04-21 17:57:54 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
2018-04-21 17:57:54 INFO  Client:54 -
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 192.168.123.104
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1524304664749
	 final status: UNDEFINED
	 tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
	 user: hadoop
2018-04-21 17:57:55 INFO  Client:54 - Application report for application_1524303370510_0005 (state: RUNNING)
(the RUNNING report repeats once per second until 17:58:08)
2018-04-21 17:58:09 INFO  Client:54 - Application report for application_1524303370510_0005 (state: FINISHED)
2018-04-21 17:58:09 INFO  Client:54 -
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 192.168.123.104
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1524304664749
	 final status: SUCCEEDED
	 tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
	 user: hadoop
2018-04-21 17:58:09 INFO  Client:54 - Deleted staging directory hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005
2018-04-21 17:58:09 INFO  ShutdownHookManager:54 - Shutdown hook called
2018-04-21 17:58:09 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720
2018-04-21 17:58:09 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-06de6905-8067-4f1e-a0a0-bc8a51daf535
[hadoop@hadoop1 ~]$
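The "Will allocate AM container, with 884 MB memory including 384 MB overhead" line in the log follows Spark-on-YARN's memory-overhead rule: on top of the requested memory, Spark adds max(384 MB, 10% of the request) — the default memoryOverhead behavior in Spark 2.3. The arithmetic can be checked directly:

```shell
# Reproduce the AM container sizing from the log: requested + max(384, 10%).
requested=500                                        # --driver-memory 500m (the AM in cluster mode)
overhead=$(( requested / 10 ))                       # 10% of the request...
if [ "$overhead" -lt 384 ]; then overhead=384; fi    # ...but never less than 384 MB
echo "AM container: $(( requested + overhead )) MB including $overhead MB overhead"
```

This prints "AM container: 884 MB including 384 MB overhead", matching the log. The same rule explains why even small --executor-memory requests occupy noticeably larger YARN containers.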

Hopefully this walkthrough is of help to readers programming with Spark.


