Trying Spark Word2Vec in Java
Since Java is the easiest language for me to write, I tried setting up the Java library. However, the functions below (from the original C version's demo) don't seem to be available, so some extra work appears to be necessary. It also looks like you can't add data to a Word2Vec model once it has been built, and I don't like having to feed in a large amount of data from scratch every time. I'll have to look for another approach.
Word2Vec C example
I'd like to try these later as well.
model.most_similar(positive=['woman', 'king'], negative=['man'])
model.doesnt_match("breakfast cereal dinner lunch".split())
model.similarity('woman', 'man')
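For what it's worth, the closest thing I can find in the Spark 1.6 ml API is Word2VecModel.findSynonyms; there does not seem to be a built-in equivalent of most_similar with positive/negative word lists, or of doesnt_match. Below is a rough, untested sketch of how the king - man + woman analogy could be approximated by hand. The lookup and analogy helpers are my own illustration, and they assume a fitted model whose vocabulary actually contains those three words (which is not the case for the toy corpus used later in this article).

// Sketch: approximate gensim's most_similar(positive=['woman','king'], negative=['man'])
// with Spark 1.6: look up the raw word vectors, combine them by hand, and ask the
// model for the words nearest to the resulting vector.
import org.apache.spark.ml.feature.Word2VecModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.sql.Row;

public class AnalogySketch {
    // getVectors() is a DataFrame with "word" and "vector" columns
    static Vector lookup(Word2VecModel model, String word) {
        Row row = model.getVectors().filter("word = '" + word + "'").first();
        return (Vector) row.get(1); // column 1 is the vector
    }

    static void analogy(Word2VecModel model) {
        double[] king = lookup(model, "king").toArray();
        double[] man = lookup(model, "man").toArray();
        double[] woman = lookup(model, "woman").toArray();
        double[] combined = new double[king.length];
        for (int i = 0; i < combined.length; i++) {
            combined[i] = king[i] - man[i] + woman[i];
        }
        // Nearest words to king - man + woman, similar in spirit to most_similar
        model.findSynonyms(Vectors.dense(combined), 5).show(5, false);
    }
}

If a word is missing from the vocabulary, first() will fail, so real code would need a check before the lookup.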
Steps
1. Create a plain Java project.
2. Convert it to a Maven project.
3. Add the dependencies to pom.xml and run Maven Install.
4. Create a main class.
Put the following into pom.xml and run Maven Install.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>SimpleSpark</groupId>
<artifactId>SimpleSpark</artifactId>
<version>0.0.1-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.10</artifactId>
<version>1.6.1</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src</sourceDirectory>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
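If you are not driving this from the IDE, running mvn clean install in the project directory should pull in the same dependencies. Note that the artifact IDs ending in _2.10 are the Scala 2.10 builds, matching Spark 1.6.1.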
Create the class containing the main method.
package demo;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.feature.PolynomialExpansion;
import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.ml.feature.Word2VecModel;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.ArrayType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
public class W2VecApplication {
static String data_vec = "/tmp/word2/";
static String data_model = "/tmp/model/";
static SparkConf conf;
static JavaSparkContext jsc;
static SQLContext sqlContext;
public static void main(String[] args) {
conf = new SparkConf().setAppName("demo.W2VecApplication").setMaster("local");
jsc = new JavaSparkContext(conf);
sqlContext = new org.apache.spark.sql.SQLContext(jsc);
String input = "The largest open source project in data processing¥n"
+ "Since its release, Apache Spark has seen rapid adoption by enterprises across a wide range of industries. Internet powerhouses such as Netflix, Yahoo, and eBay have deployed Spark at massive scale, collectively processing multiple petabytes of data on clusters of over 8,000 nodes. It has quickly become the largest open source community in big data, with over 1000 contributors from 250+ organizations.¥n"
+ "The team that created Apache Spark founded Databricks in 2013.¥n"
+ "Apache Spark is 100% open source, hosted at the vendor-independent Apache Software Foundation. At Databricks, we are fully committed to maintaining this open development model. Together with the Spark community, Databricks continues to contribute heavily to the Apache Spark project, through both development and community evangelism.¥n"
+ "At Databricks, we’re working hard to make Spark easier to use and run than ever, through our efforts on both the Spark codebase and support materials around it. All of our work on Spark is open source and goes directly to Apache.¥n"
+ "Speed Engineered from the bottom-up for performance, Spark can be 100x faster than Hadoop for large scale data processing by exploiting in memory computing and other optimizations. Spark is also fast when data is stored on disk, and currently holds the world record for large-scale on-disk sorting.¥n"
+ "Ease of Use Spark has easy-to-use APIs for operating on large datasets. This includes a collection of over 100 operators for transforming data and familiar data frame APIs for manipulating semi-structured data.¥n"
+ "A Unified Engine Spark comes packaged with higher-level libraries, including support for SQL queries, streaming data, machine learning and graph processing. These standard libraries increase developer productivity and can be seamlessly combined to create complex workflows.¥n"
+ "Apache Spark is an open source cluster computing framework. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since. Spark provides an interface for programming entire clusters with implicit data parallelism and fault-tolerance.¥n"
+ "Apache Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines, that is maintained in a fault-tolerant way.[1] It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.[2]¥n"
+ "The availability of RDDs facilitates the implementation of both iterative algorithms, that visit their dataset multiple times in a loop, and interactive/exploratory data analysis, i.e., the repeated database-style querying of data. The latency of such applications (compared to Apache Hadoop, a popular MapReduce implementation) may be reduced by several orders of magnitude.[1][3] Among the class of iterative algorithms are the training algorithms for machine learning systems, which formed the initial impetus for developing Apache Spark.[4]¥n"
+ "Apache Spark requires a cluster manager and a distributed storage system. For cluster management, Spark supports standalone (native Spark cluster), Hadoop YARN, or Apache Mesos.[5] For distributed storage, Spark can interface with a wide variety, including Hadoop Distributed File System (HDFS),[6] MapR File System (MapR-FS),[7] Cassandra,[8] OpenStack Swift, Amazon S3, Kudu, or a custom solution can be implemented. Spark also supports a pseudo-distributed local mode, usually used only for development or testing purposes, where distributed storage is not required and the local file system can be used instead; in such a scenario, Spark is run on a single machine with one executor per CPU core.";
try {
// Build the training data
startStudy(Arrays.asList(input.split("\n")));
// Query the trained model
findSomething("Spark");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}catch(IllegalStateException e2){
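// findSynonyms throws IllegalStateException when the query word is not in the vocabulary; ignored here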
}
}
private static void startStudy(List<String> al) throws IOException{
// TODO Auto-generated method stub
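// Note: this PolynomialExpansion is never used below; it looks like a leftover from another ml.feature example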
PolynomialExpansion polyExpansion = new PolynomialExpansion()
.setInputCol("features")
.setOutputCol("polyFeatures")
.setDegree(3);
// Roughly feed in the sentences (one Row per line, split on spaces)
List<Row> aljr = new ArrayList<Row>();
for(int i = 0 ; i < al.size() ; i++){
aljr.add((Row)RowFactory.create(Arrays.asList(al.get(i).toString().split(" "))));
}
JavaRDD<Row> jrdd = jsc.parallelize(aljr);
StructType schema = new StructType(new StructField[]{
new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
});
DataFrame documentDF = sqlContext.createDataFrame(jrdd, schema);
// Learn a mapping from words to Vectors.
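// Note: vectorSize=3 and minCount=0 only keep this toy example small; real data would use a much larger vector size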
Word2Vec word2Vec = new Word2Vec()
.setInputCol("text")
.setOutputCol("result")
.setVectorSize(3)
.setMinCount(0);
// Build the Word2Vec model
Word2VecModel model = word2Vec.fit(documentDF);
DataFrame result = model.transform(documentDF);
for (Row r : result.select("result").take(3)) {
System.out.println(r);
}
// Save the data for now
if(!new File(data_model).exists()){
model.save(data_model);
}else{
model.write().overwrite().save(data_model);
}
if(!new File(data_vec).exists()){
word2Vec.save(data_vec);
}else{
word2Vec.write().overwrite().save(data_vec);
}
}
private static void findSomething(String str) throws IllegalStateException{
// Reuse the files saved earlier
Word2Vec vec = Word2Vec.load(data_vec);
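// (the reloaded Word2Vec estimator itself is not used below; only the model is needed for findSynonyms)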
Word2VecModel model = Word2VecModel.load(data_model);
// Find similar word
DataFrame similar = model.findSynonyms(str, 30);
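// similar is a DataFrame with "word" and "similarity" columns, sorted by similarity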
for (int i = 0 ; i < similar.count() ; i++) {
System.out.println(similar.showString(i,false));
}
}
}
+--------------+-------------------+
|word |similarity |
+--------------+-------------------+
|system. |0.2014637445894577 |
|called |0.19980325869996565|
|analysis, |0.1993920149609777 |
|availability |0.19908706675080787|
|datasets. |0.19904390575754743|
|higher-level |0.19838541460232392|
|particular |0.1955145561157753 |
|Use |0.19286837761820919|
|powerhouses |0.19223540828654095|
|bottom-up |0.19097962682578687|
|or |0.18865127679894006|
|processing. |0.18847629567307478|
|nodes. |0.18771307406284526|
|Engine |0.18674049047044267|
|when |0.1864894622682742 |
|may |0.1849541028612345 |
|implicit |0.181728914295441 |
|linear |0.179022874456727 |
|Amazon |0.1789115889397055 |
|Apache. |0.17816780927062123|
|response |0.17644830351552274|
|it |0.1755878682373225 |
|purposes, |0.17553053395023288|
|collectively |0.17349352919593652|
|scale |0.16585955085424214|
|the |0.1641250039271931 |
|reduction |0.16363822547590098|
|record |0.16354020270709446|
|optimizations.|0.1595631774706085 |
+--------------+-------------------+
only showing top 29 rows
I took the description of Spark from Wikipedia and from Spark's own overview, so I wondered whether the model had picked up the gist of it, and casually strung some of the words above together to see whether they would form a sentence... but with only this much data it's hard to tell what it is saying. I'd really like to try this on the minutes of past meetings.
System called "Spark" (the word that was entered).
Analysis datasets with higher-level availability, particular use of powerhouses and bottom-up or processing nodes Engine
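Finally, coming back to the complaint at the top that data cannot be added to an already-built model: as far as I can tell the ml Word2Vec estimator has no incremental training, so the only workaround seems to be keeping the original sentences around and calling fit() again on the combined corpus. A minimal sketch of such a method for W2VecApplication (refit and the two DataFrame parameters are my own names, and both inputs are assumed to use the same "text" schema as in startStudy above):

// Hypothetical helper: rebuild the model from old + new sentences, since a
// Word2VecModel apparently cannot be updated in place once it has been fitted.
private static Word2VecModel refit(Word2Vec word2Vec, DataFrame oldSentences, DataFrame newSentences) {
    DataFrame combined = oldSentences.unionAll(newSentences); // same schema required
    return word2Vec.fit(combined);
}

This still means re-reading the whole corpus every time, so it does not really solve the problem; it just keeps the retraining in one place.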