Setting up a Cassandra cluster in a local environment
Notes on setting up a development environment for a project that uses Cassandra.
I used VirtualBox + Vagrant + Itamae for the setup.
- Local machine: Mac OS X
- Virtual machines (Vagrant): CentOS 7
- Cassandra 3.11.3
- Cluster of 3 nodes
This is a very rough memo written just to get something running quickly.
Creating the VM instances for Cassandra
Create a Vagrantfile roughly like the following.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  (1..3).each do |no|
    name = "nosql#{no}"
    config.vm.define name do |node|
      node.vm.provider "virtualbox" do |vm|
        vm.name = name
        vm.customize ["modifyvm", :id, "--memory", "2048"]
      end
      node.vm.box = "centos/7"
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.33.4#{no}"
      node.vm.provision "shell", inline: "sudo systemctl stop firewalld"
      node.vm.provision "shell", inline: "sudo systemctl disable firewalld"
      node.vm.provision "shell", inline: "sudo systemctl restart network"
    end
  end
end
After creating the Vagrantfile, run vagrant up.
$ vagrant up
$ vagrant ssh-config >> ~/.ssh/config
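Before going further I also check that all three VMs actually came up. vagrant status should report something along these lines (output trimmed; the exact formatting depends on the Vagrant version):
$ vagrant status
nosql1                    running (virtualbox)
nosql2                    running (virtualbox)
nosql3                    running (virtualbox)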
Confirm that you can connect to each of the created VMs over SSH.
$ ssh nosql1
Last login: Thu Apr 26 15:20:03 2018 from 10.0.2.2
[vagrant@nosql1 ~]$ exit
logout
Connection to 127.0.0.1 closed.
$ ssh nosql2
Last login: Thu Apr 26 15:20:03 2018 from 10.0.2.2
[vagrant@nosql2 ~]$ exit
logout
Connection to 127.0.0.1 closed.
$ ssh nosql3
Last login: Thu Apr 26 15:20:03 2018 from 10.0.2.2
[vagrant@nosql3 ~]$ exit
logout
Connection to 127.0.0.1 closed.
Building Cassandra with Itamae
Create the files in the following directory structure.
CassandraCluster
├── Vagrantfile
├── cookbooks
│   ├── cassandra.rb
│   ├── files
│   │   ├── cassandra.sh
│   │   └── hosts
│   └── templates
│       └── cassandra.yaml.erb
└── node.yml
If itamae is not installed, install it with gem.
$ gem install itamae
node.yml
cluster:
  nosql1:
    ip: 192.168.33.41
  nosql2:
    ip: 192.168.33.42
  nosql3:
    ip: 192.168.33.43
cookbooks/cassandra.rb
# Install required packages
package "wget"
package "vim"
package "java-1.8.0-openjdk"

# Deploy the /etc/hosts file (source: cookbooks/files/hosts)
remote_file "/etc/hosts"

# Download cassandra-3.11.3
execute "wget http://ftp.tsukuba.wide.ad.jp/software/apache/cassandra/3.11.3/apache-cassandra-3.11.3-bin.tar.gz" do
  not_if "test -e /home/vagrant/apache-cassandra-3.11.3-bin.tar.gz"
end

# Extract the tar.gz file
execute "tar xvfz apache-cassandra-3.11.3-bin.tar.gz -C /opt/" do
  not_if "test -d /opt/apache-cassandra-3.11.3"
end

# Change the directory ownership
directory "/opt/apache-cassandra-3.11.3" do
  owner "vagrant"
  group "vagrant"
end

# Create a symlink so Cassandra can be reached at /opt/cassandra
link "/opt/cassandra" do
  to "/opt/apache-cassandra-3.11.3"
end

# Add the environment variable settings (source: cookbooks/files/cassandra.sh)
remote_file "/etc/profile.d/cassandra.sh"

# Cassandra configuration file:
# deploy /opt/cassandra/conf/cassandra.yaml from the template
template "/opt/cassandra/conf/cassandra.yaml" do
  # Look up the IP address of the node this recipe is being applied to
  self_host = node['hostname']
  listen_address = node['cluster'][self_host]['ip']
  # Build the comma-separated list of IP addresses used as seeds
  seeds = node['cluster'].values.map {|v| v['ip']}.join(",")
  # Pass the parameters to the template cassandra.yaml.erb
  variables(listen_address: listen_address, seeds: seeds)
end
cookbooks/files/cassandra.sh
export CASSANDRA_HOME=/opt/cassandra
export PATH=$PATH:$CASSANDRA_HOME/bin:$CASSANDRA_HOME/tools/bin
cookbooks/files/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.33.41 nosql1
192.168.33.42 nosql2
192.168.33.43 nosql3
cookbooks/templates/cassandra.yaml.erb
cluster_name: 'develop-cluster'
num_tokens: 256
hinted_handoff_enabled: true
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
cdc_enabled: false
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
thrift_prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: <%= @seeds %>
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: <%= @listen_address %>
start_native_transport: true
native_transport_port: 9042
start_rpc: false
rpc_address: <%= @listen_address %>
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
    enabled: false
    chunk_length_kb: 64
    cipher: AES/CBC/PKCS5Padding
    key_alias: testing:1
    key_provider:
      - class_name: org.apache.cassandra.security.JKSKeyProvider
        parameters:
          - keystore: conf/.keystore
            keystore_password: cassandra
            store_type: JCEKS
            key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
back_pressure_enabled: false
back_pressure_strategy:
    - class_name: org.apache.cassandra.net.RateBasedBackPressure
      parameters:
        - high_ratio: 0.90
          factor: 5
          flow: FAST
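With the recipe, files, and template in place, I sometimes do a dry run against a single node first to see which resources Itamae would touch without changing anything (the --dry-run flag is available in recent Itamae versions):
$ itamae ssh --dry-run -h nosql1 cookbooks/cassandra.rb -y node.yml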
Once everything above is in place, run Itamae against each node.
$ itamae ssh -h nosql1 cookbooks/cassandra.rb -y node.yml
$ itamae ssh -h nosql2 cookbooks/cassandra.rb -y node.yml
$ itamae ssh -h nosql3 cookbooks/cassandra.rb -y node.yml
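To double-check the result, it is worth logging in to one node and looking at the rendered settings and the PATH. The paths below assume the layout from this recipe, and /etc/profile.d/cassandra.sh only takes effect on a new login shell:
$ ssh nosql1
[vagrant@nosql1 ~]$ grep -E 'listen_address:|seeds:' /opt/cassandra/conf/cassandra.yaml
[vagrant@nosql1 ~]$ which cassandra nodetool
listen_address and seeds should show the IPs from node.yml, and the Cassandra binaries should resolve under /opt/cassandra.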
Starting Cassandra
Log in to each VM and start the process with the cassandra command. Start the nodes one at a time, beginning with nosql1.
[vagrant@nosql1 ~]$ cassandra
[vagrant@nosql2 ~]$ cassandra
[vagrant@nosql3 ~]$ cassandra
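Without -f the cassandra command detaches into the background, so when bringing the nodes up one by one I keep an eye on the log and wait until the node reports that it is listening for CQL clients before starting the next (the log path assumes the tarball layout used above):
[vagrant@nosql1 ~]$ tail -f /opt/cassandra/logs/system.log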
Finally, run the nodetool status command on any one of the nodes to confirm that the cluster has formed.
[vagrant@nosql1 ~]$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load        Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.33.41  87.41 KiB   256     67.0%             15c8d0cd-5cd0-4eac-8e4a-dae2afff5722  rack1
UN  192.168.33.42  69.92 KiB   256     68.4%             4709aa74-1fa6-4c1d-9936-53021a1ea639  rack1
UN  192.168.33.43  156.02 KiB  256     64.5%             5d5ddf53-e5eb-4491-95e3-cf079a3797a2  rack1
If you get output like this, the cluster is up and running.
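As an extra smoke test, connecting with cqlsh to any node over the native transport should also work at this point; DESCRIBE CLUSTER should report the develop-cluster name set in cassandra.yaml (the IP below is simply nosql1's address from this setup):
[vagrant@nosql1 ~]$ cqlsh 192.168.33.41
cqlsh> DESCRIBE CLUSTER;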
Next time I plan to try loading some data and a few other things.