I. Concepts in Brief
1. A MongoDB sharded cluster consists of three components: mongos, config servers, and shards.
2. mongos: the routing service and access point of a sharded cluster; it stores no data itself.
(1) Handles client connections;
(2) Routes requests to the appropriate shards.
3. Config servers: store all of the cluster's metadata (routing and shard configuration). mongos keeps no persistent copy of the shard and routing information; it only caches it in memory, while the config servers hold the authoritative copy. When mongos first starts, or is restarted, it loads its configuration from the config servers, and whenever that configuration changes, every mongos is notified to refresh its state so it can keep routing accurately. Production deployments run multiple config servers, because they hold the shard-routing metadata and its loss must be prevented.
4. Sharding is the process of splitting a database and spreading its data across multiple machines. By distributing data this way, you can store more data and handle heavier load without needing one extremely powerful server. The basic idea is to split a collection into small chunks, distribute those chunks across several shards so that each shard is responsible for only part of the total data, and let a balancer keep the shards even by migrating chunks between them.
(1) A shard can be a single mongod instance or a replica set.
(2) A replica set's arbiter node (Arbiter) only takes part in elections: when the primary goes down, it helps vote a secondary into the primary role.
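The chunk-and-balancer idea described above can be sketched with a toy simulation. This is illustrative only, not MongoDB's real balancer algorithm; the `balance` function name and the dictionary representation are mine:

```python
# Toy sketch of the balancer idea (illustrative only, not MongoDB's real
# algorithm): each shard owns a list of chunks, and chunks migrate from
# the fullest shard to the emptiest until the counts are nearly even.
def balance(placement):
    while True:
        fullest = max(placement, key=lambda s: len(placement[s]))
        emptiest = min(placement, key=lambda s: len(placement[s]))
        if len(placement[fullest]) - len(placement[emptiest]) <= 1:
            return placement
        # migrate one chunk, like one balancer round moving a chunk
        placement[emptiest].append(placement[fullest].pop())

# A freshly added empty shard ends up receiving half of the chunks.
placement = balance({"sharding01": list(range(8)), "sharding02": []})
print({shard: len(chunks) for shard, chunks in placement.items()})  # 4 and 4
```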
II. Deployment Environment for the MongoDB Sharding + Replica Set Cluster
1. Server information (size the actual servers to your own workload and circumstances)
2. Service ports
3. Software versions
(1) OS: CentOS 7.4
(2) MongoDB: Percona Server for MongoDB 3.4
(3) supervisord: 3.3.4
III. Cluster Deployment
1. Install MongoDB
(1) Download the packages
# run on all servers
mkdir -p /opt/upload/mongo_packge
cd /opt/upload/mongo_packge
wget 'https://www.percona.com/downloads/percona-server-mongodb-3.4/percona-server-mongodb-3.4.14-2.12/binary/redhat/7/x86_64/percona-server-mongodb-3.4.14-2.12-r28ff075-el7-x86_64-bundle.tar'
(2) Unpack the bundle and install MongoDB
# run on all nodes
mkdir -p rpm
tar -xf percona-server-mongodb-3.4.14-2.12-r28ff075-el7-x86_64-bundle.tar -C rpm/
cd rpm
yum -y install ./*.rpm
2. Deploy the config servers
(1) Create the config server directories
# run on mongos-01, mongos-02, mongos-03
mkdir -p /opt/configserver/configserver_conf
mkdir -p /opt/configserver/configserver_data
mkdir -p /opt/configserver/configserver_key
mkdir -p /opt/configserver/configserver_log
(2) Create the configuration file
# run on mongos-01, mongos-02, mongos-03
cat <<EOF> /opt/configserver/configserver_conf/configserver.conf
dbpath=/opt/configserver/configserver_data
directoryperdb=true
logpath=/opt/configserver/configserver_log/config.log
bind_ip=0.0.0.0
port=21000
maxConns=10000
replSet=configs
configsvr=true
logappend=true
fork=true
httpinterface=true
#auth=true
#keyFile=/opt/configserver/configserver_key/mongo_keyfile
EOF
chown -R mongod:mongod /opt/configserver
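The config server uses the legacy flat `key=value` configuration format, which is easy to sanity-check programmatically. A minimal sketch (the `parse_flat_conf` helper is hypothetical, not a MongoDB tool):

```python
# Sketch: parse the legacy flat "key=value" mongod config format used
# above. Lines starting with '#' are comments and are skipped.
def parse_flat_conf(text):
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

sample = """\
dbpath=/opt/configserver/configserver_data
port=21000
replSet=configs
configsvr=true
#auth=true
"""
conf = parse_flat_conf(sample)
print(conf["port"], conf["replSet"])  # 21000 configs
```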
(3) Start the config servers
# run on mongos-01, mongos-02, mongos-03
/usr/bin/mongod -f /opt/configserver/configserver_conf/configserver.conf
(4) Create the config server replica set
# run on any one of mongos-01, mongos-02, mongos-03
mongo --port 21000   # log in to mongo
config = {_id : "configs", members : [
    {_id : 0, host : "172.18.6.87:21000"},
    {_id : 1, host : "172.18.6.86:21000"},
    {_id : 2, host : "172.18.6.85:21000"}
]}                   # define the replica set members
rs.initiate(config)  # initialize the replica set
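Why three config servers (and three-member shard replica sets): a replica set can only elect a primary while a strict majority of its voting members is reachable. The arithmetic can be sketched as follows (the function names are mine, for illustration):

```python
# Sketch: the majority arithmetic behind replica-set elections. A primary
# can only be elected with strictly more than half of the voting members,
# which is why a three-member set tolerates exactly one node failure.
def votes_needed(voting_members):
    return voting_members // 2 + 1

def can_elect_primary(voting_members, reachable_members):
    return reachable_members >= votes_needed(voting_members)

print(votes_needed(3))          # 2: majority of a three-member set
print(can_elect_primary(3, 2))  # True: one node down is survivable
print(can_elect_primary(3, 1))  # False: two nodes down, no primary
```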
3. Deploy the sharding01 replica set (this deployment uses two replica sets, each serving as one shard)
(1) Create the program directories
# run on shardding01_arbitration-01, shardding01_mongodb-01, shardding01_mongodb-02
mkdir -p /opt/mongodb/mongodb_data
mkdir -p /opt/mongodb/mongodb_keyfile
mkdir -p /opt/mongodb/mongodb_log
chown -R mongod:mongod /opt/mongodb

(2) Edit the configuration file (on shardding01_arbitration-01, set the cacheSizeGB parameter to 2 instead of 20)
# run on shardding01_arbitration-01, shardding01_mongodb-01, shardding01_mongodb-02
cat <<EOF >/etc/mongod.conf
# mongod.conf, Percona Server for MongoDB
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /opt/mongodb/mongodb_data
  directoryPerDB: true
  journal:
    enabled: true
  # engine: mmapv1
  # engine: rocksdb
  engine: wiredTiger
  # engine: inMemory
  # More info for wiredTiger:
  # https://docs.mongodb.com/v3.4/reference/configuration-options/#storage-wiredtiger-options
  wiredTiger:
    engineConfig:
      cacheSizeGB: 20

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /opt/mongodb/mongodb_log/mongod.log

processManagement:
  fork: true
  pidFilePath: /opt/mongodb/mongodb_log/mongod.pid

# network interfaces
net:
  port: 22000
  bindIp: 0.0.0.0
  maxIncomingConnections: 100000
  wireObjectCheck: true
  http:
    JSONPEnabled: false
    RESTInterfaceEnabled: false

#security:
#  authorization: enabled
#  keyFile: /opt/mongodb/mongodb_keyfile/mongo_keyfile

replication:
  replSetName: "sharding01"

sharding:
  clusterRole: shardsvr
  archiveMovedChunks: false
EOF

(3) Adjust the systemd unit file for mongod
# run on shardding01_arbitration-01, shardding01_mongodb-01, shardding01_mongodb-02
sed -i 's#64000#100000#g' /usr/lib/systemd/system/mongod.service   # raise the process open-file limit
sed -i 's#/var/run/mongod.pid#/opt/mongodb/mongodb_log/mongod.pid#g' /usr/lib/systemd/system/mongod.service   # move the pid file
systemctl daemon-reload

(4) Start mongod
# run on shardding01_arbitration-01, shardding01_mongodb-01, shardding01_mongodb-02
systemctl restart mongod
systemctl enable mongod

(5) Configure the replica set
mongo --port 22000
config = {_id : "sharding01", members : [
    {_id : 0, host : "172.18.6.89:22000"},
    {_id : 1, host : "172.18.6.88:22000"},
    {_id : 2, host : "172.18.6.92:22000", arbiterOnly : true}
]}                   # define the members; arbiterOnly:true marks the arbiter node
rs.initiate(config)  # initialize the replica set

(6) Check the replica set status
sharding01:SECONDARY> rs.status()
{
    "set" : "sharding01",
    "date" : ISODate("2018-11-02T16:05:37.648Z"),
    "myState" : 2,
    "term" : NumberLong(17),
    "syncingTo" : "172.18.6.89:22000",
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1541174730, 1127), "t" : NumberLong(17) },
        "appliedOpTime" : { "ts" : Timestamp(1541174730, 1127), "t" : NumberLong(17) },
        "durableOpTime" : { "ts" : Timestamp(1541174730, 1127), "t" : NumberLong(17) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.18.6.89:22000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 27141,
            "optime" : { "ts" : Timestamp(1541174730, 1127), "t" : NumberLong(17) },
            "optimeDurable" : { "ts" : Timestamp(1541174730, 1127), "t" : NumberLong(17) },
            "optimeDate" : ISODate("2018-11-02T16:05:30Z"),
            "optimeDurableDate" : ISODate("2018-11-02T16:05:30Z"),
            "lastHeartbeat" : ISODate("2018-11-02T16:05:35.909Z"),
            "lastHeartbeatRecv" : ISODate("2018-11-02T16:05:37.428Z"),
            "pingMs" : NumberLong(0),
            "electionTime" : Timestamp(1541147605, 1),
            "electionDate" : ISODate("2018-11-02T08:33:25Z"),
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "172.18.6.88:22000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 27142,
            "optime" : { "ts" : Timestamp(1541174730, 1127), "t" : NumberLong(17) },
            "optimeDate" : ISODate("2018-11-02T16:05:30Z"),
            "syncingTo" : "172.18.6.89:22000",
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "172.18.6.92:22000",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 27141,
            "lastHeartbeat" : ISODate("2018-11-02T16:05:35.909Z"),
            "lastHeartbeatRecv" : ISODate("2018-11-02T16:05:33.315Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
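The numeric `state` fields in the rs.status() output correspond to the `stateStr` values shown alongside them. A small sketch covering the three states that appear in this cluster (the helper function is mine, for illustration):

```python
# Sketch: the numeric member states seen in rs.status(), mapped to their
# names for the three states that appear in the output above.
STATE_NAMES = {1: "PRIMARY", 2: "SECONDARY", 7: "ARBITER"}

def healthy_primary_count(members):
    # A healthy set should report exactly one healthy PRIMARY.
    return sum(1 for m in members
               if m["health"] == 1 and STATE_NAMES.get(m["state"]) == "PRIMARY")

members = [
    {"name": "172.18.6.89:22000", "health": 1, "state": 1},
    {"name": "172.18.6.88:22000", "health": 1, "state": 2},
    {"name": "172.18.6.92:22000", "health": 1, "state": 7},
]
print(healthy_primary_count(members))  # 1
```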
4. Deploy the sharding02 replica set (the second of the two shard replica sets)
Follow the same steps as sharding01, substituting the sharding02 hosts and replica set name.
5. Configure mongos
(1) Create the program directories
# run on mongos-01, mongos-02, mongos-03
mkdir -p /opt/mongos/mongos_conf
mkdir -p /opt/mongos/mongos_data
mkdir -p /opt/mongos/mongos_key
mkdir -p /opt/mongos/mongos_log

(2) Create the configuration file
# run on mongos-01, mongos-02, mongos-03
cat <<EOF> /opt/mongos/mongos_conf/mongos.conf
logpath=/opt/mongos/mongos_log/mongos.log
logappend=true
bind_ip=0.0.0.0
port=20000
# address of the config server replica set
configdb=configs/172.18.6.87:21000,172.18.6.86:21000,172.18.6.85:21000
fork=true
#keyFile=/opt/mongos/mongos_key/mongo_keyfile
EOF

(3) Start mongos
# run on mongos-01, mongos-02, mongos-03
/usr/bin/mongos -f /opt/mongos/mongos_conf/mongos.conf

(4) Add the shards
# run on any one of mongos-01, mongos-02, mongos-03
mongo --port 20000
use admin
db.runCommand( { addshard : "sharding01/172.18.6.89:22000,172.18.6.88:22000,172.18.6.92:22000"} );   # add shard 1
db.runCommand( { addshard : "sharding02/172.18.6.91:22000,172.18.6.90:22000,172.18.6.93:22000"} );   # add shard 2

(5) Check the cluster status
mongos> db.runCommand( { listshards : 1 } )
{
    "shards" : [
        { "_id" : "sharding01", "host" : "sharding01/172.18.6.88:22000,172.18.6.89:22000", "state" : 1 },
        { "_id" : "sharding02", "host" : "sharding02/172.18.6.90:22000,172.18.6.91:22000", "state" : 1 }
    ],
    "ok" : 1
}
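Both the `configdb=` option and the addshard command use the same `replsetName/host:port,host:port,...` seed-list notation. A sketch of splitting it into its parts (the `parse_seed_list` helper is hypothetical, not a MongoDB API):

```python
# Sketch: split the "replsetName/host:port,host:port,..." notation used
# by both the configdb= option and the addshard command.
def parse_seed_list(spec):
    replset, _, hosts = spec.partition("/")
    return replset, hosts.split(",")

replset, hosts = parse_seed_list(
    "sharding01/172.18.6.89:22000,172.18.6.88:22000,172.18.6.92:22000")
print(replset)     # sharding01
print(len(hosts))  # 3
```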
6. Configure the shard key (this deployment uses hashed sharding to match the business workload)
Sharding must be enabled for each database that needs it, and each collection to be sharded within that database must be given a shard key.
(1) Enable sharding and set the shard key
# run on any mongos node
mongo --port 20000
use admin
db.runCommand( { enablesharding : "dingkai" });   # enable sharding for the database (dingkai is the database name)
use dingkai                                       # switch to the dingkai database
db.dingkaitable.createIndex({dingkaifields: "hashed"});   # hash-index the shard key field; dingkaitable is the collection to shard, dingkaifields the shard key field
use admin                                         # the shard key must be created from the admin database
db.runCommand({shardcollection : "dingkai.dingkaitable", key:{dingkaifields: "hashed"} })   # create the shard key ("hashed" = hashed sharding; use 1 instead for range sharding)
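The point of a hashed shard key is that monotonically increasing values (like auto-increment ids) still spread evenly across shards. The sketch below illustrates the effect with md5-based bucketing; this is not MongoDB's actual hash function (it uses its own 64-bit hash internally), only a demonstration of the spreading idea:

```python
import hashlib

# Sketch: why a hashed shard key spreads monotonically increasing values.
# md5 stands in for MongoDB's internal hash; the spreading effect is the
# same idea, though the actual hash values differ.
def shard_for(value, num_shards):
    digest = hashlib.md5(str(value).encode()).hexdigest()
    return int(digest, 16) % num_shards

counts = {0: 0, 1: 0}
for i in range(1000):            # sequential ids, like an auto-increment key
    counts[shard_for(i, 2)] += 1
print(counts)  # roughly balanced across the two shards
```

With range sharding, the same sequential ids would all land in the last chunk, concentrating writes on one shard.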
IV. Enabling Authentication
1. Create accounts in each cluster (start with just a root-privileged user; other users can be created later through the root account)
(1) configserver cluster
# run on the primary
mongo --port 21000
use admin
db.createUser({user:"root", pwd:"Dingkai.123", roles:[{role: "root", db:"admin" }]})                    # root account (superuser)
db.createUser({user:"admin", pwd:"Dingkai.123", roles:[{role: "userAdminAnyDatabase", db:"admin" }]})   # administrator account
db.createUser({user:"clusteradmin", pwd:"Dingkai.123", roles:[{role: "clusterAdmin", db:"admin" }]})    # cluster management account

(2) Primary of each sharding replica set (these accounts are only needed for logging in to a shard's replica set directly; accounts for business databases should be created through mongos)
# note: the shard nodes listen on port 22000
mongo --port 22000
use admin
db.createUser({user:"root", pwd:"Dingkai.123", roles:[{role: "root", db:"admin" }]})                    # root account (superuser)
db.createUser({user:"admin", pwd:"Dingkai.123", roles:[{role: "userAdminAnyDatabase", db:"admin" }]})   # administrator account
db.createUser({user:"clusteradmin", pwd:"Dingkai.123", roles:[{role: "clusterAdmin", db:"admin" }]})    # cluster management account
2. Create the cluster key file
openssl rand -base64 64 > mongo_keyfile
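A Python equivalent of the openssl command above, to show what the key file actually contains (assuming, as documented for mongod key files, that the key must be 6 to 1024 base64 characters):

```python
import base64
import secrets

# Sketch: a Python equivalent of `openssl rand -base64 64`, producing the
# shared cluster key file contents: 64 random bytes, base64-encoded.
def make_keyfile_contents(num_bytes=64):
    return base64.encodebytes(secrets.token_bytes(num_bytes)).decode()

key = make_keyfile_contents()
print(len(base64.b64decode(key)))  # 64 random bytes under the base64 text
```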
3. Distribute the key file to the location specified in each node's configuration file
(1) configserver nodes (mongos-01, mongos-02, mongos-03); their configuration specifies:
keyFile=/opt/configserver/configserver_key/mongo_keyfile
(2) each node in the sharding replica sets; their configuration specifies:
keyFile: /opt/mongodb/mongodb_keyfile/mongo_keyfile
(3) mongos nodes (mongos-01, mongos-02, mongos-03); their configuration specifies:
keyFile=/opt/mongos/mongos_key/mongo_keyfile
(4) After distribution, set the mongo_keyfile permissions to 600 and its owner and group to mongod
chmod 600 mongo_keyfile
chown mongod:mongod mongo_keyfile
4. Enable the authentication settings in each node's configuration file
(1) sharding nodes (shardding01_arbitration-01, shardding01_mongodb-01, shardding01_mongodb-02, shardding02_arbitration-01, shardding02_mongodb-01, shardding02_mongodb-02): uncomment in the configuration file:
security:
  authorization: enabled
  keyFile: /opt/mongodb/mongodb_keyfile/mongo_keyfile
(2) configserver nodes (mongos-01, mongos-02, mongos-03): uncomment in configserver.conf:
auth=true
keyFile=/opt/configserver/configserver_key/mongo_keyfile
(3) mongos nodes (mongos-01, mongos-02, mongos-03): uncomment in mongos.conf:
keyFile=/opt/mongos/mongos_key/mongo_keyfile
5. Restart all nodes
(1) Restart each sharding node
(2) Restart each configserver node
(3) Restart each mongos node
6. Verify
(1) Without authenticating, cluster status commands are rejected
mongo --port 20000
use admin
db.runCommand( { listshards : 1 } )
{
    "ok" : 0,
    "errmsg" : "not authorized on admin to execute command { listshards: 1.0 }",
    "code" : 13,
    "codeName" : "Unauthorized"
}
(2) Authenticate with the cluster management account
mongos> use admin
switched to db admin
mongos> db.auth("clusteradmin","Dingkai.123")
1
mongos> db.runCommand( { listshards : 1 } )
{
    "shards" : [
        { "_id" : "sharding01", "host" : "sharding01/172.18.6.88:22000,172.18.6.89:22000", "state" : 1 },
        { "_id" : "sharding02", "host" : "sharding02/172.18.6.90:22000,172.18.6.91:22000", "state" : 1 }
    ],
    "ok" : 1
}
V. Common Commands
(1) Cluster management
db.runCommand( { listshards : 1 } )   # list the shards in the cluster
db.printShardingStatus()              # print detailed status of the whole sharded cluster
db.<collection>.stats()               # show a collection's storage statistics
(2) User management
read: read-only access to the specified database
readWrite: read and write access to the specified database
dbAdmin: administrative functions on the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: write access to the system.users collection; can create, drop, and manage users in the specified database
clusterAdmin: available only in the admin database; grants administrative rights over all sharding and replica set functions
readAnyDatabase: available only in the admin database; grants read access to every database
readWriteAnyDatabase: available only in the admin database; grants read and write access to every database
userAdminAnyDatabase: available only in the admin database; grants userAdmin rights on every database
dbAdminAnyDatabase: available only in the admin database; grants dbAdmin rights on every database
root: available only in the admin database; the superuser role with full privileges
###### Create a user ######
db.createUser({user:"XXX",pwd:"XXX",roles:[{role:"readWrite", db:"myTest"}]})
###### View users ######
(1) View all users
use admin
db.system.users.find()
(2) View the current database's users
use <db>
show users
###### Delete users ######
(1) Delete a user from the current database
use <db>
db.dropUser('<username>')