Running	on	RHEL-7
This document describes how to install tftp and set up asynchronous
syncing between nodes in a masterless cluster. This will enable you to
drop a file onto one server with tftp and the file will be available on all
the other servers in the cluster. The cluster can have two or more nodes,
and nodes can be added or removed without downing any of the
machines or services. Each added node will pull the files from the other
servers upon a change on any of the machines.
These instructions can be used to create any type of asynchronous
masterless cluster. Just point Watcher and Unison at whatever
folder you use and that folder will stay in sync across the cluster.
Document	Version
1.0.0
Installation	Steps
The goal of this guide is to start with blank RHEL servers and get
them all synced and operational. All steps in this guide must be
executed on each node of the cluster. All commands will be run as root
and SELinux will be disabled. If you are using this in production, these
security features should be properly configured.
Install EPEL-7.5
Set up SSH keys
Install and configure tftp
Install Unison
Create scripts
Install and configure Watcher
Set crontab to create an archive once a day
Environment
In this guide, we will have two fresh RHEL servers with IPs 10.1.1.38
and 10.1.1.39. The folder that will be synced is ‘/configs’
Installing	EPEL-7.5
Firstly, we want the latest and greatest of all of the packages, so
$	yum	update	-y
$	reboot
Next, install EPEL-7.5 using the commands
$ cd /tmp/
$ yum install wget -y
$ wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ yum install epel-release-latest-7.noarch.rpm -y
Setting	up	SSH	keys
SSH	keys	must	be	set	up	between	the	servers	for	Unison	to	function
$	ssh-keygen	-t	rsa
You will be presented with several prompts. Press Enter at each step to
accept the default values.
Press Enter here to save the key to the default location: ~/.ssh/id_rsa
$ Enter file in which to save the key (/home/demo/.ssh/id_rsa):
A passphrase is not necessary for the installation, but it does make the
key harder to use if it were to fall into the wrong hands. Press Enter
to not use a passphrase
$	Enter	passphrase	(empty	for	no	passphrase):
The	entire	key	generation	process	is	shown	below
[root@tftp1	.ssh]#	ssh-keygen	-t	rsa
Generating	public/private	rsa	key	pair.
Enter	file	in	which	to	save	the	key	(/root/.ssh/id_rsa):	
Enter	passphrase	(empty	for	no	passphrase):
Enter	same	passphrase	again:	
Your	identification	has	been	saved	in	/root/.ssh/id_rsa.
Your	public	key	has	been	saved	in	/root/.ssh/id_rsa.pub.
The	key	fingerprint	is:
9f:93:69:71:92:69:be:a0:0a:16:42:b5:20:a0:f9:9b root@tftp1.novalocal
The	key's	randomart	image	is:
+--[	RSA	2048]----+
|o...													|
|ooo	.												|
|	o..o												|
|.		E													|
|..						S								|
|.	.						.	.					|
|..		.		+		+						|
|o		.	.=	o+							|
|...		.o++o							|
+-----------------+
This key must be copied into the other server’s ‘.ssh/authorized_keys’
file. This can be done with scp, but since this guide is made using two
virtual machines running on the same Mac, we can just use the
clipboard
Show the key
$	cd
$	cat	.ssh/id_rsa.pub
Now copy and paste this key into the other server’s
‘.ssh/authorized_keys’ file using vi. Do this step as the root user
$	cd
$	vi	.ssh/authorized_keys
Complete the SSH key setup for the other server, copying its
‘id_rsa.pub’ key into this server’s ‘.ssh/authorized_keys’ file.
Log into the other server via SSH to ensure that it is
permanently saved to known_hosts
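The same steps can be scripted. A minimal sketch, assuming ssh-copy-id is available (it ships with OpenSSH) and using this guide's example peer IP:

```shell
# Generate a key with an empty passphrase if one does not exist yet,
# then print the ssh-copy-id command to run for each peer.
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR" && chmod 700 "$KEYDIR"
if [ ! -f "$KEYDIR/id_rsa" ]; then
    # -q: quiet, -N "": empty passphrase, -f: key file location
    ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
fi
for PEER in 10.1.1.39; do
    echo "ssh-copy-id -i $KEYDIR/id_rsa.pub root@$PEER"
done
```

Running the printed ssh-copy-id line appends the public key to the peer's ~/.ssh/authorized_keys, replacing the manual copy-and-paste into vi.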
Installing	and	Configuring	TFTP
TFTP	is	inherently	insecure,	so	it	should	only	be	run	on	a	machine	in	a
trusted	zone
$	yum	install	tftp	tftp-server	xinetd	-y
We need to edit the tftp config file as follows and create the folder
we will use to store tftp files
$	vi	/etc/xinetd.d/tftp
# default: off
# description: The tftp server serves files using the trivial file transfer
#   protocol.  The tftp protocol is often used to boot diskless
#   workstations, download configuration files to network-aware printers,
#   and to start the installation process for some operating systems.
service	tftp
{
				socket_type					=	dgram
				protocol								=	udp
				wait												=	yes
				user												=	root
				server										=	/usr/sbin/in.tftpd
				server_args					=	-c	-s	/configs
				disable									=	no
				per_source						=	11
				cps													=	100	2
				flags											=	IPv4	
}
Now	create	the	TFTP	folder	we	defined	in	the	config	above
$ mkdir /configs
$ chmod 777 /configs
Start	and	enable	tftp	and	xinetd	at	boot
$	systemctl	enable	tftp
$	systemctl	start	tftp
$	systemctl	enable	xinetd
$	systemctl	start	xinetd
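With both services running, a quick smoke test can confirm the share works. A sketch, with illustrative file names, run on a node where the services are started:

```shell
# Push a file into the share with the tftp client and check that it
# lands in /configs (the folder from server_args above).
echo "tftp-test" > /tmp/tftp-test
# -c runs one command non-interactively; timeout guards against a hung
# transfer if the server is not actually listening
timeout 10 tftp -v 127.0.0.1 -c put /tmp/tftp-test tftp-test || echo "put failed"
ls -l /configs/tftp-test || echo "file not in /configs"
```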
Install	Unison
Unison is not yet available for RHEL-7, so it must be compiled manually
as shown below. (You can copy the entire block of commands into
the CLI and it will run all of them. Remember to press Enter after the last
command)
$	yum	install	-y	ocaml	ocaml-camlp4-devel	ctags	ctags-etags
$	mkdir	-p	/opt/unison
$	cd	/opt/unison
$ curl -O http://www.seas.upenn.edu/~bcpierce/unison//download/releases/stable/unison-2.48.3.tar.gz
$	tar	xzvf	unison-2.48.3.tar.gz
$	cd	unison-2.48.3
$	make
$	cp	unison	/usr/local/sbin/
$	cp	unison	/usr/bin/
$	cd	/opt
$	rm	-rf	unison/
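Unison requires the same release on every node it syncs with, so after installing, it is worth confirming the build on each machine. A small check, assuming the copy locations used above:

```shell
# Find the installed binary and print its version; every node in the
# cluster should report the same release (2.48.3 with the tarball above).
UNISON_BIN="$(command -v unison || echo /usr/local/sbin/unison)"
"$UNISON_BIN" -version 2>/dev/null || echo "unison not found at $UNISON_BIN"
```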
Create	Scripts
This cluster relies on a few scripts to run effectively. These will be
created in the following steps
This	script	is	run	whenever	a	file	is	changed	inside	the	configs	folder
$	cd	/usr/local/bin/
$	vi	rununison.sh
Enter the following line into the script, replacing the IP with the IP of
the machine you want to sync with. For each additional node in the cluster,
add another line to this script with that machine's IP.
$	unison	-batch	/configs	ssh://10.1.1.39//configs
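With more than one peer, rununison.sh is just that line repeated, which can be written as a loop. A sketch for a hypothetical three-node cluster (10.1.1.40 is an assumed third address; edit PEERS per node):

```shell
#!/bin/bash
# Sync /configs to every other node in the cluster. PEERS lists the IPs
# of the *other* cluster members; add or remove IPs as the cluster changes.
PEERS="10.1.1.39 10.1.1.40"
for PEER in $PEERS; do
    # -batch: run without interactive prompts
    unison -batch /configs "ssh://$PEER//configs" || echo "sync to $PEER failed" >&2
done
```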
Now	make	the	script	executable
$	chmod	+x	rununison.sh
This script will start Watcher at boot. It also works around an issue with
Watcher (which is installed in the next step) that causes it to not start
correctly after a server reboot because of a stale .pid file it creates
$ cd /etc/init.d/
$ vi startwatcher.sh
Enter the following lines into the script
$	rm	-rf	/tmp/watcher.pid
$	cd	/opt/Watcher-master/
$	./watcher.py	start	-c	watcher.ini
Enable	the	script	to	run	at	startup
$ ln -s /etc/init.d/startwatcher.sh /etc/rc.d/
This script will archive the configs folder into a zip file in the
/archives directory
$	cd	/usr/local/bin/
$	vi	runarchive.sh
Enter the following text into the vi window, changing the source and
destination folders as needed for your system
mkdir -p /archives
OUTPUT="configs_"$(date +%Y)-$(date +%m)-$(date +%d)
zip -r /archives/$OUTPUT /configs/*
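The same script can be tightened up a little. A sketch with a single date call and the source and destination as variables (adjust both for your system):

```shell
#!/bin/bash
# Zip the synced folder into a dated archive,
# e.g. /archives/configs_2015-10-28.zip
SRC="${SRC:-/configs}"
DEST="${DEST:-/archives}"
OUTPUT="configs_$(date +%Y-%m-%d)"
mkdir -p "$DEST"
zip -r "$DEST/$OUTPUT" "$SRC"/* || echo "archive failed" >&2
```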
Now set crontab to run the job once a day at 11:59 PM. This archive file
is not synced; when it is created, all of the nodes will already be in
sync. Keeping these backups out of the synced folder means that they
cannot be erased from all machines accidentally.
$ crontab -e
$ 59 23 * * * /usr/local/bin/runarchive.sh
Install	Watcher
Watcher monitors the specified folder for events and triggers a script
when something happens. To watch multiple folders, just create more jobs
inside the watcher.ini file
$	yum	install	unzip	python-pip	-y
$	pip	install	--upgrade	pip
$	pip	install	pyinotify
$	cd	/opt/
$ wget https://github.com/splitbrain/Watcher/archive/master.zip
$ unzip master.zip
$ rm -f master.zip
$	vi	Watcher-master/watcher.ini
Inside	the	watcher.ini	file,	change	the	following	lines:
$	watch=/configs
$	events=all
$	command=/usr/local/bin/rununison.sh
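Put together, the job section of watcher.ini might look like the sketch below. The section name and the recursive/autoadd flags follow the upstream sample file and are assumptions here; only watch, events, and command are taken from this guide:

```ini
[job1]
watch=/configs
events=all
; sync subfolders too, and pick up folders created later
recursive=true
autoadd=true
command=/usr/local/bin/rununison.sh
```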
Start	Watcher
$	cd	/opt/Watcher-master/
$	./watcher.py	start	-c	watcher.ini	
Scaling
To add a node to the cluster, just edit the rununison.sh script, adding
another line like the one already there for each new machine. As soon as a
sync is triggered on any of the machines, the new node will get the
existing files from the cluster. Because we are just editing a script, there
is no service outage.
