=============================================
 Playbooks for administering WAeUP servers.
=============================================

These are materials to use with our servers.

For starters: the tutorial given on

  https://github.com/leucos/ansible-tuto

is a really nice hands-on intro to `ansible`. Please read it!

If you want to develop or test the scripts in here, try to work with virtual
machines first. The ``Vagrant`` section below explains the details.

Server Lifecycle
================

When we get a server freshly installed from Hetzner, we want to make sure
that at least some common security holes are closed.


Right after first install: `bootstrap.yml`
------------------------------------------

For starters we "bootstrap" a server install with the ``bootstrap.yml``
playbook. This playbook does three things:

- It secures the ``SSHD`` config according to the recommendations from
  https://bettercrypto.org
- It adds accounts for admin users (including sudo rights)
- It disables root login via SSH.

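To give an idea of what that means in Ansible terms, here is a compressed,
illustrative sketch of the kind of tasks such a play runs. It is not a copy of
the real ``bootstrap.yml`` in this repository, which remains the authoritative
source::

  # illustrative sketch only -- see bootstrap.yml for the real play
  - hosts: yet-untouched
    tasks:
      - name: harden sshd_config (e.g. forbid root login via SSH)
        lineinfile:
          path: /etc/ssh/sshd_config
          regexp: '^#?PermitRootLogin'
          line: 'PermitRootLogin no'
      - name: add admin users with sudo rights
        user:
          name: "{{ item.key }}"
          password: "{{ item.value.hashed_pw }}"
          groups: sudo
          append: yes
        with_dict: "{{ admin_users }}"
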
Before the playbook can be run, you have to prepare a few things.

1) Make sure you can SSH into the systems as ``root``.

2) Make sure Python 2.x is installed on the target systems. This is no longer
   the case, for instance, for minimal Ubuntu images starting with 16.04 LTS.

   If Python 2.x is not installed, run::

     # apt-get update
     # apt-get install python python-simplejson

   as `root` on each targeted system.

3) For each server to handle, make an entry in the ``[yet-untouched]`` section
   of the ``hosts`` file like this::

     # hosts
     [yet-untouched]
     h23.waeup.org ansible_user=root ansible_ssh_pass=so-secret ansible_sudo_pass="{{ ansible_ssh_pass }}"
     h24.waeup.org ansible_user=root ansible_ssh_pass=123456789 ansible_sudo_pass="{{ ansible_ssh_pass }}"

   The ``ansible_sudo_pass`` is not necessary for now, but will be needed if
   you want to run everything as a normal user. It is simply a copy of
   ``ansible_ssh_pass``.

   Yes, this is a very dangerous part, and you should not check these
   modifications in. Instead, remove the entries after you are done.

4) Update the ``vars`` in ``bootstrap.yml``. State whether SSH root access
   should stay enabled; normally you will say ``no`` or ``false``.

   Then you have to create a dict of admin users. For each user we need a name
   (key) and a hashed password. The hash can be created like this::

     $ diceware -d '-' -n 6 --no-caps | tee mypw | mkpasswd -s --method=sha-512 >> mypw

   which will create a random password and its SHA-512-hashed variant in a
   file called ``mypw``. If you do not have `diceware` installed, you can use
   `pwgen` (or any other password generator)::

     $ pwgen -s 33 | tee mypw | mkpasswd -s --method=sha-512 >> mypw

   The hashed variant then has to be entered as ``hashed_pw`` in the `vars` of
   ``bootstrap.yml``.

   In the end, there should be something like::

     # bootstrap.yml
     # ...
     vars:
       permit_ssh_root: false
       admin_users:
         user1:
           hashed_pw: "$6$Wsdfhwelkl32lslk32lkdslk43...."
         user2:
           hashed_pw: "$6$FDwlkjewlkWs2434SVRDE65DFF...."
     ...

   Please note that all users listed in this dict will get the same password
   on all servers handled when running the playbook.

5) Finally, run the play::

     $ ansible-playbook -i hosts -C bootstrap.yml

   to see whether the setup would be fine (dry run), and::

     $ ansible-playbook -i hosts bootstrap.yml

   to actually perform the changes.

6) In `hosts`, move the host we just handled from ``[yet-untouched]`` over to
   ``[bootstrapped]`` (see the sketch below).

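For illustration only: after bootstrapping ``h23.waeup.org`` from the example
above, the ``hosts`` file could then look roughly like this (the entry simply
moves, it is cleaned up later during `Setup`)::

  # hosts
  [yet-untouched]
  h24.waeup.org ansible_user=root ansible_ssh_pass=123456789 ansible_sudo_pass="{{ ansible_ssh_pass }}"

  [bootstrapped]
  h23.waeup.org ansible_user=root ansible_ssh_pass=so-secret ansible_sudo_pass="{{ ansible_ssh_pass }}"
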

Setup
=====

After bootstrapping, there should be a user account we can use.

1) Create a local SSH key to connect to the new server and copy it over::

     $ ssh-keygen -t ed25519 -C "uli@foo to myremote" -f ~/.ssh/id_myremote

   where ``myremote`` is normally one of h1, h2, ..., hN. Then::

     $ ssh-copy-id -i ~/.ssh/id_myremote user@myremote.waeup.org

   and, if needed, edit ``~/.ssh/config`` to register your new key (see the
   sketch after this list). If you are out for adventure, do not create a new
   key but reuse the one you use on all other machines. This is, of course,
   not recommended.

2) Update the entry of the handled host in the local `hosts` inventory (an
   example follows after this list):

   - Remove ``ansible_user=root``.
   - Remove ``ansible_ssh_pass``.
   - Set ``ansible_sudo_pass`` to the password of the user you connect as.

3) Update the server::

     $ ansible -i hosts hmyremote.waeup.org -b -m apt -a "upgrade=safe update_cache=yes"

   This way we can ensure that your SSH setup works correctly.

4) Run ``setup.yml``::

     $ ansible-playbook -i hosts -l hmyremote.waeup.org -C setup.yml

   (for a dry run) and::

     $ ansible-playbook -i hosts -l hmyremote.waeup.org setup.yml

   for the real run.

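The ``~/.ssh/config`` entry mentioned in step 1 could, for example, look like
this (host name, user name and key file are illustrative)::

  # ~/.ssh/config
  Host myremote.waeup.org
      User user1
      IdentityFile ~/.ssh/id_myremote
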
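Continuing the earlier example, the inventory entry from step 2 could then be
reduced to something like this (the password value is, of course,
illustrative)::

  # hosts
  [bootstrapped]
  h23.waeup.org ansible_sudo_pass=password-of-user1
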
Vagrant
=======

In `Vagrantfile` we set up a vagrant environment which provides three
hosts as VirtualBox machines:

  ``vh5.sample.org``, ``vh6.sample.org``, ``vh7.sample.org``

running Ubuntu 14.04. ``vh5`` represents "virtual host 5" and should
reflect h5.waeup.org. The same holds for ``vh6`` and ``vh7``
accordingly.

The three virtual hosts are for testing any upcoming ansible
playbooks. They should be used before running playbooks on the real
hosts!


Initialize Vagrant Env
----------------------

You must have `vagrant` installed, if possible in a fairly recent
version. I (uli) use `vagrant 1.8.1` (latest at the time of writing). As
Ubuntu 14.04 is pretty outdated in that respect, I had to grab a .deb
package from

  https://www.vagrantup.com/downloads.html

that could be installed with::

  $ sudo dpkg -i vagrant_1.8.1_x86_64.deb

When everything is in place, change into this directory and run::

  $ vagrant up
  Bringing machine 'vh5' up with 'virtualbox' provider...
  Bringing machine 'vh6' up with 'virtualbox' provider...
  Bringing machine 'vh7' up with 'virtualbox' provider...
  ==> vh5: Importing base box 'ubuntu/trusty32'...
  ...

This will fetch Vagrant VirtualBox images for trusty32, i.e. Ubuntu
14.04 images, 32-bit version (which also plays nicely on 64-bit hosts).

When hosts are supplied by Hetzner or another hosting provider, we
normally get access as the `root` user only. Therefore, after the base
init the root accounts of all hosts are enabled with the password
``vagrant``. This is done by the ansible playbook in
``vagrant-provision.yml``.

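The essential part of such a provisioning play is setting the root password
(and, depending on the base box, allowing password logins for root in
``sshd_config``). A minimal sketch, not necessarily identical to the real
``vagrant-provision.yml``::

  # minimal sketch -- see vagrant-provision.yml for what we actually run
  - hosts: all
    become: yes
    tasks:
      - name: give root the password "vagrant"
        user:
          name: root
          password: "{{ 'vagrant' | password_hash('sha512') }}"
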
All three hosts provide SSH access via::

  $ vagrant ssh vh5

or equivalent commands. They have a user ``vagrant`` installed, which
can use sudo without a password.

After install, all three hosts can also be accessed as `root` using the
password `vagrant` (here for example ``vh5``)::

  $ ssh -l root 192.168.36.10

See ``Vagrantfile`` for the IP addresses set.

You can halt (all) the virtual hosts with::

  $ vagrant halt


Ansible Environment
===================

The ansible environment should provide ansible roles and playbooks for
WAeUP-related server administration.

The general file layout and naming should follow

  https://docs.ansible.com/ansible/playbooks_best_practices.html#directory-layout

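Abridged, the layout described there looks roughly like this::

  production          # inventory file for production servers
  staging             # inventory file for the staging environment
  group_vars/
  host_vars/
  site.yml            # master playbook
  roles/
      common/
          tasks/
          handlers/
          templates/
          files/
          vars/
          defaults/
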

Bootstrapping - Freshmechs
--------------------------

We call those machines "freshmechs" that were freshly delivered by the
hosting provider or freshly provisioned by `vagrant` (see above).

These machines are expected to have only a single root account and
normally a (security-wise) poor SSH configuration.

Bootstrapping these machines means we secure SSH, restart the SSH
daemon, and then add the important accounts: "uli", "henrik", "ansible".

To make sure the connection to a "freshmech" works, you should log in
via SSH at least once before proceeding with ansible and all its bells
and whistles::

  ssh -l root 192.168.36.10

(with the real IP of the machine you want to reach, of course).

Any host you want to "bootstrap" must be entered in a local hosts
file, normally ``hosts-virtual``, with a line like this::

  [yet-untouched]
  vh5.sample.org ansible_host=192.168.36.10 ansible_user=root

in the "yet-untouched" section.

Afterwards try::

  $ ansible-playbook -i hosts-virtual --ask-pass bootstrap.yml

The ``--ask-pass`` parameter is needed to enter the password given by
the provider on the command line. For the local `vagrant` machines this
will be `vagrant`.

If run on local virtual machines, you might want to make sure that
your local `known_hosts` file does not contain old SSH host
fingerprints. Otherwise, you have to remove the entries for::

  192.168.36.10
  192.168.36.11
  192.168.36.12

respectively before running `bootstrap.yml`.

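One simple way to drop such stale entries is ``ssh-keygen -R``, for example::

  $ ssh-keygen -R 192.168.36.10
  $ ssh-keygen -R 192.168.36.11
  $ ssh-keygen -R 192.168.36.12
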
Alternatively, you can run everything with the
`ANSIBLE_HOST_KEY_CHECKING` environment variable set to ``False``::

  $ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts-virtual --ask-pass bootstrap.yml

This will suppress host fingerprint checking.
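
If you prefer, host key checking can also be switched off persistently via a
local ``ansible.cfg`` (optional, and only sensible for the virtual test
hosts)::

  # ansible.cfg -- optional local convenience only
  [defaults]
  host_key_checking = False
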