=============================================
Playbooks for administering WAeUP servers.
=============================================

These are materials to use with our servers.

For starters: the tutorial given on

https://github.com/leucos/ansible-tuto

is a really nice hands-on intro to `ansible`. Please read it!

If you want to develop/test scripts in here, try to work with virtual machines
first. The ``Vagrant`` section below explains the details.

Server Lifecycle
================

When we get a server freshly installed from Hetzner, we want to make sure
that at least some common security holes are closed.


Right after first install: `bootstrap.yml`
------------------------------------------

For starters we "bootstrap" a server install with the ``bootstrap.yml``
playbook. This playbook does three things:

- It secures the ``SSHD`` config according to recommendations from
  https://bettercrypto.org (see the sketch after this list)
- It adds accounts for admin users (including sudo rights)
- It disables root login via SSH.
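
The following only illustrates the kind of ``sshd_config`` settings such a
hardening typically touches; the exact values deployed by ``bootstrap.yml``
come from https://bettercrypto.org and may differ from this excerpt::

  # /etc/ssh/sshd_config -- illustrative excerpt, not the deployed config
  PermitRootLogin no
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com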

Before the playbook can be run, you have to fix some things.

1) Make sure you can ssh into the systems as ``root``.

2) Make sure Python 2.x is installed on the target systems. This is no longer
   the case, for instance, for minimal Ubuntu images starting with 16.04 LTS.

   If Python 2.x is not installed, do::

     # apt-get update
     # apt-get install python python-simplejson

   as `root` on each targeted system.

3) For each server to handle, make an entry in the ``[yet-untouched]`` section
   of the ``hosts`` file like this::

     # hosts
     [yet-untouched]
     h23.waeup.org ansible_user=root ansible_ssh_pass=so-secret ansible_sudo_pass="{{ ansible_ssh_pass }}"
     h24.waeup.org ansible_user=root ansible_ssh_pass=123456789 ansible_sudo_pass="{{ ansible_ssh_pass }}"

   The ``ansible_sudo_pass`` is not necessary for now, but will be needed if
   you want to run everything as a normal user. It is just a plain copy of
   ``ansible_ssh_pass``.

   Yes, this is a very dangerous part and you should not check these
   modifications in. Instead you should remove the entries after you are done.

4) Update the ``vars`` in ``bootstrap.yml``. State whether SSH root access
   should stay enabled, setting ``no`` or ``false`` to disable it.

   Then, you have to create a dict of admin users. For each user we need a
   name (key) and a hashed password. This can be done like this::

     $ diceware -d '-' -n 6 --no-caps | tee mypw | mkpasswd -s --method=sha-512 >> mypw

   which will create a random password and its SHA512-hashed variant in a file
   called ``mypw``. If you do not have `diceware` installed, you can use
   `pwgen` (or any other password maker)::

     $ pwgen -s 33 | tee mypw | mkpasswd -s --method=sha-512 >> mypw

   The hashed variant then has to be entered as ``hashed_pw`` in the `vars` of
   ``bootstrap.yml``.

   In the end, there should be something like::

     # bootstrap.yml
     # ...
     vars:
       permit_ssh_root: false
       admin_users:
         user1:
           hashed_pw: "$6$Wsdfhwelkl32lslk32lkdslk43...."
         user2:
           hashed_pw: "$6$FDwlkjewlkWs2434SVRDE65DFF...."
     ...

   Please note that all users listed in this dict will get the same passwords
   on all servers handled when the playbook is run.
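
   Just for illustration, a task consuming such a dict could look roughly like
   the sketch below. This is an assumption about how the data might be used,
   not a copy of the actual ``bootstrap.yml`` tasks::

     # hypothetical task, shown only to clarify the shape of ``admin_users``
     - name: create admin users
       user:
         name: "{{ item.key }}"
         password: "{{ item.value.hashed_pw }}"
         shell: /bin/bash
         groups: sudo
         append: yes
       with_dict: "{{ admin_users }}"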

5) Finally, run the play::

     $ ansible-playbook -i hosts -C bootstrap.yml

   to see whether the setup is fine (dry run) and::

     $ ansible-playbook -i hosts bootstrap.yml

   to actually perform the changes.

6) In `hosts` move the host we handled from ``[yet-untouched]`` over to
   ``[bootstrapped]``.


Setup
=====

After bootstrapping, there should be a user account we can use.

1) Create a local SSH key to connect to the new server and copy it over::

     $ ssh-keygen -t ed25519 -C "uli@foo to myremote" -f ~/.ssh/id_myremote

   Where ``myremote`` is normally one of h1, h2, ...., hN. Then::

     $ ssh-copy-id -i ~/.ssh/id_myremote user@myremote.waeup.org

   and, if necessary, edit ``~/.ssh/config`` to register your new key (a
   sketch follows below).
   If you are out for adventure, do not create a new key but use the one you
   use on all other machines as well. This is, of course, not recommended.
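
   Such a ``~/.ssh/config`` entry could look like this (host name and user
   name are placeholders; adjust them to your setup)::

     # ~/.ssh/config
     Host myremote.waeup.org
         User myuser
         IdentityFile ~/.ssh/id_myremote
         IdentitiesOnly yes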

2) Update the entry of the handled host in the local `hosts` inventory
   (an example of the result follows below):

   - Remove ``ansible_user=root``.
   - Remove ``ansible_ssh_pass``.
   - Set ``ansible_sudo_pass`` to the password of the user you connect as.
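
   Afterwards the entry could look roughly like this (user name and password
   are placeholders)::

     # hosts
     [bootstrapped]
     h23.waeup.org ansible_user=myuser ansible_sudo_pass=my-user-password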

3) Update the server::

     $ ansible -i hosts hmyremote.waeup.org -b -m apt -a "upgrade=safe update_cache=yes"

   This way we can ensure that your SSH setup works correctly.
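
   If you just want a quick connectivity check first, the standard ``ping``
   module works as well::

     $ ansible -i hosts hmyremote.waeup.org -m ping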

4) Run ``setup.yml``::

     $ ansible-playbook -i hosts -l hmyremote.waeup.org -C setup.yml

   (for a dry run) and::

     $ ansible-playbook -i hosts -l hmyremote.waeup.org setup.yml

   for the real run.


Vagrant
=======

In `Vagrantfile` we set up a vagrant environment which provides three
hosts as virtualbox machines:

``vh5.sample.org``, ``vh6.sample.org``, ``vh7.sample.org``

running Ubuntu 14.04. ``vh5`` represents "virtual host 5" and should
reflect h5.waeup.org. The same holds for ``vh6`` and ``vh7``
accordingly.

The three virtual hosts are for testing any upcoming ansible
playbooks. They should be used before running playbooks on the real
hosts!


Initialize Vagrant Env
----------------------

You must have `vagrant` installed, if possible in a fairly recent
version. I (uli) use `vagrant 1.8.1` (latest at the time of writing). As
Ubuntu 14.04 is pretty outdated in that respect, I had to grab a .deb
package from

https://www.vagrantup.com/downloads.html

that could be installed with::

  $ sudo dpkg -i vagrant_1.8.1_x86_64.deb


When everything is in place, change into this directory and run::

  $ vagrant up
  Bringing machine 'vh5' up with 'virtualbox' provider...
  Bringing machine 'vh6' up with 'virtualbox' provider...
  Bringing machine 'vh7' up with 'virtualbox' provider...
  ==> vh5: Importing base box 'ubuntu/trusty32'...
  ...

This will fetch Vagrant virtualbox images for trusty32, i.e. Ubuntu
14.04 images, 32-bit version (which plays nice also on 64-bit hosts).

When hosts are supplied by Hetzner or another hosting provider, we
normally get access as `root` user only. Therefore, after the base
init the root accounts of all hosts are enabled with password
``vagrant``. This is done by the ansible playbook in
``vagrant-provision.yml``.
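
To give an idea, such a provisioning step could look roughly like the sketch
below. This is an assumption for illustration only, not the content of the
actual ``vagrant-provision.yml``::

  # hypothetical sketch of a play that re-enables root access
  - hosts: all
    become: yes
    tasks:
      - name: give root a known password, as on a freshly delivered server
        user:
          name: root
          password: "{{ 'vagrant' | password_hash('sha512') }}"
      - name: allow root to log in with a password over SSH
        lineinfile:
          dest: /etc/ssh/sshd_config
          regexp: '^#?PermitRootLogin'
          line: 'PermitRootLogin yes'
        notify: restart ssh
    handlers:
      - name: restart ssh
        service:
          name: ssh
          state: restarted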

All three hosts provide ssh access via::

  $ vagrant ssh vh5

or equivalent commands. They have a user 'vagrant' installed, which
can sudo without password.

After install, all three hosts can also be accessed as `root` using
password `vagrant` (for example vh5)::

  $ ssh -l root 192.168.36.10

See ``Vagrantfile`` for the IP addresses set.

You can halt (all) the virtual hosts with::

  $ vagrant halt


Ansible Environment
===================

The ansible environment should provide ansible roles and playbooks for
WAeUP-related server administration.

The general file layout and naming should follow

https://docs.ansible.com/ansible/playbooks_best_practices.html#directory-layout
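
For orientation, an abridged version of the layout described there looks like
this (the directory names just mirror the linked best practices, not
necessarily what this repository already contains)::

  production                # inventory file for production servers
  staging                   # inventory file for the staging environment
  group_vars/
  host_vars/
  site.yml                  # master playbook
  roles/
      common/               # one directory per role
          tasks/
          handlers/
          templates/
          files/
          vars/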


Bootstrapping - Freshmechs
--------------------------

We call a machine a "freshmech" if it was freshly delivered by the
hosting provider or freshly provisioned by `vagrant` (see above).

These machines are expected to have only a single root account and
normally a (security-wise) poor SSH configuration.

Bootstrapping these machines means we secure SSH, restart the SSH
daemon and then add important accounts: "uli", "henrik", "ansible".

To make sure the connection to a "freshmech" works, you should log in
via SSH at least once before proceeding with ansible and all bells and
whistles::

  ssh -l root 192.168.36.10

(with the real IP of the machine you want to reach, of course).

Any host you want to "bootstrap" must be entered in a local hosts
file, normally ``hosts-virtual``, with a line like this::

  [yet-untouched]
  vh5.sample.org ansible_host=192.168.36.10 ansible_user=root

in the "yet-untouched" section.

Afterwards try::

  $ ansible-playbook -i hosts-virtual --ask-pass bootstrap.yml

The ``--ask-pass`` parameter is needed to enter the password given by
the provider on the command line. For the local `vagrant` machines this
will be `vagrant`.

If run on local virtual machines, you might want to make sure that
your local `known_hosts` file does not contain an old ssh host
fingerprint. Otherwise you have to remove entries for::

  192.168.36.10
  192.168.36.11
  192.168.36.12

respectively before running `bootstrap.yml`.
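
Stale fingerprints can be removed, for example, with ``ssh-keygen``::

  $ ssh-keygen -R 192.168.36.10
  $ ssh-keygen -R 192.168.36.11
  $ ssh-keygen -R 192.168.36.12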

Alternatively you can run everything with the
`ANSIBLE_HOST_KEY_CHECKING` environment variable set to ``False``::

  $ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts-virtual --ask-pass bootstrap.yml

This will suppress host fingerprint checking.
---|