SE4 Virtualisation TP Wasilewski
Tutorial outline
Target: discover technologies and use a web server
1. Create VM
- Create a virtual machine on Proxmox (the server is accessible through: this link) by cloning the template. The virtual machine must be named `team-XX`, with XX being your team number assigned at the beginning of the tutorial.
- Manually configure the virtual machine's network to be persistent (see the sketch after this list). Your VM's IP must be `10.99.XX.1/16`, with the gateway being `10.99.99.253`.
- Update `/etc/hostname` so the hostname reflects the VM's name.
- Update `/etc/hosts` with the two previous updates.
- Ensure it can ping Google.
- Reboot the VM and validate it.
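A minimal sketch of a persistent network configuration, assuming the Debian template uses ifupdown and that the interface is named `ens18` (check the actual name with `ip link`):

```
# /etc/network/interfaces (excerpt) — the interface name ens18 is an assumption
auto ens18
iface ens18 inet static
    address 10.99.XX.1/16
    gateway 10.99.99.253
```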
2. First web server
- Install nginx. (Hint: if it fails, do not forget to configure your VM's DNS; see the sketch below.)
- Set up a custom/cool static page.
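A possible install sequence, assuming a Debian VM; the resolver address is an arbitrary public DNS picked for illustration:

```
# Point DNS at a public resolver if apt cannot resolve the package mirrors
echo "nameserver 1.1.1.1" > /etc/resolv.conf
apt update && apt install -y nginx
# On Debian, the default nginx site serves /var/www/html
echo "<h1>Team XX</h1>" > /var/www/html/index.html
```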
3. First load balancer
- Install HAProxy 2.8. (Hint: search "debian haproxy" on your favorite search engine.)
- Redirect port 8888 to the previously installed nginx (see the sketch below).
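A hedged sketch of the redirection, to be appended to `/etc/haproxy/haproxy.cfg` (the section names are arbitrary):

```
frontend web_8888
    bind *:8888
    mode http
    default_backend local_nginx

backend local_nginx
    mode http
    server nginx1 127.0.0.1:80 check
```

Reload HAProxy (`systemctl reload haproxy`) after editing.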
4. Docker time
- Install docker.
- Pull the nginx image.
- Copy your custom static page into the container.
- Reconfigure HAProxy to redirect port 8889 to the container (see the sketch below).
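One possible sequence, assuming the container publishes its port 80 on host port 8080 (both the name and the port are arbitrary choices):

```
docker run -d --name web -p 8080:80 nginx
# The official nginx image serves /usr/share/nginx/html
docker cp index.html web:/usr/share/nginx/html/index.html
```

A new frontend bound to port 8889 with a backend pointing at `127.0.0.1:8080` then mirrors the step 3 configuration.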
5. First Docker image
- Create a Docker image based on `nginx:latest` that uses your previously created static page. This image will be named `tXX-web` and will use semantic versioning (see the sketch below).
- Publish this new image on the common registry, located at `registry.se4-virt.fr`.
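A minimal Dockerfile sketch, plus the build and push commands; the `1.0.0` tag is just an example of semantic versioning:

```
# Dockerfile
FROM nginx:latest
COPY index.html /usr/share/nginx/html/index.html
```

```
docker build -t registry.se4-virt.fr/tXX-web:1.0.0 .
docker push registry.se4-virt.fr/tXX-web:1.0.0
```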
6. Nomad installation
- Install Nomad as a Debian package. See the official documentation.
- Have Nomad running as both client and server (see the configuration sketch below). Validate it through its interface, which should be accessible through `http://10.99.XX.1:4646`. (Check it with `curl`.)
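A minimal sketch of `/etc/nomad.d/nomad.hcl` for a single node acting as both server and client; expecting a one-node quorum (`bootstrap_expect = 1`) is an assumption for this lab setup:

```
server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
}
```

After restarting the service, `curl http://10.99.XX.1:4646` should return the UI's HTML.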
7. Access Nomad UI from Zabeth
- Add a route on your Zabeth to view the Nomad UI from it (Hint: the Nomad network is `10.99.0.0/16` and the hop is `172.26.145.200`).
- Add a route on your VM to allow traffic back to the Zabeth (Hint: the Zabeth network is `172.26.145.0/24` and the hop is `10.99.99.254`). See the sketch after this list.
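The corresponding `iproute2` commands, built directly from the hints above (note that routes added this way do not survive a reboot):

```
# On the Zabeth:
ip route add 10.99.0.0/16 via 172.26.145.200
# On the VM:
ip route add 172.26.145.0/24 via 10.99.99.254
```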
8. First Nomad deployment
- Create a Nomad job definition with the `nomad job init` command.
- Adapt it to deploy the container you created in step 5 (see the sketch below).
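A hedged sketch of the task stanza to adapt inside the generated job file; it assumes the group declares a `network` block with a port labeled `http`, and reuses the image tag from step 5:

```
task "web" {
  driver = "docker"

  config {
    image = "registry.se4-virt.fr/tXX-web:1.0.0"
    ports = ["http"]
  }
}
```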
9. Scale and load-balance
- Increase the job definition's count to run 3 containers.
- Install Consul locally. (Hint: its Debian package should already be available, as you should have installed the HashiCorp repositories for the Nomad install.)
- Configure Nomad to use the newly installed Consul.
- Configure HAProxy to redirect port 8890 to those containers through service discovery (see the sketch after this list).
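A hedged HAProxy sketch using Consul's DNS interface (port 8600) and `server-template`; the service name `tXX-web` assumes that is how your Nomad job registers its service (the same name is used later in step 12):

```
resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192

frontend web_8890
    bind *:8890
    mode http
    default_backend web_containers

backend web_containers
    mode http
    server-template web 3 _tXX-web._tcp.service.consul resolvers consul resolve-prefer ipv4 check
```

The SRV lookup lets HAProxy discover the dynamically allocated ports of all 3 containers.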
Target: setup a distributed service + deploy from CI
10. Your Nomad joins the common cluster
- Reconfigure Nomad to only be a client.
- Use the common Nomad by configuring it to use the common Consul (located at `consul.se4-virt.fr:8500`). (Hint: this example should help you.) A configuration sketch follows this list.
- You should see your Nomad instance on the common Nomad server and Consul.
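A sketch of the relevant `nomad.hcl` changes under these instructions:

```
server {
  enabled = false
}

client {
  enabled = true
}

consul {
  address = "consul.se4-virt.fr:8500"
}
```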
11. Starting the "DevOps" way
- Create an account on the GitLab at https://git.se4-virt.fr
- Create a group named `teamXX`
- Create a repository named `web` under your `teamXX` group
- Under this new repository, store your Dockerfile, your job definition, and your static page
12. Automating deployments
This step will make all previous work deployable automatically from GitLab. Sub-steps are detailed below to help you.
- Edit your `project.nomad` file (see the sketch after this list):
  - Set the job name to `teamXX-web`
  - Set the memory requirement to 128 MB
  - Set the CPU requirement to 250 MHz
  - Set the count to 4
  - Set `ephemeral_disk` to 101
  - Set the Docker image to `registry.se4-virt.fr/tXX-web:${CI_COMMIT_SHORT_SHA}`
  - Set the service name to `tXX-web`
  - Set the service tag and port to `web`
- Commit and push your changes
- Copy and adapt the `.gitlab-ci.yml` available here (SE4 Virtualisation TP Wasilewski - Gitlab CI file)
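A hedged sketch of how those settings could look in `project.nomad`; the group/task layout follows what `nomad job init` generates, and `${CI_COMMIT_SHORT_SHA}` is assumed to be substituted by the CI before `nomad job run`:

```
job "teamXX-web" {
  group "web" {
    count = 4

    ephemeral_disk {
      size = 101 # MB
    }

    network {
      port "web" {
        to = 80
      }
    }

    service {
      name = "tXX-web"
      tags = ["web"]
      port = "web"
    }

    task "web" {
      driver = "docker"

      config {
        # Substituted by the CI pipeline before the job is submitted
        image = "registry.se4-virt.fr/tXX-web:${CI_COMMIT_SHORT_SHA}"
        ports = ["web"]
      }

      resources {
        cpu    = 250 # MHz
        memory = 128 # MB
      }
    }
  }
}
```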
13. Releasing your website to the wild
- Update the global HAProxy (hosted at `10.99.99.253`) configuration to redirect to the newly deployed web service.
- Your website should now be accessible through `https://tXX.se4-virt.fr`.
- Update your `index.html` file and see the update on the website.
Target: build and deploy more complex services
Congratulations! We have just finished the first part of the tutorial. The second part of this tutorial is far less guided than the first one.
During these parts, you will have to perform your own technology watch (commonly called "veille technique" in French).
For the next two questions, you will have to:
- Gather information about 2 to 3 solutions
- Understand their differences
- Present them orally
- Select the solution that best fits your requirements
14. Creation of a DHCP server
Preliminary steps
- On Proxmox, add a new network interface to your VM on bridge `vmbr2` on VLAN `10XX`, configured with IP `10.99.1XX.1/24`.
- Create a new VM, named `team-XX-os`, with no hard disk drive and one network interface attached to bridge `vmbr2` on VLAN `10XX`.
- Update your Nomad agent configuration (i.e. not your job definition) to allow privileges for Docker (see the sketch below).
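For the Docker privileges, a sketch of the plugin stanza on the agent side (this belongs in the Nomad agent configuration, e.g. `/etc/nomad.d/nomad.hcl`):

```
plugin "docker" {
  config {
    allow_privileged = true
  }
}
```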
Next steps
- Create a new repository named `teamXX/dhcp`
- Create a new CI/CD configuration file
- Create a new Nomad job file. The network mode will be `host` and you will have to declare the DHCP ports.
- Create a new Dockerfile if needed, and publish it through CI/CD as `tXX-dhcp`.
- Deploy the elected solution on your virtual machine (and your virtual machine only!). (Hint: you will have to set up affinity policies in your Nomad job file; see the sketch after this list.)
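A hedged sketch of the job-file pieces this step asks for: host networking with a DHCP port declared, and an affinity pinning the allocation to your own node (the node name `team-XX` is an assumption, check `nomad node status`):

```
group "dhcp" {
  network {
    mode = "host"
    port "dhcp" {
      static = 67 # adapt the ports to the chosen solution
    }
  }

  affinity {
    attribute = "${node.unique.name}"
    value     = "team-XX"
    weight    = 100
  }
}
```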
Validation
To validate this step, you will have to show that your new VM obtains an IP from your DHCP server.
15. Creation of a TFTP server
- Create a new repository named `teamXX/tftp`
- Create a new CI/CD configuration file
- Create a new Nomad job file. The network mode will be `host` and you will have to declare the TFTP port.
- Create a new Dockerfile if needed, and publish it through CI/CD as `tXX-tftp`.
- Your TFTP server will have to provide the iPXE file `undionly.kpxe`.
- Deploy the elected solution on your virtual machine (and your virtual machine only!). (Hint: you will have to set up constraint policies in your Nomad job file; see the sketch after this list.)
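Similar to step 14, but with a hard `constraint` instead of an affinity (again, the node name is an assumption):

```
group "tftp" {
  network {
    mode = "host"
    port "tftp" {
      static = 69
    }
  }

  constraint {
    attribute = "${node.unique.name}"
    value     = "team-XX"
  }
}
```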
Validation
To validate this step, you will have to download the iPXE file with a TFTP command from the se4-tool VM.
16. Creation of your custom Debian image
- Create a new repository named `teamXX/os`
- Create a new CI configuration file
- This CI file will build an OS image of the latest Debian version using debootstrap (Hint: you will need the `--include linux-image-amd64` flag to have a kernel)
- It is recommended not to perform all steps directly in the CI file but rather to use a bash script (see the sketch after this list).
- Create a bucket for your team, named `teamXX`, on the MinIO object storage (available here)
- On your bucket, allow read+write anonymous access
- In your bash script:
  - Create a folder for your working OS
  - Debootstrap inside it
  - Create an archive of your minimal OS with cpio (Hint: `find . | cpio -o -H newc > ../initrd.img`)
  - Create connection details to S3 with `mc alias set se4v https://s3.se4-virt.fr root USUAL_PASSWORD`
  - Push your new minimal image, aka initrd, to S3 using a command similar to `mc cp ./initrd se4v/teamXX/initrd`
  - Push your new kernel, located in `boot/vmlinuz-*-amd64`, to `se4v/teamXX/kernel`; we'll boot on it later
- In your CI, you can use the `registry.se4-virt.fr/os-builder:v1.0.1` image as it already contains `debootstrap` and `mc`
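A hedged sketch of the bash script, assembling the hints above; the working directory, the `stable` suite, and the `initrd.img` object name (chosen to match the validation URL) are assumptions:

```bash
#!/bin/bash
set -e

# Build a minimal Debian tree with a kernel included
mkdir -p rootfs
debootstrap --include linux-image-amd64 stable rootfs

# Pack the tree into an initrd with cpio (newc format)
( cd rootfs && find . | cpio -o -H newc > ../initrd.img )

# Push the initrd and the kernel to the S3 bucket
mc alias set se4v https://s3.se4-virt.fr root USUAL_PASSWORD
mc cp ./initrd.img se4v/teamXX/initrd.img
mc cp rootfs/boot/vmlinuz-*-amd64 se4v/teamXX/kernel
```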
Validation
To validate this step, your new initrd should be downloadable through https://s3.se4-virt.fr/teamXX/initrd.img
Target: boot an OS from the network
17. Host iPXE configuration
Update your web server to host an iPXE configuration file that boots this new OS (see the sketch below).
Hint: here, you will mostly struggle with the iPXE configuration and the Linux kernel parameters. This documentation should help you a bit.
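A minimal iPXE script sketch, assuming the kernel and initrd were pushed to the S3 paths from step 16; your kernel may need extra parameters (e.g. a console setting), and stock `undionly.kpxe` builds may lack HTTPS support, in which case serve the files over plain HTTP:

```
#!ipxe
kernel https://s3.se4-virt.fr/teamXX/kernel initrd=initrd.img
initrd https://s3.se4-virt.fr/teamXX/initrd.img
boot
```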
18. Create a remote block disk and use it
Mount `/home/xxx` as an RBD volume from Ceph (see the sketch below).
Now the new OS boots and uses remote storage for users.
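A heavily hedged sketch of the manual flow, assuming a reachable Ceph cluster, the `rbd` client inside your OS, and a hypothetical pool/image named `homes/team-XX`:

```
rbd map homes/team-XX          # maps the image to a /dev/rbdN device
mount /dev/rbd0 /home/xxx      # mount it as the remote home
```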
Target: build a custom OS
19. Your first kernel
Build a kernel (Bonus: less than 50 MB + boot time < 2 s). A possible build flow is sketched below.
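A typical build flow from the kernel sources; `tinyconfig` is one possible starting point when chasing the size bonus:

```
make tinyconfig            # or defconfig for a safer, larger start
make -j"$(nproc)" bzImage  # the result lands in arch/x86/boot/bzImage
```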
20. Your first init
Build an init that displays your team ID on the terminal (Bonus: pimped init banner). See the sketch below.
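A minimal `/init` sketch in shell, assuming your initrd contains `/bin/sh` (the debootstrap tree from step 16 provides it):

```
#!/bin/sh
# Minimal PID 1: print the team ID, then hand over to a shell
echo "=== Team XX ==="
exec /bin/sh
```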
21. Your second init
Build an init that starts the network interfaces (see the sketch below).
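Extending the previous init; the interface name, the presence of `dhclient` in the initrd, and the fallback address are all assumptions to adapt:

```
#!/bin/sh
ip link set lo up
ip link set ens18 up                                     # interface name: check `ip link`
dhclient ens18 || ip addr add 10.99.1XX.2/24 dev ens18   # DHCP, else a static fallback
exec /bin/sh
```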
Target: create a login service
22. Even further with init
Create a script/service that requests user credentials, checks their validity against `/etc/passwd`, then dynamically mounts the user's home from RBD (see the sketch below).
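A hedged sketch of the login flow; proper password verification and the `homes/<user>` RBD naming are assumptions left for you to adapt:

```
#!/bin/sh
printf "login: " && read -r user
if grep -q "^${user}:" /etc/passwd; then
    rbd map "homes/${user}"           # hypothetical pool/image layout
    mount /dev/rbd0 "/home/${user}"
else
    echo "unknown user ${user}"
fi
```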
23. Finally, a working UI
Improve the script and update the initrd to have a window manager start after login.