SE4 Virtualisation TP Wasilewski
== Tutorial outline ==


=== Target: discover technologies and use a web server ===


==== 1. Create VM ====
# Create a virtual machine on Proxmox (server is accessible through: [https://antifer.plil.info:8006 this link]) by cloning the template. The virtual machine must be named {{Inline-code | 1=team-XX}} with XX being your team number assigned at the beginning of the tutorial.
# Manually configure the virtual machine's network so that the configuration is persistent. Your VM's IP must be {{Inline-code | 1=10.99.XX.1/16}} with the gateway being {{Inline-code | 1=10.99.99.253}}.
# Update {{Inline-code|/etc/hostname}} to match the VM's name.
# Update {{Inline-code|/etc/hosts}} to reflect the two previous changes.
# Ensure the VM can ping google.com.
# Reboot the VM and validate it.
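On Debian, the persistent configuration can be sketched in {{Inline-code|/etc/network/interfaces}}. The interface name {{Inline-code|ens18}} is an assumption (a common Proxmox virtio default); check yours with {{Inline-code|ip a}}:

```text
# /etc/network/interfaces — minimal static configuration sketch
auto ens18
iface ens18 inet static
    address 10.99.XX.1/16
    gateway 10.99.99.253
```

Apply it with {{Inline-code|ifup ens18}} or a reboot; the reboot in the step above also validates persistence.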


==== 2. First web server ====
# Install nginx. (Hint: if it fails, do not forget to configure your VM's DNS).
# Set up a custom/cool static page.


==== 3. First load balancer ====
# Install HAProxy 2.8. (Hint: search "debian haproxy" on your favorite search engine.)
# Redirect port 8888 to the previously installed nginx.
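The redirection can be sketched as a fragment of {{Inline-code|/etc/haproxy/haproxy.cfg}}; the frontend/backend names are illustrative, and nginx is assumed to listen on port 80 on the same VM:

```text
frontend web_8888
    bind *:8888
    default_backend local_nginx

backend local_nginx
    # nginx installed in the previous step, on the same host
    server nginx1 127.0.0.1:80 check
```

Reload with {{Inline-code|systemctl reload haproxy}} and check with {{Inline-code|curl http://localhost:8888}}.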


==== 4. Docker time ====
# Install Docker.
# Pull the nginx image.
# Copy your custom static page into the container.
# Reconfigure HAProxy to redirect port 8889 to the container.
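One possible sequence, assuming the container's port 80 is published on host port 8081 (an arbitrary free port) so that HAProxy can reach it:

```shell
docker pull nginx
# Publish container port 80 on host port 8081
docker run -d --name web-ctn -p 8081:80 nginx
# Copy the static page into the running container (default nginx docroot)
docker cp index.html web-ctn:/usr/share/nginx/html/index.html
```

The HAProxy backend for port 8889 then points at {{Inline-code|127.0.0.1:8081}}.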


=== Target: discover technologies and use a web server ===


==== 5. First Docker image ====
# Create a Docker image based on {{Inline-code|nginx:latest}} that uses your previously created static page. This image will be named {{Inline-code|tXX-web}} and will use semantic versioning.
# Publish this new image on the common registry, located at {{Inline-code|registry.se4-virt.fr}}.
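A minimal sketch of the Dockerfile; the {{Inline-code|1.0.0}} tag below is an example of semantic versioning:

```text
FROM nginx:latest
COPY index.html /usr/share/nginx/html/index.html
```

Then build and push with {{Inline-code|docker build -t registry.se4-virt.fr/tXX-web:1.0.0 .}} followed by {{Inline-code|docker push registry.se4-virt.fr/tXX-web:1.0.0}}.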


==== 6. Nomad installation ====
# Install Nomad as a Debian package. See [https://developer.hashicorp.com/nomad/tutorials/get-started/gs-install the official documentation].
# Have Nomad running as both client and server. Validate it through its web interface, which should be accessible at {{Inline-code|http://10.99.XX.1:4646}}. (Check it with {{Inline-code|curl}}.)


==== 7. Access Nomad UI from Zabeth ====
# Add a route on your Zabeth to view the Nomad UI from it (Hint: the Nomad network is {{Inline-code|10.99.0.0/16}} and the hop is {{Inline-code|172.26.145.200}}).
# Add a route on your VM to allow traffic back to Zabeth (Hint: the Zabeth network is {{Inline-code|172.26.145.0/24}} and the hop is {{Inline-code|10.99.99.254}}).
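Using the hints above, the two routes can be sketched as follows (run as root; plain {{Inline-code|ip route}} entries are not persistent across reboots):

```shell
# On the Zabeth: reach the Nomad network through 172.26.145.200
ip route add 10.99.0.0/16 via 172.26.145.200
# On the VM: route return traffic to the Zabeth network via 10.99.99.254
ip route add 172.26.145.0/24 via 10.99.99.254
```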


==== 8. First Nomad deployment ====
# Create a Nomad job definition with the {{Inline-code|nomad job init}} command.
# Adapt it to deploy the container image you created in step 5.
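After {{Inline-code|nomad job init}}, the adapted job could look roughly like this sketch (the datacenter name, port label and image tag are illustrative):

```hcl
job "tXX-web" {
  datacenters = ["dc1"]

  group "web" {
    count = 1

    network {
      # Map a dynamic host port to port 80 inside the container
      port "http" { to = 80 }
    }

    task "nginx" {
      driver = "docker"
      config {
        image = "registry.se4-virt.fr/tXX-web:1.0.0"
        ports = ["http"]
      }
    }
  }
}
```

Run it with {{Inline-code|nomad job run project.nomad}} and watch the allocation in the UI.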


==== 9. Scale and load-balance ====
# Increase the job definition count to run 3 containers.
# Install Consul locally. (Hint: its Debian package should already be available, since you added the HashiCorp repositories for the Nomad install.)
# Configure Nomad to use the newly installed Consul.
# Configure HAProxy to redirect port 8890 to those containers through service discovery.
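Service discovery can be sketched through Consul's DNS interface in {{Inline-code|haproxy.cfg}}. The service name {{Inline-code|tXX-web}} is an assumption — use whatever name your job registers in Consul:

```text
resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192

frontend web_8890
    bind *:8890
    default_backend web_ctns

backend web_ctns
    balance roundrobin
    # SRV lookup against Consul; provisions up to 3 dynamic server slots
    server-template web 3 _tXX-web._tcp.service.consul resolvers consul check
```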


=== Target: setup a distributed service + deploy from CI ===


==== 10. Your Nomad joins the common cluster ====


# Reconfigure Nomad to only be a client.
# Join the common Nomad cluster by configuring your agent to use the common Consul (located at {{Inline-code|consul.se4-virt.fr:8500}}). (Hint: [https://developer.hashicorp.com/nomad/docs/configuration/consul#default this example] should help you.)
# You should see your Nomad instance on the common [https://nomad.se4-virt.fr/ui/clients Nomad] server and [https://consul.se4-virt.fr/ui/dc1/services Consul].
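The reconfiguration can be sketched as a fragment of {{Inline-code|/etc/nomad.d/nomad.hcl}} (keep your existing settings around it):

```hcl
# The agent is now only a client; the common cluster provides the servers.
server {
  enabled = false
}

client {
  enabled = true
}

# Nomad discovers the servers through the common Consul.
consul {
  address = "consul.se4-virt.fr:8500"
}
```

Restart the agent with {{Inline-code|systemctl restart nomad}} and check it appears on the common UI.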


==== 11. Starting the "DevOps" way ====


# Create an account on the Gitlab instance at [https://git.se4-virt.fr git.se4-virt.fr]
# Create a group named {{Inline-code|teamXX}}
# Create a repository named {{Inline-code|web}} under your {{Inline-code|teamXX}} group
# Under this new repository, store your Dockerfile, your job definition and your static page


==== 12. Automating deployments ====
This step will make all the previous work automatically deployable from Gitlab. The sub-steps below will help you.


# Edit your {{Inline-code|project.nomad}} file
## Set job name to {{Inline-code|teamXX-web}}
## Set the memory requirement to 128 MB
## Set CPU requirement to 250 MHz
## Set count to 4
## Set {{Inline-code|ephemeral_disk}} to 101.
## Set docker image to {{Inline-code|registry.se4-virt.fr/tXX-web:${CI_COMMIT_SHORT_SHA<nowiki>}</nowiki>}}
## Set service name to {{Inline-code|tXX-web}}
## Set service tag and port to {{Inline-code|web}}
## Commit and push your changes
# Copy and adapt the {{Inline-code|.gitlab-ci.yml}} available here ([[SE4 Virtualisation TP Wasilewski - Gitlab CI file]])


==== 13. Releasing your website to the wild ====


# Update the global HAProxy configuration (hosted at {{Inline-code|10.99.99.253}}) to redirect to the newly deployed web service.
# Your website should now be accessible through {{Inline-code|https://tXX.se4-virt.fr}}
# Update your {{Inline-code|index.html}} file and see the update on the website.


=== Target: build and deploy more complex services ===
Congratulations! You have just finished the first part of the tutorial. The second part is much less guided than the first one.


During these parts, you will have to perform your own technological research (commonly called "Veille technique" in French).


For the next two questions, you will have to:


* Gather information about 2 to 3 solutions
* Understand their differences
* Present them orally
* Select the solution that best fits your requirements


==== 14. Creation of DHCP server ====
'''Preliminary steps'''


* On Proxmox, add a new network interface to your VM on bridge {{Inline-code|vmbr2}} on VLAN {{Inline-code|10XX}} configured with IP {{Inline-code|10.99.1XX.1/24}}.
* Create a new VM, named {{Inline-code|team-XX-os}}, with no hard disk drive and one network interface attached to bridge {{Inline-code|vmbr2}} on VLAN {{Inline-code|10XX}}.
* Update your Nomad agent configuration ('''the agent config, not your job definition''') to allow privileged containers for the Docker driver


'''Next steps'''
* Create a new repository named {{Inline-code|teamXX/dhcp}}
* Create a new CI/CD configuration file
* Create a new Nomad job file. Network mode will be {{Inline-code|host}} and you will have to declare the DHCP ports.
* Create a new Dockerfile if needed, and publish it through CI/CD as {{Inline-code|tXX-dhcp}}.
* Deploy the elected solution on your virtual machine (and your virtual machine only!). (Hint: you will have to setup affinity policies in your Nomad job file).
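The host networking and the pin-to-your-VM policy can be sketched as a job fragment. The node name is an assumption; the hint above mentions affinity (a soft preference), while a {{Inline-code|constraint}} block, shown here, is the strict variant that guarantees "your virtual machine only":

```hcl
group "dhcp" {
  count = 1

  # Host networking, with the DHCP server port declared
  network {
    mode = "host"
    port "dhcp" { static = 67 }
  }

  # Pin the allocation to your own VM (node name is an example)
  constraint {
    attribute = "${node.unique.name}"
    value     = "team-XX"
  }
}
```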
'''Validation'''


To validate this step, you will have to show your new VM having an IP from your DHCP server.


==== 15. Creation of TFTP server ====


* Create a new repository named {{Inline-code|teamXX/tftp}}
* Create a new CI/CD configuration file
* Create a new Nomad job file. Network mode will be {{Inline-code|host}} and you will have to declare the TFTP port.
* Create a new Dockerfile if needed, and publish it through CI/CD as {{Inline-code|tXX-tftp}}.
* Your TFTP server will have to provide the iPXE file {{Inline-code|undionly.kpxe}}.
* Deploy the elected solution on your virtual machine (and your virtual machine only!). (Hint: you will have to setup constraint policies in your Nomad job file).
For reference:


* [https://ipxe.org/start iPXE home page]
* [https://boot.ipxe.org/ iPXE download page]


To validate this step, you will have to download the iPXE file with a TFTP command from the se4-tool VM.


==== 16. Creation of your custom Debian image ====


* Create a new repository named {{Inline-code|teamXX/os}}
* Create a new CI configuration file
* This CI file will build an OS image of the latest Debian version using Debootstrap (Hint: you will need the {{Inline-code|--include linux-image-amd64}} flag to have a kernel)
* It is recommended not to perform all steps directly in the CI file but rather to use a bash script.
* Create a bucket for your team, named {{Inline-code|teamXX}}, on the Minio Object storage (available [https://s3ui.se4-virt.fr/browser here])
* On your bucket, allow read+write anonymous access
* In your bash script:
** Create a folder for your working OS
** Debootstrap inside it
** Create an archive of your minimal OS with cpio (Hint: {{Inline-code|find . {{!}}cpio -o -H newc > ../initrd.img}})
** Create connection details to S3 with {{Inline-code|mc alias set se4v https://s3.se4-virt.fr root USUAL_PASSWORD}}
** Push your new minimal image, aka initrd, to S3 using a command similar to {{Inline-code|mc cp ./initrd se4v/teamXX/initrd}}
** Push your new kernel, located at {{Inline-code|boot/vmlinuz-*-amd64}}, to {{Inline-code|se4v/teamXX/kernel}}; we'll boot from it later
* In your CI, you can use the {{Inline-code|registry.se4-virt.fr/os-builder:v1.0.1}} image, as it already contains {{Inline-code|debootstrap}} and {{Inline-code|mc}}
'''Validation'''


To validate this step, your new initrd should be downloadable from {{Inline-code|https://s3.se4-virt.fr/teamXX/initrd.img}}
 
=== Target: boot an OS from the network ===
 
==== 17. Host iPXE configuration ====
Update your web server to host an iPXE configuration file that boots this new OS.
 
Hint: here, you will mostly struggle with iPXE configuration and Linux parameters. This [https://docs.kernel.org/filesystems/ramfs-rootfs-initramfs.html documentation] should help you a bit.


Additional hint:


* Configure your DHCP server to tell iPXE to use a configuration file on your web server
* Create an iPXE configuration file to retrieve and boot from your kernel and initrd (Hint: only use HTTP, not HTTPS / configure masquerading on your VM)
* Pass the correct parameters to your kernel to boot it properly (Hint: you will need to set {{Inline-code|initrd}}, {{Inline-code|root}}, {{Inline-code|rootfstype}} and the ramdisk init target)
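An iPXE script along these lines is one plausible shape; the URLs, file names and init path are assumptions for your setup, and the exact kernel parameters depend on how your initrd was built (see the documentation linked above):

```text
#!ipxe
# Plain HTTP, as hinted; adjust host and paths to your web server
kernel http://10.99.XX.1/kernel initrd=initrd.img root=/dev/ram0 rootfstype=ramfs rdinit=/sbin/init
initrd http://10.99.XX.1/initrd.img
boot
```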


==== 18. Create a remote block disk and use it ====
Mount {{Inline-code|/home/xxx}} as an RBD volume from Ceph


The new OS now boots and uses remote storage for users' homes.


=== Target: build a custom OS ===


==== 19. Your first kernel ====
Build a kernel (Bonus: smaller than 50 MB + boot time < 2 s)


==== 20. Your first init ====
Build an init that displays your team ID on the terminal (Bonus: pimped init banner)


==== 21. Your second init ====
Build an init that starts network interfaces


=== Target: create a logging service ===


==== 22. Even further with init ====
Create a script/service that requests user credentials, checks their validity against {{Inline-code|/etc/passwd}}, then dynamically mounts the user's home from RBD


==== 23. Finally, a working UI ====
Improve your script and update the initrd to start a window manager after login

Current version dated 9 April 2024 at 07:27
