SE4 Virtualisation TP Wasilewski

== Tutorial outline ==


=== Target: discover technologies and use a web server ===


==== 1. Create VM ====
# Create a virtual machine on Proxmox (the server is accessible through [https://antifer.plil.info:8006 this link]) by cloning the template. The virtual machine must be named {{Inline-code | 1=team-XX}} with XX being your team number assigned at the beginning of the tutorial.
# Manually configure the virtual machine's network to be persistent. Your VM's IP must be {{Inline-code | 1=10.99.XX.1/16}} with the gateway being {{Inline-code | 1=10.99.99.253}} (a configuration sketch follows this list).
# Update {{Inline-code|/etc/hostname}} so the hostname matches the VM's name.
# Update {{Inline-code|/etc/hosts}} to reflect the two previous updates.
# Ensure it can ping google.com.
# Reboot the VM and validate it.
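
A minimal sketch of the persistent network setup, assuming the Debian template uses {{Inline-code|ifupdown}} and that the interface is named {{Inline-code|ens18}} (check yours with {{Inline-code|ip a}}):
<pre>
# write a static stanza for the interface (name ens18 is an assumption)
cat > /etc/network/interfaces.d/ens18 <<'EOF'
auto ens18
iface ens18 inet static
    address 10.99.XX.1/16
    gateway 10.99.99.253
EOF
echo "team-XX" > /etc/hostname
echo "127.0.1.1 team-XX" >> /etc/hosts
systemctl restart networking    # the reboot asked above also validates persistence
</pre>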


==== 2. First web server ====
# Install nginx. (Hint: if it fails, do not forget to configure your VM's DNS).
# Setup a custom/cool static page (see the sketch below).
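
A possible sequence; the resolver address is an assumption, use the one given in class:
<pre>
echo "nameserver 1.1.1.1" > /etc/resolv.conf    # DNS first, or apt will fail
apt update && apt install -y nginx
echo "<h1>Hello from team-XX</h1>" > /var/www/html/index.html
curl -s http://localhost/    # sanity check
</pre>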


==== 3. First load balancer ====
# Install HAProxy 2.8 (Hint: Search "debian haproxy" on your favorite search engine)
# Redirect port 8888 to the previously installed nginx (a configuration sketch follows this list).
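
A sketch of the HAProxy part, assuming nginx still listens locally on port 80 (section names are arbitrary):
<pre>
cat >> /etc/haproxy/haproxy.cfg <<'EOF'

frontend fe_web
    bind *:8888
    default_backend be_nginx

backend be_nginx
    server local_nginx 127.0.0.1:80 check
EOF
systemctl reload haproxy
curl -s http://localhost:8888/    # should return your static page
</pre>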


==== 4. Docker time ====
# Install docker.
# Pull nginx image.
# Copy your custom static page into the container.
# Reconfigure HAProxy to redirect port 8889 to the container (see the sketch below).
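
A possible sequence; host port 8080 is an arbitrary choice:
<pre>
apt install -y docker.io                      # or docker-ce from Docker's repository
docker pull nginx
docker run -d --name web -p 8080:80 nginx
docker cp /var/www/html/index.html web:/usr/share/nginx/html/index.html
# then point a new HAProxy frontend on :8889 at 127.0.0.1:8080
</pre>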


=== Target: discover technologies and use a web server ===
==== 5. First Docker image ====
# Create a Docker image based on {{Inline-code|nginx:latest}} that uses your previously created static page. This image will be named {{Inline-code|tXX-web}} and will use semantic versioning.
# Publish this new image on the common registry, located at {{Inline-code|registry.se4-virt.fr}} (see the sketch below).
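
A minimal sketch, assuming your static page is named {{Inline-code|index.html}} and that images live directly under the registry root, as in step 12:
<pre>
cat > Dockerfile <<'EOF'
FROM nginx:latest
COPY index.html /usr/share/nginx/html/index.html
EOF
docker build -t registry.se4-virt.fr/tXX-web:1.0.0 .
docker push registry.se4-virt.fr/tXX-web:1.0.0
</pre>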
 
==== 6. Nomad installation ====
# Install nomad as a Debian package. See [https://developer.hashicorp.com/nomad/tutorials/get-started/gs-install the official documentation]
# Have Nomad running as both client and server. Validate it through its interface, which should be accessible at {{Inline-code|http://10.99.XX.1:4646}}. (Check it with {{Inline-code|curl}}; a configuration sketch follows.)
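
A minimal sketch of {{Inline-code|/etc/nomad.d/nomad.hcl}} for a single node acting as both server and client (datacenter name and paths are the usual defaults):
<pre>
cat > /etc/nomad.d/nomad.hcl <<'EOF'
datacenter = "dc1"
data_dir   = "/opt/nomad/data"
bind_addr  = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
}
EOF
systemctl enable --now nomad
curl -s http://10.99.XX.1:4646/v1/agent/members    # quick API check
</pre>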


==== 7. Access Nomad UI from Zabeth ====
# Add a route on your Zabeth to view Nomad UI from it. (Hint: the Nomad network is {{Inline-code|10.99.0.0/16}} and the hop is {{Inline-code|172.26.145.200}}.)
# Add a route on your VM to allow traffic back to Zabeth. (Hint: the Zabeth network is {{Inline-code|172.26.145.0/24}} and the hop is {{Inline-code|10.99.99.254}}; both commands are sketched below.)
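
Concretely, with {{Inline-code|iproute2}} (these routes are not persistent across reboots):
<pre>
# on the Zabeth:
ip route add 10.99.0.0/16 via 172.26.145.200
# on the VM:
ip route add 172.26.145.0/24 via 10.99.99.254
</pre>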
 
==== 8. First Nomad deployment ====
# Create a nomad job definition with the {{Inline-code|nomad job init}} command.
# Adapt it to deploy the container you created in step 5 (see the sketch below).
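
A trimmed-down sketch of such a job file; the datacenter name is an assumption:
<pre>
nomad job init    # writes example.nomad; the sketch below keeps only the essentials
cat > web.nomad <<'EOF'
job "tXX-web" {
  datacenters = ["dc1"]
  group "web" {
    task "nginx" {
      driver = "docker"
      config {
        image = "registry.se4-virt.fr/tXX-web:1.0.0"
      }
    }
  }
}
EOF
nomad job run web.nomad
nomad job status tXX-web
</pre>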


==== 9. Scale and load-balance ====
# Increase the job definition count to run 3 containers.
# Install Consul locally. (Hint: its Debian package should already be available as you should have installed the Hashicorp repositories for the Nomad install).
# Configure Nomad to use the newly installed Consul.
# Configure HAProxy to redirect port 8890 to those containers through service discovery (see the sketch below).
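
One possible approach is HAProxy's {{Inline-code|server-template}} backed by Consul's DNS interface; this sketch assumes Consul answers DNS on {{Inline-code|127.0.0.1:8600}} (its default) and that your job registers a service named {{Inline-code|tXX-web}}:
<pre>
cat >> /etc/haproxy/haproxy.cfg <<'EOF'

resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192

frontend fe_web_sd
    bind *:8890
    default_backend be_web_sd

backend be_web_sd
    balance roundrobin
    server-template web 3 _tXX-web._tcp.service.consul resolvers consul resolve-prefer ipv4 check
EOF
systemctl reload haproxy
</pre>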


=== Target: setup a distributed service + deploy from CI ===
==== 10. Your Nomad joins the common cluster ====
 
# Reconfigure Nomad to only be a client.
# Use the common Nomad by configuring it to use the common Consul (located at {{Inline-code|consul.se4-virt.fr:8500}}). (Hint: [https://developer.hashicorp.com/nomad/docs/configuration/consul#default this example] should help you; a sketch also follows this list.)
# You should see your Nomad instance on the common [https://nomad.se4-virt.fr/ui/clients Nomad] server and [https://consul.se4-virt.fr/ui/dc1/services Consul].
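
A minimal sketch of the reconfigured {{Inline-code|/etc/nomad.d/nomad.hcl}}, following the linked example:
<pre>
cat > /etc/nomad.d/nomad.hcl <<'EOF'
data_dir  = "/opt/nomad/data"
bind_addr = "0.0.0.0"

server {
  enabled = false
}

client {
  enabled = true
}

consul {
  address = "consul.se4-virt.fr:8500"
}
EOF
systemctl restart nomad
</pre>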
 
==== 11. Starting the "DevOps" way ====
 
# Create an account on Gitlab ([https://git.se4-virt.fr https://git.se4-virt.fr])
# Create a group named {{Inline-code|teamXX}}
# Create a repository named {{Inline-code|web}} under your {{Inline-code|teamXX}} group
# Under this new repository, store your Dockerfile, your job definition and your static page (see the command sketch below)
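
For instance (the repository URL pattern is standard GitLab, and the file names come from the next steps):
<pre>
git init
git remote add origin https://git.se4-virt.fr/teamXX/web.git
git add Dockerfile project.nomad index.html
git commit -m "Initial import"
git push -u origin main    # branch name is an assumption
</pre>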
 
==== 12. Automating deployments ====
This step will make all the previous work automatically deployable from Gitlab. Sub-steps are detailed to help you.
 
# Edit your {{Inline-code|project.nomad}} file
## Set the job name to {{Inline-code|teamXX-web}}
## Set the memory requirement to 128 MB
## Set the CPU requirement to 250 MHz
## Set the count to 4
## Set {{Inline-code|ephemeral_disk}} to 101
## Set the docker image to {{Inline-code|registry.se4-virt.fr/tXX-web:${CI_COMMIT_SHORT_SHA<nowiki>}</nowiki>}}
## Set the service name to {{Inline-code|tXX-web}}
## Set the service tag and port to {{Inline-code|web}}
## Commit and push your changes
# Copy and adapt the {{Inline-code|.gitlab-ci.yml}} available here ([[SE4 Virtualisation TP Wasilewski - Gitlab CI file]]). A sketch of the resulting job file follows this list.
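
Putting those sub-steps together, a minimal sketch of {{Inline-code|project.nomad}} could look like this; the datacenter name is an assumption, and {{Inline-code|CI_COMMIT_SHORT_SHA}} is substituted by the CI at deploy time (for example with {{Inline-code|envsubst}} or {{Inline-code|sed}}):
<pre>
cat > project.nomad <<'EOF'
job "teamXX-web" {
  datacenters = ["dc1"]

  group "web" {
    count = 4

    ephemeral_disk {
      size = 101
    }

    network {
      port "web" { to = 80 }
    }

    service {
      name = "tXX-web"
      port = "web"
      tags = ["web"]
    }

    task "nginx" {
      driver = "docker"
      config {
        # substituted by the CI pipeline before nomad job run
        image = "registry.se4-virt.fr/tXX-web:${CI_COMMIT_SHORT_SHA}"
        ports = ["web"]
      }
      resources {
        cpu    = 250    # MHz
        memory = 128    # MB
      }
    }
  }
}
EOF
</pre>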
 
==== 13. Releasing your website to the wild ====
 
# Update the global HAProxy (hosted at {{Inline-code|10.99.99.253}}) configuration to redirect to the newly deployed web service (see the excerpt below).
# Your website should now be accessible through {{Inline-code|https://tXX.se4-virt.fr}}
# Update your {{Inline-code|index.html}} file and see the update on the website.
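
If you are the one editing the global configuration, the routing excerpt could look like this (ACL and backend names are illustrative):
<pre>
# in the global haproxy.cfg frontend:
acl host_tXX hdr(host) -i tXX.se4-virt.fr
use_backend be_tXX_web if host_tXX
</pre>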
 
=== Target: build and deploy more complex services ===
Congratulations! You have just finished the first part of the tutorial. The second part is far less guided than the first one.
 
During this part, you will have to perform your own technological research (commonly called "veille technique" in French).
 
For the next two questions, you will have to:
 
* Gather information about 2 to 3 solutions
* Understand their differences
* Present them orally
* Select the solution that best fits your requirements
 
==== 14. Creation of DHCP server ====
'''Preliminary steps'''


* On Proxmox, add a new network interface to your VM on bridge {{Inline-code|vmbr2}} on VLAN {{Inline-code|10XX}} configured with IP {{Inline-code|10.99.1XX.1/24}}.
* Create a new VM, named {{Inline-code|team-XX-os}}, with no hard disk drive and one network interface attached to bridge {{Inline-code|vmbr2}} on VLAN {{Inline-code|10XX}}.
* Update your Nomad agent configuration ('''aka not your job definition''') to allow privileged containers for the Docker driver


'''Next steps'''
* Create a new repository named {{Inline-code|teamXX/dhcp}}
* Create a new CI/CD configuration file
* Create a new Nomad job file. Network mode will be {{Inline-code|host}} and you will have to declare the DHCP ports.
* Create a new Dockerfile if needed, and publish it through CI/CD as {{Inline-code|tXX-dhcp}}.
* Deploy the elected solution on your virtual machine (and your virtual machine only!). (Hint: you will have to setup affinity policies in your Nomad job file).
'''Validation'''


To validate this step, you will have to show your new VM obtaining an IP address from your DHCP server.
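
Purely as an illustration, and only if dnsmasq ends up being your elected solution (interface name and address range are assumptions):
<pre>
cat > Dockerfile <<'EOF'
FROM debian:stable-slim
RUN apt-get update && apt-get install -y dnsmasq && rm -rf /var/lib/apt/lists/*
COPY dnsmasq.conf /etc/dnsmasq.conf
ENTRYPOINT ["dnsmasq", "--no-daemon"]
EOF

cat > dnsmasq.conf <<'EOF'
port=0                              # disable DNS, keep DHCP only
interface=ens19                     # the vmbr2/VLAN 10XX interface; name is an assumption
dhcp-range=10.99.1XX.100,10.99.1XX.200,12h
EOF
</pre>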


==== 15. Creation of TFTP server ====


* Create a new repository named {{Inline-code|teamXX/tftp}}
* Create a new CI/CD configuration file
* Create a new Nomad job file. Network mode will be {{Inline-code|host}} and you will have to declare the TFTP port.
* Create a new Dockerfile if needed, and publish it through CI/CD as {{Inline-code|tXX-tftp}}.
* Your TFTP server will have to provide the iPXE file {{Inline-code|undionly.kpxe}}.
* Deploy the elected solution on your virtual machine (and your virtual machine only!). (Hint: you will have to setup constraint policies in your Nomad job file).
For reference:


* [https://ipxe.org/start iPXE home page]
* [https://boot.ipxe.org/ iPXE download page]


To validate this step, you will have to download the iPXE file with a TFTP command from the se4-tool VM.
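
A sketch of the job file with the constraint, assuming your Nomad node name matches the VM name; the last line shows a possible validation command:
<pre>
cat > tftp.nomad <<'EOF'
job "teamXX-tftp" {
  datacenters = ["dc1"]

  # pin the allocation to your own node (node name is an assumption)
  constraint {
    attribute = "${node.unique.name}"
    value     = "team-XX"
  }

  group "tftp" {
    network {
      mode = "host"
      port "tftp" { static = 69 }
    }
    task "tftpd" {
      driver = "docker"
      config {
        image        = "registry.se4-virt.fr/tXX-tftp:1.0.0"
        network_mode = "host"
      }
    }
  }
}
EOF
# validation, from the se4-tool VM:
tftp 10.99.1XX.1 -c get undionly.kpxe
</pre>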


==== 16. Creation of your custom Debian image ====


* Create a new repository named {{Inline-code|teamXX/os}}
* Create a new CI configuration file
* This CI file will build an OS image of the latest Debian version using Debootstrap (Hint: you will need the {{Inline-code|--include linux-image-amd64}} flag to have a kernel)
* It is recommended not to perform all steps directly in the CI file but rather to use a bash script.
* Create a bucket for your team, named {{Inline-code|teamXX}}, on the Minio Object storage (available [https://s3ui.se4-virt.fr/browser here])
* On your bucket, allow read+write anonymous access
* In your bash script:
** Create a folder for your working OS
** Debootstrap inside it
** Create an archive of your minimal OS with cpio (Hint: {{Inline-code|find . {{!}}cpio -o -H newc > ../initrd.img}})
** Create connection details to S3 with {{Inline-code|mc alias set se4v https://s3.se4-virt.fr root USUAL_PASSWORD}}
** Push your new minimal image, aka initrd, to S3 using a command similar to {{Inline-code|mc cp ./initrd se4v/teamXX/initrd}}
** Push your new kernel located in {{Inline-code|boot/vmlinuz-*-amd64}} to {{Inline-code|se4v/teamXX/kernel}}; we'll boot from it later
* In your CI, you can use the {{Inline-code|registry.se4-virt.fr/os-builder:v1.0.1}} image as it already contains {{Inline-code|debootstrap}} and {{Inline-code|mc}}
'''Validation'''


To validate this step, your new initrd should be downloadable from {{Inline-code|https://s3.se4-virt.fr/teamXX/initrd.img}}. A sketch of the build script follows.
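
A sketch of the build script; the suite and mirror are assumptions, and {{Inline-code|USUAL_PASSWORD}} is the placeholder used in the list above:
<pre>
#!/bin/bash
set -e
mkdir -p os
debootstrap --include=linux-image-amd64 stable ./os http://deb.debian.org/debian
( cd os && find . | cpio -o -H newc > ../initrd.img )
mc alias set se4v https://s3.se4-virt.fr root USUAL_PASSWORD
mc cp ./initrd.img se4v/teamXX/initrd.img
mc cp ./os/boot/vmlinuz-*-amd64 se4v/teamXX/kernel
</pre>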


=== Target: boot an OS from the network ===


==== 17. Host iPXE configuration ====
Update your web server to host an iPXE configuration file that boots this new OS.


Hint: here, you will mostly struggle with iPXE configuration and Linux parameters. This [https://docs.kernel.org/filesystems/ramfs-rootfs-initramfs.html documentation] should help you a bit.


Additional hint:


* Configure your DHCP server to tell iPXE to use a configuration file on your web server
* Create an iPXE configuration file to retrieve and boot from your kernel and initrd (Hint: only use HTTP and not HTTPS / Configure masquerade on your VM)
* Pass correct parameters to your kernel to boot it properly (Hint: you will need to set {{Inline-code|initrd}}, {{Inline-code|root}}, {{Inline-code|rootfstype}} and the ramdisk init target; a sketch follows this list)
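
To make the hints concrete, here is one plausible shape, again assuming dnsmasq on the DHCP side; every address, path and kernel parameter below must be adapted to your setup:
<pre>
# DHCP side: plain PXE clients get undionly.kpxe, iPXE clients get the HTTP script
cat >> dnsmasq.conf <<'EOF'
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:!ipxe,undionly.kpxe,,10.99.1XX.1
dhcp-boot=tag:ipxe,http://10.99.1XX.1/boot.ipxe
EOF

# web side: kernel and initrd copied from S3 into the web root beforehand
cat > boot.ipxe <<'EOF'
#!ipxe
kernel http://10.99.1XX.1/kernel root=/dev/ram0 rootfstype=ramfs rdinit=/init
initrd http://10.99.1XX.1/initrd.img
boot
EOF
</pre>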


==== 18. Create a remote block disk and use it ====
Mount {{Inline-code|/home/xxx}} as an RBD volume from Ceph.


Now the new OS boots and uses remote storage for users.
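
The usual RBD workflow looks like this; pool and image names are hypothetical and depend on the Ceph cluster you are given:
<pre>
rbd create --size 1024 teamXX/home-user
rbd map teamXX/home-user        # prints the device name, e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /home/xxx
</pre>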


=== Target: build a custom OS ===


==== 19. Your first kernel ====
Build a kernel. (Bonus: less than 50 MB and a boot time under 2 s.)
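
The classic build flow, from the kernel source tree ({{Inline-code|tinyconfig}} is a good starting point for the size bonus):
<pre>
make defconfig              # or: make tinyconfig
make menuconfig             # trim what you do not need
make -j"$(nproc)" bzImage
ls -lh arch/x86/boot/bzImage
</pre>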


==== 20. Your first init ====
Build an init that displays your team ID on the terminal (Bonus: pimped init banner)
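
A minimal sketch; the initrd's {{Inline-code|/init}} must be executable and {{Inline-code|/bin/sh}} must exist in the image (it does with a debootstrapped initrd):
<pre>
cat > init <<'EOF'
#!/bin/sh
echo "=========================="
echo "   booted by team-XX"
echo "=========================="
exec /bin/sh
EOF
chmod +x init
</pre>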


==== 21. Your second init ====
Build an init that starts network interfaces
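
A sketch, assuming the interface shows up as {{Inline-code|eth0}}; running a DHCP client instead of a static address is equally valid:
<pre>
#!/bin/sh
ip link set lo up
ip link set eth0 up
ip addr add 10.99.1XX.2/24 dev eth0    # address is an assumption
exec /bin/sh
</pre>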


=== Target: create a login service ===


==== 22. Even further with init ====
Create a script / service that requests user credentials, checks their validity against {{Inline-code|/etc/passwd}}, then dynamically mounts the user's home from RBD.
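
A deliberately naive sketch; a real implementation should check the password against shadow hashes rather than only looking up {{Inline-code|/etc/passwd}}:
<pre>
#!/bin/sh
printf "login: " ; read user
printf "password: " ; stty -echo ; read pass ; stty echo ; echo
if grep -q "^${user}:" /etc/passwd; then
    rbd map "teamXX/home-${user}" && mount /dev/rbd0 "/home/${user}"
else
    echo "unknown user ${user}"
fi
</pre>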


==== 23. Finally, a working UI ====
{| class="wikitable"
Improve the script and update the initrd so that a window manager starts after login.
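
For example, if Xorg, {{Inline-code|xinit}} and a window manager were added to the initrd, the login script could hand over with:
<pre>
exec startx /usr/bin/openbox-session    # window manager choice is an assumption
</pre>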

== Student grading ==
The grading of the TP is split into two parts.

14 points are tied to the completion of the steps described above.

6 points are tied to performance: at the end of each step, the teams' ranking is recorded.
{| class="wikitable"
!Team \ Step !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8 !! 9 !! 10 !! 11 !! 12 !! 13 !! 14 !! 15 !! 16 !! 17 !! 18 !! 19 !! 20 !! 21 !! 22 !! 23
|-
| || || || || || || || || || || || || || || || || || || || || || || ||
|-
| || || || || || || || || || || || || || || || || || || || || || || ||
|-
| || || || || || || || || || || || || || || || || || || || || || || ||
|}
