Chapter 9: Neutron - Advanced Commands - Part 3
QoS
Introduction:
Todo
Neutron Configuration:
The QoS service plugin is installed by default (devstack location: /opt/stack/neutron/neutron/services/qos). We only need to enable the plugin in the neutron.conf file, as below:
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.qos.qos_plugin.QoSPlugin
In /etc/neutron/plugins/ml2/ml2_conf.ini (in devstack), add the qos extension driver and agent extension:
[ml2]
extension_drivers = port_security,qos
[agent]
extensions = qos
If non-admin users should be allowed to manage QoS policies, the Neutron policy rules must be adjusted accordingly. I am going to test using the admin user, so I am not changing the policy.
Restart the neutron server and the neutron-openvswitch-agent (in devstack, restart the q-svc and q-agt screens).
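On newer devstack deployments the services run as systemd units rather than screen windows. A sketch of the restart, assuming the usual devstack unit names (they may differ on your setup):

```shell
# Restart the neutron server and the OVS agent to pick up the config change.
# Screen-based devstack: reattach with `screen -r` and restart the q-svc
# and q-agt windows. Systemd-based devstack (assumed unit names):
sudo systemctl restart devstack@q-svc.service
sudo systemctl restart devstack@q-agt.service
```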
Theory:
Neutron supports two types of QoS rules:
- Bandwidth Limit
- DSCP Marking
A. Bandwidth Limit:
Bandwidth limiting applies a cap on the bandwidth of a VM interface. By default no limit is applied, so the veth interface can pass traffic at whatever rate the host supports (typically in the Gbps range). This feature lets us restrict the bandwidth available to a VM.
Steps:
- Create the bandwidth Qos Policy
- Create the bandwidth Qos Rule
- Apply it in the Port or Network.
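The neutron CLI used below is deprecated; the same three steps can also be performed with the openstack client. A sketch (the policy name and port ID are placeholders):

```shell
# 1. Create the QoS policy
openstack network qos policy create bw-limiter

# 2. Add a bandwidth-limit rule to it (values in kbps / kbits)
openstack network qos rule create --type bandwidth-limit \
    --max-kbps 3000 --max-burst-kbits 300 bw-limiter

# 3. Attach the policy to a port (or to a network)
openstack port set --qos-policy bw-limiter <port-id>
```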
Create the Bandwidth QoS Policy
neutron qos-policy-create bw-limiter
Create the Bandwidth QoS Rule
neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
Apply the Bandwidth QoS Policy to the Port.
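The burst value controls how many kilobits may momentarily exceed the limit. The OpenStack networking guide suggests a burst of roughly 80% of the bandwidth limit for TCP traffic, so the rate is not throttled below the configured value; a quick Python check of what that guideline gives for the 3000 kbps limit above (the 80% factor is the guide's recommendation, not something Neutron enforces):

```python
# Suggested burst size: ~80% of the bandwidth limit, per the OpenStack
# networking guide's recommendation for TCP traffic.
def suggested_burst_kbps(max_kbps, factor=0.8):
    return int(max_kbps * factor)

limit = 3000  # kbps, as in the rule created above
print(suggested_burst_kbps(limit))  # 2400 kbps; the rule above uses 300
```

With a burst of only 300 kbps (10% of the limit), TCP throughput may land slightly under or over the nominal 3 Mbps, as the test logs later show.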
neutron port-update 18883df8-af5d-4110-aabd-726019c2e8e8 --qos-policy bw-limiter
Other Commands:
List the QoS policies
neutron qos-policy-list
List the QoS bandwidth limit rules of a policy
neutron qos-bandwidth-limit-rule-list bw-limiter
Remove the bandwidth policy from the port.
neutron port-update 18883df8-af5d-4110-aabd-726019c2e8e8 --no-qos-policy
The bandwidth limit can be updated dynamically.
neutron qos-bandwidth-limit-rule-update eaf23a3c-a2b5-45f6-942b-74bacdbd29b5 bw-limiter --max-kbps 2000
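For reference, the equivalent maintenance commands with the openstack client (a sketch; the IDs are placeholders):

```shell
# List policies and the rules of a policy
openstack network qos policy list
openstack network qos rule list bw-limiter

# Detach the policy from a port
openstack port unset --qos-policy <port-id>

# Update the bandwidth-limit rule in place
openstack network qos rule set --max-kbps 2000 bw-limiter <rule-id>
```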
Example:
In the Testing section:
- Run an iperf TCP test between two VMs on the same network.
- Create a 3 Mbps QoS bandwidth-limit policy and apply it to the ports of both VMs.
- Run the iperf TCP test again.
- Compare both results.
- The first run measures bandwidth in Gbps (no bandwidth policy); the second run is limited to about 3 Mbps (the QoS policy is applied).
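The limited result is consistent with the configured rule. A rough back-of-the-envelope check in Python, using the figures from the iperf output in the Testing section (iperf's "MBytes" are binary megabytes):

```python
# Sanity-check the limited iperf run: at 3000 kbps over ~10.5 s, the
# expected transfer should be close to the 4.00 MBytes iperf reported.
limit_kbps = 3000          # configured bandwidth-limit rule
duration_s = 10.5          # interval reported by iperf
expected_bytes = limit_kbps * 1000 / 8 * duration_s
expected_mib = expected_bytes / (1024 ** 2)   # iperf "MBytes" are binary
print(round(expected_mib, 2))  # ~3.76 MiB, vs 4.00 MBytes observed
```

The observed 4.00 MBytes is slightly above the computed figure, which is plausible given the burst allowance lets short spikes exceed the limit.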
Execution Logs are available in Testing section.
B. DSCP Marking
Todo
References:
Ref: https://docs.openstack.org/ocata/networking-guide/config-qos.html
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html
Meter
https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth
https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/network-meter.html
Todo
Firewall
Todo
BGPVPN
Todo
Testing:
QoS:
cloud@dev1:~/devstack$ neutron qos-policy-create bw-limiter
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new policy:
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| created_at | 2017-09-04T14:57:17Z |
| description | |
| id | 63c70c5f-5ec9-4e9e-9427-e1a1ac7dc0d4 |
| name | bw-limiter |
| project_id | 856b4a0ff2b04903ad09b6a12bdd4900 |
| revision_number | 1 |
| rules | |
| shared | False |
| tenant_id | 856b4a0ff2b04903ad09b6a12bdd4900 |
| updated_at | 2017-09-04T14:57:17Z |
+-----------------+--------------------------------------+
cloud@dev1:~/devstack$
cloud@dev1:~/devstack$ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 \
> --max-burst-kbps 300
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new bandwidth_limit_rule:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| id | 6914d8eb-8ad4-40e7-b18f-fedddc50c224 |
| max_burst_kbps | 300 |
| max_kbps | 3000 |
+----------------+--------------------------------------+
cloud@dev1:~/devstack$
#Two VMs are created
cloud@devstack:~$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------+
| ff0db123-9203-4c54-8f5d-2826947484e6 | vm1 | ACTIVE | - | Running | private=fd5d:e0d9:1218:0:f816:3eff:feef:270b, 10.0.0.3, 172.24.4.3 |
| db2d49ed-cb50-4130-89a1-58c37c9bae72 | vm2 | ACTIVE | - | Running | private=fd5d:e0d9:1218:0:f816:3eff:fe70:359c, 10.0.0.4, 172.24.4.4 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------+
cloud@devstack:~$ neutron qos-policy-list
+--------------------------------------+------------+
| id | name |
+--------------------------------------+------------+
| 42ed0726-7aae-4771-890c-ab336c099c80 | bw-limiter |
+--------------------------------------+------------+
cloud@devstack:~$
cloud@devstack:~$ neutron qos-bandwidth-limit-rule-list bw-limiter
+--------------------------------------+----------------+----------+
| id | max_burst_kbps | max_kbps |
+--------------------------------------+----------------+----------+
| eaf23a3c-a2b5-45f6-942b-74bacdbd29b5 | 300 | 3000 |
+--------------------------------------+----------------+----------+
cloud@devstack:~$
#############################################################################################################
#Install IPERF on both VMs
ubuntu@vm1:~$ sudo apt-get install iperf
sudo: unable to resolve host vm1
Reading package lists... Done
Building dependency tree
Reading state information... Done
iperf is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
ubuntu@vm1:~$
# Test the bandwidth between the VMs
#VM1 is IPERF SERVER
ubuntu@vm1:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.3 port 5001 connected with 10.0.0.4 port 35647
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 4.39 GBytes 3.77 Gbits/sec
#VM2 is iperf Client
ubuntu@vm2:~$ iperf -c 10.0.0.3
------------------------------------------------------------
Client connecting to 10.0.0.3, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.4 port 35647 connected with 10.0.0.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 4.39 GBytes 3.77 Gbits/sec
ubuntu@vm2:~$
#############################################################################################################
# Let's apply the QoS policy and repeat the same test
cloud@devstack:~$ neutron port-update a5ee456b-14e9-47a3-8abe-3b8a306bec0b --qos-policy bw-limiter
Updated port: a5ee456b-14e9-47a3-8abe-3b8a306bec0b
cloud@devstack:~$
cloud@devstack:~$ neutron port-update 18883df8-af5d-4110-aabd-726019c2e8e8 --qos-policy bw-limiter
Updated port: 18883df8-af5d-4110-aabd-726019c2e8e8
cloud@devstack:~$
ubuntu@vm1:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.3 port 5001 connected with 10.0.0.4 port 35652
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.7 sec 4.00 MBytes 3.13 Mbits/sec
^Cubuntu@vm1:~$
ubuntu@vm2:~$ iperf -c 10.0.0.3
------------------------------------------------------------
Client connecting to 10.0.0.3, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.4 port 35652 connected with 10.0.0.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.5 sec 4.00 MBytes 3.20 Mbits/sec
ubuntu@vm2:~$
ubuntu@vm2:~$