RBAC in Keystone and OpenStack

The following post is based on Red Hat OpenStack Platform 10 (Newton) but should largely be compatible with other OpenStack distributions of the equivalent version.

Keystone allows roles to be defined as a way of assigning privileges to an OpenStack user or group. For example, the default admin user has the admin role assigned to it:

[stack@controller ~]$ openstack role assignment list --names --user admin --project admin
+-------+-------+---------+
| Role  | User  | Project |
+-------+-------+---------+
| admin | admin | admin   |
+-------+-------+---------+

Out of the box, OpenStack comes with two useful roles defined – admin and _member_:

  • admin – Used for users and/or groups which require privileged access to OpenStack resources and services. 
  • _member_ – Users or groups that are members of an OpenStack project can create/modify/delete resources within their own project; however, they cannot see or modify resources outside the project (unless the resource is defined as shared or public). The OpenStack source code contains hardcoded references to the _member_ role, which is treated as the default member role, though this can be customised in each service’s configuration file.

Additional roles can easily be created using the openstack role create command, but on their own they will not normally have any effect on users or groups. This is because the actual rules determining what a user or group with a given role can do are defined separately by each OpenStack service, in a file called policy.json or policy.yaml. These rules are assessed whenever an API request is made to the OpenStack service, and if the policy does not allow the request to be performed, an error message is returned. As a result, the policy files need to be modified for any custom privileges an additional role requires. Default policy rules exist for every OpenStack service; however, most services do not define these rules in a default policy file (more on this below).
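
For example, a custom role can be created and assigned to a user within a project as follows (the auditor role, demo project and jsmith user below are purely illustrative):

[stack@controller ~]$ openstack role create auditor
[stack@controller ~]$ openstack role add --project demo --user jsmith auditor

Until a policy rule actually references the new role, though, it grants the user no privileges beyond what they already have.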

How an OpenStack service processes policies

To demonstrate this workflow, let’s use nova as an example. By default, only a user/group assigned the admin role, or the owner of an instance, can reboot that instance. If we try to reboot an instance outside our project as a non-admin, we get the following error message:

[stack@controller ~]$ nova reboot 3a4b2f15-ca10-95e2-41ab-56a21cb5db21
Policy doesn't allow os_compute_api:servers:reboot to be performed. (HTTP 403) (Request-ID: req-32453065-1af5-4130-ba18-5a66be43d61e)
ERROR (CommandError): Unable to reboot the specified server(s).

To see what is going on in the background, let’s trace this nova command using the --debug option. This option is very useful for examining in detail the underlying REST API requests that the CLI commands make to their respective OpenStack service APIs. As you can see below, the policy is assessed when the actual API request is made to the nova API to reboot the instance (output before the reboot API request omitted for brevity):

[stack@controller ~]$ nova --debug reboot 3a4b2f15-ca10-95e2-41ab-56a21cb5db21
(... output omitted ...)
 
DEBUG (session:342) REQ: curl -g -i -X POST https://192.168.1.10:13774/v2.1/servers/3a4b2f15-ca10-95e2-41ab-56a21cb5db21/action -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "OpenStack-API-Version: compute 2.37" -H "X-OpenStack-Nova-API-Version: 2.37" -H "X-Auth-Token: {SHA1}XXXXXXXX" -H "Content-Type: application/json" -d '{"reboot": {"type": "SOFT"}}'
DEBUG (connectionpool:401) "POST /v2.1/servers/3a4b2f15-ca10-95e2-41ab-56a21cb5db21/action HTTP/1.1" 403 110
DEBUG (session:390) RESP: [403] Openstack-Api-Version: compute 2.37 X-Openstack-Nova-Api-Version: 2.37 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version Content-Type: application/json; charset=UTF-8 Content-Length: 110 X-Compute-Request-Id: req-ce835787-9ac4-4e1d-922e-e7f069cc63ab Date: Thu, 15 Nov 2018 00:15:00 GMT
RESP BODY: {"forbidden": {"message": "Policy doesn't allow os_compute_api:servers:reboot to be performed.", "code": 403}}
 
DEBUG (client:85) POST call to compute for https://192.168.1.10:13774/v2.1/servers/3a4b2f15-ca10-95e2-41ab-56a21cb5db21/action used request id req-ce835787-9ac4-4e1d-922e-e7f069cc63ab
Policy doesn't allow os_compute_api:servers:reboot to be performed. (HTTP 403) (Request-ID: req-ce835787-9ac4-4e1d-922e-e7f069cc63ab)
DEBUG (shell:984) Unable to reboot the specified server(s).
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 982, in main
    OpenStackComputeShell().main(argv)
  File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 909, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 1787, in do_reboot
    _("Unable to reboot the specified server(s)."))
  File "/usr/lib/python2.7/site-packages/novaclient/utils.py", line 413, in do_action_on_many
    raise exceptions.CommandError(error_msg)
CommandError: Unable to reboot the specified server(s).
ERROR (CommandError): Unable to reboot the specified server(s).

OpenStack has documentation on how to interpret the rules defined in a policy file as well as the required syntax of the configuration files. Please refer to https://docs.openstack.org/oslo.policy/latest/admin/policy-json-file.html for the policy.json format and https://docs.openstack.org/oslo.policy/latest/admin/policy-yaml-file.html for the policy.yaml format.

Some OpenStack services themselves document how their APIs map to the rules in the policy files. Please refer to https://docs.openstack.org/nova/latest/configuration/policy.html for the nova rules and https://docs.openstack.org/keystone/latest/getting-started/policy_mapping.html for keystone rules.

Using the previous section’s example of rebooting a nova instance, let’s have a quick look at the rule defined for it in the nova policy. The rule associated with this action is shown below (in policy.json format):

"os_compute_api:servers:reboot": "rule:admin_or_owner"

The left-hand side of the line is the target of the rule, i.e. the API action the rule applies to. The right-hand side is the rule itself. In this case, it references another rule (called an alias) defined in the same policy file, admin_or_owner:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s"

The rule in this case is a Boolean expression comparing two different attributes. The first part verifies whether the request is using a keystone admin token. The second part checks the project_id attribute from the API request, essentially verifying that the user/group performing the request belongs to the same project as the resource being requested. Since this is an OR expression, either condition being true is enough for the check to pass.
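
To illustrate how such a rule could be customised, the following (hypothetical) change to nova’s policy.json would allow users holding an operator role to reboot instances in any project, in addition to admins and owners:

"os_compute_api:servers:reboot": "rule:admin_or_owner or role:operator"

oslo.policy re-reads the policy file when it detects the file has changed, so a change like this should take effect on the next API request without a service restart.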

At a project level, there is a push to move towards what is called Policy as Code, where the default policy rules are defined as Python code. By default these live in the /usr/lib/python2.7/site-packages/<service>/policies directories, and the idea is that end users can generate a policy file for a specific service themselves, should they require custom policy rules, using the following command:

oslopolicy-policy-generator --namespace <service> --output-file <filename>
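
For example, to dump nova’s policy rules to a file on a host where nova is installed (the output filename below is arbitrary):

oslopolicy-policy-generator --namespace nova --output-file nova-policy.yaml

The generated file can then be edited and used as a custom policy file (the policy_file option in each service’s configuration controls which file is read).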

Please refer to the following link for additional information about Policy as Code:

https://docs.openstack.org/oslo.policy/latest/user/usage.html#registering-policy-defaults-in-code

Enabling nested KVM for OpenStack/Packstack

A month or two ago I tried deploying an all-in-one Packstack OpenStack distribution, all in a single KVM virtual machine, for CloudForms testing. Using the instructions from the RDO project, I was able to deploy OpenStack without too many issues (hint: use flat networking if you experience configuration issues with neutron). However, I wanted to enable nested KVM for improved performance of the nova instances deployed, which the install instructions don’t really cover.

In order to get nested KVM working, I had to perform the following steps.

  • Enable kernel module settings for nested KVM (processor-dependent) on the KVM host as well as the VM where Packstack will be deployed. Ensure the following lines are present (for Intel CPUs):
# cat << EOF > /etc/modprobe.d/kvm.conf
options kvm-intel nested=1
options kvm-intel enable_shadow_vmcs=1
options kvm-intel enable_apicv=1
options kvm-intel ept=1
EOF
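If you want these module settings to take effect without a full host reboot, the Intel KVM modules can be reloaded instead (assuming no VMs are running at the time):
# modprobe -r kvm_intel
# modprobe kvm_intel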
  • Configure the Packstack VM’s nova.conf so that it uses KVM as its hypervisor instead of qemu software emulation (which is really slow). The following line needs to be present in the [libvirt] section of /etc/nova/nova.conf:
# grep ^virt_type /etc/nova/nova.conf  
virt_type=kvm
  • Disable reverse path filtering on the KVM host and the Packstack VM, then reboot your KVM host and VM:
# cat << EOF > /etc/sysctl.d/98-rp-filter.conf  
net.ipv4.conf.default.rp_filter = 0 
net.ipv4.conf.all.rp_filter = 0
EOF
  • Ensure that the CPU mode for the Packstack VM is set as follows, using either virsh edit <domain> or virt-manager (while the VM is powered off):
# virsh dumpxml <domain> | grep "cpu mode"
 <cpu mode='host-passthrough' check='partial'/>

[Screenshot: virt-manager with CPU mode set to host-passthrough]
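
As an alternative to editing the XML by hand, the virt-xml tool from the virt-install package can apply the same change; a sketch, assuming the tool is installed:

# virt-xml <domain> --edit --cpu host-passthrough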

  • To confirm that nested KVM is enabled after a reboot, check that /proc/cpuinfo displays the correct CPU flag (vmx for Intel or svm for AMD) inside the Packstack VM:
# cat /proc/cpuinfo | egrep '(vmx|svm)'
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
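
Another quick check on the KVM host (assuming the Intel kvm_intel module is in use) is the module’s nested parameter, which should report Y (or 1, depending on the kernel version):

# cat /sys/module/kvm_intel/parameters/nested
Y
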
Once all the above steps have been implemented, nova will now be able to use KVM as a hypervisor inside the VM itself. However, I ran into an issue where I could not access any of the nested instances nova provisioned via nova boot or horizon, even after assigning a floating IP address to the instance.

After a little digging, I found that port security must be disabled on the ports of any nova instances deployed when using nested KVM. Normally you would want this enabled, but in a testing/lab environment it’s OK to disable it. Nested virtualisation environments rely on MAC address spoofing so that traffic can be routed to the nested instances, and port security blocks spoofed MAC addresses by default.
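
Port security can be disabled per port from the CLI as well as from horizon; a sketch using the Newton-era neutron client (the port ID is a placeholder, and any security groups attached to the port have to be cleared at the same time):

[stack@controller ~]$ neutron port-update <port-id> --no-security-groups --port-security-enabled=False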

[Screenshot: port security disabled on the instance’s port]

Afterwards I was able to access all my nested nova instances. If you want, you can add even more levels of nesting, but performance degrades significantly with each additional level. I haven’t found a very compelling use case for going more than one level deep.