As OpenStack continues to take hold as a cloud computing platform, more organizations are weighing whether to seriously pursue their own cloud deployment. Of course, there are a number of important considerations — including whether to find a managed service partner or build a new environment — but once most of the key decisions are made, IT managers face a daunting educational hurdle. How can they familiarize themselves with OpenStack’s architecture and unique quirks as quickly as possible?
Reviewing the entire list of OpenStack projects is overwhelming, so we’ve compiled a list of the most important projects to help you get started. This article will provide you with a quick overview of the key OpenStack components, including terms for some virtual computing resources such as networks, processors, servers, and storage.
The First Step Toward Deployment: Understanding OpenStack Components
OpenStack consists of many components, or projects, that are needed to control the diverse hardware from different vendors that’s usually found in a modern data center. You can manage OpenStack environments through various means including command-line tools, RESTful APIs, and web-based dashboards like Horizon.
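On the RESTful API side, every interaction starts with authenticating against the Identity service (Keystone, described below). As a minimal sketch, the following Python builds the JSON body a client would POST to Keystone's v3 `/auth/tokens` endpoint; the user, password, and project names are placeholders, and the request itself is not sent here:

```python
import json

def keystone_password_auth(username, password, project, domain="Default"):
    """Build a Keystone v3 password-authentication request body,
    scoped to a project, as sent to POST /v3/auth/tokens."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"name": domain},
                }
            },
        }
    }

body = keystone_password_auth("demo", "s3cret", "demo-project")
print(json.dumps(body, indent=2))
```

A successful POST of this body returns a token (in the `X-Subject-Token` response header) that the client then presents to every other OpenStack API.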
The complexity of your OpenStack installation will vary greatly depending on which projects you use. Some components are included in every installation, others appear in the majority of deployments, and the rest should be added only when you need them for a particular purpose.
Controllers & Compute Nodes
At the highest level, the two basic elements of an OpenStack deployment are controller and compute nodes. Controller nodes run the web interface, schedulers, and APIs that control your OpenStack services; they are the “brain” of your OpenStack environment. All requests to and from the user pass through a controller node, which (unsurprisingly) controls the compute nodes.
Compute nodes, by contrast, provide the compute power for the platform’s virtual machines. Each compute node runs a hypervisor to deploy and run VMs, so every time you launch an instance, its computing resources are consumed from a compute node. Users do not interact with compute nodes directly.
An OpenStack deployment needs all of the services living on these two node types, though they are often organized quite differently depending on your chosen architecture. In particular, the controller nodes’ services are often distributed across multiple systems. For example, some platforms may have nodes dedicated to storage that provide a shared file system like NFS to the rest of the cluster. The ideal structure of an OpenStack deployment depends on the prioritization of goals such as minimizing costs or optimizing storage performance.
Core OpenStack Projects
Every OpenStack installation contains these core projects:
- Neutron provides the functionality needed to create network connections to virtual machines (VMs), both inside a tenant and outside of OpenStack.
- Glance provides the prepared images used to spin up VMs/instances; it is hardware independent because it was designed around object storage.
- Nova provides controlling and hosting functionalities for VMs/instances and is independent of the platform’s hypervisors.
- Keystone authenticates all OpenStack users to allow them to use other OpenStack services.
- RabbitMQ, the default message broker, provides the message queues that OpenStack services use to communicate with one another.
- Cinder provides block storage for VMs and is hardware independent.
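These core services cooperate every time an instance is launched. As a sketch (assuming a configured cloud with credentials already sourced; resource names like `demo-net` and the `cirros` image are placeholders), the `openstack` CLI exercises each of them in turn, with Keystone authenticating every call behind the scenes:

```shell
# Neutron: create a tenant network and a subnet on it
openstack network create demo-net
openstack subnet create demo-subnet --network demo-net \
    --subnet-range 192.168.10.0/24

# Glance: see which prepared images are available
openstack image list

# Cinder: create a 10 GB block-storage volume
openstack volume create --size 10 demo-vol

# Nova: boot an instance from an image onto the network,
# then attach the Cinder volume to it
openstack server create --flavor m1.small --image cirros \
    --network demo-net demo-vm
openstack server add volume demo-vm demo-vol
```

These commands cannot run without a reachable OpenStack cloud, so treat them as a map of which project handles which step rather than a copy-paste recipe.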
While the six projects above form the core of OpenStack, the list is far from exhaustive. Here are some additional “primary” projects that are found in most installations but are not strictly required.
- Aodh provides the alarm triggers that are needed for automating resource allocation. For example, Aodh would allow an OpenStack platform to automatically spin up a new instance of a VM when the CPU utilization of an existing VM reaches 80 percent.
- Horizon provides a GUI dashboard for users to manage their OpenStack tenant, which increases usability. Many OpenStack services can be controlled through this central dashboard, including TrilioVault.
- Heat provides the orchestration features for automating the use of OpenStack resources, while Ceilometer provides the metering functions needed to track the usage of those resources.
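The Aodh example above (fire when CPU utilization passes 80 percent) boils down to comparing a metered statistic against a threshold over a window of recent samples. A minimal, Aodh-independent sketch of that evaluation logic in Python (function and state names are illustrative, not Aodh's API):

```python
def evaluate_alarm(samples, threshold=80.0, evaluation_periods=3):
    """Return the alarm state for a series of CPU-utilization samples.

    Mirrors the shape of an Aodh threshold rule: the alarm fires only
    when the last `evaluation_periods` samples all exceed the threshold,
    which avoids triggering on a single noisy reading.
    """
    if len(samples) < evaluation_periods:
        return "insufficient data"
    window = samples[-evaluation_periods:]
    return "alarm" if all(s > threshold for s in window) else "ok"

print(evaluate_alarm([40, 85, 90, 95]))  # -> alarm (last 3 all > 80)
print(evaluate_alarm([85, 90, 40, 95]))  # -> ok (one recent sample below)
```

In a real deployment, Aodh evaluates this kind of rule against metrics collected by Ceilometer and reacts by calling a webhook, which is what lets Heat scale instances up or down automatically.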
Secondary projects are only used if they’re needed to meet the user’s business requirements.
- Sahara provides big data as a service by allowing OpenStack users to easily provision Hadoop clusters with parameters such as CPU, RAM, and disk space.
- Trove provides Database-as-a-Service (DBaaS) for both relational and non-relational database engines.
- Manila provides shared file storage for OpenStack in a vendor-neutral framework that can be used in a variety of network environments.
- Murano provides an application catalog, which is a set of predefined environments that include software packages. This functionality allows for the advanced orchestration of OpenStack services.
- Gnocchi provides time-series storage and indexing for metrics, adding metering capabilities that aren’t included with Ceilometer alone.
- Barbican provides a secure key storage and generation system to enable encryption features.
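Barbican's value is the pattern it enables: applications generate and store key material centrally and hold only an opaque reference to it, never the key itself. That pattern can be sketched with the Python standard library (this is an illustration of the idea, not Barbican's actual API; a real deployment would also check Keystone credentials on every retrieval):

```python
import secrets
import uuid

class SecretStore:
    """Toy key store illustrating the Barbican pattern: callers keep an
    opaque reference and fetch key material only when they need it."""

    def __init__(self):
        self._secrets = {}

    def generate(self, bits=256):
        """Create a random symmetric key and return its reference."""
        ref = str(uuid.uuid4())
        self._secrets[ref] = secrets.token_bytes(bits // 8)
        return ref

    def retrieve(self, ref):
        """Fetch key material by reference (Barbican would enforce
        authentication and access policy here; this sketch does not)."""
        return self._secrets[ref]

store = SecretStore()
ref = store.generate()
print(len(store.retrieve(ref)))  # -> 32 (bytes in a 256-bit key)
```

Keeping keys behind a reference like this is what lets other services (Cinder volume encryption, for example) use encryption without ever embedding key material in their own configuration.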
To learn more about each of these important OpenStack components, or to get started with OpenStack as a whole, visit the Foundation’s website.