Introduction to hardening Ansible Automation Platform

This document provides guidance for improving the security posture (referred to as “hardening” throughout this guide) of your Red Hat Ansible Automation Platform deployment on Red Hat Enterprise Linux.

Other deployment targets, such as OpenShift, are not currently within the scope of this guide. Ansible Automation Platform managed services available through cloud service provider marketplaces are also not within the scope of this guide.

This guide takes a practical approach to hardening the Ansible Automation Platform security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day two operations. As this guide specifically covers Ansible Automation Platform running on Red Hat Enterprise Linux, hardening guidance for Red Hat Enterprise Linux will be covered where it affects the automation platform components. Additional considerations with regard to the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) are provided for those organizations that integrate the DISA STIG as a part of their overall security strategy.

Note

These recommendations do not guarantee security or compliance of your deployment of Ansible Automation Platform. You must assess security based on your organization's unique requirements to address specific threats and risks, and balance these against implementation factors.

Audience

This guide is written for personnel responsible for installing, configuring, and maintaining Ansible Automation Platform 2.4 when deployed on Red Hat Enterprise Linux. Additional information is provided for security operations, compliance assessment, and other functions associated with related security processes.

Overview of Ansible Automation Platform

Ansible is an open source, command-line IT automation software application written in Python. You can use Ansible Automation Platform to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible’s main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training.

Ansible Automation Platform enhances the Ansible language with enterprise-class features, such as Role-Based Access Controls (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows. With Ansible Automation Platform you get certified content from our robust partner ecosystem; added security, reporting, and analytics; and life cycle technical support to scale automation across your organization. Ansible Automation Platform simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments.

Ansible Automation Platform components

Ansible Automation Platform is a modular platform that includes automation controller, automation hub, Event-Driven Ansible controller, and Insights for Ansible Automation Platform.

Additional resources

For more information about the components provided within Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide.


Hardening Ansible Automation Platform

This guide takes a practical approach to hardening the Ansible Automation Platform security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for the installation phase. As this guide specifically covers Ansible Automation Platform running on Red Hat Enterprise Linux, hardening guidance for Red Hat Enterprise Linux will be covered where it affects the automation platform components.

Planning considerations

When planning an Ansible Automation Platform installation, ensure that the following components are included:

  • Installer-managed components

    • Automation controller

    • Event-Driven Ansible controller

    • Private automation hub

    • PostgreSQL database (if not external)

  • External services

    • Red Hat Insights for Red Hat Ansible Automation Platform

    • Automation hub

    • registry.redhat.io (default execution environment container registry)

See the system requirements section of the Red Hat Ansible Automation Platform Planning Guide for additional information.

Ansible Automation Platform reference architecture

For large-scale production environments with availability requirements, this guide recommends deploying the components described in section 2.1 of this guide using the instructions in the reference architecture documentation for Red Hat Ansible Automation Platform on Red Hat Enterprise Linux. While some variation may make sense for your specific technical requirements, following the reference architecture results in a supported production-ready environment.

Figure 1. Reference architecture overview

Event-Driven Ansible is a new feature of Ansible Automation Platform 2.4 that was not available when the reference architecture detailed in Figure 1: Reference architecture overview was originally written. Currently, the supported configuration for Event-Driven Ansible is a single Event-Driven Ansible controller with a dedicated internal database. For an organization interested in Event-Driven Ansible, the recommendation is to install according to the configuration documented in the Ansible Automation Platform Installation Guide. This document provides additional clarifications where Event-Driven Ansible specific hardening configuration is required.

For smaller production deployments where the full reference architecture may not be needed, this guide recommends deploying Ansible Automation Platform with a dedicated PostgreSQL database server whether managed by the installer or provided externally.

Network, firewall, and network services planning for Ansible Automation Platform

Ansible Automation Platform requires network access to integrate with external auxiliary services and to manage target environments and resources such as hosts, other network devices, applications, and cloud services. The network ports and protocols section of the Ansible Automation Platform Planning Guide describes how Ansible Automation Platform components interact on the network and which ports and protocols are used, as shown in the following diagram:

Figure 2. Ansible Automation Platform network ports and protocols

When planning firewall or cloud network security group configurations related to Ansible Automation Platform, see the Network ports and protocols section of the Ansible Automation Platform Planning Guide to understand what network ports need to be opened on a firewall or security group.

For more information about using a load balancer, consult the Red Hat Knowledgebase article What ports need to be opened in the firewall for Ansible Automation Platform 2 Services?. For internet-connected systems, this article also defines the outgoing traffic requirements for services that Ansible Automation Platform can be configured to use, such as Red Hat automation hub, Red Hat Insights for Red Hat Ansible Automation Platform, Ansible Galaxy, the registry.redhat.io container image registry, and so on.

Restrict access to the ports used by the Ansible Automation Platform components to protected networks and clients. The following restrictions are highly recommended:

  • Restrict the PostgreSQL database port (5432) on the database servers so that only the other Ansible Automation Platform component servers (automation controller, automation hub, Event-Driven Ansible controller) are permitted access; a firewall sketch follows this list.

  • Restrict SSH access to the Ansible Automation Platform servers so that it is permitted only from the installation host and other trusted systems used for maintenance of the Ansible Automation Platform servers.

  • Restrict HTTPS access to the automation controller, automation hub, and Event-Driven Ansible controller from trusted networks and clients.
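As an illustration of the first restriction, the following sketch permits the PostgreSQL port only from the other platform component servers. It assumes firewalld is in use on the database server; the zone name and source addresses are examples only and must be replaced with your own component addresses.

sudo firewall-cmd --permanent --new-zone=aap-db
sudo firewall-cmd --permanent --zone=aap-db --add-source=192.0.2.10   # automation controller
sudo firewall-cmd --permanent --zone=aap-db --add-source=192.0.2.11   # private automation hub
sudo firewall-cmd --permanent --zone=aap-db --add-source=192.0.2.12   # Event-Driven Ansible controller
sudo firewall-cmd --permanent --zone=aap-db --add-port=5432/tcp
sudo firewall-cmd --reload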

DNS, NTP, and service planning

DNS

When installing Ansible Automation Platform, the installer script checks that certain infrastructure servers are defined with a Fully Qualified Domain Name (FQDN) in the installer inventory. This guide recommends that all Ansible Automation Platform infrastructure nodes have a valid FQDN defined in DNS which resolves to a routable IP address, and that these FQDNs be used in the installer inventory file.

DNS and load balancing

When using a load balancer with Ansible Automation Platform as described in the reference architecture, an additional FQDN is needed for each load-balanced component (automation controller and private automation hub).

For example, if the following hosts are defined in the Ansible Automation Platform installer inventory file:

[automationcontroller]
controller0.example.com
controller1.example.com
controller2.example.com

[automationhub]
hub0.example.com
hub1.example.com
hub2.example.com

Then the load balancer can use the FQDNs controller.example.com and hub.example.com for the user-facing name of these Ansible Automation Platform services.

When a load balancer is used in front of the private automation hub, the installer must be aware of the load balancer FQDN. Before installing Ansible Automation Platform, in the installation inventory file set the automationhub_main_url variable to the FQDN of the load balancer. For example, to match the previous example, you would set the variable to automationhub_main_url = hub.example.com.
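For example, extending the inventory above, the variable would be placed in the [all:vars] section of the installer inventory. The value shown simply mirrors the example hostnames used earlier:

[all:vars]
automationhub_main_url = hub.example.com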

NTP

Configure each server in the Ansible Automation Platform infrastructure to synchronize time with an NTP pool or your organization’s NTP service. This ensures that logging and auditing events generated by Ansible Automation Platform have an accurate time stamp, and that any scheduled jobs running from the automation controller execute at the correct time.

For information on configuring the chrony service for NTP synchronization, see Using Chrony in the Red Hat Enterprise Linux documentation.
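As a hedged illustration, a minimal /etc/chrony.conf configuration might look like the following, where the server names are placeholders for your organization's NTP servers or an approved pool:

# Example NTP sources; replace with your organization's NTP servers or pool.
server ntp1.example.com iburst
server ntp2.example.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3

After updating the configuration, restart the service with sudo systemctl restart chronyd and confirm synchronization with chronyc sources.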

User authentication planning

When planning for access to the Ansible Automation Platform user interface or API, be aware that user accounts can either be local or mapped to an external authentication source such as LDAP. This guide recommends that where possible, all primary user accounts should be mapped to an external authentication source. Using external account sources eliminates a source of error when working with permissions in this context and minimizes the amount of time devoted to maintaining a full set of users exclusively within Ansible Automation Platform. This includes accounts assigned to individual persons as well as for non-person entities such as service accounts used for external application integration. Reserve any local administrator accounts such as the default "admin" account for emergency access or "break glass" scenarios where the external authentication mechanism isn’t available.

Note

The Event-Driven Ansible controller does not currently support external authentication, only local accounts.

For user accounts on the Red Hat Enterprise Linux servers that run the Ansible Automation Platform services, follow your organizational policies to determine if individual user accounts should be local or from an external authentication source. Only users who have a valid need to perform maintenance tasks on the Ansible Automation Platform components themselves should be granted access to the underlying Red Hat Enterprise Linux servers, as the servers will have configuration files that contain sensitive information such as encryption keys and service passwords. Because these individuals must have privileged access to maintain Ansible Automation Platform services, minimizing the access to the underlying Red Hat Enterprise Linux servers is critical. Do not grant sudo access to the root account or local Ansible Automation Platform service accounts (awx, pulp, postgres) to untrusted users.

Note

The local Ansible Automation Platform service accounts such as awx, pulp, and postgres are created and managed by the Ansible Automation Platform installer. These particular accounts on the underlying Red Hat Enterprise Linux hosts cannot come from an external authentication source.

Automation controller authentication

Automation controller currently supports the following external authentication mechanisms:

  • Azure Active Directory

  • GitHub single sign-on

  • Google OAuth2 single sign-on

  • LDAP

  • RADIUS

  • SAML

  • TACACS+

  • Generic OIDC

Choose an authentication mechanism that adheres to your organization’s authentication policies, and refer to the Controller Configuration - Authentication documentation to understand the prerequisites for the relevant authentication mechanism. The authentication mechanism used must ensure that the authentication-related traffic between Ansible Automation Platform and the authentication back-end is encrypted when the traffic occurs on a public or non-secure network (for example, LDAPS or LDAP over TLS, or HTTPS for OAuth2 and SAML providers).

In automation controller, any “system administrator” account can edit, change, and update any inventory or automation definition. Restrict these account privileges to the minimum set of users possible for low-level automation controller configuration and disaster recovery.

Private automation hub authentication

Private automation hub currently supports the following external authentication mechanisms:

  • Ansible Automation Platform central authentication (based on RHSSO)

  • LDAP

For production use, LDAP is the preferred external authentication mechanism for private automation hub. Ansible Automation Platform central authentication is an option that can be deployed with the Ansible Automation Platform installer, but it only deploys one central authentication server instance, making it a potential single point of failure. Standalone mode for Ansible Automation Platform central authentication is not recommended in a production environment. However, if you already have the separate Red Hat Single Sign-On (RHSSO) product deployed in your production environment, it can be used as an external authentication source for private automation hub.

The Ansible Automation Platform Installer configures LDAP authentication for private automation hub during installation. For more information, see LDAP configuration on a private automation hub.

The following installer inventory file variables must be filled out prior to installation:

Table 1. Inventory variables for automation hub LDAP settings

Variable

Details

automationhub_authentication_backend

Set to "ldap" in order to use LDAP authentication.

automationhub_ldap_server_uri

The LDAP server URI, for example "ldap://ldap-server.example.com" or "ldaps://ldap-server.example.com:636".

automationhub_ldap_bind_dn

The account used to connect to the LDAP server. This account should be one with sufficient privileges to query the LDAP server for users and groups, but it should not be an administrator account or one with the ability to modify LDAP records.

automationhub_ldap_bind_password

The password used by the bind account to access the LDAP server.

automationhub_ldap_user_search_base_dn

The base DN used to search for users.

automationhub_ldap_group_search_base_dn

The base DN used to search for groups.

In order to ensure that LDAP traffic is encrypted between the private automation hub and the LDAP server, the LDAP server must support LDAP over TLS or LDAP over SSL (LDAPS).
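As an illustration, the following installer inventory sketch combines the variables from Table 1. All values are examples and must be replaced with your own LDAP server URI, bind account, and search base DNs:

[all:vars]
automationhub_authentication_backend = "ldap"
automationhub_ldap_server_uri = "ldaps://ldap-server.example.com:636"
automationhub_ldap_bind_dn = "cn=aap-bind,ou=service-accounts,dc=example,dc=com"
automationhub_ldap_bind_password = "<bind_password>"
automationhub_ldap_user_search_base_dn = "ou=people,dc=example,dc=com"
automationhub_ldap_group_search_base_dn = "ou=groups,dc=example,dc=com"

As described in the Sensitive variables in the installation inventory section of this guide, the bind password is better kept in an encrypted Ansible vault than in plain text in the inventory file.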

Credential management planning for Ansible Automation Platform

Automation controller uses credentials to authenticate jobs against machines, synchronize with inventory sources, and import project content from a version control system. Automation controller manages three sets of secrets:

  • User passwords for local automation controller users. See the User Authentication Planning section of this guide for additional details.

  • Secrets for automation controller operational use (database password, message bus password, and so on).

  • Secrets for automation use (SSH keys, cloud credentials, external password vault credentials, and so on).

Implementing a privileged access or credential management solution to protect credentials from compromise is a highly recommended practice. Organizations should audit the use of, and provide additional programmatic control over, access and privilege escalation.

You can further secure automation credentials by ensuring they are unique and stored only in automation controller. Services such as OpenSSH can be configured to allow credentials on connections only from specific addresses. Use different credentials for automation from those used by system administrators to log into a server. Although direct access should be limited where possible, it can be used for disaster recovery or other ad-hoc management purposes, allowing for easier auditing.

Different automation jobs might need to access a system at different levels. For example, you can have low-level system automation that applies patches and performs security baseline checking, while a higher-level piece of automation deploys applications. By using different keys or credentials for each piece of automation, the effect of any one key vulnerability is minimized. This also allows for easy baseline auditing.

Automation controller operational secrets

Automation controller contains the following secrets used operationally:

Table 2. Automation controller operational secrets

File

Details

/etc/tower/SECRET_KEY

A secret key used for encrypting automation secrets in the database. If the SECRET_KEY changes or is unknown, no encrypted fields in the database will be accessible.

/etc/tower/tower.cert

/etc/tower/tower.key

SSL certificate and key for the automation controller web service. A self-signed cert/key is installed by default; you can provide a locally appropriate certificate and key (see Installing with user-provided PKI certificates for more information).

/etc/tower/conf.d/postgres.py

Contains the password used by the automation controller to connect to the database.

/etc/tower/conf.d/channels.py

Contains the secret used by the automation controller for websocket broadcasts.

These secrets are stored unencrypted on the Automation controller server, as the automation controller service must read them all in an automated fashion at startup. All files are protected by Unix permissions, and restricted to the root user or the automation controller service user awx. These files should be routinely monitored to ensure there has been no unauthorized access or modification.
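As a hedged example of such routine monitoring, the ownership and permissions of these files can be spot-checked from the command line and compared against a recorded baseline:

sudo ls -l /etc/tower/SECRET_KEY /etc/tower/tower.cert /etc/tower/tower.key
sudo ls -l /etc/tower/conf.d/postgres.py /etc/tower/conf.d/channels.py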

Note

Automation controller was formerly named Ansible Tower. These file locations retain the previous product name.

Automation use secrets

Automation controller stores a variety of secrets in the database that are either used for automation or are a result of automation. Automation use secrets include:

  • All secret fields of all credential types (passwords, secret keys, authentication tokens, secret cloud credentials).

  • Secret tokens and passwords for external services defined in automation controller settings.

  • “password” type survey field entries.

You can grant users and teams the ability to use these credentials without actually exposing the credential to the user. This means that if a user moves to a different team or leaves the organization, you don’t have to re-key all of your systems.

Automation controller uses SSH (or the Windows equivalent) to connect to remote hosts. To pass the key from the automation controller to SSH, the key must be decrypted before it can be written to a named pipe. Automation controller then uses that pipe to send the key to SSH (so that it is never written to disk). If passwords are used, the automation controller handles those by responding directly to the password prompt and decrypting the password before writing it to the prompt.

As an administrator with superuser access, you can define a custom credential type in a standard format using a YAML/JSON-like definition, enabling the assignment of new credential types to jobs and inventory updates. This enables you to define a custom credential type that works in ways similar to existing credential types. For example, you can create a custom credential type that injects an API token for a third-party web service into an environment variable, which your playbook or custom inventory script can consume.

To encrypt secret fields, Ansible Automation Platform uses AES in CBC mode with a 256-bit key for encryption, PKCS7 padding, and HMAC using SHA256 for authentication. The encryption/decryption process derives the AES-256 bit encryption key from the SECRET_KEY, the field name of the model field, and the database-assigned auto-incremented record ID. Thus, if any attribute used in the key generation process changes, Ansible Automation Platform fails to correctly decrypt the secret. Ansible Automation Platform is designed such that the SECRET_KEY is never readable in playbooks Ansible Automation Platform launches, so that these secrets are never readable by Ansible Automation Platform users, and no secret field values are ever made available through the Ansible Automation Platform REST API. If a secret value is used in a playbook, you must use no_log on the task so that it is not accidentally logged. For more information, see Protecting sensitive data with no log.
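For example, a minimal hedged playbook task that consumes a secret value; the service URL and the api_token variable are illustrative:

- name: Configure an external service with an API token
  ansible.builtin.uri:
    url: https://service.example.com/api/configure
    method: POST
    headers:
      Authorization: "Bearer {{ api_token }}"
  no_log: true  # prevents the secret from appearing in job output or logs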

Logging and log capture

Visibility and analytics are an important pillar of enterprise security and Zero Trust Architecture. Logging is key to capturing actions and auditing. You can manage logging and auditing by using the built-in audit support described in the Auditing the system section of the Security hardening for Red Hat Enterprise Linux guide. Automation controller’s built-in logging and activity stream support logs all changes within automation controller, as well as automation logs, for auditing purposes. More detailed information is available in the Logging and Aggregation section of the automation controller documentation.

This guide recommends that you configure Ansible Automation Platform and the underlying Red Hat Enterprise Linux systems to collect logging and auditing centrally, rather than reviewing it on the local system. Automation controller must be configured to use external logging to compile log records from multiple components within the controller server. The events occurring must be time-correlated to conduct accurate forensic analysis. This means that the controller server must be configured with an NTP server that is also used by the logging aggregator service, as well as the targets of the controller. The correlation must meet certain industry tolerance requirements. In other words, there might be a varying requirement that time stamps of different logged events must not differ by any amount greater than X seconds. This capability should be available in the external logging service.

Another critical capability of logging is the ability to use cryptography to protect the integrity of log tools. Log data includes all information (for example, log records, log settings, and log reports) needed to successfully log information system activity. It is common for attackers to replace the log tools or inject code into the existing tools to hide or erase system activity from the logs. To address this risk, log tools must be cryptographically signed so that you can identify when the log tools have been modified, manipulated, or replaced. For example, one way to validate that the log tool(s) have not been modified, manipulated or replaced is to use a checksum hash against the tool file(s). This ensures the integrity of the tool(s) has not been compromised.
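As a hedged example of one such integrity check on Red Hat Enterprise Linux, the installed log tool can be verified against its signed package metadata, or a baseline checksum can be recorded for later comparison:

# Verify the rsyslog package files (size, mode, digest) against the signed RPM database.
sudo rpm -V rsyslog
# Record a baseline checksum of the binary for later comparison.
sha256sum /usr/sbin/rsyslogd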

Auditing and incident detection

Ansible Automation Platform should be used to fulfill security policy requirements by applying the NIST Cybersecurity Framework for common use cases, such as:

  • Requiring HTTPS for web servers on Red Hat Enterprise Linux.

  • Requiring TLS encryption for internal communication between web servers and database servers on Red Hat Enterprise Linux.

  • Generating reports showing that the policy is properly deployed.

  • Monitoring for drift that violates the policy.

  • Automating correction of any policy violation.

This can be done through 5 steps of the cybersecurity framework:

IDENTIFY

Define the requirements to be implemented according to the security policy.

PROTECT

Implement and apply the requirements as an Ansible playbook.

DETECT

Monitor for drift and generate an audit report.

RESPOND

Explore actions that could be taken when an incident is detected.

RECOVER

Use Ansible to restore the systems to the known good configuration.

Red Hat Enterprise Linux host planning

The security of Ansible Automation Platform relies in part on the configuration of the underlying Red Hat Enterprise Linux servers. For this reason, the underlying Red Hat Enterprise Linux hosts for each Ansible Automation Platform component must be installed and configured in accordance with the Security hardening for Red Hat Enterprise Linux 8 or Security hardening for Red Hat Enterprise Linux 9 (depending on which operating system will be used), as well as any security profile requirements (CIS, STIG, HIPAA, and so on) used by your organization.

Note that applying certain security controls from the STIG or other security profiles may conflict with Ansible Automation Platform support requirements. Some examples are listed in the Automation controller STIG considerations section, although this is not an exhaustive list. To maintain a supported configuration, be sure to discuss any such conflicts with your security auditors so the Ansible Automation Platform requirements are understood and approved.

Ansible Automation Platform and additional software

When installing the Ansible Automation Platform components on Red Hat Enterprise Linux servers, the Red Hat Enterprise Linux servers should be dedicated to that use alone. Additional server capabilities should not be installed in addition to Ansible Automation Platform, as this is an unsupported configuration and may affect the security and performance of the Ansible Automation Platform software.

Similarly, when Ansible Automation Platform is deployed on a Red Hat Enterprise Linux host, it installs software like the nginx web server, the Pulp software repository, and the PostgreSQL database server. This software should not be modified or used in a more generic fashion (for example, do not use nginx to serve additional web site content or PostgreSQL to host additional databases) as this is an unsupported configuration and may affect the security and performance of Ansible Automation Platform. The configuration of this software is managed by the Ansible Automation Platform installer, and any manual changes might be undone when performing upgrades.

Installation

There are installation-time decisions that affect the security posture of Ansible Automation Platform. The installation process includes setting a number of variables, some of which are relevant to the hardening of the Ansible Automation Platform infrastructure. Before installing Ansible Automation Platform, consider the guidance in the installation section of this guide.

Install from a dedicated installation host

The Ansible Automation Platform installer can be run from one of the infrastructure servers, such as an automation controller, or from an external system that has SSH access to the Ansible Automation Platform infrastructure servers. The Ansible Automation Platform installer is used not only for installation, but also for subsequent day-two operations, such as backup and restore, as well as upgrades. This guide recommends performing installation and day-two operations from a dedicated external server, hereafter referred to as the installation host. Doing so eliminates the need to log in to one of the infrastructure servers to run these functions. The installation host must only be used for management of Ansible Automation Platform and must not run any other services or software.

The installation host must be a Red Hat Enterprise Linux server that has been installed and configured in accordance with Security hardening for Red Hat Enterprise Linux and any security profile requirements relevant to your organization (CIS, STIG, and so on). Obtain the Ansible Automation Platform installer as described in the Ansible Automation Platform Planning Guide, and create the installer inventory file as described in the Ansible Automation Platform Installation Guide. This inventory file is used for upgrades, adding infrastructure components, and day-two operations by the installer, so preserve the file after installation for future operational use.

Access to the installation host must be restricted only to those personnel who are responsible for managing the Ansible Automation Platform infrastructure. Over time, it will contain sensitive information, such as the installer inventory (which contains the initial login credentials for Ansible Automation Platform), copies of user-provided PKI keys and certificates, backup files, and so on. The installation host must also be used for logging in to the Ansible Automation Platform infrastructure servers through SSH when necessary for infrastructure management and maintenance.

Security-relevant variables in the installation inventory

The installation inventory file defines the architecture of the Ansible Automation Platform infrastructure, and provides a number of variables that can be used to modify the initial configuration of the infrastructure components. For more information on the installer inventory, see the Ansible Automation Platform Installation Guide.

The following table lists a number of security-relevant variables and their recommended values for creating the installation inventory.

Table 3. Security-relevant inventory variables

Variable

Recommended Value

Details

postgres_use_ssl

true

The installer configures the installer-managed Postgres database to accept SSL-based connections when this variable is set.

pg_sslmode

verify-full

By default, when the controller connects to the database, it tries an encrypted connection, but it is not enforced. Setting this variable to "verify-full" requires a mutual TLS negotiation between the controller and the database. The postgres_use_ssl variable must also be set to "true" for this pg_sslmode to be effective.

NOTE: If a third-party database is used instead of the installer-managed database, the third-party database must be set up independently to accept mTLS connections.

nginx_disable_https

false

If set to "true", this variable disables HTTPS connections to the controller. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false".

automationhub_disable_https

false

If set to "true", this variable disables HTTPS connections to the private automation hub. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false".

automationedacontroller_disable_https

false

If set to "true", this variable disables HTTPS connections to the Event-Driven Ansible controller. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false".

In scenarios such as the reference architecture where a load balancer is used with multiple controllers or hubs, SSL client connections can be terminated at the load balancer or passed through to the individual Ansible Automation Platform servers. If SSL is being terminated at the load balancer, this guide recommends that the traffic gets re-encrypted from the load balancer to the individual Ansible Automation Platform servers, to ensure that end-to-end encryption is in use. In this scenario, the *_disable_https variables listed in Table 3 would remain the default value of "false".

Note

This guide recommends using an external database in production environments, but for development and testing scenarios the database could be co-located on the automation controller. Due to current PostgreSQL 13 limitations, setting pg_sslmode = verify-full when the database is co-located on the automation controller results in an error validating the host name during TLS negotiation. Until this issue is resolved, an external database must be used to ensure mutual TLS authentication between the automation controller and the database.

Installing with user-provided PKI certificates

By default, Ansible Automation Platform creates self-signed PKI certificates for the infrastructure components of the platform. Where an existing PKI infrastructure is available, certificates must be generated for the automation controller, private automation hub, Event-Driven Ansible controller, and the postgres database server. Copy the certificate files and their relevant key files to the installer directory, along with the CA certificate used to verify the certificates.

Use the following inventory variables to configure the infrastructure components with the new certificates.

Table 4. PKI certificate inventory variables

Variable

Details

custom_ca_cert

The file name of the CA certificate located in the installer directory.

web_server_ssl_cert

The file name of the automation controller PKI certificate located in the installer directory.

web_server_ssl_key

The file name of the automation controller PKI key located in the installer directory.

automationhub_ssl_cert

The file name of the private automation hub PKI certificate located in the installer directory.

automationhub_ssl_key

The file name of the private automation hub PKI key located in the installer directory.

postgres_ssl_cert

The file name of the database server PKI certificate located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used.

postgres_ssl_key

The file name of the database server PKI key located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used.

automationedacontroller_ssl_cert

The file name of the Event-Driven Ansible controller PKI certificate located in the installer directory.

automationedacontroller_ssl_key

The file name of the Event-Driven Ansible controller PKI key located in the installer directory.

When multiple automation controllers are deployed with a load balancer, the web_server_ssl_cert and web_server_ssl_key are shared by each controller. To prevent hostname mismatches, the certificate’s Common Name (CN) must match the DNS FQDN used by the load balancer. This also applies when deploying multiple private automation hubs with the automationhub_ssl_cert and automationhub_ssl_key variables. If your organizational policies require unique certificates for each service, each certificate requires a Subject Alt Name (SAN) that matches the DNS FQDN used for the load-balanced service. To install unique certificates and keys on each automation controller, the certificate and key variables in the installation inventory file must be defined as per-host variables instead of in the [all:vars] section. For example:

[automationcontroller]
controller0.example.com web_server_ssl_cert=/path/to/cert0 web_server_ssl_key=/path/to/key0
controller1.example.com web_server_ssl_cert=/path/to/cert1 web_server_ssl_key=/path/to/key1
controller2.example.com web_server_ssl_cert=/path/to/cert2 web_server_ssl_key=/path/to/key2
[automationhub]
hub0.example.com automationhub_ssl_cert=/path/to/cert0 automationhub_ssl_key=/path/to/key0
hub1.example.com automationhub_ssl_cert=/path/to/cert1 automationhub_ssl_key=/path/to/key1
hub2.example.com automationhub_ssl_cert=/path/to/cert2 automationhub_ssl_key=/path/to/key2

Sensitive variables in the installation inventory

The installation inventory file contains a number of sensitive variables, mainly those used to set the initial passwords used by Ansible Automation Platform, which are normally kept in plain text in the inventory file. To prevent unauthorized viewing of these variables, you can keep these variables in an encrypted Ansible vault. To do this, go to the installer directory and create a vault file:

  • cd /path/to/ansible-automation-platform-setup-bundle-2.4-1-x86_64

  • ansible-vault create vault.yml

You will be prompted for a password to the new Ansible vault. Do not lose the vault password because it is required every time you need to access the vault file, including during day-two operations and performing backup procedures. You can secure the vault password by storing it in an encrypted password manager or in accordance with your organizational policy for storing passwords securely.

Add the sensitive variables to the vault, for example:

admin_password: <secure_controller_password>
pg_password: <secure_db_password>
automationhub_admin_password: <secure_hub_password>
automationhub_pg_password: <secure_hub_db_password>
automationhub_ldap_bind_password: <ldap_bind_password>
automationedacontroller_admin_password: <secure_eda_password>
automationedacontroller_pg_password: <secure_eda_db_password>

Make sure these variables are not also present in the installation inventory file. To use the new Ansible vault with the installer, run it with the command ./setup.sh -e @vault.yml -- --ask-vault-pass.

Automation controller STIG considerations

For organizations that use the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) as a part of their overall security strategy, a STIG for the Ansible Automation Platform automation controller is now available. The STIG only covers the automation controller component of Ansible Automation Platform at this time. When applying the STIG to an automation controller, there are a number of considerations to keep in mind.

The automation controller STIG overview document states that it is meant to be used in conjunction with the STIG for Red Hat Enterprise Linux 8. This version of the automation controller STIG was released prior to a STIG for Red Hat Enterprise Linux 9 being available, so Red Hat Enterprise Linux 8 should be used as the underlying host OS when applying the automation controller STIG. Certain Red Hat Enterprise Linux 8 STIG controls will conflict with Ansible Automation Platform installation and operation, which can be mitigated as described in the following sections.

Fapolicyd

The Red Hat Enterprise Linux 8 STIG requires the fapolicyd daemon to be running. However, Ansible Automation Platform is not currently supported when fapolicyd is enforcing policy, as this causes failures during the installation and operation of Ansible Automation Platform. Because of this, the installer runs a pre-flight check that halts installation if it discovers that fapolicyd is enforcing policy. This guide recommends setting fapolicyd to permissive mode on the automation controller using the following steps:

  1. Edit the file /etc/fapolicyd/fapolicyd.conf and set "permissive = 1".

  2. Restart the service with the command sudo systemctl restart fapolicyd.service.

In environments where STIG controls are routinely audited, discuss waiving the fapolicyd-related STIG controls with your security auditor.

Note

If the Red Hat Enterprise Linux 8 STIG is also applied to the installation host, the default fapolicyd configuration causes the Ansible Automation Platform installer to fail. In this case, the recommendation is to set fapolicyd to permissive mode on the installation host.

File systems mounted with "noexec"

The Red Hat Enterprise Linux 8 STIG requires that a number of file systems are mounted with the noexec option to prevent execution of binaries located in these file systems. The Ansible Automation Platform installer runs a preflight check that will fail if any of the following file systems are mounted with the noexec option:

  • /tmp

  • /var

  • /var/tmp

To install Ansible Automation Platform, you must re-mount these file systems with the noexec option removed. Once installation is complete, proceed with the following steps:

  1. Reapply the noexec option to the /tmp and /var/tmp file systems.

  2. Change the automation controller job execution path from /tmp to an alternate directory that does not have the noexec option enabled. To make this change, log in to the automation controller UI as an administrator, navigate to menu:Settings[Jobs], and change the "Job execution path" setting to the alternate directory.

During normal operations, the file system which contains the /var/lib/awx subdirectory (typically /var) must not be mounted with the noexec option, or the automation controller cannot run automation jobs in execution environments.

In environments where STIG controls are routinely audited, discuss waiving the STIG controls related to file system noexec with your security auditor.

User namespaces

The Red Hat Enterprise Linux 8 STIG requires that the kernel setting user.max_user_namespaces is set to "0", but only if Linux containers are not in use. Because Ansible Automation Platform uses containers as part of its execution environment capability, this STIG control does not apply to the automation controller.

To check the user.max_user_namespaces kernel setting, complete the following steps:

  1. Log in to your automation controller at the command line.

  2. Run the command sudo sysctl user.max_user_namespaces.

  3. If the output indicates that the value is zero, look at the contents of the file /etc/sysctl.conf and all files under /etc/sysctl.d/, edit the file containing the user.max_user_namespaces setting, and set the value to "65535".

  4. To apply this new value, run the command sudo sysctl -p <file>, where <file> is the file just modified.

  5. Re-run the command sudo sysctl user.max_user_namespaces and verify that the value is now set to "65535".
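As a hedged example, the persistent setting might be kept in a sysctl drop-in file; the file name below is illustrative:

# /etc/sysctl.d/99-aap-userns.conf
user.max_user_namespaces = 65535

Apply it with sudo sysctl -p /etc/sysctl.d/99-aap-userns.conf.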

Sudo and NOPASSWD

The Red Hat Enterprise Linux 8 STIG requires that all users with sudo privileges must provide a password (that is, the "NOPASSWD" directive must not be used in a sudoers file). The Ansible Automation Platform installer runs many tasks as a privileged user, and by default expects to be able to elevate privileges without a password. To provide a password to the installer for elevating privileges, append the following options when launching the installer script: ./setup.sh <setup options> -- --ask-become-pass.

This also applies when running the installer script for day-two operations such as backup and restore.

Initial configuration

Granting access to certain parts of the system exposes security vulnerabilities. Apply the following practices to help secure access:

  • Minimize access to system administrative accounts. There is a difference between the user interface (web interface) and access to the operating system that the automation controller is running on. A system administrator or root user can access, edit, and disrupt any system application. Anyone with root access to the controller has the potential ability to decrypt the credentials stored there, so minimizing access to system administrative accounts is crucial for maintaining a secure system.

  • Minimize local system access. Automation controller should not require local user access except for administrative purposes. Non-administrator users should not have access to the controller system.

  • Enforce separation of duties. Different components of automation may need to access a system at different levels. Use different keys or credentials for each component so that the effect of any one key or credential vulnerability is minimized.

  • Restrict automation controller to the minimum set of users possible for low-level controller configuration and disaster recovery only. In a controller context, any controller ‘system administrator’ or ‘superuser’ account can edit, change, and update any inventory or automation definition in the controller.

Use infrastructure as code paradigm

The Red Hat Community of Practice has created a set of automation content available via collections to manage Ansible Automation Platform infrastructure and configuration as code. This enables automation of the platform itself through Infrastructure as Code (IaC) or Configuration as Code (CaC). While many of the benefits of this approach are clear, there are critical security implications to consider.

The following Ansible content collections are available for managing Ansible Automation Platform components using an infrastructure as code methodology, all of which are found on the Ansible Automation Hub:

Table 5. Ansible content collections

Validated Collection

Collection Purpose

infra.aap_utilities

Ansible content for automating day 1 and day 2 operations of Ansible Automation Platform, including installation, backup and restore, certificate management, and more.

infra.controller_configuration

A collection of roles to manage automation controller components, including managing users and groups (RBAC), projects, job templates and workflows, credentials, and more.

infra.ah_configuration

Ansible content for interacting with automation hub, including users and groups (RBAC), collection upload and management, collection approval, managing the execution environment image registry, and more.

infra.ee_utilities

A collection of roles for creating and managing execution environment images, or migrating from the older Tower virtualenvs to execution environments.

Many organizations use CI/CD platforms to configure pipelines or other methods to manage this type of infrastructure. However, a webhook can be configured natively in Ansible Automation Platform to link a Git-based repository, so that Ansible responds to Git events and triggers job templates directly. This removes the need for external CI components from this overall process and thus reduces the attack surface.

These practices allow version control of all infrastructure and configuration. Apply Git best practices to ensure proper code quality inspection prior to being synchronized into Ansible Automation Platform. Relevant Git best practices include the following:

  • Creating pull requests.

  • Ensuring that inspection tools are in place.

  • Ensuring that no plain text secrets are committed.

  • Ensuring that pre-commit hooks and any other policies are followed.

IaC also encourages the use of external vault systems, which removes the need to store sensitive data in the repository or to vault individual files as needed. For more information on using external vault systems, see the External credential vault considerations section of this guide.

Controller configuration

Configure centralized logging

A critical capability of logging is the ability for the automation controller to detect and take action to mitigate a failure, such as reaching storage capacity, which by default shuts down the controller. This guide recommends that the application server be part of a high availability system. When this is the case, automation controller will take the following steps to mitigate failure:

  • If the failure was caused by the lack of log record storage capacity, the application must continue generating log records if possible (automatically restarting the log service if necessary), overwriting the oldest log records in a first-in-first-out manner.

  • If log records are sent to a centralized collection server and communication with this server is lost or the server fails, the application must queue log records locally until communication is restored or until the log records are retrieved manually. Upon restoration of the connection to the centralized collection server, action must be taken to synchronize the local log data with the collection server.

The administrator must check the rsyslog configuration on each automation controller host to verify the log rollover against an organizationally defined log capture size. To do this, complete the following steps on each automation controller, and correct the configuration as required:

  1. Check the LOG_AGGREGATOR_MAX_DISK_USAGE_GB field in the automation controller configuration. On the host, execute:

    awx-manage print_settings LOG_AGGREGATOR_MAX_DISK_USAGE_GB

    If this field is not set to the organizationally defined log capture size, then follow the configuration steps.

  2. Check the LOG_AGGREGATOR_MAX_DISK_USAGE_PATH field in the automation controller configuration to verify that the log file location is /var/lib/awx. On the host, execute:

    awx-manage print_settings LOG_AGGREGATOR_MAX_DISK_USAGE_PATH

    If this field is not set to /var/lib/awx, then follow these configuration steps:

    1. Open a web browser and navigate to https://<automation controller server>/api/v2/settings/logging/, where <automation controller server> is the fully-qualified hostname of your automation controller. If the btn:[Log In] option is displayed, click it, log in as an automation controller administrator account, and continue.

    2. In the Content section, modify the following values, then click btn:[PUT]:

      • LOG_AGGREGATOR_MAX_DISK_USAGE_GB = <new log buffer in GB>

      • LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = /var/lib/awx

    Note that this change will need to be made on each automation controller in a load-balanced scenario.
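If you prefer to make the same change from the command line, the following is a hedged sketch against the same API endpoint. The host name, credentials, and buffer size are illustrative, and -k is only appropriate while the default self-signed certificate is in use:

curl -k -X PATCH https://controller.example.com/api/v2/settings/logging/ \
     -u admin:<admin_password> \
     -H "Content-Type: application/json" \
     -d '{"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 10, "LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx"}'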

All user session data must be logged to support troubleshooting, debugging and forensic analysis for visibility and analytics. Without this data from the controller’s web server, important auditing and analysis for event investigations will be lost. To verify that the system is configured to ensure that user session data is logged, use the following steps:

For each automation controller host, navigate to menu:Settings[System > Miscellaneous System] in the console.

  1. Click btn:[Edit].

  2. Set the following:

    • Enable Activity Stream = On

    • Enable Activity Stream for Inventory Sync = On

    • Organization Admins Can Manage Users and Teams = Off

    • All Users Visible to Organization Admins = On

  3. Click btn:[Save]

To set up logging to any of the aggregator types, read the documentation on supported log aggregators and configure your log aggregator using the following steps:

  1. Navigate to Ansible Automation Platform.

  2. Click btn:[Settings].

  3. Under the list of System options, select Logging settings.

  4. At the bottom of the Logging settings screen, click btn:[Edit].

  5. Set the configurable options from the fields provided:

    • Enable External Logging: Click the toggle button to btn:[ON] if you want to send logs to an external log aggregator. The UI requires the Logging Aggregator and Logging Aggregator Port fields to be filled in before this can be done.

    • Logging Aggregator: Enter the hostname or IP address to which you want to send logs.

    • Logging Aggregator Port: Specify the port for the aggregator if it requires one.

    • Logging Aggregator Type: Select the aggregator service from the drop-down menu:

      • Splunk

      • Loggly

      • Sumologic

      • Elastic stack (formerly ELK stack)

    • Logging Aggregator Username: Enter the username of the logging aggregator if required.

    • Logging Aggregator Password/Token: Enter the password of the logging aggregator if required.

    • Log System Tracking Facts Individually: Click the tooltip icon for additional information to help you decide whether to turn this option on or leave it off (the default).

    • Logging Aggregator Protocol: Select a connection type (protocol) to communicate with the log aggregator. Subsequent options vary depending on the selected protocol.

    • Logging Aggregator Level Threshold: Select the level of severity you want the log handler to report.

    • TCP Connection Timeout: Specify the connection timeout in seconds. This option is only applicable to HTTPS and TCP log aggregator protocols.

    • Enable/disable HTTPS certificate verification: Certificate verification is enabled by default for HTTPS log protocol. Click the toggle button to btn:[OFF] if you do not want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection.

    • Loggers to Send Data to the Log Aggregator Form: All four types of data are pre-populated by default. Click the tooltip icon next to the field for additional information on each data type. Delete the data types you do not want.

    • Log Format For API 4XX Errors: Configure a specific error message.

  6. Click btn:[Save] to apply the settings or btn:[Cancel] to abandon the changes.

  7. To verify that your configuration is set up correctly, click btn:[Save] first, then click btn:[Test]. This sends a test log message to the log aggregator using the current logging configuration in the automation controller. Check that this test message was received by your external log aggregator.

An automation controller account is automatically created for any user who logs in with an LDAP username and password. These users can automatically be placed into organizations as regular users or organization administrators. This means that logging should be turned on when LDAP integration is in use. You can enable logging messages for the SAML adapter in the same way you can enable logging for LDAP.

To enable logging for LDAP, set the logging level to DEBUG in the Settings configuration window using the following steps:

  1. Click btn:[Settings] from the left navigation pane and select Logging settings from the System list of options.

  2. Click btn:[Edit].

  3. Set the Logging Aggregator Level Threshold field to Debug.

  4. Click btn:[Save] to save your changes.

Configure an external authentication source

As noted in the User authentication planning section, external authentication is recommended for user access to the automation controller. After you choose the authentication type that best suits your needs, navigate to menu:Settings[Authentication] in the automation controller UI, click on the relevant link for your authentication back-end, and follow the relevant instructions for configuring the authentication connection.

When using LDAP for external authentication with the automation controller, navigate to menu:Settings[Authentication > LDAP] settings on the automation controller and ensure that one of the following is configured:

  • For LDAP over SSL, the LDAP Server URI setting must begin with ldaps:// and use port 636, for example ldaps://ldap-server.example.com:636.

  • For LDAP over TLS, the LDAP Start TLS setting must be set to "On".

External credential vault considerations

Secrets management is an essential component of maintaining a secure automation platform. We recommend the following secrets management practices:

  • Ensure that there are no unauthorized users with access to the system, and ensure that only users who require access are granted it. Automation controller encrypts sensitive information such as passwords and API tokens, but also stores the key to decryption. Authorized users potentially have access to everything.

  • Use an external system to manage secrets. In cases where credentials need to be updated, an external system can retrieve updated credentials with less complexity than an internal system. External systems for managing secrets include CyberArk, HashiCorp Vault, Microsoft Azure Key Management, and others. For more information, see the Secret Management System section of the Automation controller User Guide v4.4.

Day two operations

Day two operations include cluster health and scaling checks, including host, project, and environment-level sustainment. You should continually analyze the deployment for configuration and security drift.

RBAC considerations

As an administrator, you can use the Role-Based Access Controls (RBAC) built into automation controller to delegate access to server inventories, organizations, and more. Administrators can also centralize the management of various credentials, allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow the controller to help you increase security and streamline management.

RBAC is the practice of granting roles to users or teams. RBAC is easiest to think of in terms of roles, which define precisely who or what can see, change, or delete an “object” for which a specific capability is being set.

There are a few main concepts that you should become familiar with regarding automation controller’s RBAC design: roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role or with any “descendant” roles.

A role is essentially a collection of capabilities. Users are granted access to these capabilities and the controller’s resources through the roles to which they are assigned or through roles inherited through the role hierarchy.

Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles.

For further detail on Role Hierarchy, access inheritance, Built in Roles, permissions, personas, Role Creation, and so on see Role-Based Access Controls.

The following is an example of an organization with roles and resource permissions:

Reference architecture for an example of an organization with roles and resource permissions.
Figure 3. RBAC role scopes within automation controller

User access is based on managing permissions to system objects (users, groups, namespaces) rather than by assigning permissions individually to specific users. You can assign permissions to the groups you create. You can then assign users to these groups. This means that each user in a group has the permissions assigned to that group.

Groups created in Automation Hub can range from system administrators responsible for governing internal collections, configuring user access, and managing repositories, to groups with access to organize and upload internally developed content to Automation Hub. For more information, see Automation Hub permissions.

View-only access can be enabled for further lockdown of the private automation hub. By enabling view-only access, you can grant access for users to view collections or namespaces on your private automation hub without the need for them to log in. View-only access allows you to share content with unauthorized users while restricting their ability to only view or download source code, without permissions to edit anything on your private automation hub. Enable view-only access for your private automation hub by editing the inventory file found on your Red Hat Ansible Automation Platform installer.

Updates and upgrades

All upgrades should be no more than two major versions behind what you are currently upgrading to. For example, to upgrade to automation controller 4.3, you must first be on version 4.1.x because there is no direct upgrade path from version 3.8.x or earlier. Refer to Upgrading to Ansible Automation Platform for additional information. To run automation controller 4.3, you must also have Ansible 2.12 or later.

Automation controller STIG considerations

Automation controller must install security-relevant software updates within the time period specified by your organizational policy and any security profiles you require to maintain the integrity and confidentiality of the system and its organizational assets.

Security flaws with software applications are discovered daily. Red Hat constantly updates and patches automation controller to address newly discovered security vulnerabilities. Organizations (including any contractor to the organization) are required to promptly install security-relevant software updates (for example, patches, service packs, and hot fixes). Flaws discovered during security assessments, continuous monitoring, incident response activities, or information system error handling must also be addressed expeditiously.

As a system administrator for each automation controller host, perform the following:

  1. Inspect the status of the DNF Automatic timer:

    systemctl status dnf-automatic.timer

  2. If Active: active is not included in the output, this is a finding.

  3. Inspect the configuration of DNF Automatic:

    grep apply_updates /etc/dnf/automatic.conf

  4. If apply_updates = yes is not displayed, this is a finding.

  5. Install and enable DNF Automatic:

    dnf install dnf-automatic
    systemctl enable --now dnf-automatic.timer

  6. Modify /etc/dnf/automatic.conf and set apply_updates = yes.
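
After completing the remediation steps, you can repeat the checks from steps 1 through 4 to confirm the result. A combined spot check might look like the following:

$ systemctl is-active dnf-automatic.timer && grep '^apply_updates' /etc/dnf/automatic.conf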

All automation controller nginx front-end web server files must be verified for their integrity (e.g., checksums and hashes) before becoming part of the production web server. Verifying that a patch, upgrade, certificate, and so on, being added to the web server is unchanged from the producer of the file is essential for file validation and nonrepudiation of the information. The automation controller nginx web server host must have a mechanism to verify that files are valid prior to installation.

As a System Administrator, for each automation controller nginx web server host, perform the following:

  1. Verify the integrity of the automation controller nginx web server hosts files:

    aide --check

  2. Verify the displayed checksums against previously recorded checksums of the Advanced Intrusion Detection Environment (AIDE) database.

  3. If there are any unauthorized or unexplained changes against previous checksums, this is a finding.

As a System Administrator, for each automation controller nginx web server host, perform the following:

  1. Check for existing or install AIDE:

    yum install -y aide

  2. Create or update the AIDE database immediately after initial installation of each automation controller nginx web server host:

    aide --init && mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

  3. Accept any expected changes to the host by updating the AIDE database:

    aide --update

  4. The output will provide checksums for the AIDE database. Save in a protected location.
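
Recurring integrity checks should follow your organizational policy. As one possible approach, the following line appended to /etc/crontab runs aide --check daily at 04:05:

echo '05 4 * * * root /usr/sbin/aide --check' >> /etc/crontab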

Automation controller nginx web server accounts that are not used by installed features (for example, tools, utilities, or specific services) must not be created, and must be deleted when the web server feature is uninstalled. Unused web server accounts become stale over time and are not maintained, and accounts that are not going to be used should never be created in the first place. Both situations create an opportunity for web server exploitation.

When accounts used for web server features such as documentation, sample code, example applications, tutorials, utilities, and services are created, even though the feature is not installed, they become an exploitable threat to a web server. These accounts become inactive and are not monitored through regular use, and passwords for the accounts are not created or updated. An attacker can use these accounts to gain access to the web server and begin investigating ways to elevate the account privileges.

The accounts used for all automation controller nginx web server features not installed must not be created and must be deleted when these features are uninstalled.

As a System Administrator for each automation controller nginx web server, perform the following:

  1. Examine nginx users in /etc/passwd.

  2. Verify a single user nginx exists using the command:

    [ "$(grep -c nginx /etc/passwd)" -eq 1 ] || echo FAILED

  3. If FAILED is displayed, this is a finding.

As a System Administrator for each automation controller nginx web server, perform the following:

  1. Reinstall automation controller if no nginx users exist in /etc/passwd.

  2. Review all users enumerated in /etc/passwd, and remove any that are not attributable to Red Hat Enterprise Linux or automation controller and/or organizationally disallowed.

The automation controller nginx web server is configured to check for and install security-relevant software updates from an authoritative source within an organizationally identified time period from the availability of the update. By default, this time period will be every 24 hours.

As a System Administrator for each automation controller nginx web server host, perform the following:

  1. Verify the system is configured to receive updates from an organizationally defined source for authoritative system updates:

    yum -v repolist

  2. If each URL is not valid and consistent with organizationally defined requirements, this is a finding.

  3. If each repository is not enabled in accordance with organizationally defined requirements, this is a finding.

  4. If the system is not configured to automatically receive and apply system updates from this source at least every 30 days, or manually receive and apply updates at least every 30 days, this is a finding.

As a system administrator, for each automation controller nginx web server host, perform the following:

  1. Either configure update repositories in accordance with organizationally defined requirements or subscribe to Red Hat update repositories for the underlying operating system.

  2. Execute an update from these repositories:

    $ yum update -y

  3. Perform one of the following:

    1. Schedule an update to occur every 30 days, or in accordance with organizationally defined policy:

      $ yum install -y dnf-automatic
      $ sed -i '/apply_updates/s/no/yes/' /etc/dnf/automatic.conf
      $ sed -i '/OnCalendar/s/^OnCalendar\s*=.*/OnCalendar=*-*-1 6:00/' /usr/lib/systemd/system/dnf-automatic.timer
      $ systemctl enable --now dnf-automatic.timer

    2. Schedule manual updates to occur at least every 30 days, or in accordance with organizationally defined policy.

  4. Restart the automation controller nginx web server host.
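
To confirm the schedule the timer will use, you can list it; the NEXT and LEFT columns show when the next automatic update run is due:

$ systemctl list-timers dnf-automatic.timer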

Disaster recovery and continuity of operations

Taking regular backups of Ansible Automation Platform is a critical part of disaster recovery planning. Both backups and restores are performed using the installer, so these actions should be performed from the dedicated installation host described earlier in this document. Refer to the Backing Up and Restoring section of the automation controller documentation for further details on how to perform these operations.

An important aspect of backups is that they contain a copy of the database as well as the secret key used to decrypt credentials stored in the database, so the backup files should be stored in a secure, encrypted location. This ensures that access to endpoint credentials is properly protected. Access to backups should be limited to only those Ansible Automation Platform administrators who have root shell access to automation controller and the dedicated installation host.
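
For example, if your organization does not already provide an encrypted storage location, one simple approach is to symmetrically encrypt the backup archive before moving it off the installation host (the file name below is a placeholder):

$ gpg --symmetric --cipher-algo AES256 automation-platform-backup.tar.gz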

The two main reasons an Ansible Automation Platform administrator needs to back up their Ansible Automation Platform environment are:

  • To save a copy of the data from your Ansible Automation Platform environment, so you can restore it if needed.

  • To use the backup to restore the environment into a different set of servers if you’re creating a new Ansible Automation Platform cluster or preparing for an upgrade.

In all cases, the recommended and safest process is to always use the same versions of PostgreSQL and Ansible Automation Platform to back up and restore the environment.

Using some redundancy on the system is highly recommended. If the secrets system is down, the automation controller cannot fetch the information and can fail in a way that would be recoverable once the service is restored. If you believe the SECRET_KEY automation controller generated for you has been compromised and has to be regenerated, you can run a tool from the installer that behaves much like the automation controller backup and restore tool.

To generate a new secret key, perform the following steps:

  1. Backup your Ansible Automation Platform database before you do anything else! Follow the procedure described in the Backing Up and Restoring Controller section.

  2. Using the inventory from your install (same inventory with which you run backups/restores), run setup.sh -k.

A backup copy of the prior key is saved in /etc/tower/.
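
The following is a minimal sketch of that sequence, run from the installer directory on the dedicated installation host with the same inventory file; the backup destination path is only an example:

$ ./setup.sh -e 'backup_dest=/var/backups/aap/' -b
$ ./setup.sh -k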

Ansible Automation Platform as a security enabling tool

The automation savings planner allows you to scope the size of your automation work by creating automation savings plans. You can create an automation savings plan by outlining a list of steps needed to fully automate your work. Each savings plan also provides tangible metrics, such as estimated job duration or completed tasks, for you to better understand and compare your automation job.

The following procedures introduce ways to get started with your automation savings plans, by creating new plans or managing existing plans for your automation needs.

About the automation savings planner

An automation savings plan gives you the ability to plan, track, and analyze the potential efficiency and cost savings of your automation initiatives. Use Red Hat Insights for Red Hat Ansible Automation Platform to create an automation savings plan by defining a list of tasks needed to complete an automation job. You can then link your automation savings plans to an Ansible job template in order to accurately measure the time and cost savings upon completion of an automation job.

In doing so, you can utilize the automation savings planner to prioritize the various automation jobs throughout your organization and understand the potential time and cost savings from your automation initiatives.

Creating a new automation savings plan

Create an automation savings plan by defining the tasks needed to complete an automation job using the automation savings planner.

  • The details you provide when creating a savings plan, namely the number of hosts and the manual duration, will be used to calculate your savings from automating this plan. See this section for more information.

Procedure
  1. Navigate to menu:Red Hat Insights[Savings Planner].

  2. Click btn:[Add Plan].

  3. Provide some information about your automation job:

    1. Enter descriptive information, such as a name, description, and type of automation.

    2. Enter technical information, such as the number of hosts, the duration to manually complete this job, and how often you complete this job.

    3. Click btn:[Next].

  4. In the tasks section, list the tasks needed to complete this plan:

    1. Enter each task in the field, then click btn:[Add].

    2. Rearrange tasks by dragging the item up/down the tasks list.

    3. Click btn:[Next].

Note

The task list is for your planning purposes only, and does not currently factor into your automation savings calculation.

  5. Select a template to link to this plan, then click btn:[Save].

Your new savings plan is now created and displayed on the automation savings planner list view.

Edit an existing savings plan

Edit any information about an existing savings plan by clicking on it from the savings planner list view.

Procedure
  1. Navigate to menu:Red Hat Insights[Savings Planner].

  2. On the automation savings plan, click the btn:[More Actions] icon, then click btn:[Edit].

  3. Make any changes to the automation plan, then click btn:[Save].

You can associate a job template with a savings plan to allow Insights for Ansible Automation Platform to provide a more accurate time and cost savings estimate for completing the savings plan.

Procedure
  1. Navigate to menu:Red Hat Insights[Savings Planner].

  2. Click the btn:[More Actions] icon and select Link Template.

  3. Click btn:[Save].

Review savings calculations for your automation plans

The automation savings planner offers a calculation of how much time and money you can save by automating a job. Red Hat Insights for Red Hat Ansible Automation Platform takes data from the plan details and the associated job template to provide you with an accurate projection of your cost savings when you complete this savings plan.

To do so, navigate to your savings planner page, click the name of an existing plan, then navigate to the Statistics tab.

The statistics chart displays a projection of your monetary and time savings based on the information you provided when creating a savings plan. Primarily, the statistics chart subtracts the automated cost from the manual cost of executing the plan to provide the total resources saved upon automation. The chart then displays this data by year to show you the cumulative benefits for automating the plan over time.

Click between Money and Time to view the different types of savings for automating the plan.

Filter and sort plans on the list view page

Find specific types of automation savings plans by filtering or sorting your savings planner list view.

Procedure
  1. Navigate to menu:Red Hat Insights[Savings Planner].

  2. To filter your saving plans based on type, or sort your savings plans by a certain order, select a filter option on the horizontal toolbar.

About Automation Calculator

The automation calculator provides graphs, metrics and calculations that help you determine the total savings on your investment in automated processes.

Automation savings

Automation savings is produced by an analysis of the time and cost of performing a task manually, such as deploying a server, versus the time and cost associated with automating the same task. Automation savings calculations extend across all organizations, clusters, hosts and templates in an environment. Include your own estimated costs to produce a more accurate calculation.

Note

The initial total savings is based on default values for each variable.

Variables

Several variables are used in evaluating costs:

  • Manual cost of automation - the approximate cost for a mid-level resource to perform a task or series of tasks.

  • Cost of automation - costs associated with automating tasks as job templates.

  • Automation time - the time required to run a job template.

  • Number of hosts - the number of hosts in inventory the template runs on.

Automation formula

Automation savings is based on the following formulas:

  • Manual cost per template = (time for a manual run on one host * (sum of all hosts across job runs)) * cost per hour.

  • Automation cost per template = cost of automation per hour * sum of total elapsed hours for a template.

  • Savings = sum of (manual cost - automation costs) across all templates.
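
For example, assume a template that takes one hour to run manually on each host, a total of 10 hosts across its job runs, a manual cost of $50 per hour, 0.5 total elapsed hours of automated run time, and an automation cost of $20 per hour. The manual cost per template is (1 hour * 10 hosts) * $50 = $500, the automation cost per template is $20 * 0.5 = $10, and the savings for that template is $490. These figures are illustrative only; the calculator substitutes your own cost and time inputs.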

Calculating your automation savings

The Automation Calculator produces its default total savings figure based on estimates for each variable.

You can tune this calculation by providing more specific organizational cost information, as well as adjusting the time values for each of the top templates. The total savings will update dynamically as each field is edited.

Note

Automation savings calculations are not saved in Red Hat Automation Analytics.

To calculate your automation savings:

  1. Under Calculate your automation enter cost information for:

    1. Manual process cost

    2. Automated process cost

  2. Under Top templates:

    1. Adjust time values for top templates to provide time to manually perform each task that the template automates.

Total savings will update based on the information you provide in each field.

Top templates

Top templates lists the 25 most frequently run templates across all hosts in your environment. Templates are listed in descending order starting with the highest run count. You can enter the time it takes to perform tasks manually that are automated by templates in the field adjacent to the run totals to produce a more accurate total savings. The default value is set to 60 minutes.

Curating top templates

You can use the toggle switch for each template to show or hide it in the bar graph to compare performance and savings based on specific templates.

  • Click the toggle switch for each template to display or hide it.

The bar graph on the Automation Calculator will update to display those top templates selected and Total savings will calculate based on those templates.

Viewing template details

You can view detailed information for each template in Top templates to learn more about the template’s context in the calculation of automation savings.

  • Click the Info icon for a job template to view template details.

Top template information is provided for the following:
  • Total elapsed sum - total run time of the template

  • Success elapsed sum - total run time for successful template runs

  • Failed elapsed sum - total run time for failed template runs

  • Automation percentage - the percentage of automation in your organization that this template accounts for.

  • Associated organizations - template runs against these organizations

  • Associated clusters - Ansible Tower clusters the template runs on.

Viewing your reports on Red Hat Ansible Automation Platform

The reports feature on the Red Hat Ansible Automation Platform provides users with a visual overview of their automation efforts across different teams using Ansible. Each report is designed to help users monitor the status of their automation environment, be it the frequency of playbook runs or the status of hosts affected by various job templates.

For example, you can use your reports to:

  • View the number of hosts affected by a job template

  • View the number of changes made to hosts by a job template

  • View the frequency of a job template run, and the rate of job templates that succeed or fail to run

Reviewing your reports

To view reports about your Ansible automation environment, proceed with the following steps:

Procedure
  1. Log in to console.redhat.com and navigate to the Ansible Automation Platform.

  2. Click btn:[Reports] on the side navigation panel.

  3. Select a report from the results to view it.

Each report presents data to monitor your Ansible automation environment. Use the filter toolbar on each report to adjust your graph view.

Note
We are constantly adding new reports to the system. If you have ideas for new reports that would be helpful for your team, please contact your account representative or log a feature enhancement for Insights for Ansible Automation Platform.

About the Job Explorer

The Job Explorer provides a detailed view of jobs run on Ansible Tower clusters across your organizations. You can access the Job Explorer by directly clicking on the navigation tab or using the drill-down view available across each of the application’s charts.

Using the Job Explorer you can:

  • Filter the types of jobs running in a cluster or organization;

  • Directly link out to templates on your Ansible Tower for further assessment;

  • Identify and review job failures;

  • View more details for top templates running on a cluster;

  • Filter out nested workflows and jobs.

You can review the features and details of the Job Explorer in the following sections.

Creating a filtered and sorted view of jobs

You can view a list of jobs, filtered by attributes you choose, using the Job Explorer.

Filter options include:

  • Status

  • Job

  • Cluster

  • Organization

  • Template

You can sort results by a set of parameters by using the Sort by options from the filter toolbar.

Procedure
  1. Navigate to menu:Insights[Job Explorer].

  2. In the filter toolbar, click the Filter by drop-down menu and select Job.

  3. In that same toolbar, select a time range. Job Explorer will now display jobs within that time range.

  4. To further refine results, return to the filter toolbar and select a different attribute to filter results by, including job status, cluster, or organization.

The Job Explorer view will update and present a list of jobs based on the attributes you selected.

Viewing more information about an individual job

You can click on the arrow icon next to the job Id/Name column to view more details related to that job.

Reviewing job details on Ansible Tower

Click the job in the Id/Name column to view the job itself on the Ansible Tower job details page. For more information on viewing job details on Ansible Tower, see Jobs in the Ansible Tower User Guide.

Drilling down into cluster data

You can drill down into cluster data to review more detailed information about successful or failed jobs. The detailed view, presented on the Job Explorer page, provides information on the cluster, organization, template, and job type. Filters you select on the Clusters view carry over to the Job Explorer page.

Details on those job templates will appear in the Job Explorer view, modified by any filters you selected in the Clusters view.

For example, you can drill down to review details for failed jobs in a cluster. See below to learn more.

Example: Reviewing failed jobs

You can view more detail about failed jobs across your organization by drilling down on the graph on the Cluster view and using the Job Explorer to refine results. Clicking on a specific portion in a graph will open that information in the Job Explorer, preserving contextual information created when using filters on the Clusters view.

Procedure
  1. Navigate to menu:Insights[Clusters].

  2. In the filter toolbar, apply filters for clusters and time range of your choosing.

  3. Click on a segment on the graph.

You will be redirected to the Job Explorer view, which will present a list of successful and failed jobs corresponding to that day on the bar graph.

To view only failed jobs:

  1. Click the Filter by drop-down menu and select Status.

  2. Select the Failed filter.

The view will update to show only failed jobs run on that day.

Add additional context to the view by applying additional filters and selecting attributes to sort results by. Link out and review more information for failed jobs on the Ansible Tower job details page.

Viewing top templates job details for a specific cluster

You can view job instances for top templates in a cluster to learn more about individual job runs associated with that template or to apply filters to further drill down into the data.

Procedure
  1. Navigate to menu:Insights[Clusters].

  2. Select a cluster from the clusters drop-down list. The view will update with that cluster’s data.

  3. Click on a template name in Top Templates.

  4. Click btn:[View all jobs] in the modal that appears.

The Job Explorer will display all jobs on the chosen cluster associated with that template. The view presented will preserve the contextual information of the template based on the parameters selected in the Clusters view.

Ignoring nested workflows and jobs

Use the toggle switch on the Job Explorer view to ignore nested workflows and jobs. Select this option to filter out duplicate workflow and job template entries and exclude those items from overall totals.

Note

About nested workflows

Nested workflows allow you to create workflow job templates that call other workflow job templates. Nested workflows promote the reuse of workflows as modular components that incorporate existing business logic and organizational requirements when automating complex processes and operations.

To learn more about nested workflows, see Workflows in the Ansible Tower User Guide.

Introduction to Automation execution environments

Using Ansible content that depends on non-default dependencies can be complicated because the packages must be installed on each node, interact with other software installed on the host system, and be kept in sync.

Automation execution environments help simplify this process and can easily be created with Ansible Builder.

About automation execution environments

Automation execution environments are container images on which all automation in Red Hat Ansible Automation Platform is run. Automation execution environments create a common language for communicating automation dependencies, and provide a standard way to build and distribute the automation environment.

An automation execution environment is expected to contain the following:

  • Ansible 2.9 or Ansible Core 2.11-2.13

  • Python 3.8-3.10

  • Ansible Runner

  • Ansible content collections

  • Collection, Python, or system dependencies

Why use automation execution environments?

With automation execution environments, Red Hat Ansible Automation Platform has transitioned to a distributed architecture by separating the control plane from the execution plane. Keeping automation execution independent of the control plane results in faster development cycles and improves scalability, reliability, and portability across environments. Red Hat Ansible Automation Platform also includes access to Ansible content tools, making it easy to build and manage automation execution environments.

In addition to speed, portability, and flexibility, automation execution environments provide the following benefits:

  • They ensure that automation runs consistently across multiple platforms and make it possible to incorporate system-level dependencies and collection-based content.

  • They give Red Hat Ansible Automation Platform administrators the ability to provide and manage automation environments to meet the needs of different teams.

  • They allow automation to be easily scaled and shared between teams by providing a standard way of building and distributing the automation environment.

  • They enable automation teams to define, build, and update their automation environments themselves.

  • Automation execution environments provide a common language to communicate automation dependencies.

Publishing an automation execution environment

Customizing an existing automation execution environment image

Ansible Controller ships with three default execution environments:

  • Ansible 2.9 - no collections are installed other than Controller modules

  • Minimal - contains the latest Ansible 2.13 release along with Ansible Runner, but contains no collections or other additional content

  • EE Supported - Minimal, plus all Red Hat-supported collections and dependencies

While these environments cover many automation use cases, you can add additional items to customize these containers for your specific needs. The following procedure adds the kubernetes.core collection to the ee-minimal default image:

Procedure
  1. Log in to registry.redhat.io via Podman:

    $ podman login -u="[username]" -p="[token/hash]" registry.redhat.io
  2. Ensure that you can pull the desired automation execution environment base image:

    $ podman pull registry.redhat.io/ansible-automation-platform-22/ee-minimal-rhel8:latest
  3. Configure your Ansible Builder files to specify the desired base image and any additional content to add to the new execution environment image.

    1. For example, to add the Kubernetes Core Collection from Galaxy to the image, fill out the requirements.yml file as follows:

      collections:
        - kubernetes.core
    2. For more information on definition files and their content, refer to the Breakdown of definition file content section.

  4. In the execution environment definition file, specify the original ee-minimal container’s URL and tag in the EE_BASE_IMAGE field. In doing so, your final execution-environment.yml file will look like the following:

    Example 1. A customized execution-environment.yml file
    version: 1
    
    build_arg_defaults:
      EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-22/ee-minimal-rhel8:latest'
    
    dependencies:
      galaxy: requirements.yml
    Note

    Since this example uses the community version of kubernetes.core and not a certified collection from automation hub, we do not need to create an ansible.cfg file or reference that in our definition file.

  5. Build the new execution environment image using the following command:

    $ ansible-builder build -t registry.redhat.io/[username]/new-ee

    where [username] specifies your username, and new-ee specifies the name of your new container image.

Note

If you do not use -t with build, an image called ansible-execution-env is created and loaded into the local container registry.

  1. Use the podman images command to confirm that your new container image is in that list:

    Example 2. Output of a podman images command with the image new-ee
    REPOSITORY          TAG     IMAGE ID      CREATED        SIZE
    localhost/new-ee    latest  f5509587efbb  3 minutes ago  769 MB
    1. Verify that the collection is installed:

      $ podman run registry.redhat.io/[username]/new-ee ansible-doc -l kubernetes.core
    2. Tag the image for use in your automation hub:

      $ podman tag registry.redhat.io/[username]/new-ee [automation-hub-IP-address]/[username]/new-ee
    3. Log in to your automation hub using Podman:

      Note

      You must have admin or appropriate container repository permissions for automation hub to push a container. See Managing containers in private automation hub in the Red Hat Ansible Automation Platform documentation for more information.

      $ podman login -u="[username]" -p="[token/hash]" [automation-hub-IP-address]
    4. Push your image to the container registry in automation hub:

      $ podman push [automation-hub-IP-address]/[username]/new-ee
    5. Pull your new image into your automation controller instance:

      1. Navigate to automation controller.

      2. From the side navigation bar, click menu:Administration[Execution Environments].

      3. Click btn:[Add].

      4. Enter the appropriate information, then click btn:[Save] to pull in the new image.

    Note

    If your instance of automation hub is password or token protected, ensure that you have the appropriate container registry credential set up.

Breakdown of definition file content

A definition file is required for building automation execution environments with Ansible Builder, because it specifies the content that is included in the automation execution environment container image.

The following sections break down the different parts of a definition file.

Build args and base image

The build_arg_defaults section of the definition file is a dictionary whose keys can provide default values for arguments to Ansible Builder. See the following table for a list of values that can be used in build_arg_defaults:

Value Description

ANSIBLE_GALAXY_CLI_COLLECTION_OPTS

Allows the user to pass arbitrary arguments to the ansible-galaxy CLI during the collection installation phase. For example, the --pre flag to enable the installation of pre-release collections, or -c to disable verification of the server’s SSL certificate.

EE_BASE_IMAGE

Specifies the parent image for the automation execution environment, enabling a new image to be built that is based off of an already-existing image. This is typically a supported execution environment base image like ee-minimal or ee-supported, but it can also be an execution environment image that you’ve created previously and want to customize further.

The default image is registry.redhat.io/ansible-automation-platform-23/ee-minimal-rhel8:latest.

EE_BUILDER_IMAGE

Specifies the intermediate builder image used for Python dependency collection and compilation; must contain a matching Python version with EE_BASE_IMAGE and have ansible-builder installed.

The default image is registry.redhat.io/ansible-automation-platform-23/ansible-builder-rhel8:latest.

The values given inside build_arg_defaults will be hard-coded into the Containerfile, so these values will persist if podman build is called manually.

Note
If the same variable is specified in the CLI --build-arg flag, the CLI value will take higher precedence.

Ansible config file path

The ansible_config directive allows specifying the path to an ansible.cfg file to pass a token and other settings for a private account to an automation hub server during the Collection installation stage of the build. The config file path should be relative to the definition file location, and will be copied to the generated container build context.

The ansible.cfg file should be formatted like the following example:

Example 3. An ansible.cfg file
[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://{Console}/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token

For more information on how to download a collection from automation hub, please see the related Ansible documentation page.

Dependencies

To avoid issues with your automation execution environment image, make sure that the entries for Galaxy, Python, and system point to a valid requirements file.

Galaxy

The galaxy entry points to a valid requirements file for the ansible-galaxy collection install -r ... command.

The entry requirements.yml may be a relative path from the directory of the automation execution environment definition’s folder, or an absolute path.

The content of a requirements.yml file may look like the following:

Example 4. A requirements.yml file for Galaxy
collections:
  - community.aws
  - kubernetes.core

Python

The python entry in the definition file points to a valid requirements file for the pip install -r ... command.

The entry requirements.txt is a file that installs extra Python requirements on top of what the Collections already list as their Python dependencies. It may be listed as a relative path from the directory of the automation execution environment definition’s folder, or an absolute path. The contents of a requirements.txt file should be formatted like the following example, similar to the standard output from a pip freeze command:

Example 5. A requirements.txt file for Python
boto>=2.49.0
botocore>=1.12.249
pytz
python-dateutil>=2.7.0
awxkit
packaging
requests>=2.4.2
xmltodict
azure-cli-core==2.11.1 ; python_version >= '2.7'  # from collection community.vmware
google-auth
openshift>=0.6.2
requests-oauthlib
openstacksdk>=0.13
ovirt-engine-sdk-python>=4.4.10

System

The system entry in the definition points to a bindep requirements file, which will install system-level dependencies that are outside of what the collections already include as their dependencies. It can be listed as a relative path from the directory of the automation execution environment definition’s folder, or an absolute path. A minimum expectation is that the collection(s) specify necessary requirements for [platform:rpm].

To demonstrate this, the following is an example bindep.txt file that adds the libxml2 and subversion packages to a container:

Example 6. A bindep.txt file
libxml2-devel [platform:rpm]
subversion [platform:rpm]

Entries from multiple collections are combined into a single file. This is processed by bindep and then passed to dnf. Only requirements with no profiles (runtime requirements) will be installed to the image.

Additional custom build steps

The prepend and append commands may be specified in the additional_build_steps section. These will add commands to the Containerfile which will run either before or after the main build steps are executed.

The syntax for additional_build_steps must be one of the following:

  • a multi-line string

    Example 7. A multi-line string entry
    prepend: |
       RUN whoami
       RUN cat /etc/os-release
  • a list

    Example 8. A list entry
    append:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc

Using Ansible Builder

Ansible Builder is a command line tool that automates the process of building automation execution environments by using metadata defined in various Ansible Collections or created by the user.

Why use Ansible Builder?

Before Ansible Builder was developed, Red Hat Ansible Automation Platform users could run into dependency issues and errors when creating custom virtual environments or containers that contained all of the required dependencies.

Now, with Ansible Builder, you can easily create a customizable automation execution environment definition file that specifies the content you want included in your automation execution environment, such as collections, Python requirements, and system-level packages. This allows you to fulfill all of the necessary requirements and dependencies to get jobs running.

Installing Ansible Builder

You can install Ansible Builder using Red Hat Subscription Management (RHSM) to attach your Red Hat Ansible Automation Platform subscription. Attaching your Red Hat Ansible Automation Platform subscription allows you to access subscription-only resources necessary to install ansible-builder. Once you attach your subscription, the necessary repository for ansible-builder is automatically enabled.

Note
You must have valid subscriptions attached on the host before installing ansible-builder.
Procedure
  • In your terminal, run the following command to install Ansible Builder and activate your Ansible Automation Platform repo:

    #  dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-builder
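
You can then confirm that the ansible-builder command is available:

$ ansible-builder --version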

Building a definition file

Once you have Ansible Builder installed, you can create a definition file that Ansible Builder uses to create your automation execution environment image. At a high level, Ansible Builder reads and validates your definition file, creates a Containerfile, and then passes the Containerfile to Podman, which packages and creates your automation execution environment image. The definition file is written in YAML format and contains different sections. For more information about the definition file content, see Breakdown of definition file content.

The following is an example of a definition file:

Example 9. A definition file
version: 1

build_arg_defaults: (1)
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: "-v"

dependencies: (2)
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

additional_build_steps: (3)
  prepend: |
    RUN whoami
    RUN cat /etc/os-release
  append:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc
  1. Lists default values for build arguments

  2. Specifies the location of various requirements files

  3. Commands for additional custom build steps

For more information about these definition file parameters, see Breakdown of definition file content.

Executing the build and creating commands

Prerequisites
  • You have created a definition file

Procedure

To build an automation execution environment image, run:

$ ansible-builder build

By default, Ansible Builder will look for a definition file named execution-environment.yml but a different file path can be specified as an argument via the -f flag:

$ ansible-builder build -f definition-file-name.yml

where definition-file-name specifies the name of your definition file.

Optional build command arguments

The -t flag will tag your automation execution environment image with a specific name. For example, the following command will build an image named my_first_ee_image:

$ ansible-builder build -t my_first_ee_image
Note

If you do not use -t with build, an image called ansible-execution-env is created and loaded into the local container registry.

If you have multiple definition files, you can specify which one to use by utilizing the -f flag:

$ ansible-builder build -f another-definition-file.yml -t another_ee_image

In the example above, Ansible Builder will use the specifications provided in the file another-definition-file.yml instead of the default execution-environment.yml to build an automation execution environment image named another_ee_image.

For other specifications and flags that are possible to use with the build command, enter ansible-builder build --help to see a list of additional options.

Containerfile

Once your definition file is created, Ansible Builder reads and validates it, then creates a Containerfile, and finally passes the Containerfile to Podman to package and create your automation execution environment image using the following instructions:

  1. Fetch base image

  2. In an ephemeral copy of the base image, collections are downloaded and the list of declared Python and system dependencies, if any, is collected for later use.

  3. In the ephemeral builder image, Python wheels for all Python dependencies listed in the definition file are downloaded and built (as needed), including all Python dependencies declared by collections listed in the definition file.

  4. The prepend commands from additional_build_steps in the definition file are run.

  5. In the final automation execution environments image, system dependencies listed in the definition file are installed, including all system dependencies declared by collections listed in the definition file.

  6. In the final automation execution environments image, the downloaded collections are copied and the previously fetched Python dependencies are installed.

  7. The append commands from additional_build_steps in the definition file are run.

Creating a Containerfile without building an image

To create a shareable Containerfile without building an image from it, run:

$ ansible-builder create
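
By default, Ansible Builder writes the generated build context, including the Containerfile, to a context/ directory next to your definition file (assuming the default --context location), so you can inspect or version the result before building an image:

$ ls context/
$ cat context/Containerfile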

Managing group permissions with Ansible Automation Platform Central Authentication

You can manage user access on Ansible Automation Platform by grouping specific permissions into roles and then assigning those roles to groups. When you log in to Ansible Automation Platform for the first time, Users, Groups, and Roles appear on the user access page in automation hub, where you can assign user access and roles to each group.

Automation hub includes a set of managed roles that are compatible with use cases you may encounter. You can create your own set of managed roles or use the predefined roles located in the Roles section of the User Access page.

Grouping permissions into Roles

You can group permissions into roles with specific user access to features in the system.

Prerequisites
  • You are signed in as a hubadmin user.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to the User Access drop-down menu.

  3. Click btn:[Roles].

  4. Click btn:[Add roles].

  5. Enter role name in the Name field.

  6. Enter role description in the Description field.

  7. Click the drop-down menu next to each Permissions type and select the appropriate permissions for the role.

  8. Click btn:[Save].

You have created a new role with specific permissions. You can now assign this role to groups.

Assigning roles to groups

You can assign roles to groups, giving users access to specific features in the system, from both the Groups menu and the Namespaces menu. Roles assigned to a group from the Groups menu have a global scope. For example, if a user is assigned a namespace owner role, that permission applies to all namespaces. However, roles assigned to a group from the Namespaces menu will only give a user access to a specific instance of an object.

Prerequisites
  • You are signed in as a hubadmin user.

Procedure

Assigning roles from the Groups menu.

  1. Log in to your local automation hub.

  2. Navigate to the User Access drop-down menu.

  3. Click btn:[Groups] and select a group name.

  4. Click btn:[Add roles].

  5. Click the checkbox next to the role that you want to add.

  6. Click btn:[Next] to preview the role that will be applied to the group.

  7. Click btn:[Add] to apply the selected role to the group.

Note
Click btn:[Back] to return to the roles menu, or click btn:[Cancel] to return to the previous page.
Procedure

Assigning roles from the Namespaces menu.

  1. Log in to your local automation hub.

  2. Navigate to the Collections drop-down menu.

  3. Click the My Namespaces tab, and select a namespace.

  4. Click the Namespace owners tab to edit.

Users can now access features in automation hub associated with their assigned permissions.

Automation Hub permissions

Permissions provide a defined set of actions each group performs on a given object. Determine the required level of access for your groups based on the following permissions:

Table 6. Permissions Reference Table
Object Permission Description

collection namespaces

Add namespace

Upload to namespace

Change namespace

Delete namespace

Groups with these permissions can create a namespace, upload collections to a namespace, or delete a namespace.

collections

Modify Ansible repo content

Delete collections

Groups with this permission can move content between repositories using the Approval feature, certify or reject features to move content from the staging to the published or rejected repositories, and delete collections.

users

View user

Delete user

Add user

Change user

Groups with these permissions can manage user configuration and access in automation hub.

groups

View group

Delete group

Add group

Change group

Groups with these permissions can manage group configuration and access in automation hub.

collection remotes

Change collection remote

View collection remote

Groups with these permissions can configure a remote repository by navigating to menu:Collections[Repo Management].

containers

Change container namespace permissions

Change containers

Change image tags

Create new containers

Push to existing containers

Delete container repository

Groups with these permissions can manage container repositories in automation hub.

remote registries

Add remote registry

Change remote registry

Delete remote registry

Groups with these permissions can add, change, or delete remote registries added to automation hub.

task management

Change task

Delete task

View all tasks

Groups with these permissions can manage tasks added to Task Management in automation hub.

Adding an identity broker to Ansible Automation Platform Central Authentication

Ansible Automation Platform Central Authentication supports both social and protocol-based providers. You can add an identity broker to central authentication to enable social authentication for your realm, allowing users to log in using an existing social network account, such as Google, Facebook, or GitHub.

Note
For a list of supported social networks and for more information to enable them, please see this section.

Protocol-based providers are those that rely on a specific protocol in order to authenticate and authorize users. They allow you to connect to any identity provider compliant with a specific protocol. Ansible Automation Platform Central Authentication provides support for SAML v2.0 and OpenID Connect v1.0 protocols.

Procedure
  1. Log in to Ansible Automation Platform Central Authentication as an admin user.

  2. Under the Configure section on the side navigation bar, click btn:[Identity Providers].

  3. Using the dropdown menu labeled Add provider, select your identity provider to proceed to the identity provider configuration page.

The following table lists the available options for your identity provider configuration:

Table 7. Identity Broker Configuration Options

Configuration Option

Description

Alias

The alias is a unique identifier for an identity provider. It is used to reference an identity provider internally. Some protocols such as OpenID Connect require a redirect URI or callback url in order to communicate with an identity provider. In this case, the alias is used to build the redirect URL.

Enabled

Turns the provider on/off.

Hide on Login Page

If enabled, this provider is not shown as a login option on the login page. Clients can still request this provider by adding the kc_idp_hint parameter to the URL they use to request a login (see the example request after this table).

Account Linking Only

If enabled, this provider cannot be used to login users and will not be shown as an option on the login page. Existing accounts can still be linked with this provider.

Store Tokens

Whether or not to store the token received from the identity provider.

Stored Tokens Readable

Whether or not users are allowed to retrieve the stored identity provider token. This also applies to the broker client-level role read token.

Trust Email

Whether an email address provided by the identity provider will be trusted. If the realm requires email validation, users that log in from this IDP will not have to go through the email verification process.

GUI Order

The order number that sorts how the available IDPs are listed on the login page.

First Login Flow

Select an authentication flow that will be triggered for users that log in to central authentication through this IDP for the first time.

Post Login Flow

Select an authentication flow that is triggered after the user finishes logging in with the external identity provider.
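
For illustration, a client can pre-select a provider that is hidden on the login page by appending kc_idp_hint to its authorization request. The following sketch assumes a Red Hat Single Sign-On style /auth path; the host, realm, client, redirect URI, and alias values are placeholders:

https://<central-auth-host>/auth/realms/<realm>/protocol/openid-connect/auth?client_id=<client-id>&response_type=code&redirect_uri=<redirect-uri>&kc_idp_hint=<idp-alias>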

Assigning automation hub administrator permissions

Hub administrative users will need to be assigned the role of hubadmin in order to manage user permissions and groups. You can assign the role of hubadmin to a user through the Ansible Automation Platform Central Authentication client.

Prerequisites
  • A user storage provider (e.g., LDAP) has been added to your central authentication

Procedure
  1. Navigate to the ansible-automation-platform realm on your SSO client.

  2. From the navigation bar, select menu:Manage[Users].

  3. Select a user from the list by clicking their ID.

  4. Click the Role Mappings tab.

  5. Using the dropdown menu under Client Roles, select automation-hub.

  6. Click btn:[hubadmin] from the Available Roles field, then click btn:[Add selected >].

The user is now a hubadmin. Repeat steps 3-6 to assign any additional users the hubadmin role.

Configuring Ansible Automation Platform Central Authentication Generic OIDC settings and Red Hat SSO/keycloak for Red Hat SSO and Ansible Automation Platform

Ansible Automation Platform Central Authentication supports configuring generic OIDC settings so that Red Hat SSO (Keycloak) can be used as an authentication source for Ansible Automation Platform.

Prerequisites

  • You are able to log in as an admin user.

Configuring Central Authentication Generic OIDC settings

Procedure
  1. Log in to RH SSO as admin.

Note
If you have an existing realm, you can go to step 4.

  2. Click btn:[Add Realm].

  3. Enter the realm Name and click btn:[Create].

  4. Click the Clients tab.

  5. Enter the client Name and click btn:[Create].

  6. From the Client Protocol menu, select menu:openid-connect.

  7. From the Access Type menu, select menu:confidential.

  8. In the Root URL field, enter your AAP server IP or hostname.

  9. In the Valid Redirect field, enter your AAP server IP or hostname. If not in production, set to *.

  10. In the Web origins field, enter your AAP server IP or hostname. If not in production, set to *.

  11. Click the Credentials tab.

Note
Keep track of the Secret value; it is used later.

  12. Log in to Ansible Automation Platform Controller as admin.

  13. Click Settings.

  14. Click Generic OIDC settings.

  15. Click btn:[Edit].

  16. In the OIDC Key field, enter the name of the client you created above.

  17. In the OIDC Secret field, enter the secret saved from the Credentials tab.

  18. In the OIDC Provider URL field, enter your Keycloak server URL and port.

  19. Click btn:[Save].

OIDC should now appear as a login option. Click btn:[Sign in with OIDC]; you are redirected to the SSO server to log in and then redirected back to AAP.

Adding a User Storage Provider (LDAP/Kerberos) to Ansible Automation Platform Central Authentication

Ansible Automation Platform Central Authentication comes with a built-in LDAP/AD provider. You can add your LDAP provider to central authentication to be able to import user attributes from your LDAP database.

Prerequisites
  • You are logged in as an SSO admin user.

Procedure
  1. Log in to Ansible Automation Platform Central Authentication as an SSO admin user.

  2. From the navigation bar, under the Configure section, select btn:[User Federation].

Note

When using an LDAP User Federation in RH-SSO, a group mapper must be added to the client configuration, ansible-automation-platform, to expose the identity provider (IDP) groups to the SAML authentication. Refer to OIDC Token and SAML Assertion Mappings for more information on SAML assertion mappers.

  3. Using the dropdown menu labeled Add provider, select your LDAP provider to proceed to the LDAP configuration page.

The following table lists the available options for your LDAP configuration:

Configuration Option

Description

Storage mode

Set to On if you want to import users into the central authentication user database. See Storage Mode for more information.

Edit mode

Determines the types of modifications that admins can make on user metadata. See Edit Mode for more information.

Console Display Name

Name used when this provider is referenced in the admin console

Priority

The priority of this provider when looking up users or adding a user

Sync Registrations

Enable if you want new users created by Ansible Automation Platform Central Authentication in the admin console or the registration page to be added to LDAP

Allow Kerberos authentication

Enable Kerberos/SPNEGO authentication in the realm with users data provisioned from LDAP. See Kerberos for more information.

Installing Ansible Automation Platform Central Authentication for use with automation hub

The Ansible Automation Platform Central Authentication installation is included with the Red Hat Ansible Automation Platform installer. Install Ansible Automation Platform using the following procedures, then configure the necessary parameters in your inventory file to successfully install both Ansible Automation Platform and central authentication.

Choosing and obtaining a Red Hat Ansible Automation Platform installer

Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the scenarios below and determine which Red Hat Ansible Automation Platform installer meets your needs.

Note

A valid Red Hat customer account is required to access Red Hat Ansible Automation Platform installer downloads on the Red Hat Customer Portal.

Installing with internet access

Choose the Red Hat Ansible Automation Platform (AAP) installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your AAP installer.

Tarball install

  1. Navigate to the Red Hat Ansible Automation Platform download page.

  2. Click btn:[Download Now] for the Ansible Automation Platform <latest-version> Setup.

  3. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz

RPM install

  1. Install Ansible Automation Platform Installer Package

    v.2.4 for RHEL 8 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer

    v.2.4 for RHEL 9 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer
Note
The dnf install command enables the repository, which is disabled by default.

When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory.

Installing without internet access

Use the Red Hat Ansible Automation Platform (AAP) Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive.

  1. Navigate to the Red Hat Ansible Automation Platform download page.

  2. Click btn:[Download Now] for the Ansible Automation Platform <latest-version> Setup Bundle.

  3. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz

Configuring the Red Hat Ansible Automation Platform installer

Before running the installer, edit the inventory file found in the installer package to configure the installation of automation hub and Ansible Automation Platform Central Authentication.

Note
Provide a reachable IP address for the [automationhub] host to ensure that users can sync content with private automation hub from a different node and push new images to the container registry.
  1. Navigate to the installer directory:

    1. Online installer:

      $ cd ansible-automation-platform-setup-<latest-version>
    2. Bundled installer:

      $ cd ansible-automation-platform-setup-bundle-<latest-version>
  2. Open the inventory file using a text editor.

  3. Edit the inventory file parameters under [automationhub] to specify the automation hub host for installation:

    1. Add group host information under [automationhub] using an IP address or FQDN for the automation hub location.

    2. Enter passwords for automationhub_admin_password, automation_pg_password, and any additional parameters based on your installation specifications.

  4. Enter a password in the sso_keystore_password field.

  5. Edit the inventory file parameters under [SSO] to specify a host on which to install central authentication:

    1. Enter a password in the sso_console_admin_password field, and any additional parameters based on your installation specifications (see the example inventory snippet after this procedure).
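
The following is a minimal sketch of the relevant inventory sections, not a complete inventory file. The hostnames are placeholders, and placing the passwords under [all:vars] is an assumption based on the default installer inventory layout; use the parameter names exactly as referenced in this procedure.

[automationhub]
hub.example.com

[sso]
sso.example.com

[all:vars]
automationhub_admin_password='<hub-admin-password>'
sso_keystore_password='<keystore-password>'
sso_console_admin_password='<sso-admin-password>'
# Set the automation hub database password and any additional parameters
# named in this procedure for your installation.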

Running the Red Hat Ansible Automation Platform installer

With the inventory file updated, run the installer using the setup.sh playbook found in the installer package.

  1. Run the setup.sh playbook:

    $ ./setup.sh

Log in as a central authentication admin user

With Red Hat Ansible Automation Platform installed, log in as an admin user to the central authentication server using the admin credentials that you specified in your inventory file.

  1. Navigate to your Ansible Automation Platform Central Authentication instance.

  2. Log in using the admin credentials you specified in your inventory file, in the sso_console_admin_username and sso_console_admin_password fields.

With Ansible Automation Platform Central Authentication successfully installed, and the admin user logged in, you can proceed by adding a user storage provider (such as LDAP) using the following procedures.

Ansible Automation Platform Central Authentication for automation hub

To enable Ansible Automation Platform Central Authentication for your automation hub, start by downloading the Red Hat Ansible Automation Platform installer then proceed with the necessary set up procedures as detailed in this guide.

Important
The installer in this guide installs central authentication for a basic standalone deployment. Standalone mode runs only one central authentication server instance, so it cannot be used for clustered deployments. Standalone mode is useful for test-driving the features of central authentication, but it is not recommended for production because a single instance is a single point of failure.

To install central authentication in a different deployment mode, please see this guide for more deployment options.

System Requirements

There are several minimum requirements to install and run Ansible Automation Platform Central Authentication:

  • Any operating system that runs Java

  • Java 8 JDK

  • zip or gzip and tar

  • At least 512 MB of RAM

  • At least 1 GB of disk space

  • A shared external database such as PostgreSQL, MySQL, or Oracle if you want to run central authentication in a cluster. See the Database Configuration section of the Red Hat Single Sign-On Server Installation and Configuration guide for more information.

  • Network multicast support on your machine if you want to run in a cluster. Central authentication can be clustered without multicast, but this requires some configuration changes. See the Clustering section of the Red Hat Single Sign-On Server Installation and Configuration guide for more information.

  • On Linux, it is recommended to use /dev/urandom as a source of random data to prevent central authentication hanging due to lack of available entropy, unless /dev/random usage is mandated by your security policy. To achieve that on Oracle JDK 8 and OpenJDK 8, set the java.security.egd system property on startup to file:/dev/urandom.
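
As a minimal sketch of that last recommendation, assuming central authentication is started through a JBoss-style standalone.conf (adjust the file and mechanism to match how your instance is actually launched):

# Appended to standalone.conf (path and startup mechanism are assumptions):
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/urandom"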


Understanding Ansible concepts

As an automation developer, review the following Ansible concepts to create successful Ansible playbooks and automation execution environments before beginning your Ansible development project.

Prerequisites

  • Ansible is installed. For information about installing Ansible, see Installing Ansible in the Ansible documentation.

About Ansible Playbooks

Playbooks are files written in YAML that contain specific sets of human-readable instructions, or “plays”, that you send to run on a single target or groups of targets.

Playbooks can be used to manage configurations of and deployments to remote machines, as well as sequence multi-tier rollouts involving rolling updates. Use playbooks to delegate actions to other hosts, interacting with monitoring servers and load balancers along the way. Once written, playbooks can be used repeatedly across your enterprise for automation.

About Ansible Roles

A role is Ansible’s way of bundling automation content as well as loading related vars, files, tasks, handlers, and other artifacts automatically by utilizing a known file structure. Instead of creating huge playbooks with hundreds of tasks, you can use roles to break the tasks apart into smaller, more discrete and composable units of work.

You can find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day on Ansible Galaxy. Filter your search by Type and select Role. Once you find a role that you’re interested in, you can download it by using the ansible-galaxy command that comes bundled with Ansible:

$ ansible-galaxy role install username.rolename

About Content Collections

An Ansible Content Collection is a ready-to-use toolkit for automation. It includes multiple types of content such as playbooks, roles, modules, and plugins all in one place. The diagram below shows the basic structure of a collection:

collection/
├── docs/
├── galaxy.yml
├── meta/
│   └── runtime.yml
├── plugins/
│   ├── modules/
│   │   └── module1.py
│   ├── inventory/
│   ├── lookup/
│   ├── filter/
│   └── .../
├── README.md
├── roles/
│   ├── role1/
│   ├── role2/
│   └── .../
├── playbooks/
│   ├── files/
│   ├── vars/
│   ├── templates/
│   ├── playbook1.yml
│   └── tasks/
└── tests/
    ├── integration/
    └── unit/

In Red Hat Ansible Automation Platform, automation hub serves as the source for Ansible Certified Content Collections.

About Execution Environments

Automation execution environments are consistent and shareable container images that serve as Ansible control nodes. Automation execution environments reduce the challenge of sharing Ansible content that has external dependencies.

Automation execution environments contain:

  • Ansible Core

  • Ansible Runner

  • Ansible Collections

  • Python libraries

  • System dependencies

  • Custom user needs

You can define and create an automation execution environment using Ansible Builder.
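
For illustration, once a definition file exists (an example definition appears later in this guide), you might build the image with a command like the following; the tag and file names are placeholders:

$ ansible-builder build --tag my_custom_ee --file execution-environment.yml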

Additional resources


Creating content

Use the guidelines in this section of the Creator Guide to learn more about developing the content you will use in Red Hat Ansible Automation Platform.

Creating playbooks

Playbooks contain one or more plays. A basic play contains the following sections:

  • Name: a brief description of the overall function of the playbook, which assists in keeping it readable and organized for all users.

  • Hosts: identifies the target(s) for Ansible to run against.

  • Become statements: this optional statement can be set to true/yes to enable privilege escalation using a become plugin (such as sudo, su, pfexec, doas, pbrun, dzdo, ksu).

  • Tasks: this is the list of actions that are executed against each host in the play.

Example playbook
- name: Set Up a Project and Job Template
  hosts: host.name.ip
  become: true

  tasks:
    - name: Create a Project
      ansible.controller.project:
        name: Job Template Test Project
        state: present
        scm_type: git
        scm_url: https://github.com/ansible/ansible-tower-samples.git

    - name: Create a Job Template
      ansible.controller.job_template:
        name: my-job-1
        project: Job Template Test Project
        inventory: Demo Inventory
        playbook: hello_world.yml
        job_type: run
        state: present

Creating collections

You can create your own Collections locally with the Ansible Galaxy CLI tool. All of the Collection-specific commands can be activated by using the collection subcommand.

Prerequisites
  • You have Ansible version 2.9 or newer installed in your development environment.

Procedure
  1. In your terminal, navigate to where you want your namespace root directory to be. For simplicity, this should be a path in COLLECTIONS_PATH but that is not required.

  2. Run the following command, replacing my_namespace and my_collection_name with the values you choose:

    $ ansible-galaxy collection init <my_namespace>.<my_collection_name>
    Note

    Make sure you have the proper permissions to upload to a namespace by checking under the "My Content" tab on galaxy.ansible.com or console.redhat.com/ansible/automation-hub

This command creates a directory named after the namespace argument (if one does not already exist), with a directory under it named after the Collection. Inside that directory is the default, or "skeleton", Collection. This is where you can add your roles or plugins and start working on developing your own Collection.
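
The exact layout varies by Ansible version, but the generated skeleton typically looks similar to the following (the names shown are the placeholders from the command above):

<my_namespace>/
└── <my_collection_name>/
    ├── docs/
    ├── galaxy.yml
    ├── meta/
    │   └── runtime.yml
    ├── plugins/
    │   └── README.md
    ├── README.md
    └── roles/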

In relation to execution environments, Collection developers can declare requirements for their content by providing the appropriate metadata via Ansible Builder.

Requirements from a Collection can be recognized in these ways (see the example metadata file after this list):

  • A file meta/execution-environment.yml references the Python and/or bindep requirements files

  • A file named requirements.txt, which contains information on the Python dependencies and can sometimes be found at the root level of the Collection

  • A file named bindep.txt, which contains system-level dependencies and can sometimes be found in the root level of the Collection

  • If any of these files are in the build_ignore of the Collection, Ansible Builder will not pick up on these since this section is used to filter any files or directories that should not be included in the build artifact
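
For the first case above, a collection-level meta/execution-environment.yml that points at those requirement files might look like the following sketch (this assumes the version 1 collection metadata format; the file names match the ones described in this list):

version: 1
dependencies:
  python: requirements.txt
  system: bindep.txt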

Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the introspect command:

$ ansible-builder introspect --sanitize ~/.ansible/collections/
Additional resources
  • For more information on creating collections, see Creating collections in the Ansible Developer Guide.

Creating roles

You can create roles using the Ansible Galaxy CLI tool. Role-specific commands can be accessed from the roles subcommand.

ansible-galaxy role init <role_name>

Standalone roles outside of Collections are still supported, but new roles should be created inside of a Collection to take advantage of all the features Ansible Automation Platform has to offer.

Procedure
  1. In your terminal, navigate to the roles directory inside a collection.

  2. Create a role called role_name inside the collection created previously:

    $ ansible-galaxy role init my_role

    The collection now contains a role named my_role inside the roles directory:

        ~/.ansible/collections/ansible_collections/<my_namespace>/<my_collection_name>
        ...
        └── roles/
            └── my_role/
                ├── .travis.yml
                ├── README.md
                ├── defaults/
                │   └── main.yml
                ├── files/
                ├── handlers/
                │   └── main.yml
                ├── meta/
                │   └── main.yml
                ├── tasks/
                │   └── main.yml
                ├── templates/
                ├── tests/
                │   ├── inventory
                │   └── test.yml
                └── vars/
                    └── main.yml
  3. A custom role skeleton directory can be supplied using the --role-skeleton argument. This allows organizations to create standardized templates for new roles to suit their needs.

    ansible-galaxy role init my_role --role-skeleton ~/role_skeleton

This will create a role named my_role by copying the contents of ~/role_skeleton into my_role. The contents of role_skeleton can be any files or folders that are valid inside a role directory.

Additional resources
  • For more information on creating roles, see Creating roles in the Ansible Galaxy documentation.

Creating automation execution environments

An automation execution environment definition file specifies:
  • An Ansible version

  • A Python version (defaults to system Python)

  • A set of required Python libraries

  • Zero or more Content Collections (optional)

  • Python dependencies for those specific Collections

Specifying a set of Collections for an environment resolves and installs their dependencies. The Collections themselves do not need to be installed on the machine where you generate the automation execution environment.

An automation execution environment is built from this definition and results in a container image. Read the Ansible Builder documentation to learn the steps involved in creating these images.
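
For illustration, a minimal definition file sketch. This assumes the version 3 definition schema; the base image, collection, and package names are placeholders that you would replace for your environment:

version: 3

images:
  base_image:
    # Replace with the execution environment base image appropriate for your platform version.
    name: 'registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest'

dependencies:
  galaxy:
    collections:
      - ansible.posix        # example collection
  python:
    - requests               # example Python requirement
  system:
    - git [platform:rpm]     # example system-level (bindep) requirement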

Introduction to content creator workflows and automation execution environments

About content workflows

Before Red Hat Ansible Automation Platform 2.0, an automation content developer may have needed so many Python virtual environments that they required their own automation in order to manage them. To reduce this level of complexity, Ansible Automation Platform 2.0 is moving away from virtual environments and using containers, referred to as automation execution environments, instead, as they are straightforward to build and manage and are more shareable across teams and orgs.

As automation controller shifts to using automation execution environments, tools like Automation content navigator and Ansible Builder ensure that you can take advantage of those automation execution environments locally within your own development system.

Additional resources

Architecture overview

The following list shows the arrangements and uses of tools available on Ansible Automation Platform 2.0, along with how they can be utilized:

  • Automation content navigator only — can be used today in Ansible Automation Platform 1.2

  • Automation content navigator + downloaded automation execution environments — used directly on laptop/workstation

  • Automation content navigator + downloaded automation execution environments + automation controller — for pushing/executing locally → remotely

  • Automation content navigator + automation controller + Ansible Builder + Layered custom EE — provides even more control over utilized content for how to execute automation jobs

Setting up your development environment

You can follow the procedures in this section to set up your development environment to create automation execution environments.

Installing Ansible Builder

You can install Ansible Builder using Red Hat Subscription Management (RHSM) to attach your Red Hat Ansible Automation Platform subscription. Attaching your Red Hat Ansible Automation Platform subscription allows you to access subscription-only resources necessary to install ansible-builder. Once you attach your subscription, the necessary repository for ansible-builder is automatically enabled.

Note
You must have valid subscriptions attached on the host before installing ansible-builder.
Procedure
  • In your terminal, run the following command to install Ansible Builder and activate your Ansible Automation Platform repo:

    #  dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-builder

Installing Automation content navigator on RHEL from an RPM

You can install Automation content navigator on Red Hat Enterprise Linux (RHEL) from an RPM.

Prerequisites
  • You have installed RHEL 8.6 or later.

  • You registered your system with Red Hat Subscription Manager.

Note

Ensure that you only install the navigator matching your current Red Hat Ansible Automation Platform environment.

Procedure
  1. Attach the Red Hat Ansible Automation Platform SKU:

    $ subscription-manager attach --pool=<sku-pool-id>
  2. Install Automation content navigator with the following command:

    v.2.4 for RHEL 8 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator

    v.2.4 for RHEL 9 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator
Verification
  • Verify your Automation content navigator installation:

    $ ansible-navigator --help

A successful installation displays the Automation content navigator help text.

Downloading base automation execution environments

Base images that ship with AAP 2.0 are hosted on the Red Hat Ecosystem Catalog (registry.redhat.io).

Prerequisites
  • You have a valid Red Hat Ansible Automation Platform subscription.

Procedure
  1. Log in to registry.redhat.io

    $ podman login registry.redhat.io
  2. Pull the base images from the registry

    $ podman pull registry.redhat.io/aap/<image name>

Tools and components

Learn more about the Red Hat Ansible Automation Platform tools and components you will use in creating automation execution environments.

About Ansible Builder

Ansible Builder is a command line tool that automates the process of building automation execution environments by using the metadata defined in various Ansible Collections or by the user.

Before Ansible Builder was developed, Red Hat Ansible Automation Platform users could run into dependency issues and errors when creating custom virtual environments or containers that needed to include all of the required dependencies.

Now, with Ansible Builder, you can easily create a customizable automation execution environment definition file that specifies the content you want to include in your automation execution environment, such as collections, third-party Python requirements, and system-level packages. This allows you to fulfill all of the necessary requirements and dependencies to get jobs running.

Note

Red Hat currently does not support users who choose to provide their own container images when building automation execution environments.

Uses for Automation content navigator

Automation content navigator is a command line, content-creator-focused tool with a text-based user interface. You can use Automation content navigator to:

  • Launch and watch jobs and playbooks.

  • Share stored, completed playbook and job run artifacts in JSON format.

  • Browse and introspect automation execution environments.

  • Browse your file-based inventory.

  • Render Ansible module documentation and extract examples you can use in your playbooks.

  • View a detailed command output on the user interface.

About Automation Hub

Automation Hub provides a place for Red Hat subscribers to quickly find and use content that is supported by Red Hat and our technology partners to deliver additional reassurance for the most demanding environments.

At a high level, Automation Hub provides an overview of all partners participating and providing certified, supported content.

From a central view, users can dive deeper into each partner and check out the collections.

Additionally, a searchable overview of all available collections is available.

About the Ansible command line interface

Using Ansible on the command line is a useful way to run tasks that you do not repeat very often. The recommended way to handle repeated tasks is to write a playbook.

An ad hoc command for Ansible on the command line follows this structure:

$ ansible [pattern] -m [module] -a "[module options]"
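
For example, a hedged illustration that runs the built-in command module against a hypothetical webservers inventory group:

$ ansible webservers -m ansible.builtin.command -a "uptime"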

Additional resources


Migrating existing content

Use the following sections to learn how to use the awx-manage command to assist with additional steps in the migration process after you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0. Additionally, learn more about migrating between versions of Ansible.

Migrating virtual envs to automation execution environments

Use the following sections to assist with additional steps in the migration process once you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0.

Listing custom virtual environments

You can list the virtual environments on your automation controller instance using the awx-manage command.

Procedure
  1. SSH into your automation controller instance and run:

    $ awx-manage list_custom_venvs

A list of discovered virtual environments will appear.

# Discovered virtual environments:
/var/lib/awx/venv/testing
/var/lib/venv/new_env

To export the contents of a virtual environment, re-run while supplying the path as an argument:
awx-manage export_custom_venv /path/to/venv

Viewing objects associated with a custom virtual environment

View the organizations, jobs, and inventory sources associated with a custom virtual environment using the awx-manage command.

Procedure
  1. SSH into your automation controller instance and run:

    $ awx-manage custom_venv_associations /path/to/venv

A list of associated objects will appear.

inventory_sources:
- id: 15
  name: celery
job_templates:
- id: 9
  name: Demo Job Template @ 2:40:47 PM
- id: 13
  name: elephant
organizations
- id: 3
  name: alternating_bongo_meow
- id: 1
  name: Default
projects: []

Selecting the custom virtual environment to export

Select the custom virtual environment you want to export using the awx-manage export_custom_venv command.

Procedure
  1. SSH into your automation controller instance and run:

    $ awx-manage export_custom_venv /path/to/venv

The output from this command shows a pip freeze of the contents of the specified virtual environment. This information can be copied into a requirements.txt file for Ansible Builder to use when creating a new automation execution environment image.

numpy==1.20.2
pandas==1.2.4
python-dateutil==2.8.1
pytz==2021.1
six==1.16.0

To list all available custom virtual environments run:
awx-manage list_custom_venvs
Note

Pass the -q flag when running awx-manage list_custom_venvs to reduce output.

Migrating between Ansible Core versions

Migrating between versions of Ansible Core requires you to update your playbooks, plugins and other parts of your Ansible infrastructure to ensure they work with the latest version. This process requires that changes are validated against the updates made to each successive version of Ansible Core. If you intend to migrate from Ansible 2.9 to Ansible 2.11, you first need to verify that you meet the requirements of Ansible 2.10, and from there make updates to 2.11.

Ansible Porting Guides

The Ansible Porting Guide is a series of documents that provide information on the behavioral changes between consecutive Ansible versions. Refer to the guides when migrating from one version of Ansible to a newer version.

Additional resources

  • Refer to the Ansible 2.9 Porting Guide for behavior changes between Ansible 2.8 and Ansible 2.9.

  • Refer to the Ansible 2.10 Porting Guide for behavior changes between Ansible 2.9 and Ansible 2.10.

Rule Audit

Rule audit allows you to audit the rules that have been triggered by incoming events across all of the rulebook activations that have run.

The Rule Audit list view shows you a list of every time an event came in that matched a condition within a rulebook and triggered an action. The list shows you rules within your rulebook and each heading matches up to a rule that has been executed.

Viewing rule audit details

From the Rule Audit list view you can check the event that triggered specific actions.

Rule audit list view
Procedure
  1. From the navigation panel select btn:[Rule Audit].

  2. Select the desired rule; this brings you to the Details tab.

From here you can view when it was created, when it was last fired, and the rulebook activation that it corresponds to.

Viewing rule audit events

Procedure
  1. From the navigation panel select btn:[Rule Audit].

  2. Select the desired rule; this brings you to the Details tab. To view all the events that triggered an action, select the Events tab.

  3. Select an event to view the Event log, along with the Source type and Timestamp.

Event details

Viewing rule audit actions

Procedure
  1. From the navigation panel select btn:[Rule Audit].

  2. Select the desired rule; this brings you to the Actions tab.

From here you can view executed actions that were taken. Some actions are linked out to automation controller where you can view the output.

Ansible rulebooks

Event-Driven Ansible controller provides the interface in which Event-Driven Ansible automation performs. Ansible rulebooks provide the framework for Event-Driven Ansible automation. An Ansible rulebook is essentially a collection of rulesets, each of which consists of one or more sources, rules, and conditions.

Decision Environments

Event-Driven Ansible includes, by default, an ansible.eda collection, which contains sample sources, event filters and rulebooks. All the collections, ansible rulebooks and their dependencies use a Decision Environment, which is an image that can be run on either Podman or Kubernetes.

In Decision Environments, sources, which are typically Python code, are distributed through Ansible collections. They inject external events into a rulebook for processing. A Decision Environment consists of the following:

  • The python interpreter

  • Java Runtime Environment for Drools rule engine

  • ansible-rulebook python package

  • ansible.eda collection

You can use the base Decision Environment and build your own customized Decision Environments with additional collections and collection dependencies. You can build a Decision Environment using a Dockerfile or optionally you can deploy your CA certificate into the image.

Rulebook actions

A rulebook specifies actions to be performed when a rule is triggered. A rule is triggered when incoming events match its conditions. The following actions are currently supported (a minimal rulebook sketch follows this list):

  • run_job_template

  • run_playbook (only supported with ansible-rulebook CLI)

  • debug

  • print_event

  • set_fact

  • post_event

  • retract_fact

  • shutdown
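
As a minimal sketch, the following ruleset uses the ansible.eda.range sample source from the ansible.eda collection (which emits numbered test events) together with the debug action; the names and limit value are illustrative:

- name: Example ruleset
  hosts: all
  sources:
    - ansible.eda.range:
        limit: 5
  rules:
    - name: Print each event
      condition: event.i is defined
      action:
        debug: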

Additional resources

Using Event-Driven Ansible controller

After you have successfully installed the Event-Driven Ansible controller, you can access the interface to manage your IT responses across all event sources. Since Event-Driven Ansible controller is integrated with automation controller, you can automate a combination of processes, including issue remediation, user administration tasks, operational logic, and the like.

Depending on your role, you can use Event-Driven Ansible controller for any of the following tasks:

  • Configuring a new project

  • Setting up a new decision environment

  • Creating a new authentication token

  • Setting up a rulebook activation

Additional resources (or Next steps)

Installation of Event-Driven Ansible controller

The installation of Event-Driven Ansible controller occurs at the same time as the installation for other Ansible Automation Platform components. Like automation controller and automation hub, the setup includes default settings for specific variables in the inventory files.

Installing Event-Driven Ansible controller on Red Hat Ansible Automation Platform

To prepare for installation of Event-Driven Ansible controller, review the planning steps, including the components list, system requirements, and other valuable information in the Red Hat Ansible Automation Platform Planning Guide.

For detailed instructions on deploying the Event-Driven Ansible controller with Ansible Automation Platform, follow the instructions in the Red Hat Ansible Automation Platform Installation Guide, specifically the examples in Installing Red Hat Ansible Automation Platform.

To further assist you in getting started with Event-Driven Ansible, review the following existing sections that have been updated with code examples and variables that are required for successful installation of Event-Driven Ansible controller:

Lastly, refer to the following Event-Driven Ansible controller-specific code example and appendix section that have been added to the Red Hat Ansible Automation Platform Installation Guide:

Deploying Event-Driven Ansible controller with Ansible Automation Platform Operator on OpenShift Container Platform

Event-Driven Ansible is not limited to Ansible Automation Platform on VMs. You can also access this feature on Ansible Automation Platform Operator on OpenShift Container Platform. To deploy Event-Driven Ansible with Ansible Automation Platform Operator, follow the instructions in Deploying Event-Driven Ansible controller with Ansible Automation Platform Operator on OpenShift Container Platform.

After successful deployment, you can connect to event sources and resolve issues more efficiently.

Additional resources

Rulebook activations

A rulebook activation is a process running in the background defined by a decision environment executing a specific rulebook.

Setting up a rulebook activation

Prerequisites
  • You are logged in to the Event-Driven Ansible controller Dashboard as a Content Consumer.

  • You have set up a project.

  • You have set up a decision environment.

  • You have set up an automation controller token.

Procedure
  1. Navigate to the Event-Driven Ansible controller Dashboard.

  2. From the navigation panel, select menu:Rulebook Activations[Create rulebook activation].

  3. Insert the following:

    Name

    Insert the name.

    Description

    This field is optional.

    Project

    Projects are a logical collection of rulebooks.

    Rulebook

    Rulebooks will be shown according to the project selected.

    Decision environment

    Decision environments are container images used to run Ansible rulebooks.

    Restart policy

    This is a policy to decide when to restart a rulebook.

    • Policies:

      1. Always: Restarts when a rulebook finishes

      2. Never: Never restarts a rulebook when it finishes

      3. On failure: Only restarts when it fails

    Rulebook activation enabled?

    This automatically enables the rulebook activation to run.

    Variables

    The variables for the rulebook are in JSON or YAML format. The content is equivalent to the file passed through the --vars flag of the ansible-rulebook command.

  4. Select btn:[Create rulebook activation].

Your rulebook activation is now created and can be managed in the Rulebook Activations screen.

After saving the new rulebook activation, the rulebook activation’s details page is displayed. From there or the Rulebook Activations list view you can edit or delete it.

Rulebook activation list view

On the Rulebook Activations page, you can view the rulebook activations that you have created along with the Activation status, Number of rules associated with the rulebook, the Fire count, and Restart count.

If the Activation Status is Running, it means that the rulebook activation is running in the background and executing the required actions according to the rules declared in the rulebook.

You can view more details by selecting the activation from the Rulebook Activations list view.

Rulebook activation

For all activations that have run, you can view the Details and History tabs to get more information about what happened.

Viewing activation output

You can view the output of the activations in the History tab.

Procedure
  1. Select the btn:[History] tab to access the list of all the activation instances. An activation instance represents a single execution of the activation.

  2. Then select the activation instance in question; this shows you the Output produced by that specific execution.

Rulebook activation history

To view events that came in and triggered an action, you can use the Rule Audit section in the Event-Driven Ansible controller Dashboard.

Enabling and disabling rulebook activations

  1. Select the switch on the row level to enable or disable your chosen rulebook.

  2. In the popup window, select btn:[Yes, I confirm that I want to enable/disable these X rulebook activations].

  3. Select btn:[Enable/Disable rulebook activation].

Restarting rulebook activations

Note

You can only restart a rulebook activation if it is currently enabled and the restart policy was set to Always when it was created.

  1. Select the btn:[More Actions] icon next to the Rulebook Activation enabled/disabled toggle.

  2. Select btn:[Restart rulebook activation].

  3. In the popup window, select btn:[Yes, I confirm that I want to restart these X rulebook activations].

  4. Select btn:[Restart rulebook activations].

Deleting rulebook activations

  1. Select the btn:[More Actions] icon next to the Rulebook Activation enabled/disabled toggle.

  2. Select btn:[Delete rulebook activation].

  3. In the popup window, select btn:[Yes, I confirm that I want to delete these X rulebook activations].

  4. Select btn:[Delete rulebook activations].

Activating webhook rulebooks

In OpenShift environments, you can allow webhooks to reach an activation job pod over a given port by creating a Route that exposes that rulebook activation’s Kubernetes service.

Prerequisites
  • You have created a rulebook activation in the Event-Driven Ansible controller Dashboard.

Note

The following is an example of a rulebook with a webhook source:

- name: Listen for storage-monitor events
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Rule - Print event information
      condition: event.meta.headers is defined
      action:
        run_job_template:
          name: StorageRemediation
          organization: Default
          job_args:
            extra_vars:
              message: from eda
              sleep: 1
Procedure
  1. Create a Route (on OpenShift Container Platform) to expose the service. The following is an example Route for an ansible-rulebook source that expects POSTs on port 5000 on the decision environment pod:

    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: test-sync-bug
      namespace: dynatrace
      labels:
        app: eda
        job-name: activation-job-1-5000
    spec:
      host: test-sync-bug-dynatrace.apps.aap-dt.ocp4.testing.ansible.com
      to:
        kind: Service
        name: activation-job-1-5000
        weight: 100
      port:
        targetPort: 5000
      tls:
        termination: edge
        insecureEdgeTerminationPolicy: Redirect
      wildcardPolicy: None
  2. When you create the Route, test it with a POST to the Route URL:

    curl -H "Content-Type: application/json" -X POST test-sync-bug-dynatrace.apps.aap-dt.ocp4.testing.ansible.com -d '{}'
    Note

    You do not need the port as it is specified on the Route (targetPort).

Testing with Kubernetes

With Kubernetes, you can create an Ingress or expose the port for testing, but this is not suitable for production.

Procedure
  1. Run the following command to expose the port on the cluster for a given service:

    kubectl port-forward svc/<ACTIVATION_SVC_NAME> 5000:5000
  2. Make HTTP requests against localhost:5000 to trigger the rulebook:

    curl -H "Content-Type: application/json" -X POST localhost:5000 -d '{}'

Event-Driven Ansible controller overview

Event-Driven Ansible is a highly scalable, flexible automation capability that works with event sources such as other software vendors' monitoring tools. These tools watch IT solutions, identify events, and automatically implement the changes or responses documented in a rulebook to handle those events.

The following sections describe the user configuration.

Event-Driven Ansible Automation

Event-Driven Ansible is a new way to connect to sources of events and act on those events using rulebooks. This technology improves IT speed and agility, and enables consistency and resilience.

Event-Driven Ansible benefits

Event-Driven Ansible is designed for simplicity and flexibility. With these enhancements, you can:

  • Automate decision making

  • Use numerous event sources

  • Implement event-driven automation within and across multiple IT use cases

  • Achieve new milestones in efficiency, service delivery excellence and cost savings

Event-Driven Ansible minimizes human error and automates processes to increase efficiency in troubleshooting and information gathering.

This guide helps you get started with Event-Driven Ansible by providing links to information on understanding, installing, and using Event-Driven Ansible controller.

Setting up a token

Automation controller must contain a project based on a repository with certain playbooks designed to work with the Event-Driven Ansible rulebooks. Automation controller must also have corresponding job templates set up based on the playbooks in that project.

Setting up a token to authenticate to Ansible Automation Platform Controller

Prerequisites
  • You are logged in to the Event-Driven Ansible controller Dashboard as a Content Consumer.

  • You have created a user.

  • You can log in to the Event-Driven Ansible controller Dashboard or you are added as a user in the organization.

Procedure
  1. Navigate to the Event-Driven Ansible controller Dashboard.

  2. From the top navigation panel, select your profile.

  3. Go to User details.

  4. Select menu:Controller Tokens[Create controller token].

  5. Insert the following:

    Name

    Insert the name.

    Description

    This field is optional.

    Token

    Create the token in automation controller. For more information on creating the token, refer to the Users - Tokens section of the Automation controller User Guide.

    Note

    The token must have write scope.

  6. Select btn:[Create controller token].

After saving the new token, you are brought to the Controller Tokens tab where you can delete the token.

Decision environments

Decision environments are container images used to run Ansible rulebooks. They create a common language for communicating automation dependencies, and provide a standard way to build and distribute the automation environment. The default decision environment is found in the Ansible-Rulebook.

Setting up a new decision environment

The following steps describe how to import a decision environment into your Event-Driven Ansible controller Dashboard.

Prerequisites
  • You are logged in to the Event-Driven Ansible controller Dashboard as a Content Consumer.

  • You have set up a credential, if necessary. For more information, refer to the Credentials section of the Automation controller documentation.

  • You have pushed a decision environment image to an image repository, or you have chosen to use the de-supported image provided at registry.redhat.io.

Procedure
  1. Navigate to the Event-Driven Ansible controller Dashboard.

  2. From the navigation panel, select menu:Decision Environments[Create decision environment].

  3. Insert the following:

    Name

    Insert the name.

    Description

    This field is optional.

    Image

    This is the full image location, including the container registry, image name, and version tag.

    Credential

    This field is optional. This is the token needed to utilize the decision environment image.

  4. Select btn:[Create decision environment].

Your decision environment is now created and can be managed on the Decision Environments screen.

After saving the new decision environment, the decision environment’s details page is displayed. From there or the Decision Environments list view, you can edit or delete it.

Building a custom decision environment for Event-Driven Ansible within Ansible Automation Platform

Refer to this section if you need a custom decision environment to provide a custom maintained or third-party event source plugin that is not available in the default decision environment.

Prerequisites
  • Ansible Automation Platform >= 2.4

  • Event-Driven Ansible

  • Ansible Builder >= 3.0

Procedure
  • Add the de-supported decision environment. This image is built from a base image provided by Red Hat called de-minimal.

    Note

    Red Hat recommends using de-minimal as the base image with Ansible Builder to build your custom decision environments.

The following is an example of an Ansible Builder definition file that uses de-minimal as a base image to build a custom decision environment with the ansible.eda collection:

version: 3

images:
  base_image:
    name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest'

dependencies:
  galaxy:
    collections:
      - ansible.eda
  python_interpreter:
    package_system: "python39"

options:
  package_manager_path: /usr/bin/microdnf

Additionally, if other Python packages or RPMs are needed, you can add them to a single definition file:

version: 3

images:
  base_image:
    name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest'

dependencies:
  galaxy:
    collections:
      - ansible.eda
  python:
    - six
    - psutil
  system:
    - iputils [platform:rpm]
  python_interpreter:
    package_system: "python39"

options:
  package_manager_path: /usr/bin/microdnf
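
Once a definition file is in place, you can build the image and push it to a registry that your Event-Driven Ansible controller can reach. The following commands are a minimal sketch; the definition file name, registry, and image tag are hypothetical:

# Build the custom decision environment from the definition file shown above
$ ansible-builder build -f decision-environment.yml -t registry.example.com/my-org/custom-de:latest

# Push the resulting image so it can be referenced when creating a decision environment
$ podman push registry.example.com/my-org/custom-de:latest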

Projects

Projects are a logical collection of rulebooks. A project must be a Git repository, and only the HTTP protocol is supported. The rulebooks of a project must be located in the /rulebooks directory at the root of the project, or at the path defined for Event-Driven Ansible content in Ansible collections: /extensions/eda/rulebooks. A minimal rulebook is sketched below.
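
For reference, a minimal rulebook stored in the /rulebooks directory of such a repository might look like the following sketch; the webhook source, condition, job template name, and organization are hypothetical:

# rulebooks/webhook-example.yml (hypothetical example)
- name: Respond to webhook events
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Run a remediation job template when a service is reported down
      condition: event.payload.status == "down"
      action:
        run_job_template:
          name: Remediate service
          organization: Default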

Setting up a new project

Prerequisites
  • You are logged in to the Event-Driven Ansible controller Dashboard as a Content Consumer.

  • You have set up a credential, if necessary. For more information, refer to the Credentials section of the automation controller documentation.

  • You have an existing repository containing rulebooks that are integrated with playbooks contained in a repository to be used by automation controller.

Procedure
  1. Log in to the Event-Driven Ansible controller Dashboard.

  2. From the navigation panel, select menu:Projects[Create project].

  3. Insert the following:

    Name

    Enter project name.

    Description

    This field is optional.

    SCM type

    Git is the only SCM type available for use.

    SCM URL

    HTTP[S] protocol address of a repository, such as GitHub or GitLab.

    Note

    You cannot edit the SCM URL after you create the project.

    Credential

    This field is optional. This is the token needed to utilize the SCM URL.

  4. Select btn:[Create project].

Your project is now created and can be managed in the Projects screen.

After saving the new project, the project’s details page is displayed. From there or the Projects list view, you can edit or delete it.

Projects list view

On the Projects page, you can view the projects that you have created along with the Status and the Git hash.

Note

If a rulebook changes in source control you can re-sync a project by selecting the sync icon next to the project from the Projects list view. The Git hash updates represent the latest commit on that repository. An activation must be restarted or recreated if you want to use the updated project.

Editing a project

Procedure
  1. From the Projects list view, select the btn:[More Actions] icon next to the desired project.

  2. Select btn:[Edit project].

  3. Enter the required changes and select btn:[Save project].

Deleting a project

Procedure
  1. From the Projects list view, select the btn:[More Actions] icon next to the desired project.

  2. Select btn:[Delete project].

  3. In the popup window, select btn:[Yes, I confirm that I want to delete this project].

  4. Select btn:[Delete project].

Working with signed containers

Deploying your system for container signing

Execution Environments are container images used by Ansible automation controller to run jobs. This content can be downloaded to the private automation hub, and published within your organization.

Automation hub now implements image signing so that users can rely on better security for execution environment (EE) container images.

As an Ansible Automation Platform user, you can confirm whether a container image is already signed, and use the proper tools to sign images and verify signatures yourself. This section details how to deploy your system so that it is ready for container signing.

Procedure
  1. Deploy your system with support for container signing activated.

    automation_hub:
        automationhub_create_default_container_signing_service: true
        automationhub_container_signing_service_key: path/to/gpg.key
        automationhub_container_signing_service_script: path/to/executable
  2. Navigate to automation hub.

  3. In the navigation pane, select menu:Signature Keys[].

  4. Ensure you have a key titled container-default, or container-anyname.

Note
The 'container-default' service is created by the Ansible Automation Platform installer.

Adding containers remotely to automation hub

There are two ways to add containers to automation hub:

  • Create Remotes

  • Execution Environment

Procedure

To add a remote registry:

  1. In automation hub, click btn:[Execution Environments] in the main menu pane. Two choices become available: Execution Environments and Remote Registries.

  2. Click btn:[Remote Registries].

  3. Click btn:[Add Remote Registry] in the main window.

    • In the Name field, enter the name of the registry where the container resides.

    • In the URL field, enter the URL of the registry where the container resides.

    • In the Username field, enter the username if necessary.

    • In the Password field, enter the password if necessary.

    • Click btn:[Save].

Adding an execution environment

Procedure
  1. Navigate to menu:Execution Environments[].

  2. Enter the name of the execution environment.

  3. Optional: Enter the upstream name.

  4. Under the Registry, select the name of the registry from the drop-down menu.

  5. Enter tags in the Add tags to include field. If the field is left blank, all of the tags are passed, so it is important to specify repository-specific tags.

  6. The remaining fields are optional:

    • Currently included tags

    • Add tag(s) to exclude

    • Currently excluded tag(s)

    • Description

    • Groups with access

  7. Click btn:[Save].

  8. Synchronize the image.

Pushing container images from your local machine

Procedure
  1. From a terminal, log in to podman, or any container client currently in use, and pull the required image:

    > podman pull <container-name>
  2. After the image is pulled, add tags:

    > podman tag <container-name> <server-address>/<container-name>:<tag name>
  3. Sign the image after changes have been made, and push it back up:

    > podman push <server-address>/<container-name>:<tag name> \
    --tls-verify=false --sign-by <reference to the GPG key on your local machine>

    If you do not sign the image, it can only be pushed with any existing signature already embedded.

  4. Push the image without signing it:

    > podman push <server-address>/<container-name>:<tag name> \
    --tls-verify=false
  5. Navigate to the automation hub and click on Execution Environments if that window is not open.

  6. Click the Refresh icon to refresh the page to show the new execution environment.

  7. Click the name of the image.

The details page displays, below the image name, whether or not the image has been signed. In this case, it displays "Unsigned."

To sign the image from automation hub:

  1. Click the image name to open the details page.

  2. Click the three dots in the upper right hand corner of the details page. Three options are available:

    • Use in Controller

    • Delete

    • Sign

  3. Click Sign from the drop-down menu.

The signing service signs the image. Once the image is signed, the status changes to "signed".

Using policies with signed images

Podman and other image clients can use policies to ensure the validity of an image by requiring that it is signed with a specific signature.

Using podman to ensure an image is signed with a specific signature

To verify that an image is signed with a specific signature, the corresponding key must be available on your local machine.

Procedure
  1. In the navigation pane, select menu:Signature Keys[].

  2. Click the three dots on the right hand side of the signature that you are using.

  3. Select Download key from the drop-down menu. A new window opens.

  4. In the Name field, enter the name of the key.

  5. Click btn:[Save].

Configuring the client to verify signatures

Prerequisites
  • The client must have sudo privileges configured to verify signatures.

Procedure
  1. In a terminal, type:

    > sudo <name of editor> /etc/containers/policy.json

The file may look similar to this:

    {
        "default": [{"type": "reject"}],
        "transports": {
            "docker": {
              "quay.io": [{"type": "insecureAcceptAnything"}],
              "docker.io": [{"type": "insecureAcceptAnything"}],
              "<server-address>": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/tmp/containersig.txt"
                }
              ]
            }
        }
    }

This shows that there will be no verification from either quay.io or docker.io, because the type is insecureAcceptAnything, which overrides the default type of reject. However, images from <server-address> are verified, because the type parameter is set to "signedBy".

Note
The only keyType currently supported is GPG keys.
  1. Under the <server-address> entry, modify the keyPath (1) to reflect the name of your key file.

        {
            "default": [{"type": "reject"}],
            "transports": {
                "docker": {
                  "quay.io": [{"type": "insecureAcceptAnything"}],
                  "docker.io": [{"type": "insecureAcceptAnything"}],
                  "<server-address>": [
                    {
                        "type": "signedBy",
                        "keyType": "GPGKeys",
                        "keyPath": "/tmp/<key file name>", (1)
                        "signedIdentity": {
                          "type": "remapIdentity",
                          "prefix": "<server-address>",
                          "signedPrefix": "0.0.0.0:8002"
                        }
                    }
                  ]
                }
            }
        }
  2. Save and close the file.

  3. Pull the image using podman, or your client of choice:

> podman pull <server-address>/<container-name>:<tag name> \
--tls-verify=false

This verifies the signature with no errors.
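
As an optional sanity check, you can review the trust policy that podman enforces from /etc/containers/policy.json before pulling:

$ podman image trust show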

Configuring user access for container repositories in private automation hub

Configure user access for container repositories in your private automation hub to provide permissions that determine who can access and manage images in your Ansible Automation Platform.

Prerequisites

  • You can create groups and assign permissions in private automation hub.

Container registry group permissions

User access provides granular control over how users interact with containers managed in private automation hub. Use the following list of permissions to create groups with the right privileges for your container registries.

Table 8. List of group permissions used to manage containers in private automation hub
Permission name Description

Create new containers

Users can create new containers

Change container namespace permissions

Users can change permissions on the container repository

Change container

Users can change information on a container

Change image tags

Users can modify image tags

Pull private containers

Users can pull images from a private container

Push to existing container

Users can push an image to an existing container

View private containers

Users can view containers marked as private

Creating a new group

You can create and assign permissions to a group in automation hub that enables users to access specified features in the system. By default, there is an admins group in automation hub that has all permissions assigned and is available on initial login with credentials created when installing automation hub.

Prerequisites
  • You have groups permissions and can create and manage group configuration and access in automation hub.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[Groups].

  3. Click btn:[Create].

  4. Provide a Name and click btn:[Create].

You can now assign permissions and add users on the group edit page.

Assigning permissions to groups

You can assign permissions to groups in automation hub that enable users to access specific features in the system. By default, new groups do not have any assigned permissions. You can add permissions upon initial group creation or edit an existing group to add or remove permissions.

Prerequisites
  • You have Change group permissions and can edit group permissions in automation hub.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[Roles].

  3. Click btn:[Add roles].

  4. Click in the name field and fill in the role name.

  5. Click in the description field and fill in the description.

  6. Complete the Permissions section.

  7. Click in the field for each permission type and select permissions that appear in the list.

  8. Click btn:[Save] when finished assigning permissions.

  9. Navigate to menu:User Access[Groups].

  10. Click on a group name.

  11. Click on the Access tab.

  12. Click btn:[Add roles].

  13. Select the role created in step 8.

  14. Click btn:[Next] to confirm the selected role.

  15. Click btn:[Add] to complete adding the role.

The group can now access features in automation hub associated with their assigned permissions.

Adding users to groups

You can add users to groups when creating a group or manually add users to existing groups. This section describes how to add users to an existing group.

Prerequisites
  • You have groups permissions and can create and manage group configuration and access in automation hub.

Procedure
  1. Log in to automation hub.

  2. Navigate to menu:User Access[Groups].

  3. Click on a Group name.

  4. Navigate to the menu:Users[] tab, then click btn:[Add].

  5. Select users to add from the list and click btn:[Add].

You have added the users you selected to the group. These users now have permissions to use automation hub assigned to the group.

Configuring user access for your local Automation Hub

About user access

You can manage user access to content and features in Automation Hub by creating groups of users that have specific permissions.

How to implement user access

User access is based on managing permissions to system objects (users, groups, namespaces) rather than by assigning permissions individually to specific users.

You assign permissions to the groups you create. You can then assign users to these groups. This means that each user in a group has the permissions assigned to that group.

Groups created in Automation Hub can range from system administrators responsible for governing internal collections, configuring user access, and repository management to groups with access to organize and upload internally developed content to Automation Hub.

Default user access

When you install Automation hub, the default admin user is created in the Admin group. This group is assigned all permissions in the system.

Getting started

Log in to your local Automation Hub using credentials for the admin user configured during installation.

The following sections describe the workflows associated with organizing your users who will access Automation Hub and providing them with required permissions to reach their goals. See the permissions reference table for a full list and description of all permissions available.

Creating a new group

You can create and assign permissions to a group in automation hub that enables users to access specified features in the system. By default, there is an admins group in automation hub that has all permissions assigned and is available on initial login with credentials created when installing automation hub.

Prerequisites
  • You have groups permissions and can create and manage group configuration and access in automation hub.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[Groups].

  3. Click btn:[Create].

  4. Provide a Name and click btn:[Create].

You can now assign permissions and add users on the group edit page.

Assigning permissions to groups

You can assign permissions to groups in automation hub that enable users to access specific features in the system. By default, new groups do not have any assigned permissions. You can add permissions upon initial group creation or edit an existing group to add or remove permissions.

Prerequisites
  • You have Change group permissions and can edit group permissions in automation hub.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[Roles].

  3. Click btn:[Add roles].

  4. Click in the name field and fill in the role name.

  5. Click in the description field and fill in the description.

  6. Complete the Permissions section.

  7. Click in the field for each permission type and select permissions that appear in the list.

  8. Click btn:[Save] when finished assigning permissions.

  9. Navigate to menu:User Access[Groups].

  10. Click on a group name.

  11. Click on the Access tab.

  12. Click btn:[Add roles].

  13. Select the role created in step 8.

  14. Click btn:[Next] to confirm the selected role.

  15. Click btn:[Add] to complete adding the role.

The group can now access features in automation hub associated with their assigned permissions.

Creating a new user

You can create a user in Automation Hub and add them to groups that can access features in the system associated by the level of assigned permissions.

Prerequisites
  • You have user permissions and can create users in Automation Hub.

Procedure
  1. Log in to your local Automation Hub.

  2. Navigate to menu:User Access[].

  3. Click btn:[Create user].

  4. Provide information in each of the fields. Username and Password are required.

  5. [Optional] Assign the user to a group by clicking in the Groups field and selecting from the list of groups.

  6. Click btn:[Save].

The new user will now appear in the list on the Users page.

Creating a super user

You can create a super user in automation hub and spread administration work across your team.

Prerequisites
  • You have Super user permissions and can create users in automation hub.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[].

  3. Click btn:[Users].

  4. Select the user you want to be a super user to see the User details page.

  5. Select Super User under User type.

The user now has Super user permissions.

Adding users to groups

You can add users to groups when creating a group or manually add users to existing groups. This section describes how to add users to an existing group.

Prerequisites
  • You have groups permissions and can create and manage group configuration and access in automation hub.

Procedure
  1. Log in to automation hub.

  2. Navigate to menu:User Access[Groups].

  3. Click on a Group name.

  4. Navigate to the menu:Users[] tab, then click btn:[Add].

  5. Select users to add from the list and click btn:[Add].

You have added the users you selected to the group. These users now have permissions to use automation hub assigned to the group.

Creating a new group for content curators

You can create a new group in automation hub designed to support content curation in your organization that contributes internally developed collections for publication in automation hub.

This section shows you how to create a new group and assign the required permissions to help content developers create namespaces and upload their collections to automation hub.

Prerequisites
  • You have administrative permissions in automation hub and can create groups.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[Groups] and click btn:[Create].

  3. Enter Content Engineering as a Name for the group in the modal and click btn:[Create]. You have created the new group and the Groups page appears.

  4. On the Permissions tab, click btn:[Edit].

  5. Under Namespaces, add permissions for Add Namespace, Upload to Namespace and Change Namespace.

  6. Click btn:[Save].

    The new group is created with the permissions you assigned. You can then add users to the group.

  7. Click the Users tab on the Groups page.

  8. Click btn:[Add].

  9. Select users from the modal and click btn:[Add].

Conclusion

You now have a new group who can use automation hub to:

  • Create a namespace.

  • Edit the namespace details and resources page.

  • Upload internally developed collections to the namespace.

Automation Hub permissions

Permissions provide a defined set of actions each group performs on a given object. Determine the required level of access for your groups based on the following permissions:

Table 9. Permissions Reference Table
Object Permission Description

collection namespaces

Add namespace

Upload to namespace

Change namespace

Delete namespace

Groups with these permissions can create, upload collections, or delete a namespace.

collections

Modify Ansible repo content

Delete collections

Groups with this permission can move content between repositories using the Approval feature, certify or reject collections to move them from the staging repository to the published or rejected repositories, and delete collections.

users

View user

Delete user

Add user

Change user

Groups with these permissions can manage user configuration and access in automation hub.

groups

View group

Delete group

Add group

Change group

Groups with these permissions can manage group configuration and access in automation hub.

collection remotes

Change collection remote

View collection remote

Groups with these permissions can configure a remote repository by navigating to menu:Collections[Repo Management].

containers

Change container namespace permissions

Change containers

Change image tags

Create new containers

Push to existing containers

Delete container repository

Groups with these permissions can manage container repositories in automation hub.

remote registries

Add remote registry

Change remote registry

Delete remote registry

Groups with these permissions can add, change, or delete remote registries added to automation hub.

task management

Change task

Delete task

View all tasks

Groups with these permissions can manage tasks added to Task Management in automation hub.

Deleting a user from automation hub

When you delete a user account, the name and email of the user are permanently removed from automation hub.

Prerequisites
  • You have user permissions in automation hub.

Procedure
  1. Log in to automation hub.

  2. Navigate to menu:User Access[].

  3. Click btn:[Users] to display a list of the current users.

  4. Click the btn:[More Actions] icon beside the user that you want to remove, then click btn:[Delete].

  5. Click btn:[Delete] in the warning message to permanently delete the user.

Synchronizing repositories in automation hub

You can distribute relevant automation content collections to your users by synchronizing repositories from one automation hub to another. You should periodically synchronize your custom repository with the remote to ensure you have the latest collection updates.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click Sync.

    All collections in the configured remote are downloaded to your custom repository. To check the status of the collection sync, select menu:Task Management[] from the Navigation panel.

    Note

    To limit repository synchronization to specific collections within a remote, you can identify specific collections to be pulled using a requirements.yml file. See Create a remote for more information.

Additional resources

For more information about using requirements files, see Install multiple collections with a requirements file in the Galaxy User Guide.

Remote management in automation hub

You can set up remote configurations to any server that is running automation hub. Remote configurations allow you to sync content to your custom repositories from an external collection source.

Creating a remote configuration in automation hub

You can use Red Hat Ansible Automation Platform to create a remote configuration to an external collection source and sync the content from those collections to your custom repositories.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Remotes].

  3. Click btn:[Add Remote].

  4. Enter a Name for the remote configuration.

  5. Enter the URL for the remote server, including the path for the specific repository.

    Note

    You can obtain this information by navigating to menu:Automation Hub[Repositories], selecting your repository, and clicking btn:[Copy CLI configuration].

  6. Configure the credentials to the remote server by entering a Token or Username and Password required to access the external collection.

    Note

    A Token can be generated by navigating to menu:Automation Hub[API token], clicking btn:[Load token] and copying the token that is loaded.

  7. To access collections from console.redhat.com, enter the SSO URL to sign in to the identity provider (IdP).

  8. Select or create a YAML requirements file to identify the collections and version ranges to synchronize with your custom repository. For example, to download the community.kubernetes collection and only those community.aws collection versions 5.0.0 or later, the requirements file would look like this:

    collections:
      - name: community.kubernetes
      - name: community.aws
        version: ">=5.0.0"
    Note

    All collection dependencies are automatically downloaded during the Sync process.

  9. To configure your remote further, use the options available under Advanced configuration:

    1. If there is a corporate proxy in place for your organization, enter a Proxy URL, Proxy Username and Proxy Password.

    2. Enable or disable transport layer security using the TLS validation checkbox.

    3. If digital certificates are required for authentication, enter a Client key and Client certificate.

    4. If you are using a self-signed SSL certificate for your server, enter the PEM encoded client certificate used for authentication in the CA certificate field.

    5. To accelerate the speed at which collections in this remote can be downloaded, specify the number of collections that can be downloaded in tandem in the Download concurrency field.

    6. To limit the number of queries per second on this remote, specify a Rate Limit.

      Note

      Some servers can have a specific rate limit set and if exceeded, synchronization will fail.

Providing access to a remote configuration

After a remote configuration is created, you can provide access to it by doing the following.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Remotes].

  3. Locate your repository in the list and click the btn:[More Actions] icon , then select Edit.

  4. Select the Access tab.

  5. Select a group for Repository owners. See Configuring user access for your local Automation Hub for information about implementing user access.

  6. Select the appropriate roles for the selected group.

  7. Click btn:[Save].

Managing your private automation hub container registry

Manage container image repositories in your Ansible Automation Platform infrastructure using the automation hub container registry. Automation hub provides features to govern who can access individual container repositories, change tags on images, view activity and image layers, and provide additional information related to each container repository.

Container registries

The automation hub container registry is used for storing and managing container images. Once you have built or sourced a container image, you can push that container image to the registry portion of private automation hub to create a container repository.

Next steps
  • Push a container image to the automation hub container registry.

  • Create a group with access to the container repository in the registry.

  • Add the new group to the container repository.

  • Add a README to the container repository to provide users with information and relevant links.
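
For example, the first of these steps, pushing a container image to the automation hub container registry, might look like the following sketch; the hub hostname and image name are hypothetical:

# Authenticate to the private automation hub container registry
$ podman login hub.example.com

# Tag a locally built image for the hub registry, then push it to create the container repository
$ podman tag localhost/custom-ee:latest hub.example.com/custom-ee:latest
$ podman push hub.example.com/custom-ee:latest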

Installing a high availability automation hub

Configure the Ansible Automation Platform installer to install automation hub in a highly available (HA) configuration. Install HA automation hub on SELinux by creating mount points and adding the appropriate SELinux contexts to your Ansible Automation Platform environment.

Highly available automation hub installation

Install a highly available automation hub by making the following changes to the inventory file in the Ansible Automation Platform installer, then running the ./setup.sh script:

Specify database host IP

Specify the IP address for your database host, using the automationhub_pg_host and automationhub_pg_port inventory variables. For example:

automationhub_pg_host='192.0.2.10'
automationhub_pg_port='5432'

Also specify the IP address for your database host in the [database] section, using the value in the automationhub_pg_host inventory variable:

[database]
192.0.2.10
List all instances in a clustered setup

If installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP of all instances. For example:

[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18
automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20
automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22
Red Hat Single Sign-On requirements

If you are implementing Red Hat Single Sign-On on your automation hub environment, specify the main automation hub URL that clients will connect to, using the automationhub_main_url inventory variable. For example:

automationhub_main_url = 'https://automationhub.ansible.com'
Post installation

Check to ensure the directives below are present in /etc/pulp/settings.py in each of the Private Automation Hub servers:

USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
Note
If automationhub_main_url is not specified, the first node in the [automationhub] group is used as the default.

Install a high availability (HA) deployment of automation hub on SELinux

To set up a high availability (HA) deployment of automation hub on SELinux, create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, then assign the appropriate SELinux contexts to each. You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.

Prerequisites
  • You have already configured an NFS export on your server.

Pre-installation procedure
  1. Create a mount point at /var/lib/pulp:

    $ mkdir /var/lib/pulp/
  2. Open /etc/fstab using a text editor, then add the following values:

    srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache 0 0
    srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0
  3. Run the following command:

    $ systemctl daemon-reload
  4. Run the mount command for /var/lib/pulp:

    $ mount /var/lib/pulp
  5. Create a mount point at /var/lib/pulp/pulpcore_static:

    $ mkdir /var/lib/pulp/pulpcore_static
  6. Run the mount command:

    $ mount -a
  7. With the mount points set up, run the Ansible Automation Platform installer:

    $ setup.sh -- -b --become-user root

Once the installation is complete, unmount the /var/lib/pulp/ mount point then apply the appropriate SELinux context:

Post-installation procedure
  1. Shut down the Pulp service:

    $ systemctl stop pulpcore.service
  2. Unmount /var/lib/pulp/pulpcore_static:

    $ umount /var/lib/pulp/pulpcore_static
  3. Unmount /var/lib/pulp/:

    $ umount /var/lib/pulp/
  4. Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following:

    srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0
  5. Run the mount command:

    $ mount -a
Configure pulpcore.service:
  1. With the two mount points set up, shut down the Pulp service to configure pulpcore.service:

    $ systemctl stop pulpcore.service
  2. Edit pulpcore.service using systemctl:

    $ systemctl edit pulpcore.service
  3. Add the following entry to pulpcore.service to ensure that automation hub services start only after starting the network and mounting the remote mount points:

    [Unit]
    After=network.target var-lib-pulp.mount
  4. Enable remote-fs.target:

    $ systemctl enable remote-fs.target
  5. Reboot the system:

    $ systemctl reboot
Troubleshooting

A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:

$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem
Note
You must repeat this command to reattach the proper SELinux labels whenever you relabel your system.

Using namespaces to manage collections in automation hub

You can use namespaces in automation hub to organize collections developed within your organization for internal distribution and use.

Working with namespaces requires a group that has permissions to create, edit and upload collections to namespaces. Collections uploaded to a namespace can require administrative approval before you can publish them and make them available for use.

About namespaces

Namespaces are unique locations in automation hub to which you can upload and publish content collections. Access to namespaces in automation hub is governed by groups with permission to manage the content and related information that appears there.

Formatting collections for your namespaces

You can upload internally developed collections to automation hub in tar.gz file format. The file name must meet the following naming convention:

<my_namespace-my_collection-1.0.0.tar.gz>
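
A collection archive in this format is produced by the ansible-galaxy tooling. For example, assuming a hypothetical my_namespace/my_collection source directory that contains a galaxy.yml file:

# Build the collection archive from its source directory
$ cd my_namespace/my_collection
$ ansible-galaxy collection build
# Produces my_namespace-my_collection-1.0.0.tar.gz (the version comes from galaxy.yml)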

Creating a new group for content curators

You can create a new group in automation hub designed to support content curation in your organization that contributes internally developed collections for publication in automation hub.

This section shows you how to create a new group and assign the required permissions to help content developers create namespaces and upload their collections to automation hub.

Prerequisites
  • You have administrative permissions in automation hub and can create groups.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:User Access[Groups] and click btn:[Create].

  3. Enter Content Engineering as a Name for the group in the modal and click btn:[Create]. You have created the new group and the Groups page appears.

  4. On the Permissions tab, click btn:[Edit].

  5. Under Namespaces, add permissions for Add Namespace, Upload to Namespace and Change Namespace.

  6. Click btn:[Save].

    The new group is created with the permissions you assigned. You can then add users to the group.

  7. Click the Users tab on the Groups page.

  8. Click btn:[Add].

  9. Select users from the modal and click btn:[Add].

Conclusion

You now have a new group who can use automation hub to:

  • Create a namespace.

  • Edit the namespace details and resources page.

  • Upload internally developed collections to the namespace.

Creating a namespace

You can create a namespace to organize collections that your content developers upload to automation hub. When creating a namespace, you can assign a group in automation hub as owners of that namespace.

Prerequisites
  • You have Add Namespaces and Upload to Namespaces permissions.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:Automation Hub[Namespaces].

  3. Click btn:[Create] and provide a namespace name and assign a group of Namespace owners.

  4. Click btn:[Create].

Your content developers can now proceed to upload collections to your new namespace, or allow users in groups assigned as owners to upload collections.

Adding additional information and resources to a namespace

You can add information and provide resources for your users to accompany collections included in the namespace. Add a logo and a description, and link users to your GitHub repository, issue tracker, or other online assets. You can also enter markdown text in the Edit resources tab to include more information. This is helpful to end users who use your collection in their automation tasks.

Prerequisites
  • You have Change Namespaces permissions.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:Automation Hub[Namespaces].

  3. Click the btn:[More Actions] icon and select Edit namespace.

  4. In the Edit details tab, provide information in the fields to enhance your namespace experience.

  5. Click the edit resources tab to enter markdown in the text field.

  6. Click btn:[Save] when finished.

Your content developers can now proceed to upload collections to your new namespace, or allow users in groups assigned as owners to upload collections.

When you create a namespace, groups with permissions to upload to it can start adding their collections for approval. Collections in the namespace will appear in the Published repository after approval.

Uploading collections to your namespaces

You can upload internally developed collections to your local automation hub namespace for review and approval by an automation hub administrator. When approved, the collection moves to the Published content repository where automation hub users can view and download it.

Note

Format your collection file name as follows: <namespace>-<collection_name>-<version>.tar.gz

Prerequisites
  • You have a namespace to which you can upload the collection.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:Automation Hub[Namespaces] and select a namespace.

  3. Click btn:[Upload collection].

  4. Click btn:[Select file] from the New collection modal.

  5. Select the collection to upload.

  6. Click btn:[Upload].

The My Imports screen provides a summary of tests and notifies you if the collection uploaded successfully or failed.
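
As an alternative to the upload modal, you can publish a built collection archive from the command line with ansible-galaxy. This is a sketch only; the hub URL and token are hypothetical, and the server URL may need to point at the inbound repository for your namespace:

# Publish a collection archive to your private automation hub
$ ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz \
    --server https://hub.example.com/api/galaxy/ \
    --token <API token>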

Reviewing your namespace import logs

You can review the status of collections uploaded to your namespaces to evaluate success or failure of the process.

Imported collections information includes:

Status

completed or failed

Approval status

waiting for approval or approved

Version

the version of the uploaded collection

Import log

activities executed during the collection import

Prerequisites
  • You have access to a namespace to which you can upload collections.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:Automation Hub[Namespaces].

  3. Select a namespace.

  4. Click the btn:[More Actions] icon and select My imports.

  5. Use the search field or locate an imported collection from the list.

  6. Click on the imported collection.

Review collection import details to determine the status of the collection in your namespace.

Deleting a namespace

You can delete unwanted namespaces to manage storage on your automation hub server. To do so, ensure that the namespace does not contain a collection with dependencies.

Prerequisites
  • The namespace you are deleting does not have a collection with dependencies.

  • You have Delete namespace permissions.

Procedure
  1. Log in to your local automation hub.

  2. Navigate to menu:Collections[Namespaces].

  3. Click the namespace to be deleted.

  4. Click the btn:[More Actions] icon , then click btn:[Delete namespace].

    Note
    If the btn:[Delete namespace] button is disabled, it means that this namespace contains a collection with dependencies. Review the collections in this namespace and delete any dependencies to proceed with the namespace deletion. See Deleting a collection on automation hub for information about deleting collections.

The namespace that you deleted, as well as its associated collections, is now deleted and removed from the namespace list view.

Private automation hub

Ansible automation hub is the central repository for certified collections, and functions as the main source of trusted, tested, and supported content.

With private automation hub, automation developers can collaborate and publish their own automation content and deliver Ansible code more easily within their organization. It is also the central repository for Ansible validated content, which is not supported, but is trusted and tested by Red Hat and our partners.

Required shared filesystem

A high availability automation hub requires you to have a shared file system, such as NFS, already installed in your environment. Before you run the Red Hat Ansible Automation Platform installer, verify that the /var/lib/pulp directory is available across your cluster as part of the shared file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected on one of your nodes, causing your high availability automation hub setup to fail.

Setting up the shared filesystem

You must mount the shared file system on each automation hub node:

Procedure
  1. Create the /var/lib/pulp directory.

    # mkdir /var/lib/pulp
  2. Mount the shared filesystem (this reference environment uses an NFS share).

    # mount -t nfs4 <nfs_share_ip_address>:/ /var/lib/pulp
  3. Confirm that the shared filesystem is successfully mounted:

    $ df -h

Enabling firewall services

Because a shared filesystem is required as part of a highly available Ansible automation hub environment, the following firewall services must be enabled to ensure that the filesystem is mounted successfully.

On each Ansible automation hub node, you must:

  1. Ensure the following firewalld services (nfs, mountd, rpc-bind) are enabled.

    # firewall-cmd --zone=public --add-service=nfs
    # firewall-cmd --zone=public --add-service=mountd
    # firewall-cmd --zone=public --add-service=rpc-bind
  2. Reload firewalld for changes to take effect.

    # firewall-cmd --reload
  3. Verify the firewalld services are enabled.

    # firewall-cmd --get-services

Repository management with automation hub

As an automation hub administrator, you can create, edit, delete, and move automation content collections between repositories.

Automation hub includes two types of repositories where you can publish collections:

Staging repositories

Any user with permission to upload to a namespace can publish collections into these repositories. Collections in these repositories are not available in the search page, but rather, are displayed on the approval dashboard for an administrator to verify.

Custom repositories

Any user with write permissions on the repository can publish collections to these repositories. Custom repositories can be private repositories where only users with view permissions can see them, or public where all users can see them. These repositories are not displayed on the approval dashboard. If search is enabled by the repository owner, they can appear in search results.

Staging repositories are marked with the pipeline=staging label. By default, automation hub ships with one staging repository that is automatically used when a repository is not specified for uploading collections. However, users can create new staging repositories during repository creation.

Approval pipeline for custom repositories in automation hub

Automation hub allows you to approve collections into any repository that is marked with the pipeline=approved label. By default, automation hub ships with one repository for approved content, but you have the option to add more from the repository creation screen. Repositories marked with this label are not eligible for direct publishing and collections must come from one of the staging repositories.

Auto approval

When auto approve is enabled, any collection uploaded to a staging repository is automatically promoted to all of the repositories marked as pipeline=approved.

Approval required

From the approval dashboard, the administrator can see collections that have been uploaded into any of the staging repositories. Clicking btn:[Approve] displays a list of approved repositories. From this list, the administrator can select one or more repositories to which the content should be promoted.

If only one approved repository exists, the collection is automatically promoted into it and the administrator is not prompted to select a repository.

Rejection

Rejected collections are automatically placed into the pre-installed rejected repository.

Role Based Access Control

Role Based Access Control (RBAC) restricts user access to custom repositories based on their defined role. By default, users can view all public repositories in their automation hub, but they cannot modify them unless they have explicit permission to do so. This is the same for other operations on the repository. For example, removing a user’s rights revokes their ability to download content from a custom repository. See Configuring user access for your local automation hub for information about managing user access in automation hub.

Creating a custom repository in automation hub

You can use Red Hat Ansible Automation Platform to create a repository and configure it to make it private or hide it from search results.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Click btn:[Add repository].

  4. Enter a Repository name.

  5. Enter a Description that indicates the purpose of the repository.

  6. To retain previous versions of your repository each time you make a change, select Retained number of versions. The number of retained versions can range anywhere between 0 and unlimited. To save all versions, leave this set to null.

    Note

    If you have a problem with a change to your custom repository, you can revert to a different repository version that you have retained.

  7. Select the Pipeline for the repository. This option defines who can publish a collection into the repository.

    Staging

    Anyone is allowed to publish automation content into the repository.

    Approved

    Collections added to this repository are required to go through the approval process by way of the staging repository. When auto approve is enabled, any collection uploaded to a staging repository is automatically promoted to all of the approved repositories.

    None

    Any user with permissions on the repository can publish to the repository directly and it is not part of the approval pipeline.

  8. Optional: To hide the repository from search results, select Hide from search. This is selected by default.

  9. Optional: To make the repository private, select Make private. This hides the repository from anyone who does not have permissions to view the repository.

  10. To sync the content from a remote into this repository, select Remote and select the remote that contains the collections you want included in your custom repository. For more information, see Repository sync.

  11. Click btn:[Save].

Next steps
  • After the repository is created, the details page is displayed.

    From here, you can provide access to your repository, review or add collections, and work with the saved versions of your custom repository.

Providing access to a custom automation hub repository

By default, private repositories and the automation content collections are hidden from all users in the system. Public repositories can be viewed by all users, but cannot be modified. Use this procedure to provide access to your custom repository.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click the btn:[More Actions] icon , then select Edit.

  4. Select the Access tab.

  5. Select a group for Repository owners.

    See Configuring user access for your local automation hub for information about implementing user access.

  6. Select the roles you want assigned for the selected group.

  7. Click btn:[Save].

Adding collections to an automation hub repository

After you create your repository, you can begin adding automation content collections to it.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click the btn:[More Actions] icon , then select Edit.

  4. Select the Collections version tab.

  5. Click btn:[Add Collection] and select the collections you want added to your repository.

  6. Click btn:[Select].

Revert to a different automation hub repository version

When automation content collections are added or removed from a repository, a new version is created. If there are issues with a change to your repository, you can revert to a previous version. Reverting is a safe operation and does not delete collections from the system, but rather, changes the content associated with the repository. The number of versions saved is defined in the Retained number of versions setting when a repository is created.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click the btn:[More Actions] icon , then select Edit.

  4. Locate the version you want to roll back to and click the btn:[More Actions] icon , then select Revert to this version.

  5. Click btn:[Revert].

Configuring Ansible automation hub remote repositories to synchronize content

You can configure your private automation hub to synchronize with Ansible Certified Content Collections hosted on console.redhat.com or to your choice of collections in Ansible Galaxy, using remote configurations.

Important

As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version.

From Ansible Automation Platform 2.4 a private automation hub administrator can go to the rh-certified remote and upload a manually-created requirements file.

Remotes are configurations that allow you to synchronize content to your custom repositories from an external collection source.

Reasons to create remote configurations

Each remote configuration located in menu:Automation Hub[Remotes] provides information for both the community and rh-certified repository about when the repository was last updated. You can add new content to Ansible automation hub at any time using the Edit and Sync features included on the menu:Automation Hub[Repositories] page.

Retrieving the API token for your Red Hat Certified Collection

You can synchronize Ansible Certified Content Collections curated by your organization from console.redhat.com to your private automation hub.

Prerequisites
  • You have organization administrator permissions to create the synclist on console.redhat.com.

Procedure
  1. Log in to console.redhat.com as an organization administrator.

  2. Navigate to menu:Automation Hub[Connect to Hub].

  3. Under Offline token, click btn:[Load token].

  4. Click btn:[Copy to clipboard] to copy the API token.

  5. Paste the API token into a file and store in a secure location.

Important

The API token is a secret token used to protect your content.

Configuring the rh-certified remote repository and synchronizing Red Hat Ansible Certified Content Collections

You can edit the rh-certified remote repository to synchronize collections from automation hub hosted on console.redhat.com to your private automation hub. By default, your private automation hub rh-certified repository includes the URL for the entire group of Ansible Certified Content Collections.

To use only those collections specified by your organization, a private automation hub administrator can upload manually-created requirements files from the rh-certified remote.

For more information about using requirements files, see Install multiple collections with a requirements file in the Ansible Galaxy User Guide.

If you have collections A, B, and C in your requirements file, and a new collection X is added to console.redhat.com that you want to use, you must add X to your requirements file for private automation hub to synchronize it.

Prerequisites
  • You have valid Modify Ansible repo content permissions. See Managing user access in Automation Hub for more information on permissions.

  • You have retrieved the Sync URL and API Token from the automation hub hosted service on console.redhat.com.

  • You have configured access to port 443. This is required for synchronizing certified collections. For more information, see the automation hub table in the Network ports and protocols chapter of the Red Hat Ansible Automation Platform Planning Guide.

Procedure
  1. Log in to your private automation hub.

  2. Navigate to menu:Automation Hub[Remotes].

  3. In the rh-certified remote repository, click the btn:[More Actions] icon and click btn:[Edit].

  4. In the modal, paste the Sync URL and Token you acquired from console.redhat.com.

  5. Click btn:[Save].

    The modal closes and returns you to the Remotes page. You can now synchronize collections between your organization synclist on console.redhat.com and your private automation hub.

  6. Click the btn:[More Actions] icon and select Sync.

The Sync status notification updates to notify you of completion of the Red Hat Certified Content Collections synchronization.

Verification
  • Select Red Hat Certified from the collections content drop-down list to confirm that your collections content has synchronized successfully.

Configuring the community remote repository and syncing Ansible Galaxy collections

You can edit the community remote repository to synchronize chosen collections from Ansible Galaxy to your private automation hub. By default, your private automation hub community repository directs to galaxy.ansible.com/api/.

Prerequisites
  • You have Modify Ansible repo content permissions. See Managing user access in Automation Hub for more information on permissions.

  • You have a requirements.yml file that identifies those collections to synchronize from Ansible Galaxy as in the following example:

requirements.yml example
collections:
  # Install a collection from Ansible Galaxy.
  - name: community.aws
    version: 5.2.0
    source: https://galaxy.ansible.com
Procedure
  1. Log in to your Ansible automation hub.

  2. Navigate to menu:Automation Hub[Remotes].

  3. In the Community remote, click the btn:[More Actions] icon and select Edit.

  4. In the modal, click btn:[Browse] and locate the requirements.yml file on your local machine.

  5. Click btn:[Save].

    The modal closes and returns you to the Remotes page. You can now synchronize collections identified in your requirements.yml file from Ansible Galaxy to your private automation hub.

  6. Click the btn:[More Actions] icon and select Sync to sync collections from Ansible Galaxy and Ansible automation hub.

The Sync status notification updates to notify you of completion or failure of Ansible Galaxy collections synchronization to your Ansible automation hub.

Verification
  • Select Community from the collections content drop-down list to confirm successful synchronization.

Frequently asked questions about Red Hat Ansible Certified Content

The following is a list of Frequently Asked Questions for the Red Hat Ansible Automation Platform Certification Program. If you have any questions regarding the following items, email ansiblepartners@redhat.com.

Why certify Ansible collections?

The Ansible certification program enables a shared statement of support for Red Hat Ansible Certified Content between Red Hat and the ecosystem partner. An end customer experiencing trouble with Ansible and certified partner content can open a support ticket with Red Hat, for example, a request for information or a problem report, and expect the ticket to be resolved by Red Hat and the ecosystem partner.

Red Hat offers go-to-market benefits for Certified Partners to grow market awareness, demand generation and collaborative selling.

Red Hat Ansible Certified Content Collections are distributed through Ansible automation hub (subscription required), a centralized repository for jointly supported Ansible Content. As a certified partner, publishing collections to Ansible automation hub provides end customers the power to manage how trusted automation content is used in their production environment with a well-known support life cycle.

For more information about getting started with certifying a solution, see Red Hat Partner Connect.

How do I get a collection certified?

Refer to Red Hat Partner Connect for the Ansible certification policy guide to understand how to certify your collection.

What’s the difference between Ansible Galaxy and Ansible automation hub?

Collections published to Ansible Galaxy are the latest content published by the Ansible community and have no joint support claims associated with them. Ansible Galaxy is the recommended front-end directory for the Ansible community to access all content.

Collections published to Ansible automation hub are targeted at joint customers of Red Hat and selected partners. Customers need an Ansible subscription to access and download collections on Ansible automation hub. A certified collection means that Red Hat and the partner have a strategic relationship in place and are ready to support joint customers, and the collection may have had additional testing and validation done against it.

How do I request a namespace on Ansible Galaxy?

After you request a namespace through an Ansible Galaxy GitHub issue, send an email to ansiblepartners@redhat.com. You must provide the GitHub username that you used to sign up on Ansible Galaxy, and you must have logged in at least once for the system to validate it. After a user is added as an administrator of the namespace, additional administrators can be added through the self-serve process.

Are there any restrictions for Ansible Galaxy namespace naming?

Collection namespaces must follow the Python module naming convention. This means collections should have short, all-lowercase names. You can use underscores in the collection name if it improves readability.

Are there any recommendations for collection naming?

A general suggestion is to create a collection with the company_name.product format. This way, multiple products can have different collections under the company namespace.

How do I get a namespace on Ansible automation hub?

By default, namespaces used on Ansible Galaxy are also used on Ansible automation hub by the Ansible partner team. For any queries or clarifications, contact ansiblepartners@redhat.com.

How do I run sanity tests on my collection?

Ansible sanity tests are made up of scripts and tools used to perform static code analysis. The primary purpose of these tests is to enforce Ansible coding standards and requirements. Ansible collections must be in a specific path, such as the following example:

{...}/ansible_collections/{namespace}/{collection}/

Ensure that your collection is in that specific path, and that you have three directories:

  • An empty directory named ansible_collections

  • A directory for the namespace

  • A directory for the collection itself
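For example, one common way to run the sanity tests from inside the collection directory is with ansible-test; the --docker default option, which runs the tests in a controlled container environment, is one typical choice (the {namespace} and {collection} placeholders mirror the path shown above):

$ cd {...}/ansible_collections/{namespace}/{collection}/
$ ansible-test sanity --docker default -v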

Does Ansible Galaxy house the source code for my collection?

No, Ansible Galaxy does not house the source for the collections. The actual collection source must be housed outside of Ansible Galaxy, for example, in GitHub. Ansible Galaxy contains the collection build tarball to publish the collection. You can include the link to the source for community users in the galaxy.yml file contained in the collection. This shows users where they should go if they want to contribute to the collection or even file issues against it.

Does Red Hat officially support collections downloaded and installed from Ansible Galaxy?

No, collections downloaded from Galaxy do not have any support claims associated and are 100% community supported. Users and contributors of any such collection must contact the collection developers directly.

How does the joint support agreement on certified collections work?

If a customer raises an issue with the Red Hat support team about a certified collection, Red Hat support assesses the issue and checks whether the problem exists within Ansible or Ansible usage. They also check whether the issue is with a certified collection. If there is a problem with the certified collection, support teams transfer the issue to the vendor owner of the certified collection through an agreed upon tool such as TSANet.

Can I create and certify a collection containing only Ansible Roles?

You can create and certify collections that contain only roles. Current testing requirements are focused on collections containing modules, and additional resources are currently in progress for testing collections only containing roles. Please contact ansiblepartners@redhat.com for more information.

Basic remote management

With basic remote management, you can create a remote configuration to an external collection source and sync the content from those collections to your custom repositories.

Creating a remote configuration in automation hub

You can use Red Hat Ansible Automation Platform to create a remote configuration to an external collection source and sync the content from those collections to your custom repositories.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Remotes].

  3. Click btn:[Add Remote].

  4. Enter a Name for the remote configuration.

  5. Enter the URL for the remote server, including the path for the specific repository.

    Note

    You can obtain this information by navigating to menu:Automation Hub[Repositories], selecting your repository, and clicking btn:[Copy CLI configuration].

  6. Configure the credentials to the remote server by entering a Token or Username and Password required to access the external collection.

    Note

    A Token can be generated by navigating to menu:Automation Hub[API token], clicking btn:[Load token] and copying the token that is loaded.

  7. To access collections from console.redhat.com, enter the SSO URL to sign in to the identity provider (IdP).

  8. Select or create a YAML requirements file to identify the collections and version ranges to synchronize with your custom repository. For example, to download the community.kubernetes collection and only versions 5.0.0 or later of the community.aws collection, the requirements file would look like this:

    collections:
      - name: community.kubernetes
      - name: community.aws
        version: ">=5.0.0"
    Note

    All collection dependencies are automatically downloaded during the Sync process.

  9. To configure your remote further, use the options available under Advanced configuration:

    1. If there is a corporate proxy in place for your organization, enter a Proxy URL, Proxy Username and Proxy Password.

    2. Enable or disable transport layer security using the TLS validation checkbox.

    3. If digital certificates are required for authentication, enter a Client key and Client certificate.

    4. If the remote server uses a self-signed SSL certificate, enter the PEM-encoded certificate authority (CA) certificate that signed it in the CA certificate field.

    5. To accelerate the speed at which collections in this remote can be downloaded, specify the number of collections that can be downloaded in tandem in the Download concurrency field.

    6. To limit the number of queries per second on this remote, specify a Rate Limit.

      Note

      Some servers enforce a specific rate limit, and synchronization fails if the limit is exceeded.

Providing access to a remote configuration

After a remote configuration is created, you can provide access to it by doing the following.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Remotes].

  3. Locate your remote in the list and click the btn:[More Actions] icon, then select Edit.

  4. Select the Access tab.

  5. Select a group for Repository owners. See Configuring user access for your local Automation Hub for information about implementing user access.

  6. Select the appropriate roles for the selected group.

  7. Click btn:[Save].

Populating your private automation hub container registry

By default, private automation hub does not include container images. To populate your container registry, you need to push a container image to it. The procedures in this section describe how to pull images from the Red Hat Ecosystem Catalog (registry.redhat.io), tag them, and push them to your private automation hub container registry.

Important

Image manifests and filesystem blobs were both originally served directly from registry.redhat.io and registry.access.redhat.com. However, as of 1 May 2023, filesystem blobs are served from quay.io instead. To avoid problems pulling container images, you must enable outbound connections to the following hostnames:

  • cdn.quay.io

  • cdn01.quay.io

  • cdn02.quay.io

  • cdn03.quay.io

This change should be made to any firewall configuration that specifically enables outbound connections to registry.redhat.io or registry.access.redhat.com.

Use the hostnames instead of IP addresses when configuring firewall rules.

After making this change you can continue to pull images from registry.redhat.io and registry.access.redhat.com. You do not require a quay.io login, or need to interact with the quay.io registry directly in any way to continue pulling Red Hat container images.

Prerequisites

  • You have permissions to create new containers and push containers to private automation hub.

Obtaining images for use in automation hub

Before you can push container images to your private automation hub, you must first pull them from an existing registry and tag them for use. This example details how to pull an image from the Red Hat Ecosystem Catalog (registry.redhat.io).

Prerequisites

You have permissions to pull images from registry.redhat.io.

Procedure
  1. Log in to Podman using your registry.redhat.io credentials:

    $ podman login registry.redhat.io
  2. Enter your username and password at the prompts.

  3. Pull a container image:

    $ podman pull registry.redhat.io/<container_image_name>:<tag>
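    For example, to pull a supported execution environment image (the image name below is illustrative; substitute the image and tag you need):

    $ podman pull registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest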
Verification
  1. List the images in local storage:

    $ podman images
  2. Verify that the image you recently pulled is contained in the list.

  3. Verify that the tag is correct.

Tagging images for use in automation hub

After you pull images from a registry, tag them for use in your private automation hub container registry.

Prerequisites
  • You have pulled a container image from an external registry.

  • You have the FQDN or IP address of the automation hub instance.

Procedure
  • Tag a local image with the automation hub container repository:

    $ podman tag registry.redhat.io/<container_image_name>:<tag> <automation_hub_URL>/<container_image_name>
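    For example, continuing with the image pulled earlier and a hypothetical automation hub at hub.example.com:

    $ podman tag registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest hub.example.com/ee-supported-rhel8:latest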
Verification
  1. List the images in local storage:

    $ podman images
  2. Verify that the image you recently tagged with your automation hub information is contained in the list.

Pushing a container image to private automation hub

You can push tagged container images to private automation hub to create new containers and populate the container registry.

Prerequisites
  • You have permissions to create new containers.

  • You have the FQDN or IP address of the automation hub instance.

Procedure
  1. Log in to Podman using your automation hub location and credentials:

    $ podman login -u=<username> -p=<password> <automation_hub_url>
  2. Push your container image to your automation hub container registry:

    $ podman push <automation_hub_url>/<container_image_name>
    Note

    The --remove-signatures flag is required when signed images from registry.redhat.io are pushed to the automation hub container registry. The push operation re-compresses image layers during the upload, which is not guaranteed to be reproducible and is client implementation dependent. This may lead to image-layer digest changes and a failed push operation, resulting in Error: Copying this image requires changing layer representation, which is not possible (image is signed or the destination specifies a digest).
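    For example, with the image tagged earlier and the hypothetical hub.example.com registry, the push including the flag looks like this:

    $ podman push --remove-signatures hub.example.com/ee-supported-rhel8:latest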

Verification
  1. Log in to your automation hub.

  2. Navigate to menu:Container Registry[].

  3. Locate the container in the container repository list.

Setting up your container repository

You can set up your container repository to add a description, include a README, add groups who can access the repository, and tag images.

Prerequisites

  • You are logged in to a private automation hub with permissions to change the repository.

Adding a README to your container repository

Add a README to your container repository to provide instructions to your users on how to work with the container. Automation hub container repositories support Markdown for creating a README. By default, the README is empty.

Prerequisites
  • You have permissions to change containers.

Procedure
  1. Navigate to menu:Execution Environments[].

  2. Select your container repository.

  3. On the Detail tab, click btn:[Add].

  4. In the Raw Markdown text field, enter your README text in Markdown.

  5. Click btn:[Save] when you are finished.

Once you add a README, you can edit it at any time by clicking btn:[Edit] and repeating steps 4 and 5.

Providing access to your container repository

Provide access to your container repository for users who need to work with the images. Adding a group allows you to modify the permissions that the group has for the container repository. You can use this option to extend or restrict permissions based on what the group is assigned.

Prerequisites
  • You have change container namespace permissions.

Procedure
  1. Navigate to menu:Execution Environments[].

  2. Select your container repository.

  3. Click btn:[Edit] at the top right of your window.

  4. Under Groups with access, select a group or groups to grant access to.

    • Optional: Add or remove permissions for a specific group using the drop down under that group name.

  5. Click btn:[Save].

Tagging container images

Tag images to add an additional name to images stored in your automation hub container repository. If no tag is added to an image, automation hub defaults to latest for the name.

Prerequisites
  • You have change image tags permissions.

Procedure
  1. Navigate to menu:Execution Environments[].

  2. Select your container repository.

  3. Click the Images tab.

  4. Click the btn:[More Actions] icon, then click btn:[Manage tags].

  5. Add a new tag in the text field and click btn:[Add].

    • Optional: Remove current tags by clicking btn:[x] on any of the tags for that image.

  6. Click btn:[Save].

Verification
  • Click the Activity tab and review the latest changes.

Creating a credential in automation controller

Previously, you were required to deploy a registry to store execution environment images. On Ansible Automation Platform 2.0 and later, it is assumed that you already have a container registry up and running. Therefore, you are only required to add the credentials of a container registry of your choice to store execution environment images.

To pull container images from a password or token-protected registry, create a credential in automation controller:

Procedure
  1. Navigate to automation controller.

  2. In the navigation pane, click menu:Resources[Credentials].

  3. Click btn:[Add] to create a new credential.

  4. Enter an authorization Name, Description, and Organization.

  5. Select the Credential Type.

  6. Enter the Authentication URL. This is the container registry address.

  7. Enter the Username and Password or Token required to log in to the container registry.

  8. Optionally, select Verify SSL to enable SSL verification.

  9. Click btn:[Save].

Managing the publication process of internal collections in Automation Hub

Use automation hub to manage and publish content collections developed within your organization. You can upload collections and group them in namespaces. Collections need administrative approval before they appear in the Published content repository. After you publish a collection, your users can access and download it for use.

Additionally, you can reject submitted collections that do not meet organizational certification criteria.

About Approval

You can manage uploaded collections in automation hub using the Approval feature located in the left navigation.

Approval Dashboard

By default, the Approval dashboard lists all collections with Needs Review status. You can check these for inclusion in your Published repository.

Viewing collection details

You can view more information about the collection by clicking the Version number.

Filtering collections

Filter collections by Namespace, Collection Name, or Repository to locate content and update its status.

Approving collections for internal publication

You can approve collections uploaded to individual namespaces for internal publication and use. All collections awaiting review are located under the Approval tab in the Staging repository.

Collections requiring approval have the status Needs review. Click the Version to view the contents of the collection.

Prerequisites
  • You have Modify Ansible repo content permissions.

Procedure
  1. From the sidebar, navigate to menu:Approval[].

  2. Select a collection to review.

  3. Click btn:[Certify] to approve the collection.

Approved collections are moved to the Published repository where users can view and download them for use.

Rejecting collections uploaded for review

You can reject collections uploaded to individual namespaces. All collections awaiting review are located under the Approval tab in the Staging repository.

Collections requiring approval have the status Needs review. Click the Version to view the contents of the collection.

Prerequisites
  • You have Modify Ansible repo content permissions.

Procedure
  1. From the sidebar, navigate to menu:Approval[].

  2. Locate the collection to review.

  3. Click btn:[Reject] to decline the collection.

Collections you decline for publication are moved to the Rejected repository.

Requirements for a high availability automation hub

Before deploying a high availability (HA) automation hub, ensure that you have a shared filesystem installed in your environment and that you have configured your network storage system, if applicable.

Required shared filesystem

A high availability automation hub requires you to have a shared file system, such as NFS, already installed in your environment. Before you run the Red Hat Ansible Automation Platform installer, verify that the /var/lib/pulp directory is mounted across your cluster as part of the shared file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected on one of your nodes, causing your high availability automation hub setup to fail.
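For example, a minimal sketch assuming an NFS export at nfs.example.com:/exports/pulp (the server name and export path are hypothetical): create the mount point on each automation hub node, add an /etc/fstab entry, and mount it before running the installer.

$ mkdir -p /var/lib/pulp

# /etc/fstab entry on each automation hub node
nfs.example.com:/exports/pulp  /var/lib/pulp  nfs  defaults  0 0

$ mount /var/lib/pulp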

Network Storage Installation Requirements

If you intend to install a HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the ports required by your shared storage system before running the Ansible Automation Platform installer.

Install and configure firewalld by executing the following commands:

  1. Install the firewalld daemon:

    $ dnf install firewalld
  2. Add your network storage under <service> using the following command:

    $ firewall-cmd --permanent --add-service=<service>
    Note
    For a list of supported services, use the $ firewall-cmd --get-services command.
  3. Reload to apply the configuration:

    $ firewall-cmd --reload
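For example, if your shared storage is provided over NFS (an assumption for illustration), the commands would be:

$ firewall-cmd --permanent --add-service=nfs
$ firewall-cmd --reload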

Connecting automation hub to a Red Hat Single Sign-On environment

To connect automation hub to a Red Hat Single Sign-On installation, configure inventory variables in the inventory file before you run the installer setup script.

You must configure a different set of variables when connecting to a Red Hat Single Sign-On installation managed by Ansible Automation Platform than when connecting to an external Red Hat Single Sign-On installation.

Inventory file variables for connecting automation hub to a Red Hat Single Sign-On instance

If you are installing automation hub and Red Hat Single Sign-On together for the first time or you have an existing Ansible Automation Platform managed Red Hat Single Sign-On, configure the variables for Ansible Automation Platform managed Red Hat Single Sign-On.

If you are installing automation hub and you intend to connect it to an existing externally managed Red Hat Single Sign-On instance, configure the variables for externally managed Red Hat Single Sign-On.

For more information about these inventory variables, refer to Ansible automation hub variables in the Red Hat Ansible Automation Platform Installation Guide.

The following variables can be configured for both Ansible Automation Platform managed and external Red Hat Single Sign-On:

  • sso_console_admin_password (Required)

  • sso_console_admin_username (Optional)

  • sso_use_https (Optional)

  • sso_redirect_host (Optional)

  • sso_ssl_validate_certs (Optional)

  • sso_automation_platform_realm (Optional)

  • sso_automation_platform_realm_displayname (Optional)

  • sso_automation_platform_login_theme (Optional)

The following variables can be configured for Ansible Automation Platform managed Red Hat Single Sign-On only:

  • sso_keystore_password (Required only if sso_use_https = true)

  • sso_custom_keystore_file (Optional)

  • sso_keystore_file_remote (Optional)

  • sso_keystore_name (Optional)

The following variable can be configured for external Red Hat Single Sign-On only:

  • sso_host (Required)
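As a minimal sketch, an inventory fragment for connecting automation hub to an externally managed Red Hat Single Sign-On instance might look like the following (the host name and password are placeholders; adjust the optional variables to your environment):

[all:vars]
sso_host = sso.example.com
sso_console_admin_username = admin
sso_console_admin_password = <sso_admin_password>
sso_use_https = true
sso_ssl_validate_certs = true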

Ansible validated content

Red Hat Ansible Automation Platform includes Ansible validated content, which complements existing Red Hat Ansible Certified Content.

Ansible validated content provides an expert-led path for performing operational tasks on a variety of platforms including both Red Hat and our trusted partners.

Configuring validated collections with the installer

When you download and run the bundle installer, certified and validated collections are automatically uploaded. Certified collections are uploaded into the rh-certified repository. Validated collections are uploaded into the validated repository.

You can change the default configuration by using two variables:

  • automationhub_seed_collections: A Boolean that defines whether or not preloading is enabled.

  • automationhub_collection_seed_repository: If automationhub_seed_collections is set to true, this variable enables you to specify the type of content to upload. Possible values are certified or validated. If this variable is missing, both content sets are uploaded.
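For example, to preload only validated content, you might set the following in the installer inventory (a sketch; adjust to your needs):

[all:vars]
automationhub_seed_collections = true
automationhub_collection_seed_repository = validated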

Installing validated content using the tarball

If you are not using the bundle installer, you can use the standalone tarball, ansible-validated-content-bundle-1.tar.gz, instead. You can also use this tarball later to update validated content in any environment when a newer tarball becomes available, without re-running the bundle installer.

To obtain the tarball, navigate to the Red Hat Ansible Automation Platform download page and select Ansible Validated Content.

You require the following variables to run the playbook.

  • automationhub_admin_password: Your administration password.

  • automationhub_api_token: The API token generated for your automation hub.

  • automationhub_main_url: For example, https://automationhub.example.com

  • automationhub_require_content_approval: Boolean (true or false). This must match the value used during automation hub deployment. This variable is set to true by the installer.

Note

Use either automationhub_admin_password or automationhub_api_token, not both.

Upload the content and define the variables (this example uses automationhub_api_token):

ansible-playbook collection_seed.yml \
-e automationhub_api_token=<api_token> \
-e automationhub_main_url=https://automationhub.example.com \
-e automationhub_require_content_approval=true
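Alternatively, a sketch of the same upload using automationhub_admin_password instead of the API token (use one or the other, as described in the note above):

ansible-playbook collection_seed.yml \
-e automationhub_admin_password=<admin_password> \
-e automationhub_main_url=https://automationhub.example.com \
-e automationhub_require_content_approval=true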

For more information on running ansible playbooks, see ansible-playbook.

When complete, the collections are visible in the validated collection section of private automation hub.

Enabling view-only access for your private automation hub

By enabling view-only access, you can grant users access to view collections or namespaces on your private automation hub without the need for them to log in. View-only access lets you share content with unauthorized users while restricting them to viewing or downloading content; they have no permission to edit anything on your private automation hub.

Enable view-only access for your private automation hub by editing the inventory file found on your Red Hat Ansible Automation Platform installer.

  • If you are installing a new instance of Ansible Automation Platform, follow these steps to add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to your inventory file along with your other installation configurations:

  • If you are updating an existing Ansible Automation Platform installation to include view-only access, add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to your inventory file then run the setup.sh script to apply the updates:

Procedure
  1. Navigate to the installer.

    Bundled installer
    $ cd ansible-automation-platform-setup-bundle-<latest-version>
    Online installer
    $ cd ansible-automation-platform-setup-<latest-version>
  2. Open the inventory file with a text editor.

  3. Add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to the inventory file and set both to True, following the example below:

    [all:vars]
    
    automationhub_enable_unauthenticated_collection_access = True (1)
    automationhub_enable_unauthenticated_collection_download = True (2)
    1. Allows unauthorized users to view collections

    2. Allows unauthorized users to download collections

  4. Run the setup.sh script. The installer will now enable view-only access to your automation hub.

Verification

Once the installation completes, you can verify that you have view-only access on your private automation hub by attempting to view content on your automation hub without logging in.

  1. Navigate to your private automation hub.

  2. On the login screen, click btn:[View only mode].

Verify that you are able to view content on your automation hub, such as namespaces or collections, without having to log in.

Collections and content signing in private automation hub

As an automation administrator for your organization, you can configure private automation hub for signing and publishing Ansible content collections from different groups within your organization.

For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure they have not been changed after they were uploaded to automation hub.

Configuring content signing on private automation hub

To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.

Prerequisites
  • Your GnuPG key pairs have been securely set up and managed by your organization.

  • Your public/private key pair has proper access for configuring content signing on private automation hub.

Procedure
  1. Create a signing script that accepts only a filename.

    Note

    This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable.

    The script then prints out a JSON structure with the following format.

    {"file": "filename", "signature": "filename.asc"}

    All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature, as shown.

    The following example shows a script that produces signatures for content:

    #!/usr/bin/env bash
    
    FILE_PATH=$1
    SIGNATURE_PATH="$1.asc"
    
    ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"
    PASSWORD="password"
    
    # Create a detached signature
    gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \
       $PASSWORD --homedir ~/.gnupg/ --detach-sign --default-key $ADMIN_ID \
       --armor --output $SIGNATURE_PATH $FILE_PATH
    
    # Check the exit status
    STATUS=$?
    if [ $STATUS -eq 0 ]; then
       echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}
    else
       exit $STATUS
    fi

    After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions display when you interact with collections.

  2. Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_*.

    [all:vars]
    .
    .
    .
    automationhub_create_default_collection_signing_service = True
    automationhub_auto_sign_collections = True
    automationhub_require_content_approval = True
    automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg
    automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh

    The two new keys (automationhub_auto_sign_collections and automationhub_require_content_approval) indicate that the collections must be signed and require approval after they are uploaded to private automation hub.

Using content signing services in private automation hub

After you have configured content signing on your private automation hub, you can manually sign a new collection or replace an existing signature with a new one so that users who want to download a specific collection have the assurance that the collection is intended for them and has not been modified after certification.

Content signing on private automation hub provides solutions for the following scenarios:

  • Your system does not have automatic signing configured and you must use a manual signing process to sign collections.

  • The current signatures on the automatically configured collections are corrupted and must be replaced with new signatures.

  • Additional signatures are required for previously signed content.

  • You want to rotate signatures on your collections.

Procedure
  1. Log in to your private automation hub instance in the automation hub UI.

  2. In the left navigation, click menu:Collections[Approval]. The Approval dashboard is displayed with a list of collections.

  3. Click btn:[Sign and approve] for each collection you want to sign.

  4. Verify that the collections you signed and manually approved are displayed in the Collections tab.

Downloading signature public keys

After you sign and approve collections, download the signature public keys from the automation hub UI. You must download the public key before you add it to the local system keyring.

Procedure
  1. Log in to your private automation hub instance in the automation hub UI.

  2. In the navigation pane, select menu:Signature Keys[]. The Signature Keys dashboard displays a list of multiple keys: collections and container images.

    • To verify collections, download the key prefixed with collections-.

    • To verify container images, download the key prefixed with container-.

  3. Choose one of the following methods to download your public key:

    • Select the menu icon and click btn:[Download Key] to download the public key.

    • Select the public key from the list and click the Copy to clipboard icon.

    • Click the drop-down menu under the Public Key tab and copy the entire public key block.

Use the public key that you copied to verify the content collection that you are installing.

Configuring Ansible-Galaxy CLI to verify collections

You can configure Ansible-Galaxy CLI to verify collections. This ensures that collections you download are approved by your organization and have not been changed after they were uploaded to automation hub.

If a collection has been signed by automation hub, the server provides ASCII armored, GPG-detached signatures to verify the authenticity of MANIFEST.json before using it to verify the collection’s contents. You must opt into signature verification by configuring a keyring for ansible-galaxy or providing the path with the --keyring option.

Prerequisites
  • Signed collections are available in automation hub to verify signature.

  • Certified collections can be signed by approved roles within your organization.

  • Public key for verification has been added to the local system keyring.

Procedure
  1. To import a public key into a non-default keyring for use with ansible-galaxy, run the following command.

    gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc
    Note

    In addition to any signatures provided by the automation hub, signature sources can also be provided in the requirements file and on the command line. Signature sources should be URIs.

  2. Use the --signature option to verify the collection name provided on the CLI with an additional signature.

    ansible-galaxy collection install namespace.collection \
    --signature https://examplehost.com/detached_signature.asc \
    --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx

    You can use this option multiple times to provide multiple signatures.

  3. Confirm that the collections in a requirements file list any additional signature sources following the collection’s signatures key, as in the following example.

    # requirements.yml
    collections:
      - name: ns.coll
        version: 1.0.0
        signatures:
          - https://examplehost.com/detached_signature.asc
          - file:///path/to/local/detached_signature.asc
    
    ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx

    When you install a collection from automation hub, the signatures provided by the server are saved along with the installed collections to verify the collection’s authenticity.

  4. (Optional) If you need to verify the internal consistency of your collection again without querying the Ansible Galaxy server, run the same command you used previously using the --offline option.

Basic repository management

With basic repository management, you can create, edit, delete, and move content between repositories.

Creating a custom repository in automation hub

You can use Red Hat Ansible Automation Platform to create a repository and configure it to make it private or hide it from search results.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Click btn:[Add repository].

  4. Enter a Repository name.

  5. Enter a Description that indicates the purpose of the repository.

  6. To retain previous versions of your repository each time you make a change, select Retained number of versions. The number of retained versions can range anywhere between 0 and unlimited. To save all versions, leave this set to null.

    Note

    If you have a problem with a change to your custom repository, you can revert to a different repository version that you have retained.

  7. Select the Pipeline for the repository. This option defines who can publish a collection into the repository.

    Staging

    Anyone is allowed to publish automation content into the repository.

    Approved

    Collections added to this repository are required to go through the approval process by way of the staging repository. When auto approve is enabled, any collection uploaded to a staging repository is automatically promoted to all of the approved repositories.

    None

    Any user with permissions on the repository can publish to the repository directly and it is not part of the approval pipeline.

  8. Optional: To hide the repository from search results, select Hide from search. This is selected by default.

  9. Optional: To make the repository private, select Make private. This hides the repository from anyone who does not have permissions to view the repository.

  10. To sync the content from a remote into this repository, select Remote and select the remote that contains the collections you want included in your custom repository. For more information, see Repository sync.

  11. Click btn:[Save].

Next steps
  • After the repository is created, the details page is displayed.

    From here, you can provide access to your repository, review or add collections, and work with the saved versions of your custom repository.

Providing access to a custom automation hub repository

By default, private repositories and their automation content collections are hidden from all users in the system. Public repositories can be viewed by all users, but cannot be modified. Use this procedure to provide access to your custom repository.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click the btn:[More Actions] icon, then select Edit.

  4. Select the Access tab.

  5. Select a group for Repository owners.

    See Configuring user access for your local automation hub for information about implementing user access.

  6. Select the roles you want assigned for the selected group.

  7. Click btn:[Save].

Adding collections to an automation hub repository

After you create your repository, you can begin adding automation content collections to it.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click the btn:[More Actions] icon, then select Edit.

  4. Select the Collections version tab.

  5. Click btn:[Add Collection] and select the collections you want added to your repository.

  6. Click btn:[Select].

Revert to a different automation hub repository version

When automation content collections are added or removed from a repository, a new version is created. If there are issues with a change to your repository, you can revert to a previous version. Reverting is a safe operation and does not delete collections from the system, but rather, changes the content associated with the repository. The number of versions saved is defined in the Retained number of versions setting when a repository is created.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Repositories].

  3. Locate your repository in the list and click the btn:[More Actions] icon, then select Edit.

  4. Locate the version you want to roll back to and click the btn:[More Actions] icon, then select Revert to this version.

  5. Click btn:[Revert].

Exporting and importing collections in automation hub

Ansible automation hub stores automation content collections within repositories. These collections are versioned by the automation content creator; therefore, many versions of the same collection can exist in the same or different repositories at the same time.

Collections are stored as tar files that can be imported and exported. This ensures the collection you are importing to a new repository is the same one that was originally created and exported.

Exporting an automation content collection in automation hub

After collections are finalized, you can export them to a location where they can be distributed to others across your organization.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Collections]. The Collections page displays all of the collections across all of the repositories and allows you to search for a specific collection.

  3. Select the collection you want to export. The collection details page is displayed.

  4. From the Installation tab, select Download tarball. The tar file is downloaded to your default browser downloads folder and available to be imported to the location of your choosing.

Importing an automation content collection in automation hub

As an automation content creator, you can import a collection for use in a custom repository. Collections must be imported into your namespace and approved by the automation hub administrator before they can be accessed.

Procedure
  1. Log in to Red Hat Ansible Automation Platform.

  2. Navigate to menu:Automation Hub[Namespaces]. The Namespaces page displays all of the namespaces available.

  3. Click btn:[View Collections].

  4. Click btn:[Upload Collection].

  5. Navigate to the collection tarball file, select the file and click btn:[Open].

  6. Click btn:[Upload].

    The My Imports screen provides a summary of tests and notifies you whether the collection uploaded successfully or failed.

    Note

    If the collection is not approved, it is not displayed in the published repository.

Additional resources
  • See Approval pipeline for more information about collection and repository approvals.

Installing automation hub with the setup script

Run the installer setup script after you have configured the appropriate inventory variables.

Running the setup script

You can run the setup script when you finish updating the inventory file with the required parameters for installing your private automation hub.

  • Run the setup.sh script to begin installation:

    $ ./setup.sh

Verifying Red Hat Single Sign-On connection

The installer uses the Red Hat Single Sign-On variables to set up a Keycloak realm and client.

To verify that you have successfully connected to the existing Red Hat Single Sign-On installation, check that settings.py contains the Red Hat Single Sign-On host information, the realm name, the key, and the secret.

Pulling images from a container repository

Pull images from the automation hub container registry to make a copy on your local machine. Automation hub provides the podman pull command for the latest image in each container repository. You can copy and paste this command into your terminal, or use podman pull to copy an image based on an image tag.

Prerequisites

You must have permission to view and pull from a private container repository.

Pulling an image

You can pull images from the automation hub container registry to make a copy on your local machine. Automation hub provides the podman pull command for the latest image in each container repository.

Note

If you need to pull container images from a password or token-protected registry, you must create a credential in automation controller before pulling the image.

Procedure
  1. Navigate to menu:Execution Environments[].

  2. Select your container repository.

  3. In the Pull this image entry, click btn:[Copy to clipboard].

  4. Paste and run the command in your terminal.

Verification
  • Run podman images to view images on your local machine.

Syncing images from a container repository

You can pull images from the automation hub container registry to sync an image to your local machine.

Prerequisites

You must have permission to view and pull from a private container repository.

Procedure

To sync an image from a remote container registry, you need to configure a remote registry.

  1. Navigate to menu:Execution Environments[Remote Registries].

  2. Add https://registry.redhat.io to the registry.

  3. Add any required credentials to authenticate.

Note
Some container registries are aggressive with rate limiting. It is advisable to set a rate limit under Advanced Options.
  1. Navigate to menu:Execution Environments[Execution Environments].

  2. Click Add execution environment in the page header.

  3. Select the registry you want to pull from. The Name field displays the name under which the image appears in your local registry.

Note
The Upstream name field is the name of the image on the remote server. For example, if the upstream name is set to "alpine" and the Name field to "local/alpine", the alpine image is downloaded from the remote and renamed to local/alpine.

It is advisable to set a list of tags to include or exclude. Syncing images with a large number of tags is time consuming and will use a lot of disk space.

Additional resources

  • See the What is Podman? documentation for options to use when pulling images.

Deleting a container repository

Delete a container repository from your local automation hub to manage your disk space. You can delete repositories from the Red Hat Ansible Automation Platform interface in the Container Repository list view.

Deleting a container repository

Prerequisites
  • You have permissions to manage repositories.

Procedure
  1. Navigate to menu:Execution Environments[].

  2. On the container repository you would like to delete, click the btn:[More Actions] icon, then click btn:[Delete].

  3. When presented with the confirmation message, click the checkbox, then click btn:[Delete].

Verification
  • Return to the Execution Environments list view. The container repository should be removed from the list.

Managing Ansible Content Collections in automation hub

Important

As of the 2.4 release you can still synchronize content, but synclists are deprecated and will be removed in a future version.

From Ansible Automation Platform 2.4, a private automation hub administrator can go to the rh-certified remote and upload a manually created requirements file.

Remotes are configurations that allow you to synchronize content to your custom repositories from an external collection source.

You can use Ansible automation hub to distribute the relevant Red Hat Certified collections content to your users by creating synclists or a requirements file. For more information about using requirements files, see Install multiple collections with a requirements file in the Ansible Galaxy User Guide.

About Red Hat Ansible Certified Content Collections synclists

A synclist is a curated group of Red Hat Certified collections assembled by your organization administrator that synchronizes with your local Ansible automation hub. You can use synclists to manage only the content that you want and exclude unnecessary collections. You can design and manage your synclist from the content available as part of Red Hat content on console.redhat.com.

Each synclist has its own unique repository URL that you can designate as a remote source for content in automation hub. The synclist is securely accessed using an API token.

Creating a synclist of Red Hat Ansible Certified Content Collections

You can create a synclist of curated Red Hat Ansible Certified Content in Ansible automation hub on console.redhat.com. Your synclist repository is located under menu:Automation Hub[Repositories], which is updated whenever you choose to manage content within Ansible Certified Content Collections.

All Ansible Certified Content Collections are included by default in your initial organization synclist.

Prerequisites
  • You have a valid Ansible Automation Platform subscription.

  • You have Organization Administrator permissions for console.redhat.com.

  • The following domain names are part of either the firewall or the proxy’s allowlist for successful connection and download of collections from automation hub or Galaxy server:

    • galaxy.ansible.com

    • cloud.redhat.com

    • console.redhat.com

    • sso.redhat.com

  • Ansible automation hub resources are stored in Amazon Simple Storage Service (S3), and the following domain names are in the allowlist:

    • automation-hub-prd.s3.us-east-2.amazonaws.com

    • ansible-galaxy.s3.amazonaws.com

  • SSL inspection is disabled either when using self-signed certificates or for the Red Hat domains.

Procedure
  1. Log in to console.redhat.com.

  2. Navigate to menu:Automation Hub[Collections].

  3. Use the toggle switch on each collection to determine whether to exclude it from your synclist.

  4. When you finish managing collections for your synclist, navigate to menu:Automation Hub[Repositories] to initiate the remote repository synchronization to your private automation hub.

  5. Optional: If your remote repository is already configured, you can manually synchronize Red Hat Ansible Certified Content Collections to your private automation hub to update the collections content that you made available to local users.

Browsing collections with Automation content navigator

As a content creator, you can browse your Ansible collections with Automation content navigator and interactively delve into each collection developed locally or within Automation execution environments.

Automation content navigator collections display

Automation content navigator displays information about your collections with the following details for each collection:

SHADOWED

Indicates that an additional copy of the collection is higher in the search order, and playbooks prefer that collection.

TYPE

Shows whether the collection is contained within an automation execution environment or is a volume mounted onto the automation execution environment as a bind_mount.

PATH

Reflects the collection's location within the automation execution environment or on the local file system, based on the collection TYPE field.

Browsing collections from Automation content navigator

You can browse Ansible collections with the Automation content navigator text-based user interface in interactive mode and delve into each collection. Automation content navigator shows collections within the current project directory and those available in the automation execution environments.

Prerequisites
  • A locally accessible collection or installed automation execution environments.

Procedure
  1. Start Automation content navigator

    $ ansible-navigator
  2. Browse the collections. Alternatively, you can type ansible-navigator collections to browse the collections directly.

    $ :collections
    A list of Ansible collections shown in the Automation content navigator
  3. Type the number of the collection you want to explore.

    :4
    A collection shown in the Automation content navigator
  4. Type the number corresponding to the module you want to delve into.

    ANSIBLE.UTILS.IP_ADDRESS: Test if something in an IP address
     0│---
     1│additional_information: {}
     2│collection_info:
     3│  authors:
     4│  - Ansible Community
     5│  dependencies: {}
     6│  description: Ansible Collection with utilities to ease the management, manipulation,
     7│    and validation of data within a playbook
     8│  documentation: null
     9│  homepage: null
    10│  issues: null
    11│  license: []
    12│  license_file: LICENSE
    13│  name: ansible.utils
    14│  namespace: ansible
    15│  path:/usr/share/ansible/collections/ansible_collections/ansible/utils/
    16│  readme: README.md
    <... output truncated...>
  5. Optional: jump to the documentation examples for this module.

    :{{ examples }}
    
    0│
    1│
    2│#### Simple examples
    3│
    4│- name: Check if 10.1.1.1 is a valid IP address
    5│  ansible.builtin.set_fact:
    6│    data: "{{ '10.1.1.1' is ansible.utils.ip_address }}"
    7│
    8│# TASK [Check if 10.1.1.1 is a valid IP address] *********************
    9│# ok: [localhost] => {
    10│#     "ansible_facts": {
    11│#         "data": true
    12│#     },
    13│#     "changed": false
    14│# }
    15│
  6. Optional: open the example in your editor to copy it into a playbook.

    :open
    Documentation example shown in the editing tool
Verification
  • Browse the collection list.

Review documentation from Automation content navigator

You can review Ansible documentation for collections and plugins with the Automation content navigator text-based user interface in interactive mode. Automation content navigator shows collections within the current project directory and those available in the automation execution environments.

Prerequisites
  • A locally accessible collection or installed automation execution environments.

Procedure
  1. Start Automation content navigator

    $ ansible-navigator
  2. Review the module you are interested in. Alternatively, you can type ansible-navigator doc to access the documentation.

    :doc ansible.utils.ip_address
    ANSIBLE.UTILS.IP_ADDRESS: Test if something in an IP address
     0│---
     1│additional_information: {}
     2│collection_info:
     3│  authors:
     4│  - Ansible Community
     5│  dependencies: {}
     6│  description: Ansible Collection with utilities to ease the management, manipulation,
     7│    and validation of data within a playbook
     8│  documentation: null
     9│  homepage: null
    10│  issues: null
    11│  license: []
    12│  license_file: LICENSE
    13│  name: ansible.utils
    14│  namespace: ansible
    15│  path:/usr/share/ansible/collections/ansible_collections/ansible/utils/
    16│  readme: README.md
    <... output truncated...>
  3. Jump to the documentation examples for this module.

    :{{ examples }}
    
    0│
    1│
    2│#### Simple examples
    3│
    4│- name: Check if 10.1.1.1 is a valid IP address
    5│  ansible.builtin.set_fact:
    6│    data: "{{ '10.1.1.1' is ansible.utils.ip_address }}"
    7│
    8│# TASK [Check if 10.1.1.1 is a valid IP address] *********************
    9│# ok: [localhost] => {
    10│#     "ansible_facts": {
    11│#         "data": true
    12│#     },
    13│#     "changed": false
    14│# }
    15│
  4. Optional: open the example in your editor to copy it into a playbook.

    :open
    Documentation example in editor

    See Automation content navigator general settings for details on how to set up your editor.

Installing Automation content navigator on RHEL

As a content creator, you can install Automation content navigator on Red Hat Enterprise Linux (RHEL) 8.6 or later.

Installing Automation content navigator on RHEL from an RPM

You can install Automation content navigator on Red Hat Enterprise Linux (RHEL) from an RPM.

Prerequisites
  • You have installed RHEL 8.6 or later.

  • You registered your system with Red Hat Subscription Manager.

Note

Ensure that you only install the navigator matching your current Red Hat Ansible Automation Platform environment.

Procedure
  1. Attach the Red Hat Ansible Automation Platform SKU:

    $ subscription-manager attach --pool=<sku-pool-id>
  2. Install Automation content navigator with the following command:

    v.2.4 for RHEL 8 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator

    v.2.4 for RHEL 9 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator
Verification
  • Verify your Automation content navigator installation:

    $ ansible-navigator --help

A successful installation displays the Automation content navigator help output.

Introduction to Automation content navigator

As a content creator, you can use Automation content navigator to develop Ansible playbooks, collections, and roles that are compatible with the Red Hat Ansible Automation Platform. You can use Automation content navigator in the following environments, with seamless and predictable results across them all:

  • Local development machines

  • Automation execution environments

Automation content navigator also produces an artifact file you can use to help you develop your playbooks and troubleshoot problem areas.

Uses for Automation content navigator

Automation content navigator is a command line, content-creator-focused tool with a text-based user interface. You can use Automation content navigator to:

  • Launch and watch jobs and playbooks.

  • Share stored, completed playbook and job run artifacts in JSON format.

  • Browse and introspect automation execution environments.

  • Browse your file-based inventory.

  • Render Ansible module documentation and extract examples you can use in your playbooks.

  • View a detailed command output on the user interface.

Automation content navigator modes

Automation content navigator operates in two modes:

stdout mode

Accepts most of the existing Ansible commands and extensions at the command line.

text-based user interface mode

Provides an interactive, text-based interface to the Ansible commands. Use this mode to evaluate content, run playbooks, and troubleshoot playbooks after they run using artifact files.

stdout mode

Use the -m stdout subcommand with Automation content navigator to use the familiar Ansible commands, such as ansible-playbook within automation execution environments or on your local development environment. You can use commands you are familiar with for quick tasks.
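
For example, a quick playbook run in stdout mode behaves much like ansible-playbook; the playbook and inventory file names below are placeholders:

    $ ansible-navigator run simple_playbook.yml -i inventory.yml -m stdout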

Automation content navigator also provides extensive help in this mode:

--help

Accessible from ansible-navigator command or from any subcommand, such as ansible-navigator config --help.

subcommand help

Accessible from the subcommand, for example ansible-navigator config --help-config. This help displays the details of all the parameters supported from the related Ansible command.

Text-based user interface mode

The text-based user interface mode provides enhanced interaction with automation execution environments, collections, playbooks, and inventory. This mode is compatible with integrated development environments (IDE), such as Visual Studio Code.

Text-based user interface mode

This mode includes a number of helpful user interface options:

colon commands

You can access all the Automation content navigator commands with a colon, such as :run or :collections

navigating the text-based interface

The screen shows how to page up or down, scroll, escape to a prior screen or access :help.

output by line number

You can access any line number in the displayed output by preceding it with a colon, for example :12.

color-coded output

With colors enabled, Automation content navigator displays items, such as deprecated modules, in red.

pagination and scrolling

You can page up or down, scroll, or escape by using the options displayed at the bottom of each Automation content navigator screen.

You cannot switch between modes while Automation content navigator is running.

This document uses the text-based user interface mode for most procedures.

Automation content navigator commands

The Automation content navigator commands run familiar Ansible CLI commands in -m stdout mode. You can use all the subcommands and options from the related Ansible CLI command. Use ansible-navigator --help for details.

Table 10. Automation content navigator commands

    Command       Description                                          CLI example
    collections   Explore available collections                        ansible-navigator collections --help
    config        Explore the current ansible configuration            ansible-navigator config --help
    doc           Review documentation for a module or plugin          ansible-navigator doc --help
    images        Explore execution environment images                 ansible-navigator images --help
    inventory     Explore an inventory                                 ansible-navigator inventory --help
    replay        Explore a previous run using a playbook artifact     ansible-navigator replay --help
    run           Run a playbook                                       ansible-navigator run --help
    welcome       Start at the welcome page                            ansible-navigator welcome --help

Relationship between Ansible and Automation content navigator commands

The Automation content navigator commands run familiar Ansible CLI commands in -m stdout mode. You can use all the subcommands and options available in the related Ansible CLI command. Use ansible-navigator --help for details.

Table 11. Comparison of Automation content navigator and Ansible CLI commands

    Automation content navigator command    Ansible CLI command
    ansible-navigator collections           ansible-galaxy collection
    ansible-navigator config                ansible-config
    ansible-navigator doc                   ansible-doc
    ansible-navigator inventory             ansible-inventory
    ansible-navigator run                   ansible-playbook
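
For example, because the doc subcommand maps to ansible-doc, the following two commands return comparable module documentation (the module name is only an illustration):

    $ ansible-navigator doc ansible.builtin.file -m stdout
    $ ansible-doc ansible.builtin.file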

Reviewing your Ansible configuration with Automation content navigator

As a content creator, you can review your Ansible configuration with Automation content navigator and interactively delve into settings.

Reviewing your Ansible configuration from Automation content navigator

You can review your Ansible configuration with the Automation content navigator text-based user interface in interactive mode and delve into the settings. Automation content navigator pulls in the results from an accessible Ansible configuration file, or returns the defaults if no configuration file is present.

Prerequisites
Procedure
  1. Start Automation content navigator

    $ ansible-navigator

    Optional: type ansible-navigator config from the command line to access the Ansible configuration settings.

  2. Review the Ansible configuration.

     :config
    Ansible configuration

    Some values reflect settings from within the automation execution environments needed for the automation execution environments to function. These display as non-default settings you cannot set in your Ansible configuration file.

  3. Type the number corresponding to the setting you want to delve into, or type :<number> for numbers greater than 9.

    ANSIBLE COW ACCEPTLIST (current: ['bud-frogs', 'bunny', 'cheese']) (default:
     0│---
     1│current:
     2│- bud-frogs
     3│- bunny
     4│- cheese
     5│default:
     6│- bud-frogs
     7│- bunny
     8│- cheese
     9│- daemon

The output shows the current setting as well as the default. Note the source in this example is env since the setting comes from the automation execution environments.

Verification
  • Review the configuration output.

    Configuration output

Running Ansible playbooks with Automation content navigator

As a content creator, you can execute your Ansible playbooks with Automation content navigator and interactively delve into the results of each play and task to verify or troubleshoot the playbook. You can also execute your Ansible playbooks inside an execution environment and without an execution environment to compare and troubleshoot any problems.

Executing a playbook from Automation content navigator

You can run Ansible playbooks with the Automation content navigator text-based user interface to follow the execution of the tasks and delve into the results of each task.

Prerequisites
  • A playbook.

  • A valid inventory file if not using localhost or an inventory plugin.

Procedure
  1. Start Automation content navigator

    $ ansible-navigator
  2. Run the playbook.

    $ :run
  3. Optional: type ansible-navigator run simple-playbook.yml -i inventory.yml to run the playbook.

  4. Verify or add the inventory and any other command line parameters.

    INVENTORY OR PLAYBOOK NOT FOUND, PLEASE CONFIRM THE FOLLOWING
    ─────────────────────────────────────────────────────────────────────────
       Path to playbook: /home/ansible-navigator_demo/simple_playbook.yml
       Inventory source: /home/ansible-navigator-demo/inventory.yml
      Additional command line parameters: Please provide a value (optional)
    ──────────────────────────────────────────────────────────────────────────
                                                               Submit Cancel
  5. Tab to Submit and hit Enter. You should see the tasks executing.

    Executing playbook tasks
  6. Type the number next to a play to step into the play results, or type :<number> for numbers above 9.

    Task list

    Notice failed tasks show up in red if you have colors enabled for Automation content navigator.

  7. Type the number next to a task to review the task results, or type :<number> for numbers above 9.

    Failed task results
  8. Optional: type :doc to bring up the documentation for the module or plugin used in the task to aid in troubleshooting.

    ANSIBLE.BUILTIN.PACKAGE_FACTS (MODULE)
      0│---
      1│doc:
      2│  author:
      3│  - Matthew Jones (@matburt)
      4│  - Brian Coca (@bcoca)
      5│  - Adam Miller (@maxamillion)
      6│  collection: ansible.builtin
      7│  description:
      8│  - Return information about installed packages as facts.
    <... output omitted ...>
     11│  module: package_facts
     12│  notes:
     13│  - Supports C(check_mode).
     14│  options:
     15│    manager:
     16│      choices:
     17│      - auto
     18│      - rpm
     19│      - apt
     20│      - portage
     21│      - pkg
     22│      - pacman
    
    <... output truncated ...>
Additional resources

Reviewing playbook results with an Automation content navigator artifact file

Automation content navigator saves the results of the playbook run in a JSON artifact file. You can use this file to share the playbook results with someone else, save it for security or compliance reasons, or review and troubleshoot later. You only need the artifact file to review the playbook run. You do not need access to the playbook or the inventory.

Prerequisites
  • An Automation content navigator artifact JSON file from a playbook run.

Procedure
  • Start Automation content navigator with the artifact file.

    $ ansible-navigator replay simple_playbook_artifact.json
    1. Review the playbook results that match when the playbook ran.

      Playbook results

You can now type the number next to the plays and tasks to step into each to review the results, as you would after executing the playbook.

Additional resources

Reviewing inventories with Automation content navigator

As a content creator, you can review your Ansible inventory with Automation content navigator and interactively delve into the groups and hosts.

Reviewing inventory from Automation content navigator

You can review Ansible inventories with the Automation content navigator text-based user interface in interactive mode and delve into groups and hosts for more details.

Prerequisites
  • A valid inventory file or an inventory plugin.

Procedure
  1. Start Automation content navigator.

    $ ansible-navigator

    Optional: type ansible-navigator inventory -i simple_inventory.yml from the command line to view the inventory.

  2. Review the inventory.

     :inventory -i simple_inventory.yml
    
       TITLE            DESCRIPTION
    0│Browse groups    Explore each inventory group and group members
    1│Browse hosts     Explore the inventory with a list of all hosts
  3. Type 0 to browse the groups.

      NAME               TAXONOMY                      TYPE
    0│general            all                           group
    1│nodes              all                           group
    2│ungrouped          all                           group

    The TAXONOMY field details the hierarchy of groups the selected group or node belongs to.

  4. Type the number corresponding to the group you want to delve into.

      NAME              TAXONOMY                        TYPE
    0│node-0            all▸nodes                       host
    1│node-1            all▸nodes                       host
    2│node-2            all▸nodes                       host
  5. Type the number corresponding to the host you want to delve into, or type :<number> for numbers greater than 9.

    [node-1]
    0│---
    1│ansible_host: node-1.example.com
    2│inventory_hostname: node-1
Verification
  • Review the inventory output.

      TITLE            DESCRIPTION
    0│Browse groups   Explore each inventory group and group members
    1│Browse hosts    Explore the inventory with a list of all hosts

Troubleshooting Ansible content with Automation content navigator

As a content creator, you can troubleshoot your Ansible content (collections, automation execution environments, and playbooks) with Automation content navigator and interactively troubleshoot the playbook. You can also compare results inside or outside an automation execution environment and troubleshoot any problems.

Reviewing playbook results with an Automation content navigator artifact file

Automation content navigator saves the results of the playbook run in a JSON artifact file. You can use this file to share the playbook results with someone else, save it for security or compliance reasons, or review and troubleshoot later. You only need the artifact file to review the playbook run. You do not need access to the playbook or the inventory.

Prerequisites
  • An Automation content navigator artifact JSON file from a playbook run.

Procedure
  • Start Automation content navigator with the artifact file.

    $ ansible-navigator replay simple_playbook_artifact.json
    1. Review the playbook results that match when the playbook ran.

      Playbook results

You can now type the number next to the plays and tasks to step into each to review the results, as you would after executing the playbook.

Additional resources

Automation content navigator Frequently asked questions

Use the following Automation content navigator FAQ to help you troubleshoot problems in your environment.

Where should the ansible.cfg file go when using an automation execution environment?

The easiest place to have the ansible.cfg is in the project directory adjacent to the playbook. The playbook directory is automatically mounted in the execution environment and the ansible.cfg file will be found. If the ansible.cfg file is in another directory, the ANSIBLE_CONFIG variable needs to be set and the directory specified as a custom volume mount. (See Automation content navigator settings for execution-environment-volume-mounts)
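
As a minimal sketch of that approach, the following settings file fragment sets ANSIBLE_CONFIG inside the automation execution environment and mounts the directory that holds the configuration file; the /opt/ansible-config path is only an assumption for illustration:

    ---
    ansible-navigator:
      execution-environment:
        environment-variables:
          set:
            # Hypothetical location; point this at the directory holding your ansible.cfg
            ANSIBLE_CONFIG: /opt/ansible-config/ansible.cfg
        volume-mounts:
        - src: "/opt/ansible-config"
          dest: "/opt/ansible-config"
          label: "Z"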

Where should the ansible.cfg file go when not using an automation execution environment?

When not using an automation execution environment, Ansible looks for the ansible.cfg file in the typical locations. See Ansible configuration settings for details.

Where should Ansible collections be placed when using an automation execution environment?

The easiest place to have Ansible collections is in the project directory, in a playbook-adjacent collections directory (for example, ansible-galaxy collection install ansible.utils -p ./collections). The playbook directory is automatically mounted in the automation execution environment and Automation content navigator finds the collections there. Another option is to build the collections into an automation execution environment by using Ansible Builder. This helps content creators author playbooks that are production ready, since automation controller supports playbook-adjacent collection directories. If the collections are in another directory, set the ANSIBLE_COLLECTIONS_PATHS variable and configure a custom volume mount for the directory. (See Automation content navigator general settings for execution-environment-volume-mounts).

Where should ansible collections be placed when not using an automation execution environment?

When not using an automation execution environment, Ansible looks in the default locations for collections. See the Using Ansible collections guide.

Why does the playbook hang when vars_prompt or pause/prompt is used?

By default, Automation content navigator runs the playbook in the same manner that automation controller runs the playbook. This was done to help content creators author playbooks that are ready for production. If the use of vars_prompt or pause/prompt cannot be avoided, disabling playbook artifact creation causes Automation content navigator to run the playbook in a manner that is compatible with ansible-playbook and allows for user interaction.
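
For example, a minimal settings file fragment that disables playbook artifact creation, equivalent to passing --pae false on the command line:

    ---
    ansible-navigator:
      playbook-artifact:
        # Disabling artifact creation allows vars_prompt and pause/prompt interaction
        enable: False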

Why does Automation content navigator change the terminal colors or look terrible?

Automation content navigator queries the terminal for its OSC4 compatibility. OSC4, 10, 11, 104, 110, 111 indicate the terminal supports color changing and reverting. It is possible that the terminal is misrepresenting its ability. OSC4 detection can be disabled by setting --osc4 false. (See Automation content navigator general settings for how to handle this with an environment variable or in the settings file).

How can I change the colors used by Automation content navigator?

Use --osc4 false to force Automation content navigator to use the terminal defined colors. (See Automation content navigator general settings for how to handle this with an environment variable or in the settings file).

What’s with all these site-artifact-2021-06-02T16:02:33.911259+00:00.json files in the playbook directory?

Automation content navigator creates a playbook artifact for every playbook run. These can be helpful for reviewing the outcome of automation after it is complete, sharing and troubleshooting with a colleague, or keeping for compliance or change-control purposes. The playbook artifact file contains detailed information about every play and task, as well as the stdout from the playbook run. You can review playbook artifacts with ansible-navigator replay <filename> or :replay <filename> while in an Automation content navigator session. All playbook artifacts can be reviewed with both --mode stdout and --mode interactive, depending on the desired view. You can disable playbook artifact writing and change the default file naming convention. (See Automation content navigator general settings for how to handle this with an environment variable or in the settings file).

Why does vi open when I use :open?

Automation content navigator opens anything showing in the terminal in the default editor. The default is set to either vi +{line_number} {filename} or the current value of the EDITOR environment variable. Related to this is the editor-console setting which indicates if the editor is console/terminal based. Here are examples of alternate settings that may be useful:

# emacs
ansible-navigator:
  editor:
    command: emacs -nw +{line_number} {filename}
    console: true
# vscode
ansible-navigator:
  editor:
    command: code -g {filename}:{line_number}
    console: false
#pycharm
ansible-navigator:
  editor:
    command: charm --line {line_number} {filename}
    console: false
What is the order in which configuration settings are applied?

The Automation content navigator configuration system pulls in settings from several sources and applies them hierarchically in the following order, where later sources take precedence over earlier ones (see the example after this list):

  1. Default internal values

  2. Values from a settings file

  3. Values from environment variables

  4. Flags and arguments specified on the command line

  5. Colon (:) commands issued within the text-based user interface
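
For example, with level: critical set in a settings file (as in the example settings file shown later in this guide), the environment variable below takes precedence for a single run, and adding --ll info to the command line would in turn override the environment variable; the playbook and inventory names are placeholders:

    $ ANSIBLE_NAVIGATOR_LOG_LEVEL=debug ansible-navigator run simple_playbook.yml -i inventory.yml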

Something didn’t work, how can I troubleshoot it?

Automation content navigator has reasonable logging messages. You can enable debug logging with --log-level debug. If you think you might have found a bug, please log an issue and include the details from the log file.

Executing your content with Automation content navigator

Now that you have your automation execution environments built, you can use Automation content navigator to validate that the content runs in the same manner that automation controller runs it.
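
For example, you might validate a playbook against a specific automation execution environment image by passing --eei; the image reference below reuses the placeholder registry shown elsewhere in this guide:

    $ ansible-navigator run simple_playbook.yml -i inventory.yml --eei registry.example.com/example-enterprise-ee:latest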

Running Ansible playbooks with Automation content navigator

As a content creator, you can execute your Ansible playbooks with Automation content navigator and interactively delve into the results of each play and task to verify or troubleshoot the playbook. You can also execute your Ansible playbooks inside an execution environment and without an execution environment to compare and troubleshoot any problems.

Executing a playbook from Automation content navigator

You can run Ansible playbooks with the Automation content navigator text-based user interface to follow the execution of the tasks and delve into the results of each task.

Prerequisites
  • A playbook.

  • A valid inventory file if not using localhost or an inventory plugin.

Procedure
  1. Start Automation content navigator

    $ ansible-navigator
  2. Run the playbook.

    $ :run
  3. Optional: type ansible-navigator run simple-playbook.yml -i inventory.yml to run the playbook.

  4. Verify or add the inventory and any other command line parameters.

    INVENTORY OR PLAYBOOK NOT FOUND, PLEASE CONFIRM THE FOLLOWING
    ─────────────────────────────────────────────────────────────────────────
       Path to playbook: /home/ansible-navigator_demo/simple_playbook.yml
       Inventory source: /home/ansible-navigator-demo/inventory.yml
      Additional command line parameters: Please provide a value (optional)
    ──────────────────────────────────────────────────────────────────────────
                                                               Submit Cancel
  5. Tab to Submit and hit Enter. You should see the tasks executing.

    Executing playbook tasks
  6. Type the number next to a play to step into the play results, or type :<number> for numbers above 9.

    Task list

    Notice failed tasks show up in red if you have colors enabled for Automation content navigator.

  7. Type the number next to a task to review the task results, or type :<number> for numbers above 9.

    Failed task results
  8. Optional: type :doc to bring up the documentation for the module or plugin used in the task to aid in troubleshooting.

    ANSIBLE.BUILTIN.PACKAGE_FACTS (MODULE)
      0│---
      1│doc:
      2│  author:
      3│  - Matthew Jones (@matburt)
      4│  - Brian Coca (@bcoca)
      5│  - Adam Miller (@maxamillion)
      6│  collection: ansible.builtin
      7│  description:
      8│  - Return information about installed packages as facts.
    <... output omitted ...>
     11│  module: package_facts
     12│  notes:
     13│  - Supports C(check_mode).
     14│  options:
     15│    manager:
     16│      choices:
     17│      - auto
     18│      - rpm
     19│      - apt
     20│      - portage
     21│      - pkg
     22│      - pacman
    
    <... output truncated ...>
Additional resources

Reviewing playbook results with an Automation content navigator artifact file

Automation content navigator saves the results of the playbook run in a JSON artifact file. You can use this file to share the playbook results with someone else, save it for security or compliance reasons, or review and troubleshoot later. You only need the artifact file to review the playbook run. You do not need access to the playbook or the inventory.

Prerequisites
  • An Automation content navigator artifact JSON file from a playbook run.

Procedure
  • Start Automation content navigator with the artifact file.

    $ ansible-navigator replay simple_playbook_artifact.json
    1. Review the playbook results that match when the playbook ran.

      Playbook results

You can now type the number next to the plays and tasks to step into each to review the results, as you would after executing the playbook.

Additional resources

Reviewing Automation execution environments with Automation content navigator

As a content developer, you can review your automation execution environment with Automation content navigator and display the packages and collections included in the automation execution environments. Automation content navigator runs a playbook to extract and display the results.

Reviewing Automation execution environments from Automation content navigator

You can review your Automation execution environments with the Automation content navigator text-based user interface.

Prerequisites
  • automation execution environments

Procedure
  1. Review the automation execution environments included in your Automation content navigator configuration.

    $ ansible-navigator images
    List of automation execution environments
  2. Type the number of the automation execution environment you want to delve into for more details.

    Automation execution environment details

    You can review the packages and versions of each installed automation execution environment and the Ansible version and any included collections.

  3. Optional: pass in the automation execution environment that you want to use. This becomes the primary and is the automation execution environment that Automation content navigator uses.

    $ ansible-navigator images --eei  registry.example.com/example-enterprise-ee:latest
Verification
  • Review the automation execution environment output.

    Automation execution environment output

Automation content navigator configuration settings

As a content creator, you can configure Automation content navigator to suit your development environment.

Creating an Automation content navigator settings file

You can alter the default Automation content navigator settings through:

  • The command line

  • Within a settings file

  • As an environment variable

Automation content navigator checks for a settings file in the following order and uses the first match:

  • ANSIBLE_NAVIGATOR_CONFIG - The settings file path environment variable if set.

  • ./ansible-navigator.<ext> - The settings file within the current project directory, with no dot in the file name.

  • ~/.ansible-navigator.<ext> - Your home directory, with a dot in the file name.

Consider the following when you create an Automation content navigator settings file:

  • The settings file can be in JSON or YAML format.

  • For settings in JSON format, the extension must be .json.

  • For settings in YAML format, the extension must be .yml or .yaml.

  • The project and home directories can only contain one settings file each.

  • If Automation content navigator finds more than one settings file in either directory, it results in an error.

You can copy the example settings file below into one of those paths to start your ansible-navigator settings file.

    ---
    ansible-navigator:
    #   ansible:
    #     config: /tmp/ansible.cfg
    #     cmdline: "--forks 15"
    #     inventories:
    #     - /tmp/test_inventory.yml
    #     playbook: /tmp/test_playbook.yml
    #   ansible-runner:
    #     artifact-dir: /tmp/test1
    #     rotate-artifacts-count: 10
    #     timeout: 300
    #   app: run
    #   collection-doc-cache-path: /tmp/cache.db
    #   color:
    #     enable: False
    #     osc4: False
    #   editor:
    #     command: vim_from_setting
    #     console: False
    #   documentation:
    #     plugin:
    #       name: shell
    #       type: become
    #   execution-environment:
    #     container-engine: podman
    #     enabled: False
    #     environment-variables:
    #       pass:
    #         - ONE
    #         - TWO
    #         - THREE
    #       set:
    #         KEY1: VALUE1
    #         KEY2: VALUE2
    #         KEY3: VALUE3
    #     image: test_image:latest
    #     pull-policy: never
    #     volume-mounts:
    #     - src: "/test1"
    #       dest: "/test1"
    #       label: "Z"
    #   help-config: True
    #   help-doc: True
    #   help-inventory: True
    #   help-playbook: False
    #   inventory-columns:
    #     - ansible_network_os
    #     - ansible_network_cli_ssh_type
    #     - ansible_connection
      logging:
    #     append: False
        level: critical
    #     file: /tmp/log.txt
    #   mode: stdout
    #   playbook-artifact:
    #     enable: True
    #     replay: /tmp/test_artifact.json
    #     save-as: /tmp/test_artifact.json

Automation content navigator general settings

The following table describes each general parameter and setting options for Automation content navigator.

Table 12. Automation content navigator general parameters settings
Parameter Description Setting options

ansible-runner-artifact-dir

The directory path to store artifacts generated by ansible-runner.

Default: No default value set

CLI: --rad or --ansible-runner-artifact-dir

ENV: ANSIBLE_NAVIGATOR_ANSIBLE_RUNNER_ARTIFACT_DIR

Settings file:

ansible-navigator:
  ansible-runner:
    artifact-dir:

ansible-runner-rotate-artifacts-count

Keep ansible-runner artifact directories, for last n runs. If set to 0, artifact directories are not deleted.

Default: No default value set

CLI: --rac or --ansible-runner-rotate-artifacts-count

ENV: ANSIBLE_NAVIGATOR_ANSIBLE_RUNNER_ROTATE_ARTIFACTS_COUNT

Settings file:

ansible-navigator:
  ansible-runner:
    rotate-artifacts-count:

ansible-runner-timeout

The timeout value after which ansible-runner force stops the execution.

Default: No default value set

CLI: --rt or --ansible-runner-timeout

ENV: ANSIBLE_NAVIGATOR_ANSIBLE_RUNNER_TIMEOUT

Settings file:

ansible-navigator:
  ansible-runner:
    timeout:

app

Entry point for Automation content navigator.

Choices: collections, config, doc, images, inventory, replay, run or welcome

Default: welcome

CLI example: ansible-navigator collections

ENV: ANSIBLE_NAVIGATOR_APP

Settings file:

ansible-navigator:
  app:

cmdline

Extra parameters passed to the corresponding command.

Default: No default value

CLI: positional

ENV: ANSIBLE_NAVIGATOR_CMDLINE

Settings file:

ansible-navigator:
  ansible:
    cmdline:

collection-doc-cache-path

The path to the collection doc cache.

Default: $HOME/.cache/ansible-navigator/collection_doc_cache.db

CLI: --cdcp or --collection-doc-cache-path

ENV: ANSIBLE_NAVIGATOR_COLLECTION_DOC_CACHE_PATH

Settings file:

ansible-navigator:
  collection-doc-cache-path:

container-engine

Specify the container engine (auto=podman then docker).

Choices: auto, podman or docker

Default: auto

CLI: --ce or --container-engine

ENV: ANSIBLE_NAVIGATOR_CONTAINER_ENGINE

Settings file:

ansible-navigator:
  execution-environment:
    container-engine:

display-color

Enable the use of color in the display.

Choices: True or False

Default: True

CLI: --dc or --display-color

ENV: NO_COLOR

Settings file:

ansible-navigator:
  color:
    enable:

editor-command

Specify the editor used by Automation content navigator

Default: vi +{line_number} {filename}

CLI: --ecmd or --editor-command

ENV: ANSIBLE_NAVIGATOR_EDITOR_COMMAND

Settings file:

ansible-navigator:
  editor:
    command:

editor-console

Specify if the editor is console based.

Choices: True or False

Default: True

CLI: --econ or --editor-console

ENV: ANSIBLE_NAVIGATOR_EDITOR_CONSOLE

Settings file:

ansible-navigator:
  editor:
    console:

execution-environment

Enable or disable the use of an automation execution environment.

Choices: True or False

Default: True

CLI: --ee or --execution-environment

ENV: ANSIBLE_NAVIGATOR_EXECUTION_ENVIRONMENT

Settings file:

ansible-navigator:
  execution-environment:
    enabled:

execution-environment-image

Specify the name of the automation execution environment image.

Default: quay.io/ansible/ansible-runner:devel

CLI: --eei or --execution-environment-image

ENV: ANSIBLE_NAVIGATOR_EXECUTION_ENVIRONMENT_IMAGE

Settings file:

ansible-navigator:
  execution-environment:
    image:

execution-environment-volume-mounts

Specify volume to be bind mounted within an automation execution environment (--eev /home/user/test:/home/user/test:Z)

Default: No default value set

CLI: --eev or --execution-environment-volume-mounts

ENV: ANSIBLE_NAVIGATOR_EXECUTION_ENVIRONMENT_VOLUME_MOUNTS

Settings file:

ansible-navigator:
  execution-environment:
    volume-mounts:

log-append

Specify if log messages should be appended to an existing log file, otherwise a new log file is created per session.

Choices: True or False

Default: True

CLI: --la or --log-append

ENV: ANSIBLE_NAVIGATOR_LOG_APPEND

Settings file:

ansible-navigator:
  logging:
    append:

log-file

Specify the full path for the Automation content navigator log file.

Default: $PWD/ansible-navigator.log

CLI: --lf or --log-file

ENV: ANSIBLE_NAVIGATOR_LOG_FILE

Settings file:

ansible-navigator:
  logging:
    file:

log-level

Specify the Automation content navigator log level.

Choices: debug, info, warning, error or critical

Default: warning

CLI: --ll or --log-level

ENV: ANSIBLE_NAVIGATOR_LOG_LEVEL

Settings file:

ansible-navigator:
  logging:
    level:

mode

Specify the user-interface mode.

Choices: stdout or interactive

Default: interactive

CLI: -m or --mode

ENV: ANSIBLE_NAVIGATOR_MODE

Settings file:

ansible-navigator:
  mode:

osc4

Enable or disable terminal color changing support with OSC 4.

Choices: True or False

Default: True

CLI: --osc4

ENV: ANSIBLE_NAVIGATOR_OSC4

Settings file:

ansible-navigator:
  color:
    osc4:

pass-environment-variable

Specify an existing environment variable to be passed through to and set within the automation execution environment (--penv MY_VAR)

Default: No default value set

CLI: --penv or --pass-environment-variable

ENV: ANSIBLE_NAVIGATOR_PASS_ENVIRONMENT_VARIABLES

Settings file:

ansible-navigator:
  execution-environment:
    environment-variables:
      pass:

pull-policy

Specify the image pull policy.

always - Always pull the image

missing - Pull if not locally available

never - Never pull the image

tag - If the image tag is latest, always pull the image; otherwise pull if not locally available

Choices: always, missing, never, or tag

Default: tag

CLI: --pp or --pull-policy

ENV: ANSIBLE_NAVIGATOR_PULL_POLICY

Settings file:

ansible-navigator:
  execution-environment:
    pull-policy:

set-environment-variable

Specify an environment variable and a value to be set within the automation execution environment (--senv MY_VAR=42)

Default: No default value set

CLI: --senv or --set-environment-variable

ENV: ANSIBLE_NAVIGATOR_SET_ENVIRONMENT_VARIABLES

Settings file:

ansible-navigator:
  execution-environment:
    environment-variables:
      set:

Automation content navigator config subcommand settings

The following table describes each parameter and setting options for the Automation content navigator config subcommand.

Table 13. Automation content navigator config subcommand parameters settings
Parameter Description Setting options

config

Specify the path to the Ansible configuration file.

Default: No default value set

CLI: -c or --config

ENV: ANSIBLE_CONFIG

Settings file:

ansible-navigator:
  ansible:
    config:
      path:

help-config

Help options for the ansible-config command in stdout mode.

Choices: True or False

Default: False

CLI: --hc or --help-config

ENV: ANSIBLE_NAVIGATOR_HELP_CONFIG

Settings file:

ansible-navigator:
  help-config:

Automation content navigator doc subcommand settings

The following table describes each parameter and setting options for the Automation content navigator doc subcommand.

Table 14. Automation content navigator doc subcommand parameters settings
Parameter Description Setting options

help-doc

Help options for the ansible-doc command in stdout mode.

Choices: True or False

Default: False

CLI: --hd or --help-doc

ENV: ANSIBLE_NAVIGATOR_HELP_DOC

Settings file:

ansible-navigator:
  help-doc:

plugin-name

Specify the plugin name.

Default: No default value set

CLI: positional

ENV: ANSIBLE_NAVIGATOR_PLUGIN_NAME

Settings file:

ansible-navigator:
  documentation:
    plugin:
      name:

plugin-type

Specify the plugin type.

Choices: become, cache, callback, cliconf, connection, httpapi, inventory, lookup, module, netconf, shell, strategy, or vars

Default: module

CLI: -t or --type

ENV: ANSIBLE_NAVIGATOR_PLUGIN_TYPE

Settings file:

ansible-navigator:
  documentation:
    plugin:
      type:

Automation content navigator inventory subcommand settings

The following table describes each parameter and setting options for the Automation content navigator inventory subcommand.

Table 15. Automation content navigator inventory subcommand parameters settings
Parameter Description Setting options

help-inventory

Help options for the ansible-inventory command in stdout mode.

Choices: True or False

Default: False

CLI: --hi or --help-inventory

ENV: ANSIBLE_NAVIGATOR_INVENTORY_DOC

Settings file:

ansible-navigator:
  help-inventory:

inventory

Specify an inventory file path or comma separated host list.

Default: no default value set

CLI: -i or --inventory

ENV: ANSIBLE_NAVIGATOR_INVENTORIES

Settings file:

ansible-navigator:
  inventories:

inventory-column

Specify a host attribute to show in the inventory view.

Default: No default value set

CLI: --ic or --inventory-column

ENV: ANSIBLE_NAVIGATOR_INVENTORY_COLUMNS

Settings file:

ansible-navigator:
  inventory-columns:

Automation content navigator replay subcommand settings

The following table describes each parameter and setting options for the Automation content navigator replay subcommand.

Table 16. Automation content navigator replay subcommand parameters settings
Parameter Description Setting options

playbook-artifact-replay

Specify the path for the playbook artifact to replay.

Default: No default value set

CLI: positional

ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_REPLAY

Settings file:

ansible-navigator:
  playbook-artifact:
    replay:

Automation content navigator run subcommand settings

The following table describes each parameter and setting options for the Automation content navigator run subcommand.

Table 17. Automation content navigator run subcommand parameters settings
Parameter Description Setting options

playbook-artifact-replay

Specify the path for the playbook artifact to replay.

Default: No default value set

CLI: positional

ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_REPLAY

Settings file:

ansible-navigator:
  playbook-artifact:
    replay:

help-playbook

Help options for the ansible-playbook command in stdout mode.

Choices: True or False

Default: False

CLI: --hp or --help-playbook

ENV: ANSIBLE_NAVIGATOR_HELP_PLAYBOOK

Settings file:

ansible-navigator:
  help-playbook:

inventory

Specify an inventory file path or comma separated host list.

Default: no default value set

CLI: -i or --inventory

ENV: ANSIBLE_NAVIGATOR_INVENTORIES

Settings file:

ansible-navigator:
  inventories:

inventory-column

Specify a host attribute to show in the inventory view.

Default: No default value set

CLI: --ic or --inventory-column

ENV: ANSIBLE_NAVIGATOR_INVENTORY_COLUMNS

Settings file:

ansible-navigator:
  inventory-columns:

playbook

Specify the playbook name.

Default: No default value set

CLI: positional

ENV: ANSIBLE_NAVIGATOR_PLAYBOOK

Settings file:

ansible-navigator:
  ansible:
    playbook:

playbook-artifact-enable

Enable or disable the creation of artifacts for completed playbooks. Note: not compatible with --mode stdout when playbooks require user input.

Choices: True or False

Default: True

CLI: --pae or --playbook-artifact-enable

ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_ENABLE

Settings file:

ansible-navigator:
  playbook-artifact:
    enable:

playbook-artifact-save-as

Specify the name for artifacts created from completed playbooks.

Default: {playbook_dir}/{playbook_name}-artifact-{ts_utc}.json

CLI: --pas or --playbook-artifact-save-as

ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_SAVE_AS

Settings file:

ansible-navigator:
  playbook-artifact:
    save-as:

Using the automation controller Dashboard for IT orchestration

The Dashboard offers a graphical framework for your IT orchestration needs. Use the navigation menu to complete the following tasks:

  • Display different views

  • Navigate to your resources

  • Grant users access

  • Administer automation controller features in the UI

About the installer inventory file

Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. By using an inventory file, Ansible can manage a large number of hosts with a single command. Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify.

The inventory file can be in one of many formats, depending on the inventory plugins that you have. The most common formats are INI and YAML. Inventory files listed in this document are shown in INI format.

The location of the inventory file depends on the installer you used. The following table shows possible locations:

Installer        Location
Bundle tar       /ansible-automation-platform-setup-bundle-<latest-version>
Non-bundle tar   /ansible-automation-platform-setup-<latest-version>
RPM              /opt/ansible-automation-platform/installer

You can verify the hosts in your inventory using the command:

ansible all -i <path-to-inventory-file> --list-hosts
Example inventory file
[automationcontroller]
host1.example.com
host2.example.com
host4.example.com

[automationhub]
host3.example.com

[database]
host5.example.com

[all:vars]
admin_password='<password>'

pg_host=''
pg_port=''

pg_database='awx'
pg_username='awx'
pg_password='<password>'

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

The first part of the inventory file specifies the hosts or groups that Ansible can work with.

Guidelines for hosts and groups

Databases
  • When using an external database, ensure the [database] sections of your inventory file are properly set up.

  • To improve performance, do not colocate the database and the automation controller on the same server.

Automation hub
  • If there is an [automationhub] group, you must include the variables automationhub_pg_host and automationhub_pg_port.

  • Add Ansible automation hub information in the [automationhub] group

  • Do not install Ansible automation hub and automation controller on the same node.

  • Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub and automation controller from a different node. Do not use localhost.

Private automation hub
  • Do not install private automation hub and automation controller on the same node.

  • You can use the same PostgreSQL (database) instance, but they must use different (database) names.

  • If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, it can result in an installation you cannot use as a container registry without certificate issues.

Important

You must separate the installation of automation controller and Ansible automation hub because the [database] group does not distinguish between the two if both are installed at the same time.

If you use one value in [database] and both automation controller and Ansible automation hub define it, they would use the same database.

Automation controller
  • Automation controller does not configure replication or failover for the database that it uses.

  • automation controller works with any replication that you have.

Clustered installations
  • When upgrading an existing cluster, you can also reconfigure your cluster to omit existing instances or instance groups. Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster. In addition to omitting instances or instance groups from the inventory file, you must also deprovision instances or instance groups before starting the upgrade. See Deprovisioning nodes or groups. Otherwise, omitted instances or instance groups continue to communicate with the cluster, which can cause issues with automation controller services during the upgrade.

  • If you are creating a clustered installation setup, you must replace [localhost] with the hostname or IP address of all instances. Installers for automation controller and automation hub do not accept [localhost]. All nodes and instances must be able to reach any others by using this hostname or address. You cannot use localhost ansible_connection=local on any of the nodes. Use the same format for the host names of all the nodes.

    Therefore, this does not work:

    [automationhub]
    localhost ansible_connection=local
    hostA
    hostB.example.com
    172.27.0.4

    Instead, use these formats:

    [automationhub]
    hostA
    hostB
    hostC

    or

    [automationhub]
    hostA.example.com
    hostB.example.com
    hostC.example.com

Deprovisioning nodes or groups

You can deprovision nodes and instance groups using the Ansible Automation Platform installer. Running the installer will remove all configuration files and logs attached to the nodes in the group.

Note

You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group.

To deprovision nodes, append node_state=deprovision to the node or group within the inventory file.

For example:

To remove a single node from a deployment:

[automationcontroller]
host1.example.com
host2.example.com
host4.example.com   node_state=deprovision

or

To remove an entire instance group from a deployment:

[instance_group_restrictedzone]
host4.example.com
host5.example.com

[instance_group_restrictedzone:vars]
node_state=deprovision

Inventory variables

The second part of the example inventory file, following [all:vars], is a list of variables used by the installer. Using all means the variables apply to all hosts.

To apply variables to a particular host, use [hostname:vars]. For example, [automationhub:vars].

Rules for declaring variables in inventory files

The values of string variables are declared in quotes. For example:

pg_database='awx'
pg_username='awx'
pg_password='<password>'

When declared in a :vars section, INI values are interpreted as strings. For example, var=FALSE creates a string equal to FALSE. Unlike host lines, :vars sections accept only a single entry per line, so everything after the = must be the value for the entry. Host lines accept multiple key=value parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator. Values that contain whitespace can be quoted (single or double). See the Python shlex parsing rules for details.

If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables.
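
For example, if a hypothetical variable is declared as enable_feature=False in a :vars section, Ansible stores the string "False"; cast it explicitly when you consume it:

    - name: Act on an inventory boolean safely
      ansible.builtin.debug:
        msg: "The feature is enabled"
      when: enable_feature | bool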

Note

Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly.

If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks).

For example, to use mypasswordwith#hashsigns as a value for the variable pg_password, declare it as pg_password='"mypasswordwith#hashsigns"' in the Ansible host inventory file.
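
As a sketch of the YAML alternative recommended in the note above, the same variables need only ordinary quoting because the YAML inventory plugin preserves the value as written:

    all:
      vars:
        pg_database: awx
        pg_username: awx
        pg_password: "mypasswordwith#hashsigns"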

Securing secrets in the inventory file

You can encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names as well as the variable values makes it hard to find the source of the values. To circumvent this, you can encrypt the variables individually using ansible-vault encrypt_string, or encrypt a file containing the variables.
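
For example, to encrypt a single variable value with ansible-vault encrypt_string and paste the resulting block into your inventory file (the password shown is the placeholder used in the following procedure):

    $ ansible-vault encrypt_string 'my_long_admin_pw' --name 'admin_password'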

Procedure
  1. Create a file labeled credentials.yml to store the encrypted credentials.

    $ cat credentials.yml
    
    admin_password: my_long_admin_pw
    pg_password: my_long_pg_pw
    registry_password: my_long_registry_pw
  2. Encrypt the credentials.yml file using ansible-vault.

    $ ansible-vault encrypt credentials.yml
    New Vault password:
    Confirm New Vault password:
    Encryption successful
    Important

    Store your encrypted vault password in a safe place.

  3. Verify that the credentials.yml file is encrypted.

    $ cat credentials.yml
    $ANSIBLE_VAULT;1.1;
    AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763
  4. Run setup.sh for installation of Ansible Automation Platform 2.4 and pass both credentials.yml and the --ask-vault-pass option.

    $ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass

Additional inventory file variables

You can further configure your Red Hat Ansible Automation Platform installation by including additional variables in the inventory file. These configurations add optional features for managing your Red Hat Ansible Automation Platform. Add these variables by editing the inventory file using a text editor.

A table of predefined values for inventory file variables can be found in Inventory File Variables in the Red Hat Ansible Automation Platform Installation Guide.

Installing and configuring automation controller on Red Hat OpenShift Container Platform web console

You can use these instructions to install the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface.
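
As a minimal sketch, extra_settings entries are added as a list of setting and value pairs on the AutomationController custom resource; the resource name and the MAX_PAGE_SIZE setting below are illustrative assumptions only:

    apiVersion: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    metadata:
      name: example-controller    # placeholder name
    spec:
      extra_settings:
        - setting: MAX_PAGE_SIZE
          value: "500"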

Note

When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.

Prerequisites

  • You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.

  • For Controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured.

  • For Hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, Redis, and API pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.

Installing the automation controller operator

Use this procedure to install the automation controller operator.

Procedure
  1. Navigate to menu:Operators[Installed Operators], then click on the Ansible Automation Platform operator.

  2. Locate the Automation controller tab, then click btn:[Create instance].

You can proceed with configuring the instance using either the Form View or YAML view.

Creating your automation controller form-view

Use this procedure to create your automation controller using the form-view.

Procedure
  1. Ensure Form view is selected. It should be selected by default.

  2. Enter the name of the new controller.

  3. Optional: Add any labels necessary.

  4. Click btn:[Advanced configuration].

  5. Enter the Hostname of the instance. The hostname is optional; if you do not provide one, a default hostname is generated based on the deployment name you selected.

  6. Enter the Admin account username.

  7. Enter the Admin email address.

  8. Under the Admin password secret drop-down menu, select the secret.

  9. Under Database configuration secret drop-down menu, select the secret.

  10. Under Old Database configuration secret drop-down menu, select the secret.

  11. Under Secret key secret drop-down menu, select the secret.

  12. Under Broadcast Websocket Secret drop-down menu, select the secret.

  13. Enter any Service Account Annotations necessary.

Configuring your controller image pull policy

Use this procedure to configure the image pull policy on your automation controller.

Procedure
  1. Under Image Pull Policy, click on the radio button to select

    • Always

    • Never

    • IfNotPresent

  2. To display the option under Image Pull Secrets, click the arrow.

    1. Click btn:[+] beside Add Image Pull Secret and enter a value.

  3. To display fields under the Web container resource requirements drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  4. To display fields under the Task container resource requirements drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  5. To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  6. To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  7. To display fields under the Redis container resource requirements drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  8. To display fields under the PostgreSQL container resource requirements (when using a managed instance) drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  9. To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow.

    1. Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.

  10. Under Replicas, enter the number of instance replicas.

  11. Under Remove used secrets on instance removal, select true or false. The default is false.

  12. Under Preload instance with data upon creation, select true or false. The default is true.

Configuring your controller LDAP security

Use this procedure to configure LDAP security for your automation controller.

Procedure
  1. Under LDAP Certificate Authority Trust Bundle click the drop-down menu and select a secret.

  2. Under LDAP Password Secret, click the drop-down menu and select a secret.

  3. Under EE Images Pull Credentials Secret, click the drop-down menu and select a secret.

  4. Under Bundle Cacert Secret, click the drop-down menu and select a secret.

  5. Under Service Type, click the drop-down menu and select

    • ClusterIP

    • LoadBalancer

    • NodePort

Configuring your automation controller operator route options

The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration.

Procedure
  1. Click btn:[Advanced configuration].

  2. Under Ingress type, click the drop-down menu and select Route.

  3. Under Route DNS host, enter a common host name that the route answers to.

  4. Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough. For most instances Edge should be selected.

  5. Under Route TLS credential secret, click the drop-down menu and select a secret from the list.

  6. Under Enable persistence for /var/lib/projects directory select either true or false by moving the slider.

Configuring the Ingress type for your automation controller operator

The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator Ingress under Advanced configuration.

Procedure
  1. Click btn:[Advanced Configuration].

  2. Under Ingress type, click the drop-down menu and select Ingress.

  3. Under Ingress annotations, enter any annotations to add to the ingress.

  4. Under Ingress TLS secret, click the drop-down menu and select a secret from the list.

After you have configured your automation controller operator, click btn:[Create] at the bottom of the form view. Red Hat OpenShift Container Platform will now create the pods. This may take a few minutes.

You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance.

Verification

Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running. They fall into three groups: the operator manager controllers, automation controller, and automation hub.

The operator manager controllers for each of the three operators include the following:

  • automation-controller-operator-controller-manager

  • automation-hub-operator-controller-manager

  • resource-operator-controller-manager

After deploying automation controller, you will see the addition of these pods:

  • controller

  • controller-postgres

After deploying automation hub, you will see the addition of these pods:

  • hub-api

  • hub-content

  • hub-postgres

  • hub-redis

  • hub-worker

Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
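
For example, you can list the pods in the deployment namespace and then inspect any pod that is not running (illustrative commands; substitute your own namespace and pod name):

$ oc get pods -n <namespace>
$ oc describe pod <pod-name> -n <namespace>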

Configuring an external database for automation controller on Red Hat Ansible Automation Platform operator

If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.

By default, the Red Hat Ansible Automation Platform operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Red Hat Ansible Automation Platform operator automatically creates.

Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.

Note

The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your automation controller when using the Ansible Automation Platform operator.

Prerequisite

The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.

Note

Ansible Automation Platform 2.4 supports PostgreSQL 13.
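
To confirm that an existing external database meets this requirement, you can query its version before configuring the secret. This is an illustrative check using the standard psql client; substitute your own connection details:

$ psql -h <external_ip_or_url_resolvable_by_the_cluster> -p <external_port> -U <username_to_connect_as> -d <desired_database_name> -c "SELECT version();"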

Procedure

The external postgres instance credentials and connection information will need to be stored in a secret, which will then be set on the automation controller spec.

  1. Create a YAML file for the external PostgreSQL configuration secret (for example, external-postgres-configuration-secret.yml), following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace> (1)
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>" (2)
      port: "<external_port>" (3)
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>" (4)
      sslmode: "prefer" (5)
      type: "unmanaged"
    type: Opaque
    1. Namespace to create the secret in. This should be the same namespace you wish to deploy to.

    2. The resolvable hostname for your database node.

    3. External port defaults to 5432.

    4. The value for password must not contain single or double quotes (', ") or backslashes (\) to avoid issues during deployment, backup, or restoration.

    5. The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.

  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
  3. When creating your AutomationController custom resource object, specify the secret on your spec, following the example below:

    apiVersion: awx.ansible.com/v1beta1
    kind: AutomationController
    metadata:
      name: controller-dev
    spec:
      postgres_configuration_secret: external-postgres-configuration

Finding and deleting PVCs

A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete it.

Procedure
  1. List the existing PVCs in your deployment namespace:

    oc get pvc -n <namespace>
  2. Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.

  3. Delete the old PVC:

    oc delete pvc -n <namespace> <pvc-name>


Control plane adjustments

The control plane refers to the automation controller pods which contain the web and task containers that, among other things, provide the user interface and handle the scheduling and launching of jobs. On the automation controller custom resource, the number of replicas determines the number of automation controller pods in the automation controller deployment.

Requests and limits for task containers

You must set a value for the task container’s CPU and memory resource limits. For each job that is run in an execution node, processing must occur on the control plane to schedule, launch, and receive callback events for that job.

For Operator deployments of automation controller, this control plane capacity usage is tracked on the controlplane instance group. The available capacity is determined based on the limits the user sets on the task container, using the task_resource_requirements field in the automation controller specification, or in the OpenShift UI, when creating automation controller.

You can also set memory and CPU resource limits that make sense for your cluster.

Containers resource requirements

You can configure the resource requirements of the task and web containers, at both the lower end (requests) and the upper end (limits). The control plane execution environment is used for project updates; it is normally the same as the default execution environment for jobs.

Setting resource requests and limits is a best practice because a container that has both defined is given a higher Quality of Service class. This means that if the underlying node is resource constrained and the cluster must reap a pod to prevent running out of memory or other failures, the control plane pod is less likely to be reaped.

These requests and limits apply to the control pods for automation controller and, if limits are set, determine the capacity of the instance. By default, controlling a job takes 1 unit of capacity. The memory and CPU limits of the task container are used to determine the capacity of control nodes. For more information on how this is calculated, see Resource determination.

Name Description Default

web_resource_requirements

Web container resource requirements

requests: {CPU: 100m, memory: 128Mi}

task_resource_requirements

Task container resource requirements

requests: {CPU: 100m, memory: 128Mi}

ee_resource_requirements

EE control plane container resource requirements

requests: {CPU: 100m, memory: 128Mi}

redis_resource_requirements

Redis control plane container resource requirements

requests: {CPU: 100m, memory: 128Mi}

Because the use of topology_spread_constraints to maximally spread control nodes onto separate underlying Kubernetes worker nodes is also recommended, a reasonable set of requests and limits is one whose limits sum to the actual resources on the node. If only limits are set, the request is automatically set equal to the limit. However, because some variability of resource usage between the containers in the control pod is permitted, you can set requests to a lower amount, for example 25% of the resources available on the node. An example of container customization for a cluster where the worker nodes have 4 CPUs and 16 GB of RAM could be:

spec:
  ...
  web_resource_requirements:
    requests:
      cpu: 250m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 4Gi
  task_resource_requirements:
    requests:
      cpu: 250m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  redis_resource_requirements:
    requests:
      cpu: 250m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 4Gi
  ee_resource_requirements:
    requests:
      cpu: 250m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 4Gi
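
Because topology_spread_constraints is referenced above as the way to spread control pods across worker nodes, a hedged sketch of that setting follows. It assumes the operator exposes topology_spread_constraints as a literal block of standard Kubernetes constraints, as in the upstream operator; verify the exact parameter format against your operator version:

spec:
  ...
  topology_spread_constraints: |
    - maxSkew: 1
      topologyKey: "kubernetes.io/hostname"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: "<resourcename>"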

Alternative capacity limiting with automation controller settings

The capacity of a control node in OpenShift is determined by the memory and CPU limits. However, if these are not set, the capacity is determined by the memory and CPU detected by the pod on the filesystem, which are actually the CPU and memory of the underlying Kubernetes node.

This can lead to issues with overwhelming the underlying Kubernetes node if the automation controller pod is not the only pod on that node. If you do not want to set limits directly on the task container, you can use extra_settings (see Extra Settings in the Custom pod timeouts section) to configure the following:

SYSTEM_TASK_ABS_MEM = 3gi
SYSTEM_TASK_ABS_CPU = 750m

This acts as a soft limit within the application that enables automation controller to control how much work it attempts to run, while not risking CPU throttling from Kubernetes itself, or being reaped if memory usage peaks above the desired limit. These settings accept the same format as resource requests and limits in the Kubernetes resource definition.
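
A minimal sketch of how these settings might be applied through extra_settings on the AutomationController spec is shown below. The list-of-setting/value format follows the upstream operator's extra_settings parameter; confirm the format against your operator version:

spec:
  ...
  extra_settings:
    - setting: SYSTEM_TASK_ABS_MEM
      value: "3gi"
    - setting: SYSTEM_TASK_ABS_CPU
      value: "750m"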

Red Hat Ansible Automation Platform 2.4 upgrades

Upgrade to Red Hat Ansible Automation Platform 2.4 by setting up your inventory and running the installation script. Ansible then upgrades your deployment to 2.4. If you plan to upgrade from Ansible Automation Platform 2.0 or earlier, you must migrate Ansible content for compatibility with 2.4.

Ansible Automation Platform upgrades

Upgrading to version 2.4 from Ansible Automation Platform 2.1 or later involves downloading the installation package and then performing the following steps:

  • Set up your inventory to match your installation environment.

  • Run the 2.4 installation program over your current Ansible Automation Platform installation.

Ansible Automation Platform legacy upgrades

Upgrading to version 2.4 from Ansible Automation Platform 2.0 or earlier requires you to migrate Ansible content for compatibility.

The following steps provide an overview of the legacy upgrade process:

  • Duplicate your custom virtual environments into automation execution environments using the awx-manage command.

  • Migrate data from isolated legacy nodes to execution nodes by performing a side-by-side upgrade so nodes are compatible with the latest automation mesh features.

  • Import or generate a new automation hub API token.

  • Reconfigure your Ansible content to use Fully Qualified Collection Names (FQCNs) for compatibility with ansible-core 2.15, as illustrated in the example below.
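
For example, a task that previously used a short module name would be updated to use the FQCN (an illustrative snippet only):

# Before: short module name, resolved implicitly
- name: Install a package
  yum:
    name: httpd
    state: present

# After: Fully Qualified Collection Name
- name: Install a package
  ansible.builtin.yum:
    name: httpd
    state: present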

Automation mesh design patterns

The automation mesh topologies in this section provide examples you can use to design a mesh deployment in your environment. Examples range from a simple hybrid node deployment to a complex pattern that deploys numerous automation controller instances, employing several execution and hop nodes.

Prerequisites
  • You reviewed conceptual information on node types and relationships.

Note

The following examples include images that illustrate the mesh topology. The arrows in the images indicate the direction of peering. After peering is established, the connection between the nodes allows bidirectional communication.

Multiple hybrid nodes inventory file example

This example inventory file deploys a control plane consisting of multiple hybrid nodes. The nodes in the control plane are automatically peered to one another.

[automationcontroller]
aap_c_1.example.com
aap_c_2.example.com
aap_c_3.example.com

The following image displays the topology of this mesh network.

The topology map of the multiple hybrid node mesh configuration consists of an automation controller group. The automation controller group contains three hybrid nodes: aap_c_1, aap_c_2, and aap_c_3. The control nodes are peered to one another as follows: aap_c_3 is peered to aap_c_1 and aap_c_1 is peered to aap_c_2.

The default node_type for nodes in the control plane is hybrid. You can explicitly set the node_type of individual nodes to hybrid in the [automationcontroller] group:

[automationcontroller]
aap_c_1.example.com node_type=hybrid
aap_c_2.example.com node_type=hybrid
aap_c_3.example.com node_type=hybrid

Alternatively, you can set the node_type of all nodes in the [automationcontroller] group. When you add new nodes to the control plane they are automatically set to hybrid nodes.

[automationcontroller]
aap_c_1.example.com
aap_c_2.example.com
aap_c_3.example.com

[automationcontroller:vars]
node_type=hybrid

If you think that you might add control nodes to your control plane in future, it is better to define a separate group for the hybrid nodes, and set the node_type for the group:

[automationcontroller]
aap_c_1.example.com
aap_c_2.example.com
aap_c_3.example.com

[hybrid_group]
aap_c_1.example.com
aap_c_2.example.com
aap_c_3.example.com

[hybrid_group:vars]
node_type=hybrid

Single node control plane with single execution node

This example inventory file deploys a single-node control plane and establishes a peer relationship to an execution node.

[automationcontroller]
aap_c_1.example.com

[automationcontroller:vars]
node_type=control
peers=execution_nodes

[execution_nodes]
aap_e_1.example.com

The following image displays the topology of this mesh network.

The topology map shows an automation controller group and an execution node. The automation controller group contains a single control node: aap_c_1. The execution node is aap_e_1. The aap_c_1 node is peered to aap_e_1.

The [automationcontroller] stanza defines the control nodes. If you add a new node to the automationcontroller group, it will automatically peer with the aap_c_1.example.com node.

The [automationcontroller:vars] stanza sets the node type to control for all nodes in the control plane and defines how the nodes peer to the execution nodes:

  • If you add a new node to the execution_nodes group, the control plane nodes automatically peer to it.

  • If you add a new node to the automationcontroller group, the node type is set to control.

The [execution_nodes] stanza lists all the execution and hop nodes in the inventory. The default node type is execution. You can specify the node type for an individual node:

[execution_nodes]
aap_e_1.example.com node_type=execution

Alternatively, you can set the node_type of all execution nodes in the [execution_nodes] group. When you add new nodes to the group, they are automatically set to execution nodes.

[execution_nodes]
aap_e_1.example.com

[execution_nodes:vars]
node_type=execution

If you plan to add hop nodes to your inventory in future, it is better to define a separate group for the execution nodes, and set the node_type for the group:

[execution_nodes]
aap_e_1.example.com

[local_execution_group]
aap_e_1.example.com

[local_execution_group:vars]
node_type=execution

Minimum resilient configuration

This example inventory file deploys a control plane consisting of two control nodes, and two execution nodes. All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all nodes in the execution_nodes group. This configuration is resilient because the execution nodes are reachable from all control nodes.

The capacity algorithm determines which control node is chosen when a job is launched. Refer to Automation controller Capacity Determination and Job Impact in the Automation Controller User Guide for more information.

The following inventory file defines this configuration.

[automationcontroller]
aap_c_1.example.com
aap_c_2.example.com

[automationcontroller:vars]
node_type=control
peers=execution_nodes

[execution_nodes]
aap_e_1.example.com
aap_e_2.example.com

The [automationcontroller] stanza defines the control nodes. All nodes in the control plane are peered to one another. If you add a new node to the automationcontroller group, it will automatically peer with the original nodes.

The [automationcontroller:vars] stanza sets the node type to control for all nodes in the control plane and defines how the nodes peer to the execution nodes:

  • If you add a new node to the execution_nodes group, the control plane nodes automatically peer to it.

  • If you add a new node to the automationcontroller group, the node type is set to control.

The following image displays the topology of this mesh network.

The topology map of the minimum resilient mesh configuration consists of an automation controller group and two execution nodes. The automation controller group consists of two control nodes: aap_c_1 and aap_c_2. The execution nodes are aap_e_1 and aap_e_2. The aap_c_1 node is peered to aap_c_2. Every control node is peered to every execution node.

Segregated local and remote execution configuration

This configuration adds a hop node and a remote execution node to the resilient configuration. The remote execution node is reachable from the hop node.

You can use this setup if you are setting up execution nodes in a remote location, or if you need to run automation in a DMZ network.

[automationcontroller]
aap_c_1.example.com
aap_c_2.example.com

[automationcontroller:vars]
node_type=control
peers=instance_group_local

[execution_nodes]
aap_e_1.example.com
aap_e_2.example.com
aap_h_1.example.com
aap_e_3.example.com

[instance_group_local]
aap_e_1.example.com
aap_e_2.example.com

[hop]
aap_h_1.example.com

[hop:vars]
peers=automationcontroller

[instance_group_remote]
aap_e_3.example.com

[instance_group_remote:vars]
peers=hop

The following image displays the topology of this mesh network.

The topology map of the configuration consists of an automation controller group, a local execution group, a hop node group, and a remote execution node group. The automation controller group consists of two control nodes: aap_c_1 and aap_c_2. The local execution nodes are aap_e_1 and aap_e_2. Every control node is peered to every local execution node. The hop node group contains one hop node, aap_h_1. It is peered to the controller group. The remote execution node group contains one execution node, aap_e_3. It is peered to the hop node group.

The [automationcontroller:vars] stanza sets the node types for all nodes in the control plane and defines how the control nodes peer to the local execution nodes:

  • All nodes in the control plane are automatically peered to one another.

  • All nodes in the control plane are peered with all local execution nodes.

If the name of a group of nodes begins with instance_group_, the installer recognises it as an instance group and adds it to the Ansible Automation Platform user interface.

Multi-hopped execution node

In this configuration, resilient controller nodes are peered with resilient local execution nodes. Resilient local hop nodes are peered with the controller nodes. A remote execution node and a remote hop node are peered with the local hop nodes.

You can use this setup if you need to run automation in a DMZ network from a remote network.

[automationcontroller]
aap_c_1.example.com
aap_c_2.example.com
aap_c_3.example.com

[automationcontroller:vars]
node_type=control
peers=instance_group_local

[execution_nodes]
aap_e_1.example.com
aap_e_2.example.com
aap_e_3.example.com
aap_e_4.example.com
aap_h_1.example.com node_type=hop
aap_h_2.example.com node_type=hop
aap_h_3.example.com node_type=hop

[instance_group_local]
aap_e_1.example.com
aap_e_2.example.com

[instance_group_remote]
aap_e_3.example.com

[instance_group_remote:vars]
peers=local_hop

[instance_group_multi_hop_remote]
aap_e_4.example.com

[instance_group_multi_hop_remote:vars]
peers=remote_multi_hop

[local_hop]
aap_h_1.example.com
aap_h_2.example.com

[local_hop:vars]
peers=automationcontroller

[remote_multi_hop]
aap_h_3.example.com peers=local_hop

The following image displays the topology of this mesh network.

The topology map of the configuration consists of an automation controller group, a local execution group, a hop node group, and a remote execution node group. The automation controller group consists of three control nodes: aap_c_1, aap_c_2, and aap_c_3. The local execution nodes are aap_e_1 and aap_e_2. Every control node is peered to every local execution node. The hop node group contains two hop nodes, aap_h_1 and aap_h_2. It is peered to the controller group. The remote execution node group contains one execution node, aap_e_3. It is peered to the hop node group. A remote hop node group, consisting of node aap_h_3, is peered with the local hop node group. An execution node, aap_e_4, is peered with the remote hop group

The [automationcontroller:vars] stanza sets the node types for all nodes in the control plane and defines how the control nodes peer to the local execution nodes:

  • All nodes in the control plane are automatically peered to one another.

  • All nodes in the control plane are peered with all local execution nodes.

The [local_hop:vars] stanza peers all nodes in the [local_hop] group with all the control nodes.

If the name of a group of nodes begins with instance_group_, the installer recognises it as an instance group and adds it to the Ansible Automation Platform user interface.

Outbound only connections to controller nodes

This example inventory file deploys a control plane consisting of two control nodes and several execution nodes. Only outbound connections are allowed to the controller nodes. All nodes in the 'execution_nodes' group are peered with all nodes in the control plane.

[automationcontroller]
controller-[1:2].example.com

[execution_nodes]
execution-[1:5].example.com

[execution_nodes:vars]
# connection is established *from* the execution nodes *to* the automationcontroller
peers=automationcontroller

The following image displays the topology of this mesh network.

The topology map consists of an automation controller group and a local execution group. The automation controller group consists of two control nodes: aap_c_1 and aap_c_2. The local execution nodes are aap_e_1, aap_e_2, aap_e_3, aap_e_4, and aap_e_5. Every execution node is peered to every control node in an outgoing connection.

Creating Red Hat Ansible Automation Platform backup resources

Backing up your Red Hat Ansible Automation Platform deployment involves creating backup resources for your deployed automation hub and automation controller instances. Use these procedures to create backup resources for your Red Hat Ansible Automation Platform deployment.

Backing up the Automation controller deployment

Use this procedure to back up a deployment of the controller, including jobs, inventories, and credentials.

Prerequisites
  • You must be authenticated with an OpenShift cluster.

  • The Ansible Automation Platform Operator has been installed to the cluster.

  • The automation controller is deployed using the Ansible Automation Platform Operator.

Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Ansible Automation Platform Operator installed on your project namespace.

  4. Select the Automation Controller Backup tab.

  5. Click btn:[Create AutomationControllerBackup].

  6. Enter a Name for the backup.

  7. Enter the Deployment name of the deployed Ansible Automation Platform instance being backed up. For example, if your automation controller must be backed up and the deployment name is aap-controller, enter 'aap-controller' in the Deployment name field.

  8. If you want to use a custom, pre-created PVC:

    1. Optionally, enter the name of the Backup persistent volume claim.

    2. Optionally, enter the Backup PVC storage requirements and Backup PVC storage class.

      Note

      If no PVC or storage class is provided, the cluster’s default storage class is used to create the PVC.

    3. If you have a large database, specify your storage requests accordingly under Backup management pod resource requirements.

      Note

      You can check the size of the existing postgres database data directory by running the following command inside the postgres pod.

      $ df -h | grep "/var/lib/pgsql/data"
  9. Click btn:[Create].

    A backup tarball of the specified deployment is created and available for data recovery or deployment rollback. Future backups are stored in separate tar files on the same PVC.
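
    You can also check the backup from the command line. This is an illustrative command that assumes the AutomationControllerBackup resource kind shown in the tab above; substitute your own namespace:

    $ oc get automationcontrollerbackup -n <namespace>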

Backing up the Automation hub deployment

Use this procedure to back up a deployment of the hub, including all hosted Ansible content.

Prerequisites
  • You must be authenticated with an OpenShift cluster.

  • The Ansible Automation Platform Operator has been installed to the cluster.

  • The automation hub is deployed using the Ansible Automation Platform Operator.

Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Ansible Automation Platform Operator installed on your project namespace.

  4. Select the Automation Hub Backup tab.

  5. Click btn:[Create AutomationHubBackup].

  6. Enter a Name for the backup.

  7. Enter the Deployment name of the deployed Ansible Automation Platform instance being backed up. For example, if your automation hub must be backed up and the deployment name is aap-hub, enter 'aap-hub' in the Deployment name field.

  8. If you want to use a custom, pre-created PVC:

    1. Optionally, enter the name of the Backup persistent volume claim, Backup persistent volume claim namespace, Backup PVC storage requirements, and Backup PVC storage class.

  9. Click btn:[Create].

    A backup of the specified deployment is created and available for data recovery or deployment rollback.

Installing Red Hat Ansible Automation Platform

Ansible Automation Platform is a modular platform and you can deploy automation controller with other automation platform components, such as automation hub and Event-Driven Ansible controller. For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide.

There are a number of supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario using one of the following examples:

Editing the Red Hat Ansible Automation Platform installer inventory file

You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario.

Procedure
  1. Navigate to the installer:

    1. [RPM installed package]

      $ cd /opt/ansible-automation-platform/installer/
    2. [bundled installer]

      $ cd ansible-automation-platform-setup-bundle-<latest-version>
    3. [online installer]

      $ cd ansible-automation-platform-setup-<latest-version>
  2. Open the inventory file with a text editor.

  3. Edit inventory file parameters to specify your installation scenario. Use one of the supported Installation scenario examples to update your inventory file.

Additional resources

For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Inventory file variables.

Inventory file examples based on installation scenarios

Red Hat supports a number of installation scenarios for Ansible Automation Platform. Review the following examples and choose those suitable for your preferred installation scenario.

Important
  • For Red Hat Ansible Automation Platform or automation hub: Add an automation hub host in the [automationhub] group.

  • For internal databases: [database] cannot be used to point to another host in the Ansible Automation Platform cluster. The database host set to be installed by the installer needs to be a unique host.

  • Do not install automation controller and automation hub on the same node in a production or customer environment. This can cause contention issues and heavy resource use.

  • Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure users can sync and install content from automation hub from a different node. Do not use 'localhost'.

  • Do not use special characters for pg_password. It can cause the setup to fail.

  • Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.

  • The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used.

Standalone automation controller with internal database

Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an internal database.

[automationcontroller]
controller.example.com

[all:vars]
admin_password='<password>'
pg_host=''
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'


# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
Single automation controller with external (installer managed) database

Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node.

[automationcontroller]
controller.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
Single automation controller with external (customer provided) database

Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node that is not managed by the platform installer.

Important

This example does not have a host under the database group. This indicates to the installer that the database already exists, and is being managed elsewhere.

[automationcontroller]
controller.example.com

[database]

[all:vars]
admin_password='<password>'

pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'


# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
Ansible Automation Platform with an external (installer managed) database

Use this example to populate the inventory file to install Ansible Automation Platform. This installation inventory file includes two automation controller nodes, two execution nodes, automation hub with an external managed database, and Event-Driven Ansible controller.

# Automation Controller Nodes
# There are two valid node_types that can be assigned for this group.
# A node_type=control implies that the node will only be able to run
# project and inventory updates, but not regular jobs.
# A node_type=hybrid will have the ability to run everything.
# If you do not define the node_type, it defaults to hybrid.
#
# control.example node_type=control
# hybrid.example  node_type=hybrid
# hybrid2.example <- this will default to hybrid

[automationcontroller]
controller1.example.com node_type=control
controller2.example.com node_type=control

# Execution Nodes
# There are two valid node_types that can be assigned for this group.
# A node_type=hop implies that the node will forward jobs to an execution node.
# A node_type=execution implies that the node will be able to run jobs.
# If you do not define the node_type, it defaults to execution.
#
# hop.example node_type=hop
# execution.example  node_type=execution
# execution2.example <- this will default to execution

[execution_nodes]
execution1.example.com node_type=execution
execution2.example.com node_type=execution

[automationhub]
automationhub.example.com
[automationedacontroller]
eda.example.com
[database]
data.example.com

[all:vars]
admin_password='<password>'
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Receptor Configuration
#
receptor_listener_port=27199

# Automation Hub Configuration
#
automationhub_admin_password='<password>'
automationhub_pg_host='data.example.com'
automationhub_pg_port='5432'
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Automation EDA Controller Configuration
#

automationedacontroller_admin_password='<eda-password>'

automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432

automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<password>'

# The full routable URL used by EDA to connect to a controller host.
# This URL is required if there is no Automation Controller configured
# in inventory.
#
# automation_controller_main_url=''

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation EDA Controller.
#
# automationedacontroller_controller_verify_ssl = true

# SSL-related variables

# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Keystore file to install in SSO node
# sso_custom_keystore_file='/path/to/sso.jks'

# The default install will deploy SSO with sso_use_https=True
# Keystore password is required for https enabled SSO
sso_keystore_password=''
Ansible Automation Platform with an external (customer provided) database

Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes one of each node type (control, hybrid, hop, and execution), automation hub with an external managed database that is not managed by the platform installer, and Event-Driven Ansible controller.

Important

This example does not have a host under the database group. This indicates to the installer that the database already exists, and is being managed elsewhere.

# Automation Controller Nodes
# There are two valid node_types that can be assigned for this group.
# A node_type=control implies that the node will only be able to run
# project and inventory updates, but not regular jobs.
# A node_type=hybrid will have the ability to run everything.
# If you do not define the node_type, it defaults to hybrid.
#
# control.example node_type=control
# hybrid.example  node_type=hybrid
# hybrid2.example <- this will default to hybrid

[automationcontroller]
hybrid1.example.com node_type=hybrid
controller1.example.com node_type=control

# Execution Nodes
# There are two valid node_types that can be assigned for this group.
# A node_type=hop implies that the node will forward jobs to an execution node.
# A node_type=execution implies that the node will be able to run jobs.
# If you do not define the node_type, it defaults to execution.
#
# hop.example node_type=hop
# execution.example  node_type=execution
# execution2.example <- this will default to execution

[execution_nodes]
hop1.example.com node_type=hop
execution1.example.com node_type=execution

[automationhub]
automationhub.example.com
[automationedacontroller]
eda.example.com
[database]

[all:vars]
admin_password='<password>'
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Receptor Configuration
#
receptor_listener_port=27199

# Automation Hub Configuration
#
automationhub_admin_password='<password>'
automationhub_pg_host='data.example.com'
automationhub_pg_port='5432'
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Automation EDA Controller Configuration
#

automationedacontroller_admin_password='<eda-password>'

automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432

automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<password>'

# The full routable URL used by EDA to connect to a controller host.
# This URL is required if there is no Automation Controller configured
# in inventory.
#
# automation_controller_main_url=''

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation EDA Controller.
#
# automationedacontroller_controller_verify_ssl = true

# SSL-related variables

# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Keystore file to install in SSO node
# sso_custom_keystore_file='/path/to/sso.jks'

# The default install will deploy SSO with sso_use_https=True
# Keystore password is required for https enabled SSO
sso_keystore_password=''
Single Event-Driven Ansible controller node with internal database

Use this example to populate the inventory file to install Event-Driven Ansible controller. This installation inventory file includes a single Event-Driven Ansible controller node with an internal database.

Important

Automation controller must be installed before you populate the inventory file with the following Event-Driven Ansible variables.

# Automation EDA Controller Configuration
#

automationedacontroller_admin_password='<eda-password>'

automationedacontroller_pg_host=''
automationedacontroller_pg_port=5432

automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<password>'

# The full routable URL used by EDA to connect to a controller host.
# This URL is required if there is no Automation Controller configured
# in inventory.
#
automation_controller_main_url = 'https://controller.example.com/'

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation EDA Controller.
#
automationedacontroller_controller_verify_ssl = true
Standalone automation hub with internal database

Use this example to populate the inventory file to deploy a standalone instance of automation hub with an internal database.

[automationcontroller]


[automationhub]
automationhub.example.com ansible_connection=local

[all:vars]
registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

automationhub_admin_password='<password>'

automationhub_pg_host=''
automationhub_pg_port='5432'

automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
Single automation hub with external (installer managed) database

Use this example to populate the inventory file to deploy a single instance of automation hub with an external (installer managed) database.

[automationcontroller]

[automationhub]
automationhub.example.com

[database]
data.example.com

[all:vars]
registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

automationhub_admin_password='<password>'

automationhub_pg_host='data.example.com'
automationhub_pg_port='5432'

automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
Single automation hub with external (customer provided) database

Use this example to populate the inventory file to deploy a single instance of automation hub with an external database that is not managed by the platform installer.

Important

This example does not have a host under the database group. This indicates to the installer that the database already exists, and is being managed elsewhere.

[automationcontroller]

[automationhub]
automationhub.example.com

[database]

[all:vars]
registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

automationhub_admin_password='<password>'

automationhub_pg_host='data.example.com'
automationhub_pg_port='5432'

automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key
LDAP configuration on private automation hub

You must set the following six variables in your Red Hat Ansible Automation Platform installer inventory file to configure your private automation hub for LDAP authentication:

  • automationhub_authentication_backend

  • automationhub_ldap_server_uri

  • automationhub_ldap_bind_dn

  • automationhub_ldap_bind_password

  • automationhub_ldap_user_search_base_dn

  • automationhub_ldap_group_search_base_dn

If any of these variables are missing, the Ansible Automation Platform installer will not complete the installation.

Setting up your inventory file variables

When you configure your private automation hub with LDAP authentication, you must set the proper variables in your inventory files during the installation process.

Procedure
  1. Access your inventory file according to the procedure in Editing the Red Hat Ansible Automation Platform installer inventory file.

  2. Use the following example as a guide to set up your Ansible Automation Platform inventory file:

    automationhub_authentication_backend = "ldap"
    
    automationhub_ldap_server_uri = "ldap://ldap:389"   (for LDAPS, use automationhub_ldap_server_uri = "ldaps://ldap-server-fqdn")
    automationhub_ldap_bind_dn = "cn=admin,dc=ansible,dc=com"
    automationhub_ldap_bind_password = "GoodNewsEveryone"
    automationhub_ldap_user_search_base_dn = "ou=people,dc=ansible,dc=com"
    automationhub_ldap_group_search_base_dn = "ou=people,dc=ansible,dc=com"
    Note

    The following variables will be set with default values, unless you set them with other options.

    auth_ldap_user_search_scope='SUBTREE'
    auth_ldap_user_search_filter='(uid=%(user)s)'
    auth_ldap_group_search_scope='SUBTREE'
    auth_ldap_group_search_filter='(objectClass=Group)'
    auth_ldap_group_type_class='django_auth_ldap.config:GroupOfNamesType'
  3. If you plan to set up extra parameters in your private automation hub (such as user groups, super user access, mirroring, or others), proceed to the next section.

Configuring extra LDAP parameters

If you plan to set up super user access, user groups, mirroring, or other extra parameters, you can create a YAML file that contains them in your ldap_extra_settings dictionary.

Procedure
  1. Create a YAML file that contains ldap_extra_settings, such as the following:

    #ldapextras.yml
    ---
    ldap_extra_settings:
      AUTH_LDAP_USER_ATTR_MAP: '{"first_name": "givenName", "last_name": "sn", "email": "mail"}'
    ...

    Then run setup.sh -e @ldapextras.yml during private automation hub installation.

  2. Use this example to set up a superuser flag based on membership in an LDAP group.

    #ldapextras.yml
    ---
    ldap_extra_settings:
      AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",}
    ...

    Then run setup.sh -e @ldapextras.yml during private automation hub installation.

  3. Use this example to mirror all LDAP groups you belong to.

    #ldapextras.yml
    ---
    ldap_extra_settings:
      AUTH_LDAP_MIRROR_GROUPS: True
    ...

    Then run setup.sh -e @ldapextras.yml during private automation hub installation.

  4. Use this example to map LDAP user attributes (such as first name, last name, and email address of the user).

    #ldapextras.yml
    ---
    ldap_extra_settings:
      AUTH_LDAP_USER_ATTR_MAP: {"first_name": "givenName", "last_name": "sn", "email": "mail",}
    ...

    Then run setup.sh -e @ldapextras.yml during private automation hub installation.

  5. Use the following examples to grant or deny access based on LDAP group membership.

    1. To grant private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group):

      #ldapextras.yml
      ---
      ldap_extra_settings:
        AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'
      ...
    2. To deny private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group):

      #ldapextras.yml
      ---
      ldap_extra_settings:
        AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'
      ...

      Then run setup.sh -e @ldapextras.yml during private automation hub installation.

  6. Use this example to enable LDAP debug logging.

    #ldapextras.yml
    ---
    ldap_extra_settings:
      GALAXY_LDAP_LOGGING: True
    ...

    Then run setup.sh -e @ldapextras.yml during private automation hub installation.

    Note

    If it is not practical to re-run setup.sh, or if you only need debug logging enabled for a short time, you can manually add a line containing GALAXY_LDAP_LOGGING: True to the /etc/pulp/settings.py file on private automation hub. Restart both pulpcore-api.service and nginx.service for the changes to take effect (see the sketch after this procedure). To avoid failures due to human error, use this method only when necessary.

  7. Use this example to configure LDAP caching by setting the variable AUTH_LDAP_CACHE_TIMEOUT.

    #ldapextras.yml
    ---
    ldap_extra_settings:
      AUTH_LDAP_CACHE_TIMEOUT: 3600
    ...

    Then run setup.sh -e @ldapextras.yml during private automation hub installation.

You can view all of your settings in the /etc/pulp/settings.py file on your private automation hub.
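
If you use the manual approach described in the LDAP debug logging note above, a minimal sketch of the restart step is shown below. It assumes shell access to the private automation hub node and the systemd service names given in that note; add the GALAXY_LDAP_LOGGING line to /etc/pulp/settings.py with your preferred editor first:

$ sudo systemctl restart pulpcore-api.service nginx.service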

Running the Red Hat Ansible Automation Platform installer setup script

After you update the inventory file with required parameters for installing your private automation hub, run the installer setup script.

Procedure
  • Run the setup.sh script

    $ sudo ./setup.sh

Installation of Red Hat Ansible Automation Platform will begin.
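
If you created an ldap_extra_settings file as described earlier, pass it to the same script as an extra variables file:

$ sudo ./setup.sh -e @ldapextras.yml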

Verifying installation of automation controller

Verify that you installed automation controller successfully by logging in with the admin credentials you inserted in the inventory file.

Procedure
  1. Navigate to the IP address specified for the automation controller node in the inventory file.

  2. Log in with the Admin credentials you set in the inventory file.

Note

The automation controller server is accessible from port 80 (http://<CONTROLLER_SERVER_NAME>/) but redirects to port 443, so port 443 must also be available.

Important

If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.

Upon a successful login to automation controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.

Additional automation controller configuration and resources

See the following resources to explore additional automation controller configurations.

Table 18. Resources to configure automation controller
Resource link Description

Automation Controller Quick Setup Guide

Set up automation controller and run your first playbook

Automation Controller Administration Guide

Configure automation controller administration through custom scripts, management jobs, and more

Configuring proxy support for Red Hat Ansible Automation Platform

Set up automation controller with a proxy server

Managing usability analytics and data collection from automation controller

Manage what automation controller information you share with Red Hat

Automation Controller User Guide

Review automation controller functionality in more detail

Verifying installation of automation hub

Verify that you installed your automation hub successfully by logging in with the admin credentials you inserted into the inventory file.

Procedure
  1. Navigate to the IP address specified for the automation hub node in the inventory file.

  2. Log in with the Admin credentials you set in the inventory file.

Important

If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.

Upon a successful login to automation hub, your installation of Red Hat Ansible Automation Platform 2.4 is complete.

Additional automation hub configuration and resources

See the following resources to explore additional automation hub configurations.

Table 19. Resources to configure automation hub

  • Managing user access in private automation hub: Configure user access for automation hub.

  • Managing Red Hat Certified, validated, and Ansible Galaxy content in automation hub: Add content to your automation hub.

  • Publishing proprietary content collections in automation hub: Publish internally developed collections on your automation hub.

Verifying Event-Driven Ansible controller installation

Verify that you installed Event-Driven Ansible controller successfully by logging in with the admin credentials you inserted in the inventory file.

Procedure
  1. Navigate to the IP address specified for the Event-Driven Ansible controller node in the inventory file.

  2. Log in with the Admin credentials you set in the inventory file.

Important

If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.

Upon a successful login to Event-Driven Ansible controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.

Post-installation steps

Whether you are a new Ansible Automation Platform user looking to start automating, or an existing administrator looking to migrate old Ansible content to your latest installed version of Red Hat Ansible Automation Platform, explore the next steps to begin leveraging the new features of Ansible Automation Platform 2.4:

Migrating data to Ansible Automation Platform 2.4

For platform administrators looking to complete an upgrade to the Ansible Automation Platform 2.4, there may be additional steps needed to migrate data to a new instance:

Migrating from legacy virtual environments (venvs) to automation execution environments

Ansible Automation Platform 2.4 moves you away from custom Python virtual environments (venvs) in favor of automation execution environments: containerized images that package the components needed to execute and scale your Ansible automation. This includes Ansible Core, Ansible Content Collections, Python dependencies, Red Hat Enterprise Linux UBI 8, and any additional package dependencies.

If you want to migrate your venvs to execution environments, you must (1) use the awx-manage command to list and export the venvs from your original instance, and then (2) use ansible-builder to create execution environments, as sketched below.
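
The following is a minimal sketch of that two-step workflow; the paths are placeholders, so confirm the subcommands and paths against the Red Hat Ansible Automation Platform Upgrade and Migration Guide for your version:

# On the original instance: list the custom venvs (path is an example)
$ awx-manage list_custom_venvs

# Export the Python packages installed in one of the listed venvs
$ awx-manage export_custom_venv /opt/my-envs/custom-venv/

# Use the exported package list as the Python requirements for ansible-builder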

Migrating to Ansible Engine 2.9 images using Ansible Builder

To migrate Ansible Engine 2.9 images for use with Ansible Automation Platform 2.4, the ansible-builder tool automates the process of rebuilding images (including their custom plugins and dependencies) for use with automation execution environments.
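
As an illustration, a minimal execution environment definition might look like the following; the requirements files and image tag are placeholders, so adapt them to your content (see Creating and Consuming Execution Environments for the full schema):

#execution-environment.yml
---
version: 1
dependencies:
  galaxy: requirements.yml   # collections used by your playbooks
  python: requirements.txt   # Python packages exported from the old venv
  system: bindep.txt         # system packages, if any

Then build the image:

$ ansible-builder build --tag my-custom-ee:latest --file execution-environment.yml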

Additional resources

For more information on using Ansible Builder to build execution environments, see Creating and Consuming Execution Environments.

Migrating to Ansible Core 2.13

When upgrading to Ansible Core 2.13, you need to update your playbooks, plugins, and other parts of your Ansible infrastructure so that they are supported by the latest version of Ansible Core. For instructions on updating your Ansible content for Ansible Core 2.13 compatibility, see the Ansible-core 2.13 Porting Guide.

Updating execution environment image locations

If your private automation hub was installed separately, you can update your execution environment image locations to point to your private automation hub. Use this procedure to update your execution environment image locations.

Procedure
  1. Navigate to the directory containing setup.sh

  2. Create ./group_vars/automationcontroller by running the following command:

    touch ./group_vars/automationcontroller
  3. Paste the following content into ./group_vars/automationcontroller, being sure to adjust the settings to fit your environment:

    # Automation Hub Registry
    registry_username: 'your-automation-hub-user'
    registry_password: 'your-automation-hub-password'
    registry_url: 'automationhub.example.org'
    registry_verify_ssl: False
    
    ## Execution Environments
    control_plane_execution_environment: 'automationhub.example.org/ee-supported-rhel8:latest'
    
    global_job_execution_environments:
      - name: "Default execution environment"
        image: "automationhub.example.org/ee-supported-rhel8:latest"
      - name: "Ansible Engine 2.9 execution environment"
        image: "automationhub.example.org/ee-29-rhel8:latest"
      - name: "Minimal execution environment"
        image: "automationhub.example.org/ee-minimal-rhel8:latest"
  4. Run the ./setup.sh script

    $ ./setup.sh
Verification
  1. Log into Ansible Automation Platform as a user with system administrator access.

  2. Navigate to menu:Administration[Execution Environments].

  3. In the Image column, confirm that the execution environment image location has changed from the default value of <registry url>/ansible-automation-platform-<version>/<image name>:<tag> to <automation hub url>/<image name>:<tag>.

Scale up your automation using automation mesh

The automation mesh component of the Red Hat Ansible Automation Platform simplifies the process of distributing automation across multi-site deployments. For enterprises with multiple isolated IT environments, automation mesh provides a consistent and reliable way to deploy and scale up automation across your execution nodes using a peer-to-peer mesh communication network.

When upgrading from version 1.x to the latest version of the Ansible Automation Platform, you will need to migrate the data from your legacy isolated nodes into execution nodes necessary for automation mesh. You can implement automation mesh by planning out a network of hybrid and control nodes, then editing the inventory file found in the Ansible Automation Platform installer to assign mesh-related values to each of your execution nodes.

For instructions on how to migrate from isolated nodes to execution nodes, see the Red Hat Ansible Automation Platform Upgrade and Migration Guide.

For information about automation mesh and the various ways to design your automation mesh for your environment, see the Red Hat Ansible Automation Platform automation mesh guide.

Upgrading Ansible Automation Platform Operator on OpenShift Container Platform

The Ansible Automation Platform Operator simplifies the installation, upgrade and deployment of new Red Hat Ansible Automation Platform instances in your OpenShift Container Platform environment.

Upgrade considerations

Red Hat Ansible Automation Platform version 2.0 was the first release of the Ansible Automation Platform Operator. If you are upgrading from version 2.0, continue to the Upgrading the Ansible Automation Platform Operator procedure.

If you are using a version of OpenShift Container Platform that is not supported by the version of Red Hat Ansible Automation Platform to which you are upgrading, you must upgrade your OpenShift Container Platform cluster to a supported version prior to upgrading.

Refer to the Red Hat Ansible Automation Platform Life Cycle to determine the OpenShift Container Platform version needed.

For information about upgrading your cluster, refer to Updating clusters.

Prerequisites

To upgrade to a newer version of Ansible Automation Platform Operator, it is recommended that you do the following:

  • Create AutomationControllerBackup and AutomationHubBackup objects.

  • Review the release notes for the new Ansible Automation Platform version to which you are upgrading and any intermediate versions.

Upgrading the Ansible Automation Platform Operator

To upgrade to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, do the following:

Procedure
  1. Log in to OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Subscriptions tab.

  4. Under Upgrade status, click btn:[Upgrade Available].

  5. Click btn:[Preview InstallPlan].

  6. Click btn:[Approve].

System requirements

Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.

Prerequisites
  • You must be able to obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation see Understanding Privilege Escalation.

  • You must be able to de-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.

  • You must configure an NTP client on all nodes. For more information, see Configuring NTP server using Chrony. A quick verification command is shown after this list.
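
For example, assuming chrony is the NTP client in use (as in the referenced guide), you can confirm that a node's clock is synchronized by running the following on that node:

$ chronyc tracking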

Red Hat Ansible Automation Platform system requirements

Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.

Table 20. Base system

  • Subscription: Valid Red Hat Ansible Automation Platform subscription.

  • OS: Red Hat Enterprise Linux 8.6 or later 64-bit (x86, ppc64le, s390x, aarch64). Red Hat Ansible Automation Platform is also supported on OpenShift; see Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform for more information.

  • Ansible: version 2.14 (to install). Ansible Automation Platform ships with execution environments that contain ansible-core 2.15.

  • Python: 3.8 or later.

  • Browser: a currently supported version of Mozilla Firefox or Google Chrome.

  • Database: PostgreSQL version 13.

The following are necessary for you to work with project updates and collections:

  • Ensure that the following domain names are part of either the firewall or the proxy’s allowlist for successful connection and download of collections from automation hub or Galaxy server:

    • galaxy.ansible.com

    • cloud.redhat.com

    • console.redhat.com

    • sso.redhat.com

  • SSL inspection must be disabled either when using self-signed certificates or for the Red Hat domains.

Note

The requirements for systems managed by Ansible Automation Platform are the same as for Ansible. See Getting started with Ansible in the Ansible User Guide.

Additional notes for Red Hat Ansible Automation Platform requirements
  • Although Red Hat Ansible Automation Platform depends on Ansible playbooks and requires that the latest stable version of Ansible is installed before automation controller, you do not need to install Ansible manually.

  • For new installations, automation controller installs the latest release package of Ansible 2.14.

  • If performing a bundled Ansible Automation Platform installation, the installation program attempts to install Ansible (and its dependencies) from the bundle for you.

  • If you choose to install Ansible on your own, the Ansible Automation Platform installation program detects that Ansible has been installed and does not attempt to reinstall it.

Note

You must install Ansible using a package manager such as dnf, and the latest stable version of Ansible must be installed for Red Hat Ansible Automation Platform to work properly. Ansible version 2.14 is required for Ansible Automation Platform 2.4 and later.

Automation controller system requirements

Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, node types of control, hybrid, execution, and hop are provided as abstractions to help you design the topology appropriate for your use case.

Use the following recommendations for node sizing:

Note

On control and hybrid nodes, allocate a minimum of 20 GB to /var/lib/awx for execution environment storage.

Execution nodes

Runs automation. Increase memory and CPU to increase the capacity for running more forks.

  • RAM: 16 GB

  • CPUs: 4

  • Local disk: 40 GB minimum

Control nodes

Processes events and runs cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.

  • RAM: 16 GB

  • CPUs: 4

  • Local disk:

    • 40 GB minimum with at least 20 GB available under /var/lib/awx

    • Storage volume must be rated for a minimum baseline of 1500 IOPS

    • Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects to avoid disk space errors

Hybrid nodes

Runs both automation and cluster jobs. Comments on CPU and memory for execution and control nodes also apply to this node type.

  • RAM: 16 GB

  • CPUs: 4

  • Local disk:

    • 40 GB minimum with at least 20 GB available under /var/lib/awx

    • Storage volume must be rated for a minimum baseline of 1500 IOPS

    • Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects to avoid disk space errors

Hop nodes

Serves to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM might affect throughput; CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.

  • RAM: 16 GB

  • CPUs: 4

  • Local disk: 40 GB

  • Actual RAM requirements vary based on how many hosts automation controller will manage simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks plus a 2 GB reservation for automation controller; see Automation controller Capacity Determination and Job Impact for further details. If forks is set to 400, 42 GB of memory is recommended (400 forks / 10 × 1 GB + 2 GB = 42 GB).

  • A larger number of hosts can of course be addressed, though if the fork number is less than the total host count, more passes across the hosts are required. These RAM limitations are avoided when using rolling updates or when using the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible; or in cases where automation controller is producing or deploying images such as AMIs. All of these are great approaches to managing larger environments. For further questions, please contact Ansible support through the Red Hat Customer portal at https://access.redhat.com/.

Automation hub system requirements

Automation hub enables you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.

Automation hub has the following system requirements:

  • RAM: 8 GB minimum

    • 8 GB RAM (minimum and recommended for Vagrant trial installations)

    • 8 GB RAM (minimum for external standalone PostgreSQL databases)

    • For capacity based on forks in your configuration, see additional resources

  • CPUs: 2 minimum

    For capacity based on forks in your configuration, see additional resources.

  • Local disk: 60 GB disk

    A minimum of 40 GB should be dedicated to /var for collection storage.

Note

Private automation hub

If you install private automation hub from an internal address and have a certificate that only encompasses the external address, the result can be an installation that cannot be used as a container registry without certificate issues.

To avoid this, set the automationhub_main_url inventory variable to a value such as https://pah.example.com that links to the private automation hub node in the installation inventory file, as shown in the example after this note.

This adds the external address to /etc/pulp/settings.py.

This implies that you only want to use the external address.

For information on inventory file variables, see Inventory File Variables in the Red Hat Ansible Automation Platform Installation Guide.
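
For example, an illustrative entry in the installer inventory (the hostname is a placeholder for your private automation hub's external address):

[all:vars]
automationhub_main_url = https://pah.example.com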

Event-Driven Ansible controller system requirements

The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on demand, depending on the number of CPU cores. Use the following minimum requirements to run a maximum of 9 simultaneous activations:

  • RAM: 16 GB

  • CPUs: 4

  • Local disk: 40 GB minimum

PostgreSQL requirements

Red Hat Ansible Automation Platform uses PostgreSQL 13.

  • PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before they are stored in the database.

  • To determine whether your automation controller instance has access to the database, run the awx-manage check_db command.

Table 21. Database

  • Each automation controller: 40 GB dedicated hard disk space

    • Dedicate a minimum of 20 GB to /var/ for file and working directory storage.

    • Storage volume must be rated for a minimum baseline of 1500 IOPS.

    • Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider having twice the GB in /var/lib/awx/projects to avoid disk space errors.

    • 150 GB+ recommended

  • Each automation hub: 60 GB dedicated hard disk space

    • Storage volume must be rated for a minimum baseline of 1500 IOPS.

  • Database: 20 GB dedicated hard disk space

    • 150 GB+ recommended

    • Storage volume must be rated for a high baseline IOPS (1500 or more).

    • All automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook run every hour (24 times a day) across 250 hosts, with 20 tasks, stores over 800,000 events in the database every week (see the calculation after this table).

    • If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. Refer to Management Jobs in the Automation Controller Administration Guide for more information.
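
For example, the weekly event estimate given above works out as follows:

250 hosts x 20 tasks = 5,000 events per playbook run
5,000 events x 24 runs per day = 120,000 events per day
120,000 events x 7 days = 840,000 events per week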

PostgreSQL Configurations

Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. However, you can adjust the following PostgreSQL settings for a standalone database server node, where ansible_memtotal_mb is the total memory size of the database server:

max_connections == 1024
shared_buffers == ansible_memtotal_mb*0.3
work_mem == ansible_memtotal_mb*0.03
maintenance_work_mem == ansible_memtotal_mb*0.04
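
For illustration, on a standalone database server with 16 GB of RAM (ansible_memtotal_mb = 16384), these formulas work out approximately as follows; treat the values as a starting point rather than a prescription:

max_connections == 1024
shared_buffers == 4915        # 16384 * 0.3, in MB
work_mem == 492               # 16384 * 0.03, in MB
maintenance_work_mem == 655   # 16384 * 0.04, in MB
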
Additional resources

For more detail on tuning your PostgreSQL server, see the PostgreSQL documentation.

Enabling the hstore extension for the automation hub PostgreSQL database

From Ansible Automation Platform 2.4, the database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

However, when the PostgreSQL database is external, you must carry out this step manually before automation hub installation.

If the hstore extension is not enabled before automation hub installation, a failure is raised during database migration.

Procedure
  1. Check if the extension is available on the PostgreSQL server (automation hub database).

    $ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"

    Where the default value for <automation hub database> is automationhub.

    This gives an output similar to the following:

     name   | default_version | installed_version | comment
    --------+-----------------+-------------------+---------------------------------------------------
     hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
    (1 row)
  2. This indicates that the hstore 1.7 extension is available, but not enabled.

    If the hstore extension is not available on the PostgreSQL server, the result is similar to the following:

     name | default_version | installed_version | comment
    ------+-----------------+-------------------+---------
    (0 rows)
  3. On a RHEL-based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

    To install the RPM package, use the following command:

    dnf install postgresql-contrib
  4. Create the hstore PostgreSQL extension on the automation hub database with the following command:

    $ psql -d <automation hub database> -c "CREATE EXTENSION hstore"

    The output of which is:

    CREATE EXTENSION
  5. In the following output, the installed_version field contains the hstore extension used, indicating that hstore is enabled.

     name   | default_version | installed_version | comment
    --------+-----------------+-------------------+---------------------------------------------------
     hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
    (1 row)

Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database

The following procedure describes how to benchmark the write/read IOPS performance of the storage system to check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met.

Prerequisites
  • You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.

    To install fio, run the following command as the root user:

    # yum -y install fio
  • You have adequate disk space to store the fio test data log files.

    The examples shown in the procedure require at least 60 GB of disk space in the /tmp directory:

    • numjobs sets the number of jobs run by the command.

    • size=10G sets the file size generated by each job.

    To reduce the amount of test data, adjust the value of the size parameter.

Procedure
  1. Run a random write test:

    $ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
    --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
    --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
    --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
    2>> /tmp/fio_write_iops_error.log
  2. Run a random read test:

    $ fio --name=read_iops --directory=/tmp \
    --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
    --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
    --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
    2>> /tmp/fio_read_iops_error.log
  3. Review the results:

    In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.

    The following example shows the line in the log file for the random read test:

    $ cat /tmp/fio_benchmark_read_iops.log
    read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
    […]
       iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
    […]

    You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.

Planning your Red Hat Ansible Automation Platform operator installation on Red Hat OpenShift Container Platform

Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift.

OpenShift operators help install and automate day-2 operations of complex, distributed software on Red Hat OpenShift Container Platform. The Ansible Automation Platform Operator enables you to deploy and manage Ansible Automation Platform components on Red Hat OpenShift Container Platform.

You can use this section to help plan your Red Hat Ansible Automation Platform installation on your Red Hat OpenShift Container Platform environment. Before installing, review the supported installation scenarios to determine which meets your requirements.

About Ansible Automation Platform Operator

The Ansible Automation Platform Operator provides cloud-native, push-button deployment of new Ansible Automation Platform instances in your OpenShift environment. The Ansible Automation Platform Operator includes resource types to deploy and manage instances of automation controller and private automation hub. It also includes automation controller job resources for defining and launching jobs inside your automation controller deployments.

Deploying Ansible Automation Platform instances with a Kubernetes native operator offers several advantages over launching instances from a playbook deployed on Red Hat OpenShift Container Platform, including upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.

You can install the Ansible Automation Platform Operator from the Red Hat Operators catalog in OperatorHub.

OpenShift Container Platform version compatibility

The Ansible Automation Platform Operator for Ansible Automation Platform 2.4 is available on OpenShift Container Platform 4.9 and later versions.

Supported installation scenarios for Red Hat OpenShift Container Platform

You can use the OperatorHub on the Red Hat OpenShift Container Platform web console to install Ansible Automation Platform Operator.

Alternatively, you can install Ansible Automation Platform Operator from the OpenShift Container Platform command-line interface (CLI), oc.

Follow one of the workflows below to install the Ansible Automation Platform Operator and use it to install the components of Ansible Automation Platform that you require.

  • Automation controller custom resources first, then automation hub custom resources;

  • Automation hub custom resources first, then automation controller custom resources;

  • Automation controller custom resources;

  • Automation hub custom resources.

Custom resources

You can define custom resources for each primary installation workflow.

Using Red Hat Single Sign-On Operator with automation hub

Private automation hub uses Red Hat Single Sign-On for authentication.

The Red Hat Single Sign-On Operator creates and manages Red Hat Single Sign-On resources. Use this Operator to create custom resources to automate Red Hat Single Sign-On administration in OpenShift.

  • When installing Ansible Automation Platform on Virtual Machines (VMs) the installer can automatically install and configure Red Hat Single Sign-On for use with private automation hub.

  • When installing Ansible Automation Platform on Red Hat OpenShift Container Platform you must install Single Sign-On separately.

This chapter describes the process to configure Red Hat Single Sign-On and integrate it with private automation hub when Ansible Automation Platform is installed on OpenShift Container Platform.

Prerequisites
  • You have access to Red Hat OpenShift Container Platform using an account with operator installation permissions.

  • You have installed the catalog containing the Red Hat Ansible Automation Platform operators.

  • You have installed the Red Hat Single Sign-On Operator. To install the Red Hat Single Sign-On Operator, follow the procedure in Installing Red Hat Single Sign-On using a custom resource in the Red Hat Single Sign-On documentation.

Creating a Keycloak instance

When the Red Hat Single Sign-On Operator is installed, you can create a Keycloak instance for use with Ansible Automation Platform.

From here, you can provide an external PostgreSQL database, or the Operator creates one for you.

Procedure
  1. Navigate to menu:Operator[Installed Operators].

  2. Select the rh-sso project.

  3. Select the Red Hat Single Sign-On Operator.

  4. On the Red Hat Single Sign-On Operator details page select btn:[Keycloak].

  5. Click btn:[Create instance].

  6. Click btn:[YAML view].

    The default Keycloak custom resource is as follows:

    apiVersion: keycloak.org/v1alpha1
    kind: Keycloak
    metadata:
      name: example-keycloak
      labels:
        app: sso
      namespace: aap
    spec:
      externalAccess:
        enabled: true
      instances: 1
  7. Click btn:[Create]

  8. When the deployment is complete, you can use the administrator credential to log in to the administrative console.

  9. You can find the credentials for the administrator in the credential-<custom-resource> secret (credential-example-keycloak in this example) in the namespace.

Creating a Keycloak realm for Ansible Automation Platform

Create a realm to manage a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control.

Procedure
  1. Navigate to menu:Operator[Installed Operators].

  2. Select the Red Hat Single Sign-On Operator project.

  3. Select the Keycloak Realm tab and click btn:[Create Keycloak Realm].

  4. On the Keycloak Realm form, select btn:[YAML view]. Edit the YAML file as follows:

    kind: KeycloakRealm
    apiVersion: keycloak.org/v1alpha1
    metadata:
      name: ansible-automation-platform-keycloakrealm
      namespace: rh-sso
      labels:
        app: sso
        realm: ansible-automation-platform
    spec:
      realm:
        id: ansible-automation-platform
        realm: ansible-automation-platform
        enabled: true
        displayName: Ansible Automation Platform
      instanceSelector:
        matchLabels:
          app: sso

    Field descriptions:

    metadata.name: Set a unique value in metadata for the name of the configuration resource (CR).

    metadata.namespace: Set the namespace in which the CR is created (rh-sso in this example).

    metadata.labels.app: Set labels to a unique value. This is used when creating the client CR.

    metadata.labels.realm: Set labels to a unique value. This is used when creating the client CR.

    spec.realm.id and spec.realm.realm: Set the realm name and ID. These must be the same.

    spec.realm.displayName: Set the name to display.

  5. Click btn:[Create] and wait for the process to complete.

Creating a Keycloak client

Keycloak clients authenticate hub users with Red Hat Single Sign-On. When a user authenticates the request goes through the Keycloak client. When Single Sign-On validates or issues the OAuth token, the client provides the response to automation hub and the user can log in.

Procedure
  1. Navigate to menu:Operator[Installed Operators].

  2. Select the Red Hat Single Sign-On Operator project.

  3. Select the Keycloak Client tab and click btn:[Create Keycloak Client].

  4. On the Keycloak Client form, select btn:[YAML view].

  5. Replace the default YAML file with the following:

    kind: KeycloakClient
    apiVersion: keycloak.org/v1alpha1
    metadata:
      name: automation-hub-client-secret
      labels:
        app: sso
        realm: ansible-automation-platform
      namespace: rh-sso
    spec:
      realmSelector:
        matchLabels:
          app: sso
          realm: ansible-automation-platform
      client:
        name: Automation Hub
        clientId: automation-hub
        secret: <client-secret>                        (1)
        clientAuthenticatorType: client-secret
        description: Client for automation hub
        attributes:
          user.info.response.signature.alg: RS256
          request.object.signature.alg: RS256
        directAccessGrantsEnabled: true
        publicClient: true
        protocol: openid-connect
        standardFlowEnabled: true
        protocolMappers:
          - config:
              access.token.claim: "true"
              claim.name: "family_name"
              id.token.claim: "true"
              jsonType.label: String
              user.attribute: lastName
              userinfo.token.claim: "true"
            consentRequired: false
            name: family name
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
          - config:
              userinfo.token.claim: "true"
              user.attribute: email
              id.token.claim: "true"
              access.token.claim: "true"
              claim.name: email
              jsonType.label: String
            name: email
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            consentRequired: false
          - config:
              multivalued: "true"
              access.token.claim: "true"
              claim.name: "resource_access.${client_id}.roles"
              jsonType.label: String
            name: client roles
            protocol: openid-connect
            protocolMapper: oidc-usermodel-client-role-mapper
            consentRequired: false
          - config:
              userinfo.token.claim: "true"
              user.attribute: firstName
              id.token.claim: "true"
              access.token.claim: "true"
              claim.name: given_name
              jsonType.label: String
            name: given name
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            consentRequired: false
          - config:
              id.token.claim: "true"
              access.token.claim: "true"
              userinfo.token.claim: "true"
            name: full name
            protocol: openid-connect
            protocolMapper: oidc-full-name-mapper
            consentRequired: false
          - config:
              userinfo.token.claim: "true"
              user.attribute: username
              id.token.claim: "true"
              access.token.claim: "true"
              claim.name: preferred_username
              jsonType.label: String
            name: <username>
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            consentRequired: false
          - config:
              access.token.claim: "true"
              claim.name: "group"
              full.path: "true"
              id.token.claim: "true"
              userinfo.token.claim: "true"
            consentRequired: false
            name: group
            protocol: openid-connect
            protocolMapper: oidc-group-membership-mapper
          - config:
              multivalued: 'true'
              id.token.claim: 'true'
              access.token.claim: 'true'
              userinfo.token.claim: 'true'
              usermodel.clientRoleMapping.clientId:  'automation-hub'
              claim.name: client_roles
              jsonType.label: String
            name: client_roles
            protocolMapper: oidc-usermodel-client-role-mapper
            protocol: openid-connect
          - config:
              id.token.claim: "true"
              access.token.claim: "true"
              included.client.audience: 'automation-hub'
            protocol: openid-connect
            name: audience mapper
            protocolMapper: oidc-audience-mapper
      roles:
        - name: "hubadmin"
          description: "An administrator role for automation hub"
    1. Replace this with a unique value.

  6. Click btn:[Create] and wait for the process to complete.

When automation hub is deployed, you must update the client with the "Valid Redirect URIs" and "Web Origins" settings, as described in Updating the Red Hat Single Sign-On client. Additionally, the client comes preconfigured with token mappers; however, if your authentication provider does not provide group data to Red Hat Single Sign-On, you must update the group mapping to reflect how that information is passed, commonly by user attribute.

Creating a Keycloak user

This procedure creates a Keycloak user, with the hubadmin role, that can log in to automation hub with Super Administration privileges.

Procedure
  1. Navigate to menu:Operator[Installed Operators].

  2. Select the Red Hat Single Sign-On Operator project.

  3. Select the Keycloak User tab and click btn:[Create Keycloak User].

  4. On the Keycloak User form, select btn:[YAML view].

  5. Replace the default YAML file with the following:

    apiVersion: keycloak.org/v1alpha1
    kind: KeycloakUser
    metadata:
      name: hubadmin-user
      labels:
        app: sso
        realm: ansible-automation-platform
      namespace: rh-sso
    spec:
      realmSelector:
        matchLabels:
          app: sso
          realm: ansible-automation-platform
      user:
        username: hub_admin
        firstName: Hub
        lastName: Admin
        email: hub_admin@example.com
        enabled: true
        emailVerified: false
        credentials:
          - type: password
            value: <ch8ngeme>
        clientRoles:
          automation-hub:
            - hubadmin
  6. Click btn:[Create] and wait for the process to complete.

When a user is created, the Operator creates a Secret containing both the username and password, using the following naming pattern: credential-<realm name>-<username>-<namespace>. In this example the credential is called credential-ansible-automation-platform-hub-admin-rh-sso. After a user is created, the Operator does not update the user’s password; password changes are not reflected in the Secret.
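
If you want to inspect the generated credentials from the command line, the following is a minimal sketch using the secret name from this example (adjust the secret name and namespace if your realm, username, or namespace differ):

$ oc get secret credential-ansible-automation-platform-hub-admin-rh-sso -n rh-sso -o yaml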

Installing the Ansible Automation Platform Operator

Procedure
  1. Navigate to menu:Operator[Operator Hub] and search for the Ansible Automation Platform Operator.

  2. Select the Ansible Automation Platform Operator project.

  3. Click on the Operator tile.

  4. Click btn:[Install].

  5. Select a Project to install the Operator into. Red Hat recommends using the Operator recommended Namespace name.

    1. If you want to install the Operator into a project other than the recommended one, select Create Project from the drop down menu.

    2. Enter the Project name.

    3. Click btn:[Create].

  6. Click btn:[Install].

  7. When the Operator has been installed, click btn:[View Operator].

Creating a Red Hat Single Sign-On connection secret

Use this procedure to create a connection secret for Red Hat Single Sign-On.

Procedure
  1. Navigate to https://<sso_host>/auth/realms/ansible-automation-platform.

  2. Copy the public_key value.

  3. In the OpenShift Web UI, navigate to menu:Workloads[Secrets].

  4. Select the ansible-automation-platform project.

  5. Click btn:[Create], and select btn:[From YAML].

  6. Edit the following YAML to create the secret

    apiVersion: v1
    kind: Secret
    metadata:
      name: automation-hub-sso                       (1)
      namespace: ansible-automation-platform
    type: Opaque
    stringData:
      keycloak_host: "keycloak-rh-sso.apps-crc.testing"
      keycloak_port: "443"
      keycloak_protocol: "https"
      keycloak_realm: "ansible-automation-platform"
      keycloak_admin_role: "hubadmin"
      social_auth_keycloak_key: "automation-hub"
      social_auth_keycloak_secret: "client-secret"   (2)
      social_auth_keycloak_public_key: >-            (3)
    1. This name is used in the next step when creating the automation hub instance.

    2. If the secret was changed when creating the Keycloak client for automation hub be sure to change this value to match.

    3. Enter the value of the public_key that you copied in step 2 of this procedure.

  7. Click btn:[Create] and wait for the process to complete.

Installing automation hub using the Operator

Use the following procedure to install automation hub using the operator.

Procedure
  1. Navigate to menu:Operator[Installed Operators].

  2. Select the Ansible Automation Platform Operator.

  3. Select the Automation hub tab and click btn:[Create Automation hub].

  4. Select btn:[YAML view]. The YAML should be similar to:

    apiVersion: automationhub.ansible.com/v1beta1
    kind: AutomationHub
    metadata:
      name: private-ah                              (1)
      namespace: ansible-automation-platform
    spec:
      sso_secret: automation-hub-sso                (2)
      pulp_settings:
        verify_ssl: false
      route_tls_termination_mechanism: Edge
      ingress_type: Route
      loadbalancer_port: 80
      file_storage_size: 100Gi
      image_pull_policy: IfNotPresent
      web:
        replicas: 1
      file_storage_access_mode: ReadWriteMany
      content:
        log_level: INFO
        replicas: 2
      postgres_storage_requirements:
        limits:
          storage: 50Gi
        requests:
          storage: 8Gi
      api:
        log_level: INFO
        replicas: 1
      postgres_resource_requirements:
        limits:
          cpu: 1000m
          memory: 8Gi
        requests:
          cpu: 500m
          memory: 2Gi
      loadbalancer_protocol: http
      resource_manager:
        replicas: 1
      worker:
        replicas: 2
    1. Set metadata.name to the name to use for the instance.

    2. Set spec.sso_secret to the name of the secret created in Creating a Red Hat Single Sign-On connection secret.

    Note

    This YAML turns off SSL verification (verify_ssl: false). If you are not using self-signed certificates for OpenShift, you can remove this setting.

  5. Click btn:[Create] and wait for the process to complete.

Determining the automation hub Route

Use the following procedure to determine the hub route.

Procedure
  1. Navigate to menu:Networking[Routes].

  2. Select the project you used for the install.

  3. Copy the location of the private-ah-web-svc service. The name of the service is different if you used a different name when creating the automation hub instance. This is used later to update the Red Hat Single Sign-On client.

Updating the Red Hat Single Sign-On client

When automation hub is installed and you know the URL of the instance, you must update the Red Hat Single Sign-On client to set the Valid Redirect URIs and Web Origins settings.

Procedure
  1. Navigate to menu:Operator[Installed Operators].

  2. Select the RH-SSO project.

  3. Click btn:[Red Hat Single Sign-On Operator].

  4. Select btn:[Keycloak Client].

  5. Click on the automation-hub-client-secret client.

  6. Select btn:[YAML].

  7. Update the Client YAML to add the Valid Redirect URIs and Web Origins settings.

    redirectUris:
      - 'https://private-ah-ansible-automation-platform.apps-crc.testing/*'
    webOrigins:
      - 'https://private-ah-ansible-automation-platform.apps-crc.testing'

    Field descriptions:

    redirectUris: This is the location determined in Determining the automation hub Route. Be sure to add /* to the end of the redirectUris setting.

    webOrigins: This is the location determined in Determining the automation hub Route.

    Note

    Ensure the indentation is correct when entering these settings.

  8. Click btn:[Save].

Verification
  1. Navigate to the automation hub route.

  2. Enter the hub_admin user credentials and sign in.

  3. Red Hat Single Sign-On processes the authentication and redirects back to automation hub.

Managing credentials

Credentials authenticate the controller user to launch Ansible playbooks. The passwords and SSH keys are used to authenticate against inventory hosts. By using the credentials feature of automation controller, you can require the automation controller user to enter a password or key phrase when a playbook launches.

Creating a credential

As part of the initial setup, a demonstration credential and a Galaxy credential have been created for your use. Use the Galaxy credential as a template. It can be copied, but not edited. You can add more credentials as necessary.

Procedure
  1. Select Credentials from the navigation panel to access the list of credentials.

  2. To add a new credential, see Add a new Credential in the Automation Controller User Guide for more information.

    Note

    When you set up additional credentials, the user you assign must have root access or be able to use SSH to connect to the host machine.

  3. Click btn:[Demo Credential] to view its details.

Demo Credential

Editing a credential

As part of the initial setup, you can leave the default Demo Credential as it is, and you can edit it later.

Procedure
  1. Edit the credential by using one of these methods:

    • Go to menu:Details[Edit].

    • From the navigation panel, select menu:Credentials[Edit Credential] next to the credential name and edit the appropriate details.

  2. Save your changes.

Pod specification modifications

Introduction

The Kubernetes concept of a pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, or managed.

Pods are the equivalent of a machine instance (physical or virtual) to a container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking.

Pods have a life cycle. They are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, may be removed after exiting, or may be retained to enable access to the logs of their containers.

Red Hat Ansible Automation Platform provides a simple default Pod specification; however, you can provide a custom YAML or JSON document that overrides the default Pod specification. This custom document uses custom fields, such as imagePullSecrets, that can be serialized as valid Pod JSON or YAML.

A full list of options can be found in the Openshift documentation.

Example of a pod that provides a long-running service.

This example demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here:

apiVersion: v1
kind: Pod
metadata:
  annotations: { ... }                             (1)
  labels:
    deployment: docker-registry-1
    deploymentconfig: docker-registry
    docker-registry: default
  generateName: docker-registry-1-                 (2)
spec:
  containers:                                      (3)
  - env:                                           (4)
    - name: OPENSHIFT_CA_DATA
      value: ...
    - name: OPENSHIFT_CERT_DATA
      value: ...
    - name: OPENSHIFT_INSECURE
      value: "false"
    - name: OPENSHIFT_KEY_DATA
      value: ...
    - name: OPENSHIFT_MASTER
      value: https://master.example.com:8443
    image: openshift/origin-docker-registry:v0.6.2 (5)
    imagePullPolicy: IfNotPresent
    name: registry
    ports:                                         (6)
    - containerPort: 5000
      protocol: TCP
    resources: {}                                  (7)
    securityContext: { ... }                       (8)
    volumeMounts:                                  (9)
    - mountPath: /registry
      name: registry-storage
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-br6yz
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:                                (10)
  - name: default-dockercfg-at06w
  restartPolicy: Always                            (11)
  serviceAccount: default                          (12)
  volumes:                                         (13)
  - emptyDir: {}
    name: registry-storage
  - name: default-token-br6yz
    secret:
      secretName: default-token-br6yz
Label descriptions (the numbers match the callouts in the example):

1. annotations: Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key:value format in the metadata hash. One label in this example is docker-registry=default.

2. generateName: Pods must have a unique name within their namespace. A pod definition can specify the basis of a name with the generateName attribute, and add random characters automatically to generate a unique name.

3. containers: Specifies an array of container definitions. In this case (as with most), it defines only one container.

4. env: Environment variables pass necessary values to each container.

5. image: Each container in the pod is instantiated from its own Docker-formatted container image.

6. ports: The container can bind to ports made available on the pod’s IP.

7. resources: When you specify a Pod, you can optionally describe how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM). Other resources are available.

8. securityContext: OpenShift Online defines a security context for containers that specifies whether they are permitted to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive, but administrators can modify this as required.

9. volumeMounts: The container specifies where external storage volumes should be mounted within the container. In this case, there is a volume for storing the registry’s data, and one for access to credentials the registry needs for making requests against the OpenShift Online API.

10. imagePullSecrets: A pod can contain one or more containers, which must be pulled from some registry. If containers come from registries that require authentication, you can provide a list of imagePullSecrets that refer to secrets present in the namespace. Having these specified enables Red Hat OpenShift Container Platform to authenticate with the container registry when pulling the image. For further information, see Managing resource containers in the Kubernetes documentation.

11. restartPolicy: The pod restart policy with possible values Always, OnFailure, and Never. The default value is Always.

12. serviceAccount: Pods making requests against the OpenShift Online API is a common enough pattern that there is a serviceAccount field for specifying which service account user the pod should authenticate as when making the requests. This enables fine-grained access control for custom infrastructure components.

13. volumes: The pod defines storage volumes that are available to its container(s) to use. In this case, it provides an ephemeral volume for the registry storage and a secret volume containing the service account credentials.

You can modify the pod used to run jobs in a Kubernetes-based cluster using automation controller by editing the pod specification in the automation controller UI. The pod specification that is used to create the pod that runs the job is in YAML format. For further information on editing the pod specifications, see Customizing the pod specification.

Customizing the pod specification

You can use the following procedure to customize the pod.

Procedure
  1. In the automation controller UI, navigate to menu:Administration[Instance Groups].

  2. Check btn:[Customize pod specification].

  3. Use the toggle to enable and expand the Pod Spec Override field, and then specify the namespace in that field.

  4. Click btn:[Save].

  5. Optional: Click btn:[Expand] to view the entire customization window if you wish to provide additional customizations.

The image used at job launch time is determined by the execution environment associated with the job. If a Container Registry credential is associated with the execution environment, then automation controller uses ImagePullSecret to pull the image. If you prefer not to give the service account permission to manage secrets, you must pre-create the ImagePullSecret, specify it on the pod specification, and omit any credential from the execution environment used.

Enabling pods to reference images from other secured registries

If a container group uses a container from a secured registry that requires a credential, you can associate a Container Registry credential with the Execution Environment that is assigned to the job template. Automation controller uses this to create an ImagePullSecret for you in the OpenShift Container Platform namespace where the container group job runs, and cleans it up after the job is done.

Alternatively, if the ImagePullSecret already exists in the container group namespace, you can specify the ImagePullSecret in the custom pod specification for the ContainerGroup.

Note that the image used by a job running in a container group is always overridden by the Execution Environment associated with the job.

Use of pre-created ImagePullSecrets (Advanced)

If you want to use this workflow and pre-create the ImagePullSecret, you can source the necessary information to create it from your local .dockercfg file on a system that has previously accessed a secure container registry.

Procedure

The .dockercfg file, or $HOME/.docker/config.json for newer Docker clients, is a Docker credentials file that stores your information if you have previously logged into a secured or insecure registry.

  1. If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running the following command:

    $ oc create secret generic <pull_secret_name> \
    --from-file=.dockercfg=<path/to/.dockercfg> \
    --type=kubernetes.io/dockercfg
  2. Or if you have a $HOME/.docker/config.json file:

    $ oc create secret generic <pull_secret_name> \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson
  3. If you do not already have a Docker credentials file for the secured registry, you can create a secret by running the following command:

    $ oc create secret docker-registry <pull_secret_name> \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>
  4. To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example must match the name of the service account the pod uses. The default is the default service account.

    $ oc secrets link default <pull_secret_name> --for=pull
  5. Optional: To use a secret for pushing and pulling build images, the secret must be mountable inside a pod. You can do this by running:

    $ oc secrets link builder <pull_secret_name>
  6. Optional: For builds, you must also reference the secret as the pull secret from within your build configuration.

When the container group is successfully created, the Details tab of the newly created container group remains, which allows you to review and edit your container group information. This is the same menu that is opened if you click the btn:[Edit] icon from the Instance Group link. You can also edit instances and review jobs associated with this instance group.

Resource management for pods and containers

When you specify a Pod, you can specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM).

When you specify the resource request for containers in a Pod, the Kubernetes scheduler (kube-scheduler) uses this information to decide which node to place the Pod on.

When you specify a resource limit for a container, the kubelet, or node agent, enforces those limits so that the running container is not permitted to use more of that resource than the limit you set. The kubelet also reserves at least the requested amount of that system resource specifically for that container to use.

Requests and limits

If the node where a Pod is running has sufficient resources available, it is possible for a container to use more resources than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.

For example, if you set a memory request of 256 MiB for a container, and that container is in a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use more RAM.

If you set a memory limit of 4GiB for that container, the kubelet and container runtime enforce the limit. The runtime prevents the container from using more than the configured resource limit.

If a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an Out Of Memory (OOM) error.

You can implement limits in two ways:

  • Reactively: the system intervenes once it sees a violation.

  • By enforcement: the system prevents the container from ever exceeding the limit.

Different runtimes can have different ways to implement the same restrictions.

Note

If you specify a limit for a resource, but do not specify any request, and no admission-time mechanism has applied a default request for that resource, then Kubernetes copies the limit you specified and uses it as the requested value for the resource.

Resource types

CPU and memory are both resource types. A resource type has a base unit. CPU represents compute processing and is specified in units of Kubernetes CPUs. Memory is specified in units of bytes.

CPU and memory are collectively referred to as compute resources, or resources. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from API resources. API resources, such as Pods and Services, are objects that can be read and modified through the Kubernetes API server.

Specifying resource requests and limits for pods and containers

For each container, you can specify resource limits and requests, including the following:

spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory

Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resource request/limit is the sum of the resource requests/limits of that type for each container in the Pod.
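
For illustration, consider a hypothetical Pod with two containers. The names and images below are placeholders and are not part of an automation controller deployment:

apiVersion: v1
kind: Pod
metadata:
  name: resource-sum-demo    # hypothetical name, for illustration only
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal    # placeholder image
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
    - name: sidecar
      image: registry.access.redhat.com/ubi8/ubi-minimal    # placeholder image
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: 250m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 256Mi

At the Pod level, this works out to requests of 500m CPU and 192Mi of memory, and limits of 1 CPU (1000m) and 512Mi of memory.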

Resource units in Kubernetes

CPU resource units

Limits and requests for CPU resources are measured in CPU units. In Kubernetes, 1 CPU unit is equivalent to 1 physical processor core, or 1 virtual core, depending on whether the node is a physical host or a virtual machine running inside a physical machine.

Fractional requests are allowed. When you define a container with spec.containers[].resources.requests.cpu set to 0.5, you are requesting half as much CPU time compared to if you asked for 1.0 CPU. For CPU resource units, the quantity expression 0.1 is equivalent to the expression 100m, which can be read as one hundred millicpu or one hundred millicores. millicpu and millicores mean the same thing. CPU resource is always specified as an absolute amount of resource, never as a relative amount. For example, 500m CPU represents the same amount of computing power whether that container runs on a single-core, dual-core, or 48-core machine.

Note

To specify CPU units less than 1.0 or 1000m you must use the milliCPU form. For example, use 5m, not 0.005 CPU.

Memory resource units

Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these quantity suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

128974848, 129e6, 129M,  128974848000m, 123Mi

Pay attention to the case of the suffixes. If you request 400m of memory, this is a request for 0.4 bytes, not 400 mebibytes (400Mi) or 400 megabytes (400M).

Example CPU and memory specification

The following cluster has enough free resources to schedule a task pod with a dedicated 100m CPU and 250Mi of memory. The cluster can also withstand bursts over that dedicated usage up to 2000m CPU and 2Gi memory.

spec:
  task_resource_requirements:
    requests:
      cpu: 100m
      memory: 250Mi
    limits:
      cpu: 2000m
      memory: 2Gi

Automation controller does not schedule jobs that use more resources than the limit set. If the task pod does use more resources than the set limit, the container is OOMKilled by Kubernetes and restarted.

Size recommendations for resource requests

All jobs that use a container group use the same Pod Specification. The Pod Specification includes the resource requests for the pod that runs the job.

Because all jobs use the same resource requests, the requests you set on the pod specification determine how Kubernetes schedules every job pod based on the resources available on worker nodes. Keep the following sizing guidance in mind:

  • One fork typically requires 100MB of memory. This is controlled by the system_task_forks_mem setting. If your jobs have five forks, the job pod specification must request 500MB of memory.

  • For job templates that have a particularly high forks value or that otherwise need larger resource requests, create a separate container group with a different pod spec that indicates larger resource requests, and then assign it to the job template. For example, a job template with a forks value of 50 must be paired with a container group that requests 5GB of memory (see the sketch after this list).

  • If the fork value for a job is high enough that no single pod would be able to contain the job, use the job slicing feature. This splits the inventory up such that the individual job “slices” fit in an automation pod provisioned by the container group.
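
The following sketch shows what the resources section of a container group pod specification with larger requests might look like. It mirrors the pod specification layout used elsewhere in this document; the namespace, execution environment image, and CPU value are placeholders:

apiVersion: v1
kind: Pod
metadata:
  namespace: ansible-automation-platform     # assumed namespace
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: <execution_environment_image>   # placeholder for your execution environment image
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 500m          # illustrative value
          memory: 5Gi        # roughly 50 forks x 100MB per fork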

Migrating Red Hat Ansible Automation Platform to Ansible Automation Platform Operator

Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.

Use these procedures to migrate any of the following deployments to the Ansible Automation Platform Operator:

  • A VM-based installation of Ansible Tower 3.8.6, automation controller, or automation hub

  • An OpenShift instance of Ansible Tower 3.8.6 (Ansible Automation Platform 1.2)

Migration considerations

If you are upgrading from Ansible Automation Platform 1.2 on OpenShift Container Platform 3 to Ansible Automation Platform 2.x on OpenShift Container Platform 4, you must provision a fresh OpenShift Container Platform version 4 cluster and then migrate the Ansible Automation Platform to the new cluster.

Preparing for migration

Before migrating your current Ansible Automation Platform deployment to Ansible Automation Platform Operator, you need to back up your existing data and create Kubernetes secrets for your secret key and PostgreSQL configuration.

Note

If you are migrating both automation controller and automation hub instances, repeat the steps in Creating a secret key secret and Creating a postgresql configuration secret for both and then proceed to Migrating data to the Ansible Automation Platform Operator.

Prerequisites

To migrate Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must have the following:

  • Secret key secret

  • PostgreSQL configuration secret

  • Role-based Access Control for the namespaces on the new OpenShift cluster

  • The new OpenShift cluster must be able to connect to the previous PostgreSQL database

Note

Secret key information can be located in the inventory file created during the initial Red Hat Ansible Automation Platform installation. If you are unable to remember your secret key or have trouble locating your inventory file, contact Ansible support via the Red Hat Customer portal.

Before migrating your data from Ansible Automation Platform 2.x or earlier, you must back up your data to prevent loss. To back up your data, do the following:

Procedure
  1. Log in to your current deployment project.

  2. Run setup.sh to create a backup of your current data/deployment:

    For on-prem deployments of version 2.x or earlier:

    $ ./setup.sh -b

    For OpenShift deployments prior to version 2.0 (non-operator deployments):

    $ ./setup_openshift.sh -b

Creating a secret key secret

To migrate your data to Ansible Automation Platform Operator on OpenShift Container Platform, you must create a secret key that matches the secret key defined in the inventory file during your initial installation. Otherwise, the migrated data will remain encrypted and unusable after migration.

Procedure
  1. Locate the old secret key in the inventory file you used to deploy Ansible Automation Platform in your previous installation.

  2. Create a yaml file for your secret key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <resourcename>-secret-key
      namespace: <target-namespace>
    stringData:
      secret_key: <old-secret-key>
    type: Opaque
  3. Apply the secret key yaml to the cluster:

    oc apply -f <secret-key.yml>

Creating a postgresql configuration secret

For migration to be successful, you must provide access to the database for your existing deployment.

Procedure
  1. Create a yaml file for your postgresql configuration secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <resourcename>-old-postgres-configuration
      namespace: <target namespace>
    stringData:
      host: "<external ip or url resolvable by the cluster>"
      port: "<external port, this usually defaults to 5432>"
      database: "<desired database name>"
      username: "<username to connect as>"
      password: "<password to connect with>"
    type: Opaque
  2. Apply the postgresql configuration yaml to the cluster:

oc apply -f <old-postgres-configuration.yml>

Verifying network connectivity

To ensure successful migration of your data, verify that you have network connectivity from your new operator deployment to your old deployment database.

Prerequisites

Take note of the host and port information from your existing deployment. This information is located in the postgres.py file located in the conf.d directory.

Procedure
  1. Create a yaml file (for example, connection_checker.yaml) to verify the connection between your new deployment and your old deployment database:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dbchecker
    spec:
      containers:
        - name: dbchecker
          image: registry.redhat.io/rhel8/postgresql-13:latest
          command: ["sleep"]
          args: ["600"]
  2. Apply the connection checker yaml file to your new project deployment:

    oc project ansible-automation-platform
    oc apply -f connection_checker.yaml
  3. Verify that the connection checker pod is running:

    oc get pods
  4. Connect to a pod shell:

    oc rsh dbchecker
  5. After the shell session opens in the pod, verify that the new project can connect to your old project cluster:

    pg_isready -h <old-host-address> -p <old-port-number> -U awx
    Example
    <old-host-address>:<old-port-number> - accepting connections

Migrating data to the Ansible Automation Platform Operator

After you have created your secret key and PostgreSQL configuration secrets, verified network connectivity, and installed the Ansible Automation Platform Operator, you must create the custom resource objects for your deployment before you can migrate your data.

Creating an AutomationController object

Use the following steps to create an AutomationController custom resource object.

Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Ansible Automation Platform Operator installed on your project namespace.

  4. Select the Automation Controller tab.

  5. Click btn:[Create AutomationController].

  6. Enter a name for the new deployment.

  7. In Advanced configurations, select your secret key secret and postgres configuration secret.

  8. Click btn:[Create].
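
If you prefer to create the object from the command line, the equivalent custom resource is roughly the following sketch. The apiVersion may differ depending on your operator version, and the secret names are the ones created earlier in this section:

apiVersion: automationcontroller.ansible.com/v1beta1    # may vary by operator version
kind: AutomationController
metadata:
  name: <deployment_name>
  namespace: <target-namespace>
spec:
  secret_key_secret: <resourcename>-secret-key                                  # secret key secret created earlier
  postgres_configuration_secret: <resourcename>-old-postgres-configuration      # postgresql configuration secret created earlier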

Creating an AutomationHub object

Use the following steps to create an AutomationHub custom resource object.

Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Ansible Automation Platform Operator installed on your project namespace.

  4. Select the Automation Hub tab.

  5. Click btn:[Create AutomationHub].

  6. Enter a name for the new deployment.

  7. In Advanced configurations, select your secret key secret and postgres configuration secret.

  8. Click btn:[Create].

Post migration cleanup

After your data migration is complete, you must delete any InstanceGroups that are no longer required.

Procedure
  1. Log in to Red Hat Ansible Automation Platform as the administrator with the password you created during migration.

    Note

    If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select menu:Workloads[Secrets] and open controller-admin-password. From there you can copy the password and paste it into the Red Hat Ansible Automation Platform password field.

  2. Select menu:Administration[InstanceGroups].

  3. Select all InstanceGroups except controlplane and default.

  4. Click btn:[Delete].

Specifying dedicated nodes

A Kubernetes cluster runs on top of multiple virtual machines or nodes (generally between 2 and 20 nodes). Pods can be scheduled on any of these nodes. When you create or schedule a new pod, use the topology_spread_constraints setting to configure how new pods are distributed across the underlying nodes.

Do not schedule your pods on a single node, because if that node fails, the services that those pods provide also fail.

Schedule the control plane pods to run on different nodes from the automation job pods. If the control plane pods share nodes with the job pods, the control plane can become resource starved and degrade the performance of the whole application.

Assigning pods to specific nodes

You can constrain the automation controller pods created by the operator to run on a certain subset of nodes.

  • node_selector and postgres_selector constrain the automation controller pods to run only on the nodes that match all of the specified key/value pairs.

  • tolerations and postgres_tolerations enable the automation controller pods to be scheduled onto nodes with matching taints. See Taints and Toleration in the Kubernetes documentation for further details.

The following table shows the settings and fields that can be set on the automation controller’s specification section of the YAML (or by using the OpenShift UI form).

Name | Description | Default

postgres_image | Path of the image to pull | postgres
postgres_image_version | Image version to pull | 13
node_selector | AutomationController pods’ nodeSelector | ''
topology_spread_constraints | AutomationController pods’ topologySpreadConstraints | ''
tolerations | AutomationController pods’ tolerations | ''
annotations | AutomationController pods’ annotations | ''
postgres_selector | Postgres pods’ nodeSelector | ''
postgres_tolerations | Postgres pods’ tolerations | ''

topology_spread_constraints can help optimize spreading your control plane pods across the compute nodes that match your node selector. For example, setting the maxSkew parameter of this option to 100 means the pods are spread as evenly as possible across the available nodes, so if there are 3 matching compute nodes and 3 pods, 1 pod is assigned to each compute node. This helps prevent the control plane pods from competing for resources with each other.

Example of a custom configuration for constraining controller pods to specific nodes
spec:
  ...
  node_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  topology_spread_constraints: |
    - maxSkew: 100
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: "<resourcename>"
  tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AutomationController"
      effect: "NoSchedule"
  postgres_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  postgres_tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AutomationController"
      effect: "NoSchedule"

Specify nodes for job execution

You can add a node selector to the container group pod specification to ensure they only run against certain nodes. First add a label to the nodes you want to run jobs against.

The following procedure adds a label to a node.

Procedure
  1. List the nodes in your cluster, along with their labels:

    kubectl get nodes --show-labels

    The output is similar to this:

    NAME      STATUS   ROLES    AGE   VERSION   LABELS
    worker0   Ready    <none>   1d    v1.13.0   …,kubernetes.io/hostname=worker0
    worker1   Ready    <none>   1d    v1.13.0   …,kubernetes.io/hostname=worker1
    worker2   Ready    <none>   1d    v1.13.0   …,kubernetes.io/hostname=worker2

  2. Choose one of your nodes, and add a label to it using the following command:

    kubectl label nodes <your-node-name> <aap_node_type>=<execution>

    For example:

    kubectl label nodes <your-node-name> disktype=ssd

    where <your-node-name> is the name of your chosen node.

  3. Verify that your chosen node has a disktype=ssd label:

    kubectl get nodes --show-labels
  4. The output is similar to this:

    NAME      STATUS   ROLES    AGE   VERSION   LABELS
    worker0   Ready    <none>   1d    v1.13.0   …,disktype=ssd,kubernetes.io/hostname=worker0
    worker1   Ready    <none>   1d    v1.13.0   …,kubernetes.io/hostname=worker1
    worker2   Ready    <none>   1d    v1.13.0   …,kubernetes.io/hostname=worker2

    You can see that the worker0 node now has a disktype=ssd label.

  5. In the automation controller UI, specify that label in the metadata section of your customized pod specification in the container group.

apiVersion: v1
kind: Pod
metadata:
  labels:
    disktype: ssd
  namespace: ansible-automation-platform
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  nodeSelector:
    aap_node_type: execution
  containers:
    - image: >-
        registry.redhat.io/ansible-automation-platform-22/ee-supported-rhel8@sha256:d134e198b179d1b21d3f067d745dd1a8e28167235c312cdc233860410ea3ec3e
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi

Extra settings

With extra_settings, you can pass multiple custom settings using the awx-operator. The parameter extra_settings is appended to /etc/tower/settings.py and can be an alternative to the extra_volumes parameter.

Name | Description | Default

extra_settings | Extra settings | ''

Example configuration of extra_settings parameter
spec:
    extra_settings:
      - setting: MAX_PAGE_SIZE
        value: "500"

      - setting: AUTH_LDAP_BIND_DN
        value: "cn=admin,dc=example,dc=com"

      - setting: SYSTEM_TASK_ABS_MEM
        value: "500"

Custom pod timeouts

A container group job in automation controller transitions to the running state just before you submit the pod to the Kubernetes API. Automation controller then expects the pod to enter the Running state before AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT seconds have elapsed. AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT defines how long automation controller waits from creation of a pod until the Ansible work begins in the pod; the default value is two hours. Set it to a higher value if you want automation controller to wait longer before canceling jobs that fail to enter the Running state, for example when pods cannot be scheduled because of resource constraints. You can set this value using extra_settings on the automation controller specification.
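
For example, the timeout can be raised through extra_settings on the automation controller specification. This is a minimal sketch; the value shown simply restates the two-hour default in seconds:

spec:
  extra_settings:
    - setting: AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT
      value: "7200"    # seconds; increase this if pods routinely take longer to be scheduled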

Increase this value if you are consistently launching many more jobs than Kubernetes can schedule and jobs are spending longer than AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT in the pending state.

Jobs are not launched until control capacity is available. If many more jobs are being launched than the container group has capacity to run, consider scaling up your Kubernetes worker nodes.

Jobs scheduled on the worker nodes

Both automation controller and Kubernetes play a role in scheduling a job.

When a job is launched, its dependencies are fulfilled, meaning any project updates or inventory updates are launched by automation controller as required by the job template, project, and inventory settings.

If the job is not blocked by other business logic in automation controller and there is control capacity in the control plane to start the job, the job is submitted to the dispatcher. By default, the "cost" to control a job is 1 capacity, so a control pod with 100 capacity can control up to 100 jobs at a time. Given control capacity, the job transitions from pending to waiting.

The dispatcher, which is a background process in the control plane pod, starts a worker process to run the job. This worker communicates with the Kubernetes API using a service account associated with the container group and uses the pod specification as defined on the container group in automation controller to provision the pod. The job status in automation controller is shown as running.

Kubernetes now schedules the pod. A pod can remain in the pending state for AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT. If the pod is denied through a ResourceQuota, the job starts over at pending. You can configure a resource quota on a namespace to limit how many resources may be consumed by pods in the namespace. For further information on ResourceQuotas, see Resource Quotas.
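
A namespace quota of that kind might look like the following. This is a minimal sketch of a standard Kubernetes ResourceQuota; the name, namespace, and values are illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: aap-job-quota                      # hypothetical name
  namespace: ansible-automation-platform   # assumed namespace
spec:
  hard:
    requests.cpu: "4"        # total CPU that all pods in the namespace may request
    requests.memory: 8Gi     # total memory that all pods in the namespace may request
    limits.cpu: "8"
    limits.memory: 16Gi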

Installing and configuring automation hub on Red Hat OpenShift Container Platform web console

You can use these instructions to install the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification.
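
For example, settings can be supplied under pulp_settings on the automation hub custom resource specification. This is a minimal sketch; the setting shown is purely illustrative, so consult the automation hub documentation for the keys that apply to your deployment:

spec:
  pulp_settings:
    # keys must be lowercase; the setting below is illustrative only
    token_auth_disabled: false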

Note

When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.

Prerequisites

  • You have installed the Red Hat Ansible Automation Platform operator in Operator Hub.

Installing the automation hub operator

Use this procedure to install the automation hub operator.

Procedure
  1. Navigate to menu:Operators[Installed Operators].

  2. Locate the Automation hub entry, then click btn:[Create instance].

Storage options for Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform

Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3-compliant storage for operation so that multiple pods can access shared content, such as collections.

The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.

If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.

In addition, OpenShift Data Foundation provides a ReadWriteMany or S3-compliant implementation. You can also set up NFS storage to support ReadWriteMany; this, however, introduces the NFS server as a potential single point of failure.

Provisioning OCP storage with ReadWriteMany access mode

To ensure successful installation of Ansible Automation Platform Operator, you must initially provision your storage for automation hub with the ReadWriteMany access mode.

Procedure
  1. Click Provisioning to update the access mode.

  2. In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.

  3. Complete the additional steps in this section to create the persistent volume claim (PVC), as sketched below.
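
A persistent volume claim provisioned with this access mode would look roughly like the following sketch; the claim name, namespace, storage class, and size are assumptions for your environment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: automation-hub-storage              # hypothetical claim name
  namespace: ansible-automation-platform    # assumed namespace
spec:
  accessModes:
    - ReadWriteMany          # required for automation hub file-based storage
  resources:
    requests:
      storage: 100Gi         # illustrative size
  storageClassName: <file_storage_class>    # a storage class that supports ReadWriteMany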

Configuring object storage on Amazon S3

Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.

Prerequisites
  • Create an Amazon S3 bucket to store the objects.

  • Note the name of the S3 bucket.

Procedure
  1. Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3:

    $ oc -n $HUB_NAMESPACE apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: 'test-s3'
    stringData:
      s3-access-key-id: $S3_ACCESS_KEY_ID
      s3-secret-access-key: $S3_SECRET_ACCESS_KEY
      s3-bucket-name: $S3_BUCKET_NAME
      s3-region: $S3_REGION
    EOF
  2. Add the secret to the automation hub custom resource (CR) spec:

    spec:
      object_storage_s3_secret: test-s3
  3. If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
Configuring object storage on Azure Blob

Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.

Prerequisites
  • Create an Azure Storage blob container to store the objects.

  • Note the name of the blob container.

Procedure
  1. Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure:

    $ oc -n $HUB_NAMESPACE apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: 'test-azure'
    stringData:
      azure-account-name: $AZURE_ACCOUNT_NAME
      azure-account-key: $AZURE_ACCOUNT_KEY
      azure-container: $AZURE_CONTAINER
      azure-container-path: $AZURE_CONTAINER_PATH
      azure-connection-string: $AZURE_CONNECTION_STRING
    EOF
  2. Add the secret to the automation hub custom resource (CR) spec:

    spec:
      object_storage_azure_secret: test-azure
  3. If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api

Configure your automation hub operator route options

The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.

Procedure
  1. Click btn:[Advanced configuration].

  2. Under Ingress type, click the drop-down menu and select Route.

  3. Under Route DNS host, enter a common host name that the route answers to.

  4. Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough.

  5. Under Route TLS credential secret, click the drop-down menu and select a secret from the list.

Configuring the Ingress type for your automation hub operator

The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator Ingress under Advanced configuration.

Procedure
  1. Click btn:[Advanced Configuration].

  2. Under Ingress type, click the drop-down menu and select Ingress.

  3. Under Ingress annotations, enter any annotations to add to the ingress.

  4. Under Ingress TLS secret, click the drop-down menu and select a secret from the list.

After you have configured your automation hub operator, click btn:[Create] at the bottom of the form view. Red Hat OpenShift Container Platform will now create the pods. This may take a few minutes.

You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance.

Verification

Verify that the operator pods provided by the Ansible Automation Platform Operator installation are running. The pods fall into three groups: operator manager controllers, automation controller, and automation hub.

The operator manager controllers for each of the three operators include the following:

  • automation-controller-operator-controller-manager

  • automation-hub-operator-controller-manager

  • resource-operator-controller-manager

After deploying automation controller, you will see the addition of these pods:

  • controller

  • controller-postgres

After deploying automation hub, you will see the addition of these pods:

  • hub-api

  • hub-content

  • hub-postgres

  • hub-redis

  • hub-worker

Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.

Accessing the automation hub user interface

You can access the automation hub interface once all pods have successfully launched.

Procedure
  1. Navigate to menu:Networking[Routes].

  2. Under Location, click on the URL for your automation hub instance.

The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.

Note

If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select menu:Workloads[Secrets] and open controller-admin-password. From there you can copy the password and paste it into the Automation hub password field.

Configuring an external database for automation hub on Red Hat Ansible Automation Platform operator

If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.

By default, the Red Hat Ansible Automation Platform operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.

You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks.

Note

The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your automation hub on the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.

Note

Ansible Automation Platform 2.4 supports PostgreSQL 13.

Procedure

The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.

  1. Create a postgres_configuration_secret YAML file (for example, external-postgres-configuration-secret.yml), following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace> (1)
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>" (2)
      port: "<external_port>" (3)
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>" (4)
      sslmode: "prefer" (5)
      type: "unmanaged"
    type: Opaque
    1. Namespace to create the secret in. This should be the same namespace you wish to deploy to.

    2. The resolvable hostname for your database node.

    3. External port defaults to 5432.

    4. Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration.

    5. The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.

  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
  3. When creating your AutomationHub custom resource object, specify the secret on your spec, following the example below:

    apiVersion: awx.ansible.com/v1beta1
    kind: AutomationHub
    metadata:
      name: hub-dev
    spec:
      postgres_configuration_secret: external-postgres-configuration

Finding and deleting PVCs

A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them.

Procedure
  1. List the existing PVCs in your deployment namespace:

    oc get pvc -n <namespace>
  2. Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.

  3. Delete the old PVC:

    oc delete pvc -n <namespace> <pvc-name>

Additional resources

Red Hat Ansible Automation Platform installation overview

The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform using a number of supported installation scenarios. Starting with Ansible Automation Platform 2.4, the installation scenarios include the optional deployment of Event-Driven Ansible controller, which introduces the automated resolution of IT requests.

Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps:

Editing the Red Hat Ansible Automation Platform installer inventory file

The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment.

Running the Red Hat Ansible Automation Platform installer setup script

The setup script installs your private automation hub using the required parameters defined in the inventory file.

Verifying automation controller installation

After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation controller.

Verifying automation hub installation

After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation hub.

Verifying Event-Driven Ansible controller installation

After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the Event-Driven Ansible controller.

Post-installation steps

After successful installation, you can begin using the features of Ansible Automation Platform.

Additional resources

For more information about the supported installation scenarios, see the Red Hat Ansible Automation Platform Planning Guide.

Prerequisites

Warning
You may experience errors if you do not fully upgrade your RHEL nodes prior to your Ansible Automation Platform installation.
Additional resources

For more information about obtaining a platform installer or system requirements, refer to the Red Hat Ansible Automation Platform system requirements in the Red Hat Ansible Automation Platform Planning Guide.

Inventories

An inventory is a collection of hosts managed by automation controller. Organizations are assigned to inventories, while permissions to launch playbooks against inventories are controlled at the user or team level.

For more information, see Inventories in the Automation Controller User Guide.

Creating a new Inventory

The Inventories window displays a list of the inventories that are currently available. You can sort and search the inventory list by name, type, organization, description, owners and modifiers of the inventory, or additional criteria.

Procedure
  1. To view existing inventories, select Inventories from the navigation panel.

    • Automation controller provides a demonstration inventory for you to use as you learn how the controller works. You can use it as it is or edit it later. You can create another inventory, if necessary.

  2. To add another inventory, see Add a new inventory in the Automation Controller User Guide for more information.

  3. Click btn:[Demo Inventory] to view its details.

Demo inventory

As with organizations, inventories also have associated users and teams that you can view through the Access tab. For more information, see Inventories in the Automation Controller User Guide.

A user with the System Administrator role is automatically added to the Access list for this inventory.

Managing groups and hosts

Inventories are divided into groups and hosts. Groups can represent a particular environment (such as a "Datacenter 1" or "Stage Testing"), a server type (such as "Application Servers" or "DB Servers"), or any other representation of your environment. The groups and hosts that belong to the Demo inventory are shown in the Groups and Hosts tabs.

Adding new groups and hosts

Groups are only applicable to standard inventories and are not configurable directly through a Smart Inventory. You can associate an existing group through hosts that are used with standard inventories. For more information, see Add groups in the Automation Controller User Guide.

Procedure
  1. To add new groups, select menu:Groups[Add].

  2. To add new hosts to groups, select menu:Hosts[Add].

As part of the initial setup and to test that automation controller is set up properly, a local host is added for your use.

localhost
Example

If the organization that you created has a group of web server hosts supporting a particular application, complete the following steps:

  1. Create a group and add the web server hosts, to add these hosts to the inventory.

  2. Click btn:[Cancel] (if no changes were made) or use the breadcrumb navigational links at the top of the automation controller browser to return to the Inventories list view. Clicking btn:[Save] does not exit the Details dialog.

Ansible Automation Platform containerized installation

Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.

This guide helps you to understand the installation requirements and processes behind our new containerized version of Ansible Automation Platform. This initial version is based upon Ansible Automation Platform 2.4 and is being released as a Technology Preview. See Technology Preview Features Support Scope to understand what a Technology Preview entails.

Prerequisites
  • A RHEL 9.2 based host. Minimal OS base install is recommended.

  • A non-root user for the RHEL host, with sudo or other Ansible supported privilege escalation (sudo recommended). This user is responsible for the installation of containerized Ansible Automation Platform.

  • It is recommended that you set up SSH public key authentication for the non-root user. For guidelines, see How to configure SSH public key authentication for passwordless login.

  • SSH keys are only required when installing on remote hosts. If you are doing a self-contained local VM-based installation, you can use ansible_connection: local as in the example inventory, which does not require SSH.

  • Internet access from the RHEL host if using the default online installation method.

System Requirements

Your system must meet the following minimum system requirements to install and run Red Hat Containerized Ansible Automation Platform.

Requirement | Minimum

Memory | 16 GB RAM
CPU | 4
Disk space | 40 GB
Disk IOPS | 1500

Preparing the RHEL host for containerized installation

Procedure

Containerized Ansible Automation Platform runs the component services as podman based containers on top of a RHEL host. The installer takes care of this once the underlying host has been prepared. Use the following instructions for this.

  1. Log into your RHEL host as your non-root user.

  2. Run dnf repolist to validate that only the BaseOS and AppStream repositories are set up and enabled on the host:

    $ dnf repolist
    Updating Subscription Management repositories.
    repo id                                                    repo name
    rhel-9-for-x86_64-appstream-rpms                           Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
    rhel-9-for-x86_64-baseos-rpms                              Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
  3. Ensure that these repositories, and only these repositories, are available to the host OS. For guidance, see Chapter 10. Managing custom software repositories in the Red Hat Enterprise Linux documentation.

  4. Ensure that the host has DNS configured and can resolve hostnames and IP addresses using a fully qualified domain name (FQDN). This is essential to ensure that services can talk to one another; a quick check is shown below.
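
    A minimal verification of FQDN resolution from the host; the hostname and address shown are example values:

    $ hostname -f
    aap.example.org
    $ getent hosts $(hostname -f)
    192.0.2.10      aap.example.org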

Using unbound DNS

To configure unbound DNS refer to Chapter 2. Setting up an unbound DNS server Red Hat Enterprise Linux 9.

Using BIND DNS

To configure DNS using BIND refer to Chapter 1. Setting up and configuring a BIND DNS server Red Hat Enterprise Linux 9.

Optional

To have the installer automatically pick up and apply your Ansible Automation Platform subscription manifest license, generate a manifest file that can be downloaded for the installer, as described in Chapter 2. Obtaining a manifest file in the Red Hat Ansible Automation Platform documentation.

Installing ansible-core

Procedure
  1. Install ansible-core and other tools:

    sudo dnf install -y ansible-core wget git
  2. Set a fully qualified hostname:

    sudo hostnamectl set-hostname your-FQDN-hostname

Downloading Ansible Automation Platform

Procedure
  1. Download the latest installer tarball from access.redhat.com. This can be done directly within the RHEL host, which saves time.

  2. If you have downloaded the tarball and optional manifest zip file onto your laptop, copy them onto your RHEL host.

    Decide where you would like the installer to reside on the filesystem. Installation related files are created under this location and require at least 10 GB for the initial installation.

  3. Unpack the installer tarball into your installation directory, and cd into the unpacked directory.

    1. online installer

      $ tar xfvz ansible-automation-platform-containerized-setup-2.4-1.tar.gz
    2. bundled installer

      $ tar xfvz ansible-automation-platform-containerized-setup-bundle-2.4-1-<arch name>.tar.gz

      For the bundled installer, Ansible collections are already installed inside the directory called collections. You must set the ANSIBLE_COLLECTIONS_PATH environment variable to that directory path to consume the Ansible collections.

  4. Set ANSIBLE_COLLECTIONS_PATH:

$ export ANSIBLE_COLLECTIONS_PATH=/path/to/ansible-automation-platform-containerized-setup-2.4-1/collections

Using postinstall feature of containerized Ansible Automation Platform

Use the experimental postinstaller feature of containerized Ansible Automation Platform to define and load the configuration during the initial installation. This uses a configuration-as-code approach: you define the configuration to be loaded from plain YAML files.

  1. To use this optional feature, uncomment the following variable in the inventory file. The default is false, so you must set it to true to activate the postinstaller:

    controller_postinstall=true
  2. An Ansible Automation Platform license is required for this feature and must reside on the local filesystem so that it can be automatically loaded:

    controller_license_file=/full_path_to/manifest_file.zip
  3. You can pull your configuration-as-code from a Git based repo. To do this, set the following variables to dictate where you pull the content from and where it will be stored for upload to the Ansible Automation Platform controller:

    controller_postinstall_repo_url=https://your_cac_scm_repo
    controller_postinstall_dir=/full_path_to_where_you_want_the_pulled_content_to_reside

Definition files use the infra certified collections. The controller_configuration collection is preloaded as part of the installation and uses the controller credentials you supply in the inventory file to access the Ansible Automation Platform controller, so you only need to provide the YAML configuration files. You can set up Ansible Automation Platform configuration attributes such as credentials, LDAP settings, users and teams, organizations, projects, inventories and hosts, and job and workflow templates.

The following example shows a sample your-config.yml file defining and loading controller job templates. The example demonstrates a simple change to the preloaded demo example provided with an AAP installation.

/full_path_to_your_configuration_as_code/
├── controller
│   └── job_templates.yml

---
controller_templates:
  - name: Demo Job Template
    execution_environment: Default execution environment
    instance_groups:
      - default
    inventory: Demo Inventory

Installing containerized Ansible Automation Platform

Installation of Ansible Automation Platform is controlled with inventory files. Inventory files define the hosts and containers used and created, variables for components, and other information needed to customize the installation.

For convenience, an example inventory file is provided that you can copy and modify to get started quickly.

  • Edit the inventory file by replacing the < > placeholders with your specific variables, and uncommenting any lines specific to your needs.

# This is the AAP installer inventory file
# Please consult the docs if you're unsure what to add
# For all optional variables please consult the included README.md

# This section is for your AAP Controller host(s)
# If one of these components is not being installed, comment out the <fqdn> line.
# -------------------------------------------------
[automationcontroller]
fqdn_of_your_rhel_host ansible_connection=local

# This section is for your AAP Automation Hub host(s)
# -----------------------------------------------------
[automationhub]
fqdn_of_your_rhel_host ansible_connection=local

# This section is for your AAP EDA Controller host(s)
# -----------------------------------------------------
[automationeda]
fqdn_of_your_rhel_host ansible_connection=local

# This section is for the AAP database(s)
# -----------------------------------------
# Uncomment the lines below and amend appropriately if you want AAP to install and manage the postgres databases
# Leave commented out if you intend to use your own external database and just set appropriate _pg_hosts vars
# see mandatory sections under each AAP component
#[database]
#fqdn_of_your_rhel_host ansible_connection=local

[all:vars]

# Common variables needed for installation
# ----------------------------------------
postgresql_admin_username=postgres
postgresql_admin_password=<set your own>
# If using the online (non-bundled) installer, you need to set RHN registry credentials
registry_username=<your RHN username>
registry_password=<your RHN password>
# If using the bundled installer, you need to alter defaults by using:
#bundle_install=true
#bundle_dir=</path/to/ansible-automation-platform-containerized-setup-bundle-2.4-1-<arch name>/bundle>

# AAP Controller - mandatory
# ------------------------------
controller_admin_password=<set your own>
controller_pg_host=fqdn_of_your_rhel_host
controller_pg_password=<set your own>

# AAP Controller - optional
# ------------------------------
# To use the postinstall feature you need to set these variables
#controller_postinstall=true
#controller_license_file=<full path to your manifest .zip file>
#controller_postinstall_dir=<full path to your config-as-code directory>
#controller_postinstall_repo_url=<git based config-as-code source URL>

# AAP Automation Hub - mandatory
# ------------------------------
hub_admin_password=<set your own>
hub_pg_host=fqdn_of_your_rhel_host
hub_pg_password=<set your own>

# AAP Automation Hub - optional
# -----------------------------

# AAP EDA Controller - mandatory
# ------------------------------
eda_admin_password=<set your own>
eda_pg_host=fqdn_of_your_rhel_host
eda_pg_password=<set your own>

# AAP EDA Controller - optional
# -----------------------------

Use the following command to install containerized Ansible Automation Platform:

ansible-playbook -i inventory ansible.containerized_installer.install
Note
If your privilege escalation requires a password to be entered, append -K to the command line. You will then be prompted for the BECOME password.

You can use increasing verbosity, up to 4 v’s (-vvvv) to see the details of the installation process.

Note
This can significantly increase installation time, so it is recommended that you use it only as needed or requested by Red Hat support.

Accessing controller, automation hub, and Event-Driven Ansible controller

Once the installation completes, these are the default protocol and ports used:

  • https protocol

  • Port 443 for controller

  • Port 444 for automation hub

  • Port 445 for EDA controller

These can be changed. Consult the README.md for further details. It is recommended that you leave the defaults unless you need to change them due to port conflicts or other factors.

Accessing Automation Controller UI

The automation controller UI is available by default at:

https://<your_rhel_host>:443

Log in as the admin user with the password you created for controller_admin_password.

If you supplied the license manifest as part of the installation, the Ansible Automation Platform dashboard is displayed. If you did not supply a license file, the Subscription screen is displayed where you must supply your license details. This is documented here: Chapter 1. Activating Red Hat Ansible Automation Platform.

Accessing Automation hub UI

The automation hub UI is available by default at:

https://<hub node>:444

Log in as the admin user with the password you created for hub_admin_password.

Accessing Event-Driven Ansible UI

The Event-Driven Ansible UI is available by default at:

https://<eda node>:445

Log in as the admin user with the password you created for eda_admin_password.

Uninstalling containerized Ansible Automation Platform

To uninstall a containerized deployment, run the uninstall playbook:

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall

This will stop all systemd units and containers and then delete all resources used by the containerized installer such as:

  • config and data directories/files

  • systemd unit files

  • podman containers and images

  • RPM packages

To keep container images, you can set the container_keep_images variable to true.

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true

Activating Red Hat Ansible Automation Platform

Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following:

  1. Use your Red Hat customer or Satellite credentials when you launch Ansible Automation Platform.

  2. Upload a subscriptions manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook.

Activate with credentials

When Ansible Automation Platform launches for the first time, the Ansible Automation Platform Subscription screen automatically displays. You can use your Red Hat credentials to retrieve and import your subscription directly into Ansible Automation Platform.

Procedure
  1. Enter your Red Hat username and password.

  2. Click btn:[Get Subscriptions].

    Note

    You can also use your Satellite username and password if your cluster nodes are registered to Satellite through Subscription Manager.

  3. Review the End User License Agreement and select I agree to the End User License Agreement.

  4. The Tracking and Analytics options are checked by default. These selections help Red Hat improve the product by delivering you a much better user experience. You can opt out by deselecting the options.

  5. Click btn:[Submit].

  6. Once your subscription has been accepted, the license screen displays and navigates you to the Dashboard of the Ansible Automation Platform interface. You can return to the license screen by clicking the btn:[Settings] icon and selecting the License tab from the Settings screen.

Activate with a manifest file

If you have a subscriptions manifest, you can upload the manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook.

Prerequisites

You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file.

Uploading with the interface
  1. Complete the steps to generate and download the manifest file.

  2. Log in to Red Hat Ansible Automation Platform.

  3. If you are not immediately prompted for a manifest file, go to menu:Settings[License].

  4. Make sure the Username and Password fields are empty.

  5. Click btn:[Browse] and select the manifest file.

  6. Click btn:[Next].

Note

If the btn:[BROWSE] button is disabled on the License page, clear the USERNAME and PASSWORD fields.

Uploading manually

If you are unable to apply or update the subscription info using the Red Hat Ansible Automation Platform interface, you can upload the subscriptions manifest manually in an Ansible playbook using the license module in the ansible.controller collection.

- name: Set the license using a file
  ansible.controller.license:
    manifest: "/tmp/my_manifest.zip"

Post-installation steps

Whether you are a new Ansible Automation Platform user looking to start automating, or an existing administrator looking to migrate old Ansible content to your latest installed version of Red Hat Ansible Automation Platform, explore the next steps to begin leveraging the new features of Ansible Automation Platform 2.4:

Migrating data to Ansible Automation Platform 2.4

For platform administrators looking to complete an upgrade to the Ansible Automation Platform 2.4, there may be additional steps needed to migrate data to a new instance:

Migrating from legacy virtual environments (venvs) to automation execution environments

Ansible Automation Platform 2.4 moves you away from custom Python virtual environments (venvs) in favor of automation execution environments: containerized images that package the components needed to execute and scale your Ansible automation. These components include Ansible Core, Ansible Content Collections, Python dependencies, Red Hat Enterprise Linux UBI 8, and any additional package dependencies.

If you are looking to migrate your venvs to execution environments, you need to (1) use the awx-manage command to list and export the venvs from your original instance, and then (2) use ansible-builder to create execution environments, as outlined below.
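
The following outline sketches that workflow. The paths and the execution environment tag are placeholders; the awx-manage subcommands and ansible-builder options shown are those commonly used for this migration:

# On the original automation controller (or Tower) node, list and export the custom venvs
$ awx-manage list_custom_venvs
$ awx-manage custom_venv_associations /path/to/custom/venv
$ awx-manage export_custom_venv /path/to/custom/venv    # prints the pip requirements for that venv

# Add those requirements to an execution environment definition, then build the image
$ ansible-builder build --tag my-custom-ee --file execution-environment.yml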

Migrating to Ansible Engine 2.9 images using Ansible Builder

To migrate Ansible Engine 2.9 images for use with Ansible Automation Platform 2.4, the ansible-builder tool automates the process of rebuilding images (including their custom plugins and dependencies) for use with automation execution environments.

Additional resources

For more information about using Ansible Builder to build execution environments, see Creating and Consuming Execution Environments.

Migrating to Ansible Core 2.13

When upgrading to Ansible Core 2.13, you need to update your playbooks, plugins, or other parts of your Ansible infrastructure in order to be supported by the latest version of Ansible Core. For instructions on updating your Ansible content for Ansible Core 2.13 compatibility, see the Ansible-core 2.13 Porting Guide.

Updating execution environment image locations

If your private automation hub was installed separately, you can update your execution environment image locations to point to your private automation hub by using the following procedure.

Procedure
  1. Navigate to the directory containing setup.sh

  2. Create ./group_vars/automationcontroller by running the following command:

    touch ./group_vars/automationcontroller
  3. Paste the following content into ./group_vars/automationcontroller, being sure to adjust the settings to fit your environment:

    # Automation Hub Registry
    registry_username: 'your-automation-hub-user'
    registry_password: 'your-automation-hub-password'
    registry_url: 'automationhub.example.org'
    registry_verify_ssl: False
    
    ## Execution Environments
    control_plane_execution_environment: 'automationhub.example.org/ee-supported-rhel8:latest'
    
    global_job_execution_environments:
      - name: "Default execution environment"
        image: "automationhub.example.org/ee-supported-rhel8:latest"
      - name: "Ansible Engine 2.9 execution environment"
        image: "automationhub.example.org/ee-29-rhel8:latest"
      - name: "Minimal execution environment"
        image: "automationhub.example.org/ee-minimal-rhel8:latest"
  4. Run the ./setup.sh script

    $ ./setup.sh

Verification
  1. Log into Ansible Automation Platform as a user with system administrator access.

  2. Navigate to menu:Administration[Execution Environments].

  3. In the Image column, confirm that the execution environment image location has changed from the default value of <registry url>/ansible-automation-platform-<version>/<image name>:<tag> to <automation hub url>/<image name>:<tag>.

Scale up your automation using automation mesh

The automation mesh component of the Red Hat Ansible Automation Platform simplifies the process of distributing automation across multi-site deployments. For enterprises with multiple isolated IT environments, automation mesh provides a consistent and reliable way to deploy and scale up automation across your execution nodes using a peer-to-peer mesh communication network.

When upgrading from version 1.x to the latest version of Ansible Automation Platform, you must migrate the data from your legacy isolated nodes to the execution nodes required for automation mesh. You can implement automation mesh by planning out a network of hybrid and control nodes, then editing the inventory file found in the Ansible Automation Platform installer to assign mesh-related values to each of your execution nodes.

For instructions on how to migrate from isolated nodes to execution nodes, see the Red Hat Ansible Automation Platform Upgrade and Migration Guide.

For information about automation mesh and the various ways to design your automation mesh for your environment, see the Red Hat Ansible Automation Platform automation mesh guide.

Migrating to automation execution environments

Why upgrade to automation execution environments?

Red Hat Ansible Automation Platform 2.4 introduces automation execution environments. Automation execution environments are container images that allow for easier administration of Ansible by including everything needed to run Ansible automation within a single container. Automation execution environments include:

  • RHEL UBI 8

  • Ansible 2.9 or Ansible Core 2.13

  • Python 3.9 or later.

  • Any Ansible Content Collections

  • Collection python or binary dependencies

By including these elements, Ansible provides platform administrators a standardized way to define, build, and distribute the environments the automation runs in.

Due to the new automation execution environment, it is no longer necessary for administrators to create custom plugins and automation content. Administrators can now spin up smaller automation execution environments in less time to create their content.

All custom dependencies are now defined in the development phase instead of the administration and deployment phase. Decoupling from the control plane enables faster development cycles, scalability, reliability, and portability across environments. Automation execution environments enable the Ansible Automation Platform to move to a distributed architecture, allowing administrators to scale automation across their organization.

About migrating legacy venvs to automation execution environments

When upgrading from older versions of automation controller to version 4.0 or later, the controller detects previous versions of virtual environments associated with Organizations, Inventories, and Job Templates, and prompts you to migrate to the new automation execution environments model. A new installation of automation controller creates two virtual environments during installation: one runs the controller and the other runs Ansible. Like legacy virtual environments, automation execution environments allow the controller to run in a stable environment, while allowing you to add or update modules in your automation execution environments as necessary to run your playbooks.

You can duplicate your setup in an automation execution environment from a previous custom virtual environment by migrating them to the new automation execution environment. Use the awx-manage commands in this section to:

  • list all of the current custom virtual environments and their paths (list_custom_venvs)

  • view the resources that rely on a particular custom virtual environment (custom_venv_associations)

  • export a particular custom virtual environment to a format that can be used to migrate to an automation execution environment (export_custom_venv)

The following workflow describes how to migrate from legacy venvs to automation execution environments using the awx-manage command.

Migrating virtual envs to automation execution environments

Use the following sections to assist with additional steps in the migration process once you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0.

Listing custom virtual environments

You can list the virtual environments on your automation controller instance using the awx-manage command.

Procedure
  1. SSH into your automation controller instance and run:

    $ awx-manage list_custom_venvs

A list of discovered virtual environments will appear.

# Discovered virtual environments:
/var/lib/awx/venv/testing
/var/lib/venv/new_env

To export the contents of a virtual environment, re-run while supplying the path as an argument:
awx-manage export_custom_venv /path/to/venv

Viewing objects associated with a custom virtual environment

View the organizations, jobs, and inventory sources associated with a custom virtual environment using the awx-manage command.

Procedure
  1. SSH into your automation controller instance and run:

    $ awx-manage custom_venv_associations /path/to/venv

A list of associated objects will appear.

inventory_sources:
- id: 15
  name: celery
job_templates:
- id: 9
  name: Demo Job Template @ 2:40:47 PM
- id: 13
  name: elephant
organizations:
- id: 3
  name: alternating_bongo_meow
- id: 1
  name: Default
projects: []

Selecting the custom virtual environment to export

Select the custom virtual environment you want to export by using the awx-manage export_custom_venv command.

Procedure
  1. SSH into your automation controller instance and run:

    $ awx-manage export_custom_venv /path/to/venv

The output from this command shows a pip freeze of what is in the specified virtual environment. This information can be copied into a requirements.txt file for Ansible Builder to use when creating a new automation execution environment image.

numpy==1.20.2
pandas==1.2.4
python-dateutil==2.8.1
pytz==2021.1
six==1.16.0
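As an illustrative sketch only, the exported requirements can feed an ansible-builder definition file; the base image, file names, and registry tag below are assumptions and should be adjusted for your environment.

# execution-environment.yml (version 1 format)
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest'
dependencies:
  galaxy: requirements.yml    # collections your playbooks need
  python: requirements.txt    # the pip freeze output captured above

You can then build the image with, for example, ansible-builder build --tag automationhub.example.org/custom-ee:latest, and push it to your private automation hub registry.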

To list all available custom virtual environments run:
awx-manage list_custom_venvs
Note

Pass the -q flag when running awx-manage list_custom_venvs to reduce output.

Red Hat Ansible Automation Platform components

Ansible Automation Platform is a modular platform composed of separate components that can be connected together to meet your deployment needs. Ansible Automation Platform deployments start with automation controller which is the enterprise framework for controlling, securing, and managing Ansible automation with a user interface (UI) and RESTful application programming interface (API). Then, you can add to your deployment any combination of the following automation platform components:

Ansible automation hub

Ansible automation hub is a repository for certified content of Ansible Content Collections. It is the centralized repository for Red Hat and its partners to publish content, and for customers to discover certified, supported Ansible Content Collections. Red Hat Ansible Certified Content provides users with content that has been tested and is supported by Red Hat.

Private automation hub

Private automation hub provides both a disconnected and an on-premise solution for synchronizing content. You can synchronize collections and execution environment images from Red Hat cloud automation hub, and store and serve your own custom automation collections and execution environment images. You can also use other sources such as Ansible Galaxy or other container registries to provide content to your private automation hub. Private automation hub can integrate into your enterprise directory and your CI/CD pipelines.

Event-Driven Ansible controller

The Event-Driven Ansible controller is the interface for event-driven automation and introduces automated resolution of IT requests. This component helps you connect to sources of events and act on those events using rulebooks. This technology improves IT speed and agility, and enables consistency and resilience. With Event-Driven Ansible, you can:

  • Automate decision making

  • Use numerous event sources

  • Implement event-driven automation within and across multiple IT use cases

Automation mesh

Automation mesh is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks.

Automation mesh provides:

  • Dynamic cluster capacity that scales independently, allowing you to create, register, group, ungroup and deregister nodes with minimal downtime.

  • Control and execution plane separation that enables you to scale playbook execution capacity independently from control plane capacity.

  • Deployment choices that are resilient to latency, reconfigurable without outage, and that dynamically re-route to choose a different path when outages exist.

  • Mesh routing changes.

  • Connectivity that includes bi-directional, multi-hopped mesh communication possibilities which are Federal Information Processing Standards (FIPS) compliant.

Automation execution environments

Automation execution environments are container images on which all automation in Red Hat Ansible Automation Platform is run. They provide a solution that includes the Ansible execution engine and hundreds of modules that help users automate all aspects of IT environments and processes. Automation execution environments automate commonly used operating systems, infrastructure platforms, network devices, and clouds.

Ansible Galaxy

Ansible Galaxy is a hub for finding, reusing, and sharing Ansible content. Community-provided Galaxy content, in the form of prepackaged roles, can help start automation projects. Roles for provisioning infrastructure, deploying applications, and completing other tasks can be dropped into Ansible Playbooks and be applied immediately to customer environments.

Automation content navigator

Automation content navigator is a textual user interface (TUI) that becomes the primary command line interface into the automation platform, covering use cases from content building, running automation locally in an execution environment, running automation in Ansible Automation Platform, and providing the foundation for future integrated development environments (IDEs).

Red Hat Ansible Automation Platform Architecture

As a modular platform, Ansible Automation Platform provides the flexibility to easily integrate components and customize your deployment to best meet your automation requirements. This section provides a comprehensive architectural example of an Ansible Automation Platform deployment.

Example Ansible Automation Platform architecture

The Red Hat Ansible Automation Platform 2.4 reference architecture provides an example setup of a standard deployment of Ansible Automation Platform using automation mesh on Red Hat Enterprise Linux. The deployment shown takes advantage of the following key components to provide a simple, secure and flexible method of handling your automation workloads, a central location for content collections, and automated resolution of IT requests.

Automation controller

Provides the control plane for automation through its UI, RESTful API, RBAC, workflows, and CI/CD integrations.

Automation mesh

Is an overlay network that provides the ability to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks.

Private automation hub

Provides automation developers the ability to collaborate and publish their own automation content and streamline delivery of Ansible code within their organization.

Event-Driven Ansible (EDA)

Provides the event-handling capability needed to automate time-consuming tasks and respond to changing conditions in any IT domain.

The architecture for this example consists of the following:

  • A two node automation controller cluster

  • An optional hop node to connect automation controller to execution nodes

  • A two node automation hub cluster

  • A single node EDA controller cluster

  • A single PostgreSQL database connected to the automation controller, automation hub, and EDA controller clusters

  • Two execution nodes per automation controller cluster

Reference architecture for an example setup of a standard Ansible Automation Platform deployment
Figure 4. Example Ansible Automation Platform 2.4 architecture

Inventory file variables

The following tables contain information about the pre-defined variables used in Ansible installation inventory files. Not all of these variables are required.

General variables

Variable Description

enable_insights_collection

The default install registers the node to the Red Hat Insights for Red Hat Ansible Automation Platform Service if the node is registered with Subscription Manager. Set to False to disable.

Default = true

registry_password

registry_password is only required if a non-bundle installer is used.

Password credential for access to registry_url.

Used for both [automationcontroller] and [automationhub] groups.

Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.

When registry_url is registry.redhat.io, username and password are required if not using bundle installer.

registry_url

Used for both [automationcontroller] and [automationhub] groups.

Default = registry.redhat.io.

registry_username

registry_username is only required if a non-bundle installer is used.

User credential for access to registry_url.

Used for both [automationcontroller] and [automationhub] groups, but only if the value of registry_url is registry.redhat.io.

Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.

routable_hostname

routable_hostname is used if the machine running the installer can only route to the target host through a specific URL, for example, if you use short names in your inventory but the node running the installer can only resolve that host using a FQDN.

If routable_hostname is not set, it defaults to ansible_host. If ansible_host is also not set, inventory_hostname is used as a last resort.

Note that this variable is used as a host variable for particular hosts and not under the [all:vars] section. For further information, see Assigning a variable to one machine: host variables.
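For example, a control plane host listed by its short name might carry routable_hostname as a host variable in the installer inventory; the names below are placeholders.

[automationcontroller]
controller1 routable_hostname=controller1.example.org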

Ansible automation hub variables

Variable Description

automationhub_admin_password

Required

automationhub_api_token

If upgrading from Ansible Automation Platform 2.0 or earlier, you must either:

  • provide an existing Ansible automation hub token as automationhub_api_token, or

  • set generate_automationhub_token to true to generate a new token

Generating a new token invalidates the existing token.

automationhub_authentication_backend

This variable is not set by default. Set it to ldap to use LDAP authentication.

When this is set to ldap, you must also set the following variables:

  • automationhub_ldap_server_uri

  • automationhub_ldap_bind_dn

  • automationhub_ldap_bind_password

  • automationhub_ldap_user_search_base_dn

  • automationhub_ldap_group_search_base_dn

automationhub_auto_sign_collections

If a collection signing service is enabled, collections are not signed automatically by default.

Setting this parameter to true signs them automatically.

Default = false.

automationhub_backup_collections

Optional

Ansible automation hub provides artifacts in /var/lib/pulp. Automation controller automatically backs up the artifacts by default.

If you set automationhub_backup_collections = false, the backup and restore process does not back up or restore /var/lib/pulp.

Default = true

automationhub_collection_seed_repository

When the bundle installer is run, validated content is uploaded to the validated repository, and certified content is uploaded to the rh-certified repository.

By default, both certified and validated content are uploaded.

Possible values of this variable are 'certified' or 'validated'.

If you do not want to install content, set automationhub_seed_collections to false to disable the seeding.

If you only want one type of content, set automationhub_seed_collections to true and automationhub_collection_seed_repository to the type of content you do want to include.

automationhub_collection_signing_service_key

If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed.

/absolute/path/to/key/to/sign

automationhub_collection_signing_service_script

If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed.

/absolute/path/to/script/that/signs

automationhub_create_default_collection_signing_service

The default install does not create a collection signing service. If set to true a signing service is created.

Default = false

automationhub_container_signing_service_key

If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed.

/absolute/path/to/key/to/sign

automationhub_container_signing_service_script

If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed.

/absolute/path/to/script/that/signs

automationhub_create_default_container_signing_service

The default install does not create a container signing service. If set to true a signing service is created.

Default = false

automationhub_disable_hsts

The default install deploys a TLS-enabled Ansible automation hub with the HTTP Strict Transport Security (HSTS) web-security policy enabled. Unless specified otherwise, the HSTS web-security policy mechanism is enabled. This setting allows you to disable it if required.

Default = false

automationhub_disable_https

Optional

If Ansible automation hub is deployed with HTTPS enabled.

Default = false.

automationhub_enable_api_access_log

When set to true, creates a log file at /var/log/galaxy_api_access.log that logs all user actions made to the platform, including their username and IP address.

Default = false.

automationhub_enable_analytics

A Boolean indicating whether to enable pulp analytics for the version of pulpcore used in automation hub in Ansible Automation Platform 2.4.

To enable pulp analytics, set automationhub_enable_analytics = true.

Default = false.

automationhub_enable_unauthenticated_collection_access

Enables unauthorized users to view collections.

To enable unauthorized users to view collections, set automationhub_enable_unauthenticated_collection_access = true.

Default = false.

automationhub_enable_unauthenticated_collection_download

Enables unauthorized users to download collections.

To enable unauthorized users to download collections, set automationhub_enable_unauthenticated_collection_download = true.

Default = false.

automationhub_importer_settings

Optional

Dictionary of settings to pass to galaxy-importer.

At import time collections can go through a series of checks.

Behavior is driven by galaxy-importer.cfg configuration.

Examples are ansible-doc, ansible-lint, and flake8.

This parameter enables you to drive this configuration.

automationhub_main_url

The main automation hub URL that clients connect to.

For example, https://<load balancer host>.

If not specified, the first node in the [automationhub] group is used.

Use automationhub_main_url to specify the main automation hub URL that clients connect to if you are implementing Red Hat Single Sign-On on your automation hub environment.

automationhub_pg_database

Required

The database name.

Default = automationhub

automationhub_pg_host

Required if not using internal database.

The hostname of the remote postgres database used by automation hub

Default = 127.0.0.1

automationhub_pg_password

The password for the automation hub PostgreSQL database.

Do not use special characters for automationhub_pg_password. They can cause the password to fail.

automationhub_pg_port

Required if not using internal database.

Default = 5432

automationhub_pg_sslmode

Required.

Default = prefer

automationhub_pg_username

Required

Default = automationhub

automationhub_require_content_approval

Optional

Value is true if automation hub enforces the approval mechanism before collections are made available.

By default, when you upload collections to automation hub, an administrator must approve them before they are made available to users.

If you want to disable the content approval flow, set the variable to false.

Default = true

automationhub_seed_collections

A boolean that defines whether or not preloading is enabled.

When the bundle installer is run, validated content is uploaded to the validated repository, and certified content is uploaded to the rh-certified repository.

By default, both certified and validated content are uploaded.

If you do not want to install content, set automationhub_seed_collections to false to disable the seeding.

If you only want one type of content, set automationhub_seed_collections to true and automationhub_collection_seed_repository to the type of content you do want to include.

Default = true.

automationhub_ssl_cert

Optional

/path/to/automationhub.cert

Same as web_server_ssl_cert but for automation hub UI and API

automationhub_ssl_key

Optional

/path/to/automationhub.key

Same as web_server_ssl_key but for automation hub UI and API

automationhub_ssl_validate_certs

For Red Hat Ansible Automation Platform 2.2 and later, this value is no longer used.

Value is true if automation hub should validate certificate when requesting itself because by default, Ansible Automation Platform deploys with self-signed certificates.

Default = false.

automationhub_upgrade

Deprecated

For Ansible Automation Platform 2.2.1 and later, the value of this has been fixed at true.

Automation hub always updates with the latest packages.

ee_from_hub_only

When deployed with automation hub the installer pushes execution environment images to automation hub and configures automation controller to pull images from the automation hub registry.

To make automation hub the only registry to pull execution environment images from, set 'ee_from_hub_only' to true.

If set to false, execution environment images are also taken directly from Red Hat.

Default = true when the bundle installer is used.

generate_automationhub_token

If upgrading from Red Hat Ansible Automation Platform 2.0 or earlier, you must either:

  • provide an existing Ansible automation hub token as automationhub_api_token or

  • set generate_automationhub_token to true to generate a new token. Generating a new token will invalidate the existing token.

nginx_hsts_max_age

This variable specifies how long, in seconds, the system should be considered an HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication.

Default = 63072000 seconds, or two years.

nginx_tls_protocols

Defines support for ssl_protocols in Nginx.

Default = TLSv1.2.

pulp_db_fields_key

Relative or absolute path to the Fernet symmetric encryption key you want to import. The path is on the Ansible management node. It is used to encrypt certain fields in the database (such as credentials). If not specified, a new key will be generated.

sso_automation_platform_login_theme

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

Path to the directory where theme files are located. If changing this variable, you must provide your own theme files.

Default = ansible-automation-platform

sso_automation_platform_realm

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

The name of the realm in SSO.

Default = ansible-automation-platform

sso_automation_platform_realm_displayname

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

Display name for the realm.

Default = Ansible Automation Platform

sso_console_admin_username

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

SSO administration username.

Default = admin

sso_console_admin_password

Required

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

SSO administration password.

sso_custom_keystore_file

Optional

Used for Ansible Automation Platform managed Red Hat Single Sign-On only.

Customer-provided keystore for SSO.

sso_host

Required

Used for Ansible Automation Platform externally managed Red Hat Single Sign-On only.

Automation hub requires SSO and SSO administration credentials for authentication.

If SSO is not provided in the inventory for configuration, then you must use this variable to define the SSO host.

sso_keystore_file_remote

Optional

Used for Ansible Automation Platform managed Red Hat Single Sign-On only.

Set to true if the customer-provided keystore is on a remote node.

Default = false

sso_keystore_name

Optional

Used for Ansible Automation Platform managed Red Hat Single Sign-On only.

Name of keystore for SSO.

Default = ansible-automation-platform

sso_keystore_password

Password for keystore for HTTPS enabled SSO.

Required when using Ansible Automation Platform managed SSO and when HTTPS is enabled. The default install deploys SSO with sso_use_https=true.

sso_redirect_host

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

If sso_redirect_host is set, it is used by the application to connect to SSO for authentication.

This must be reachable from client machines.

sso_ssl_validate_certs

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

Set to true if the certificate is to be validated during connection.

Default = true

sso_use_https

Optional

Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.

If Single Sign-On uses HTTPS.

Default = true

For Ansible automation hub to connect to LDAP directly, the following variables must be configured. A list of other LDAP-related variables (not covered by the automationhub_ldap_xxx variables below) that can be passed using the ldap_extra_settings variable can be found here: https://django-auth-ldap.readthedocs.io/en/latest/reference.html#settings

Variable Description

automationhub_ldap_bind_dn

The name to use when binding to the LDAP server with automationhub_ldap_bind_password.

automationhub_ldap_bind_password

Required

The password to use with automationhub_ldap_bind_dn.

automationhub_ldap_group_search_base_dn

An LDAPSearch object that finds all LDAP groups that users might belong to. If your configuration makes any references to LDAP groups, this and automationhub_ldap_group_type must be set.

Default = None

automationhub_ldap_group_search_filter

Optional

Search filter for finding group membership.

Variable identifies what objectClass type to use for mapping groups with automation hub and LDAP. Used for installing automation hub with LDAP.

Default = (objectClass=Group)

automationhub_ldap_group_search_scope

Optional

Scope to search for groups in an LDAP tree using the django framework for LDAP authentication. Used for installing automation hub with LDAP.

Default = SUBTREE

automationhub_ldap_group_type_class

Optional

Variable identifies the group type used during group searches within the django framework for LDAP authentication. Used for installing automation hub with LDAP.

Default = django_auth_ldap.config:GroupOfNamesType

automationhub_ldap_server_uri

The URI of the LDAP server. This can be any URI that is supported by your underlying LDAP libraries.

automationhub_ldap_user_search_base_dn

An LDAPSearch object that locates a user in the directory. The filter parameter should contain the placeholder %(user)s for the username. It must return exactly one result for authentication to succeed.

automationhub_ldap_user_search_scope

Optional

Scope to search for users in an LDAP tree using django framework for LDAP authentication. Used for installing automation hub with LDAP.

Default = SUBTREE
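As an illustration only, the LDAP-related variables described above might be set together in the installer inventory as follows; the server URI, bind DN, and search base values are placeholders for your directory.

[all:vars]
automationhub_authentication_backend='ldap'
automationhub_ldap_server_uri='ldaps://ldap.example.org:636'
automationhub_ldap_bind_dn='cn=hub-bind,ou=service,dc=example,dc=org'
automationhub_ldap_bind_password='<bind_password>'
automationhub_ldap_user_search_base_dn='ou=people,dc=example,dc=org'
automationhub_ldap_group_search_base_dn='ou=groups,dc=example,dc=org'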

Automation controller variables

Variable Description

admin_password

The password for an administration user to access the UI upon install completion.

automation_controller_main_url

For an alternative front end URL needed for SSO configuration, provide the URL.

automationcontroller_password

Password for your automation controller instance.

automationcontroller_username

Username for your automation controller instance.

nginx_http_port

The port on which the nginx HTTP server listens for inbound connections.

Default = 80

nginx_https_port

The port on which the nginx HTTPS server listens for secure connections.

Default = 443

nginx_hsts_max_age

This variable specifies how long, in seconds, the system should be considered an HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication.

Default = 63072000 seconds, or two years.

nginx_tls_protocols

Defines support for ssl_protocols in Nginx.

Default = TLSv1.2.

node_state

Optional

The status of a node or group of nodes. Valid options are active, deprovision to remove a node from a cluster, or iso_migrate to migrate a legacy isolated node to an execution node.

Default = active.

node_type

For [automationcontroller] group.

Two valid node_types can be assigned for this group.

A node_type=control implies that the node only runs project and inventory updates, but not regular jobs.

A node_type=hybrid has the ability to run everything.

Default for this group = hybrid.

For [execution_nodes] group

Two valid node_types can be assigned for this group.

A node_type=hop implies that the node forwards jobs to an execution node.

A node_type=execution implies that the node can run jobs.

Default for this group = execution.

peers

Optional

The peers variable is used to indicate which nodes a specific host or group connects to. Wherever the peers variable is defined, an outbound connection will be established to the specific host or group.

This variable is used to add tcp-peer entries in the receptor.conf file used for establishing network connections with other nodes. See Peering

The peers variable can be a comma-separated list of hosts and/or groups from the inventory. This is resolved into a set of hosts that is used to construct the receptor.conf file.
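For illustration, peers can mix host names and group names; in this placeholder stanza the hop node peers out to the automationcontroller group while the execution nodes peer to the hop node. Complete peer connection examples appear later in this guide.

[execution_nodes]
execution-node-1.example.com peers=hop-node.example.com
execution-node-2.example.com peers=hop-node.example.com
hop-node.example.com node_type=hop peers=automationcontroller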

pg_database

The name of the postgres database.

Default = awx.

pg_host

The PostgreSQL host, which can be an externally managed database.

pg_password

The password for the PostgreSQL database.

Do not use special characters for pg_password. They can cause the password to fail.

NOTE

You no longer have to provide a pg_hashed_password in your inventory file at the time of installation because PostgreSQL 13 can now store user passwords more securely.

When you supply pg_password in the inventory file for the installer, PostgreSQL uses the SCRAM-SHA-256 hash to secure that password as part of the installation process.

pg_port

The PostgreSQL port to use.

Default = 5432

pg_ssl_mode

One of prefer or verify-full.

Set to verify-full for client-side enforced SSL.

Default = prefer.

pg_username

Your postgres database username.

Default = awx.

postgres_ssl_cert

Location of the postgres SSL certificate.

/path/to/pgsql_ssl.cert

postgres_ssl_key

Location of the postgres SSL key.

/path/to/pgsql_ssl.key

postgres_use_cert

Location of postgres user certificate.

/path/to/pgsql.crt

postgres_use_key

Location of postgres user key.

/path/to/pgsql.key

postgres_use_ssl

If postgres is to use SSL.

receptor_listener_port

Port to use for the receptor connection.

Default = 27199.

supervisor_start_retry_count

When specified (no default value exists), adds startretries = <value specified> to the supervisor config file (/etc/supervisord.d/tower.ini).

See program:x Section Values for further explanation about startretries.

web_server_ssl_cert

Optional

/path/to/webserver.cert

Same as automationhub_ssl_cert but for web server UI and API.

web_server_ssl_key

Optional

/path/to/webserver.key

Same as automationhub_ssl_key but for web server UI and API.
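As a minimal sketch only, several of the variables above might appear together under [all:vars] in the installer inventory; every value shown is a placeholder.

[all:vars]
admin_password='<controller_admin_password>'
pg_host='db.example.org'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<database_password>'
registry_url='registry.redhat.io'
registry_username='<registry_service_account>'
registry_password='<registry_service_account_token>'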

Ansible variables

The following variables control how Ansible Automation Platform interacts with remote hosts.

Additional information on variables specific to certain plugins can be found at https://docs.ansible.com/ansible-core/devel/collections/ansible/builtin/index.html

A list of global configuration options can be found at https://docs.ansible.com/ansible-core/devel/reference_appendices/config.html

Variable Description

ansible_connection

The connection plugin used for the task on the target host.

This can be the name of any Ansible connection plugin. SSH protocol types are smart, ssh, or paramiko.

Default = smart

ansible_host

The IP address or name of the target host to use instead of inventory_hostname.

ansible_port

The connection port number, if not the default (22 for SSH).

ansible_user

The user name to use when connecting to the host.

ansible_password

The password to use to authenticate to the host.

Never store this variable in plain text.

Always use a vault.

ansible_ssh_private_key_file

Private key file used by ssh. Useful if using multiple keys and you do not want to use an SSH agent.

ansible_ssh_common_args

This setting is always appended to the default command line for sftp, scp, and ssh. Useful to configure a ProxyCommand for a certain host (or group).

ansible_sftp_extra_args

This setting is always appended to the default sftp command line.

ansible_scp_extra_args

This setting is always appended to the default scp command line.

ansible_ssh_extra_args

This setting is always appended to the default ssh command line.

ansible_ssh_pipelining

Determines if SSH pipelining is used. This can override the pipelining setting in ansible.cfg. If using SSH key-based authentication, then the key must be managed by an SSH agent.

ansible_ssh_executable

(added in version 2.2)

This setting overrides the default behavior to use the system ssh. This can override the ssh_executable setting in ansible.cfg.

ansible_shell_type

The shell type of the target system. You should not use this setting unless you have set the ansible_shell_executable to a non-Bourne (sh) compatible shell. By default commands are formatted using sh-style syntax. Setting this to csh or fish causes commands executed on target systems to follow the syntax of those shells instead.

ansible_shell_executable

This sets the shell that the ansible controller uses on the target machine, and overrides the executable in ansible.cfg which defaults to /bin/sh.

You should only change it if it is not possible to use /bin/sh, that is, if /bin/sh is not installed on the target machine or cannot be run from sudo.

inventory_hostname

This variable takes the hostname of the machine from the inventory script or the ansible configuration file.

You cannot set the value of this variable.

Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable.
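For illustration, a group of managed hosts might set these connection variables as group variables in an inventory; the group name, user name, and key path below are placeholders, and ansible_password should be supplied from a vault rather than written in the file.

[managed_nodes:vars]
ansible_connection=ssh
ansible_port=22
ansible_user=ansible-svc
ansible_ssh_private_key_file=~/.ssh/aap_managed_nodes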

Event-Driven Ansible controller variables

Variable Description

automationedacontroller_admin_password

The admin password used by the Event-Driven Ansible controller instance.

automationedacontroller_admin_username

Username used by Django to identify and create the admin superuser in Event-Driven Ansible controller.

Default = admin

automationedacontroller_admin_email

Email address used by Django for the admin user for Event-Driven Ansible controller.

Default = admin@example.com

automationedacontroller_disable_https

Boolean flag to disable HTTPS for Event-Driven Ansible controller.

Default = false

automationedacontroller_disable_hsts

Boolean flag to disable HSTS for Event-Driven Ansible controller.

Default = false

automationedacontroller_user_headers

List of additional nginx headers to add to Event-Driven Ansible controller’s nginx configuration.

Default = empty list

automationedacontroller_nginx_tls_files_remote

Boolean flag to specify whether cert sources are on the remote host (true) or local (false).

Default = false

automationedacontroller_allowed_hostnames

List of additional addresses to enable for user access to Event-Driven Ansible controller.

Default = empty list

automationedacontroller_controller_verify_ssl

Boolean flag used to verify Automation Controller’s web certificates when making calls from Event-Driven Ansible controller. Verified is true; not verified is false.

Default = false

automationedacontroller_gunicorn_workers

Number of workers for the API served through gunicorn.

Default = (# of cores or threads) * 2 + 1

automationedacontroller_pg_database

The postgres database used by Event-Driven Ansible controller.

Default = automationedacontroller.

automationedacontroller_pg_host

The hostname of the postgres database used by Event-Driven Ansible controller, which can be an externally managed database.

automationedacontroller_pg_password

The password for the postgres database used by Event-Driven Ansible controller.

Do not use special characters for automationedacontroller_pg_password. They can cause the password to fail.

automationedacontroller_pg_port

The port number of the postgres database used by Event-Driven Ansible controller.

Default = 5432.

automationedacontroller_pg_username

The username for your Event-Driven Ansible controller postgres database.

Default = automationedacontroller.

automationedacontroller_rq_workers

Number of rq workers (Python processes that run in the background) used by Event-Driven Ansible controller.

Default = (# of cores or threads) * 2 + 1
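As an illustration only, a single-node Event-Driven Ansible controller might be described in the installer inventory as follows; the host names and credential values are placeholders.

[automationedacontroller]
eda.example.org

[all:vars]
automationedacontroller_admin_username='admin'
automationedacontroller_admin_password='<eda_admin_password>'
automationedacontroller_pg_host='db.example.org'
automationedacontroller_pg_port=5432
automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<database_password>'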

Planning for automation mesh in your Red Hat Ansible Automation Platform environment

The following topics contain information to help plan an automation mesh deployment in your Ansible Automation Platform environment. The subsequent sections explain the concepts that comprise automation mesh in addition to providing examples on how you can design automation mesh topologies. Simple to complex topology examples are included to illustrate the various ways you can deploy automation mesh.

About automation mesh

Automation mesh is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks.

Red Hat Ansible Automation Platform 2 replaces Ansible Tower and isolated nodes with automation controller and automation hub. Automation controller provides the control plane for automation through its UI, RESTful API, RBAC, workflows, and CI/CD integration, while automation mesh can be used for setting up, discovering, changing, or modifying the nodes that form the control and execution layers.

Automation Mesh introduces:

  • Dynamic cluster capacity that scales independently, allowing you to create, register, group, ungroup and deregister nodes with minimal downtime.

  • Control and execution plane separation that enables you to scale playbook execution capacity independently from control plane capacity.

  • Deployment choices that are resilient to latency, reconfigurable without outage, and that dynamically re-route to choose a different path when outages exist.

  • Mesh routing changes.

  • Connectivity that includes bi-directional, multi-hopped mesh communication possibilities which are Federal Information Processing Standards (FIPS) compliant.

Control and execution planes

Automation mesh makes use of unique node types to create both the control and execution plane. Learn more about the control and execution plane and their node types before designing your automation mesh topology.

Control plane

The control plane consists of hybrid and control nodes. Instances in the control plane run persistent automation controller services such as the web server and task dispatcher, in addition to project updates and management jobs.

  • Hybrid nodes - this is the default node type for control plane nodes, responsible for automation controller runtime functions like project updates, management jobs and ansible-runner task operations. Hybrid nodes are also used for automation execution.

  • Control nodes - control nodes run project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes.

Execution plane

The execution plane consists of execution nodes that execute automation on behalf of the control plane and have no control functions. Hop nodes serve to relay communication between nodes. Nodes in the execution plane only run user-space jobs, and may be geographically separated, with high latency, from the control plane.

  • Execution nodes - Execution nodes run jobs under ansible-runner with podman isolation. This node type is similar to isolated nodes. This is the default node type for execution plane nodes.

  • Hop nodes - similar to a jump host, hop nodes will route traffic to other execution nodes. Hop nodes cannot execute automation.

Peers

Peer relationships define node-to-node connections. You can define peers within the [automationcontroller] and [execution_nodes] groups, or by using the [automationcontroller:vars] or [execution_nodes:vars] groups.

Defining automation mesh node types

The examples in this section demonstrate how to set the node type for the hosts in your inventory file.

You can set the node_type for single nodes in the control plane or execution plane inventory groups. To define the node type for an entire group of nodes, set the node_type in the vars stanza for the group.

  • The allowed values for node_type in the control plane [automationcontroller] group are hybrid (default) and control.

  • The allowed values for node_type in the [execution_nodes] group are execution (default) and hop.

Hybrid node

The following inventory consists of a single hybrid node in the control plane:

[automationcontroller]
control-plane-1.example.com

Control node

The following inventory consists of a single control node in the control plane:

[automationcontroller]
control-plane-1.example.com node_type=control

If you set node_type to control in the vars stanza for the control plane nodes, then all of the nodes in control plane are control nodes.

[automationcontroller]
control-plane-1.example.com

[automationcontroller:vars]
node_type=control

Execution node

The following stanza defines a single execution node in the execution plane:

[execution_nodes]
execution-plane-1.example.com

Hop node

The following stanza defines a single hop node and an execution node in the execution plane. The node_type variable is set for every individual node.

[execution_nodes]
execution-plane-1.example.com node_type=hop
execution-plane-2.example.com

If you want to set the node_type at the group level, you must create separate groups for the execution nodes and the hop nodes.

[execution_nodes]
execution-plane-1.example.com
execution-plane-2.example.com

[execution_group]
execution-plane-2.example.com

[execution_group:vars]
node_type=execution

[hop_group]
execution-plane-1.example.com

[hop_group:vars]
node_type=hop

Peer connections

Create node-to-node connections using the peers= host variable. The following example connects control-plane-1.example.com to execution-node-1.example.com and execution-node-1.example.com to execution-node-2.example.com:

[automationcontroller]
control-plane-1.example.com peers=execution-node-1.example.com

[automationcontroller:vars]
node_type=control

[execution_nodes]
execution-node-1.example.com peers=execution-node-2.example.com
execution-node-2.example.com

Additional resources
  • See the example automation mesh topologies in this guide for more examples of how to implement mesh nodes.

The User Interface

The automation controller User Interface (UI) provides a graphical framework for your IT orchestration requirements. The navigation panel provides quick access to automation controller resources, such as Projects, Inventories, Job Templates, and Jobs.

Note

The automation controller UI is available for technical preview and is subject to change in future releases. To preview the new UI, click the Enable Preview of New User Interface toggle to On from the Miscellaneous System option of the Settings menu.

After saving, log out and log back in to access the new UI from the preview banner. To return to the current UI, click the link on the top banner where indicated.

Access your user profile, the About page, view related documentation, or log out using the icons in the page header.

You can view the activity stream for that user by clicking the btn:[Activity Stream] icon.

Views

The automation controller UI provides several options for viewing information.

Jobs view

  • From the navigation panel, select btn:[Jobs]. This view displays the jobs that have run, including projects, templates, management jobs, SCM updates, and playbook runs.

Jobs view

Schedules view

This view shows all the scheduled jobs that are configured.


Activity Stream

  • From the navigation panel, select btn:[Activity Stream] to display Activity Streams. Most screens have an Activity Stream icon.

Activity Stream

An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user that initiated the event, and the action. The information displayed varies depending on the type of event. Click the btn:[Examine] (View Event Details) icon to display the event log for the change.

event log

You can filter the Activity Stream by the initiating user, by system (if it was system initiated), or by any related object, such as a credential, job template, or schedule.

The Activity Stream on the main Dashboard shows the Activity Stream for the entire instance. Most pages permit viewing an activity stream filtered for that specific object.

Workflow Approvals

  • From the navigation panel, select btn:[Workflow Approvals] to see your workflow approval queue. The list contains actions that require you to approve or deny before a job can proceed.

Host Metrics

  • From the navigation panel, select btn:[Host Metrics] to see the activity associated with hosts, which includes counts on those that have been automated, used in inventories, and deleted.

Host Metrics

Resources Menu

The Resources menu provides access to the following components of automation controller:

  • Templates

  • Credentials

  • Projects

  • Inventories

  • Hosts

Access Menu

The Access menu enables you to configure who has permissions to automation controller resources:

  • Organizations

  • Users

  • Teams

Administration

The Administration menu provides access to the administrative options of automation controller. From here, you can create, view, and edit:

  • Credential Types

  • Notifications

  • Management Jobs

  • Instance Groups

  • Instances

  • Applications

  • Execution Environments

  • Topology View

The Settings menu

Configure global and system-level settings by using the Settings menu, which provides access to automation controller configuration settings.

The Settings page enables administrators to configure the following settings:

  • Authentication

  • Jobs

  • System-level attributes

  • Customize the UI

  • Product license information

Troubleshooting

Use this information to diagnose and resolve issues during backup and recovery.

Automation controller custom resource has the same name as an existing deployment

The name specified for the new AutomationController custom resource must not match an existing deployment or the recovery process will fail.

If your AutomationController custom resource matches an existing deployment, perform the following steps to resolve the issue.

Procedure
  1. Delete the existing AutomationController and the associated postgres PVC:

    oc delete automationcontroller <YOUR_DEPLOYMENT_NAME> -n <YOUR_NAMESPACE>
    
    oc delete pvc postgres-13-<YOUR_DEPLOYMENT_NAME>-13-0 -n <YOUR_NAMESPACE>
  2. Use AutomationControllerRestore with the same deployment_name in it:

    oc apply -f restore.yaml
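The restore.yaml file referenced above is not shown in this procedure. As a rough sketch only, it might resemble the following; the API version and field names are illustrative and should be verified against the AutomationControllerRestore custom resource definition installed in your cluster, and the bracketed values are placeholders.

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerRestore
metadata:
  name: restore-<YOUR_DEPLOYMENT_NAME>
  namespace: <YOUR_NAMESPACE>
spec:
  deployment_name: <YOUR_DEPLOYMENT_NAME>   # name of the AutomationController to recreate
  backup_name: <YOUR_BACKUP_NAME>           # name of the AutomationControllerBackup object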

Attaching your Red Hat Ansible Automation Platform subscription

You must have valid subscriptions attached on all nodes before installing Red Hat Ansible Automation Platform. Attaching your Ansible Automation Platform subscription allows you to access subscription-only resources necessary to proceed with the installation.

Note
Attaching a subscription is unnecessary if you have enabled Simple Content Access Mode on your Red Hat account. Once enabled, you will need to register your systems to either Red Hat Subscription Management (RHSM) or Satellite before installing the Ansible Automation Platform. See Simple Content Access Mode for more information.
Procedure
  1. Obtain the pool_id for your Red Hat Ansible Automation Platform subscription:

    # subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6
    Note

    Do not use MCT4022 as a pool_id for your subscription because it can cause Ansible Automation Platform subscription attachment to fail.

    Example

An example output of the subscription-manager list command. Obtain the pool_id as seen in the Pool ID: section:

    Subscription Name: Red Hat Ansible Automation, Premium (5000 Managed Nodes)
      Provides: Red Hat Ansible Engine
      Red Hat Ansible Automation Platform
      SKU: MCT3695
      Contract:
      Pool ID: <pool_id>
      Provides Management: No
      Available: 4999
      Suggested: 1
  2. Attach the subscription:

    # subscription-manager attach --pool=<pool_id>

You have now attached your Red Hat Ansible Automation Platform subscriptions to all nodes.

Verification
  • Verify the subscription was successfully attached:

# subscription-manager list --consumed

Troubleshooting
  • If you are unable to locate certain packages that came bundled with the Ansible Automation Platform installer, or if you are seeing a Repositories disabled by configuration message, try enabling the repository using the command:

    Red Hat Ansible Automation Platform 2.4 for RHEL 8

    subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms

    Red Hat Ansible Automation Platform 2.4 for RHEL 9

    subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms

Recovering a Red Hat Ansible Automation Platform deployment

If you lose information on your system or experience issues with an upgrade, you can use the backup resources of your deployment instances. Use these procedures to recover your automation controller and automation hub deployment files.

Recovering the Automation controller deployment

Use this procedure to restore a previous controller deployment from an AutomationControllerBackup. The deployment name you provide will be the name of the new AutomationController custom resource that will be created.

Note

The name specified for the new AutomationController custom resource must not match an existing deployment or the recovery process will fail. If the name specified does match an existing deployment, see Troubleshooting for steps to resolve the issue.

Prerequisites
  • You must be authenticated with an Openshift cluster.

  • The automation controller has been deployed to the cluster.

  • An AutomationControllerBackup is available on a PVC in your cluster.

Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Ansible Automation Platform Operator installed on your project namespace.

  4. Select the Automation Controller Restore tab.

  5. Click btn:[Create AutomationControllerRestore].

  6. Enter a Name for the recovery deployment.

  7. Enter a New Deployment name for the restored deployment.

    Note

    This should be different from the original deployment name.

  8. Select the Backup source to restore from. Backup CR is recommended.

  9. Enter the Backup Name of the AutomationControllerBackup object.

  10. Click btn:[Create].

    A new deployment is created and your backup is restored to it. This can take approximately 5 to 15 minutes depending on the size of your database.

Recovering the Automation hub deployment

Use this procedure to restore a previous hub deployment into the namespace. The deployment name you provide will be the name of the new AutomationHub custom resource that will be created.

Note

The name specified for the new AutomationHub custom resource must not match an existing deployment or the recovery process will fail.

Prerequisites
  • You must be authenticated with an Openshift cluster.

  • The automation hub has been deployed to the cluster.

  • An AutomationHubBackup is available on a PVC in your cluster.

Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[Installed Operators].

  3. Select the Ansible Automation Platform Operator installed on your project namespace.

  4. Select the Automation Hub Restore tab.

  5. Click btn:[Create AutomationHubRestore].

  6. Enter a Name for the recovery deployment.

  7. Select the Backup source to restore from. Backup CR is recommended.

  8. Enter the Backup Name of the AutomationHubBackup object.

  9. Click btn:[Create].

    A new deployment is created and your backup is restored to it.

Obtaining a manifest file

You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management. After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform.

To begin, log in to the Red Hat Customer Portal using your administrator user account and follow the procedures in this section.

Create a subscription allocation

Creating a new subscription allocation allows you to set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform.

Procedure
  1. From the Subscription Allocations page, click btn:[New Subscription Allocation].

  2. Enter a name for the allocation so that you can find it later.

  3. Select Type: Satellite 6.8 as the management application.

  4. Click btn:[Create].

Adding subscriptions to a subscription allocation

Once an allocation is created, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform.

Procedure
  1. From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to add a subscription.

  2. Click the Subscriptions tab.

  3. Click btn:[Add Subscriptions].

  4. Enter the number of Ansible Automation Platform Entitlement(s) you plan to add.

  5. Click btn:[Submit].

Verification

After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status will show as Out of Compliance, indicating you have exceeded the number of hosts in your subscription.

Other important information displayed includes the following:

Hosts automated

Host count automated by the job, which consumes the license count

Hosts imported

Host count considering all inventory sources (does not impact hosts remaining)

Hosts remaining

Total host count minus hosts automated

Downloading a manifest file

After an allocation is created and has the appropriate subscriptions on it, you can download the manifest from Red Hat Subscription Management.

Procedure
  1. From the Subscription Allocations page, click the name of the Subscription Allocation for which you want to generate a manifest.

  2. Click the Subscriptions tab.

  3. Click btn:[Export Manifest] to download the manifest file.

Note

The file is saved to your default downloads folder and can now be uploaded to activate Red Hat Ansible Automation Platform.

Managing projects

A Project is a logical collection of Ansible playbooks, represented in the controller. You can manage playbooks and playbook directories in different ways:

  • By placing them manually under the Project Base Path on your automation controller server.

  • By placing your playbooks into a source code management (SCM) system supported by the automation controller. These include Git, Subversion, and Mercurial.

Note

This Getting Started Guide uses lightweight examples to get you up and running. But for production purposes, you must use source control to manage your playbooks. It is best practice to treat your infrastructure as code, and this practice is in line with DevOps ideals.

Setting up a project

Automation controller simplifies the startup process by providing you with a Demo Project that you can work with initially.

Procedure
  1. To review existing projects, select Projects from the navigation panel.

  2. Click btn:[Demo Project] to view its details.


Editing a project

As part of the initial setup you can leave the default Demo Project as it is. You can edit it later.

Procedure
  1. Open the project to edit it by using one of these methods:

    • Go to menu:Details[Edit].

    • From the navigation panel, select menu:Projects[Edit Project] next to the project name and edit the appropriate details.

  2. Save your changes.

Syncing a project

If you want to fetch the latest changes in a project, you can manually start an SCM sync for this project.

Procedure
  1. Open the project to update the SCM-based demo project by using one of these methods:

    • Go to menu:Details[Sync].

    • From the navigation panel, select menu:Projects[Sync Project].

Note

When you add a project set up to use source control, a "sync" starts. This fetches the project details from the configured source control.

Managing organizations in automation controller

An organization is a logical collection of users, teams, projects, and inventories. It is the highest level object in the controller object hierarchy. After you have created an organization, automation controller displays the organization details. You can then manage access and execution environments for the organization.


Reviewing the organization

The Organizations page displays the existing organizations for your installation.

Procedure
  • From the navigation panel, select Organizations.

    Note

    Automation controller automatically creates a default organization. If you have a Self-support level license, you have only the default organization available and must not delete it.

    You can use the default organization as it is initially set up and edit it later.

    Note

    Only Enterprise or Premium licenses can add new organizations.

Enterprise and Premium license users who want to add a new organization should refer to the Organizations section in the Automation Controller User Guide.

Editing an organization

During initial setup, you can leave the default organization as it is, but you can edit it later.

Procedure
  1. Edit the organization by using one of these methods:

    • Go to menu:Details[Edit].

    • From the navigation panel, select menu:Organizations[Edit Organization] next to the organization name and edit the appropriate details.

  2. Save your changes.

Automation controller licensing, updates and support

Red Hat Ansible Automation Platform controller ("automation controller") is a software product provided as part of an annual Red Hat Ansible Automation Platform subscription entered into between you and Red Hat, Inc. ("Red Hat").

Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code.

You must have valid subscriptions attached before installing Ansible Automation Platform. See Attaching Subscriptions for details.

Support

Red Hat offers support to paid Red Hat Ansible Automation Platform customers.

If you or your company has purchased a subscription for Ansible Automation Platform, you can contact the support team at https://access.redhat.com

For more information on the levels of support for your Ansible Automation Platform subscription, see Subscription Types.

For information on what is covered under an Ansible Automation Platform subscription, see Scope of Coverage and Scope of Support.

Trial and evaluation

You require a license to run automation controller. However, there is no fee for a trial license.

Trial licenses for Ansible Automation Platform are available at: http://ansible.com/license

Support is not included in a trial license or during an evaluation of the automation controller software.

Ansible Automation Platform component licenses

To view the license information for the components included in automation controller, refer to /usr/share/doc/automation-controller-<version>/README

where <version> refers to the version of automation controller you have installed.

To view a specific license, refer to /usr/share/doc/automation-controller-<version>/*.txt

where * is the license file name to which you are referring.
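For example, the following commands list the available license files on a controller node and open one of them for reading. This is a quick check from the shell; substitute your installed version and the license file name.

$ ls /usr/share/doc/automation-controller-<version>/
$ less /usr/share/doc/automation-controller-<version>/<license_file>.txt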

Node counting in licenses

The automation controller license defines the number of Managed Nodes that can be managed as part of a Red Hat Ansible Automation Platform subscription.

A typical license says 'License Count: 500', which sets the maximum number of Managed Nodes at 500.

For more information on managed node requirements for licensing, see https://access.redhat.com/articles/3331481

Note

Ansible does not recycle node counts or reset automated hosts.

User roles in automation controller

Users associated with an organization are shown in the Access tab of the organization.


A default administrator user with the role of System Administrator is automatically created and is available to all users of automation controller. You can use it as it is or edit it later. You can add other users to an organization, including a Normal User, System Auditor, or System Administrator, but you must create them first.

For more information, see the Users section in the Automation Controller User Guide.

For the purpose of the getting started guide, leave the default user as it is.

Supported installation scenarios

Red Hat supports the following installation scenarios for Red Hat Ansible Automation Platform:

Additional resources

To edit inventory file parameters to specify a supported installation scenario, see Inventory file examples based on installation scenarios in the Red Hat Ansible Automation Platform Installation Guide.

Standalone automation controller with a database on the same node, or a non-installer managed database

This scenario includes installation of automation controller, including the web frontend, REST API backend, and database on a single machine. It installs PostgreSQL, and configures the automation controller to use that as its database. This is considered the standard automation controller installation scenario.

Standalone automation controller with an external managed database

This scenario includes installation of the automation controller server on a single machine and configures communication with a remote PostgreSQL instance as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS.

Single Event-Driven Ansible controller node with internal database

This scenario includes installation of Event-Driven Ansible controller on a single machine with an internal database. It installs an installer managed PostgreSQL that is similar to the automation controller installation scenario.

Important

Automation controller must be installed before you populate the inventory file with the appropriate Event-Driven Ansible variables.

Standalone automation hub with a database on the same node, or a non-installer managed database

This scenario includes installation of automation hub, including the web frontend, REST API backend, and database on a single machine. It installs PostgreSQL, and configures the automation hub to use that as its database.

Standalone automation hub with an external managed database

This scenario includes installation of the automation hub server on a single machine, and installs a remote PostgreSQL database, managed by the Red Hat Ansible Automation Platform installer.

Platform installation with a database on the automation controller node, or non-installer managed database

This scenario includes installation of automation controller and automation hub with a database on the automation controller node, or a non-installer managed database.

Platform installation with an external managed database

This scenario includes installation of automation controller and automation hub and configures communication with a remote PostgreSQL instance as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS.

Multi-machine cluster installation with an external managed database

This scenario includes installation of multiple automation controller nodes and an automation hub instance and configures communication with a remote PostgreSQL instance as its database. This remote PostgreSQL can be a server you manage, or can be provided by a cloud service such as Amazon RDS. In this scenario, all automation controller nodes are active and can execute jobs, and any node can receive HTTP requests.

Note
  • Running in a cluster setup requires any database that automation controller uses to be external. PostgreSQL must be installed on a machine that is not one of the primary or secondary automation controller nodes. When in a redundant setup, the remote PostgreSQL version requirement is PostgreSQL 13.

    • See Clustering for more information on configuring a clustered setup.

  • Provide a reachable IP address for the [automationhub] host to ensure users can sync content from Private Automation Hub from a different node.

Configuring proxy support for Red Hat Ansible Automation Platform

You can configure Red Hat Ansible Automation Platform to communicate with traffic using a proxy. Proxy servers act as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service or available resource from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. The following sections describe the supported proxy configurations and how to set them up.

Enable proxy support

To provide proxy server support, automation controller handles proxied requests (such as ALB, NLB, HAProxy, Squid, NGINX, and tinyproxy in front of automation controller) by using the REMOTE_HOST_HEADERS list variable in the automation controller settings. By default, REMOTE_HOST_HEADERS is set to ["REMOTE_ADDR", "REMOTE_HOST"].

To enable proxy server support, edit the REMOTE_HOST_HEADERS field in the settings page for your automation controller:

Procedure
  1. On your automation controller, navigate to menu:Settings[Miscellaneous System].

  2. In the REMOTE_HOST_HEADERS field, enter the following values:

    [
      "HTTP_X_FORWARDED_FOR",
      "REMOTE_ADDR",
      "REMOTE_HOST"
    ]

Automation controller determines the remote host’s IP address by searching through the list of headers in REMOTE_HOST_HEADERS until the first IP address is located.
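You can also change this setting through the controller API instead of the UI. The following is a minimal sketch using curl against the /api/v2/settings/system/ endpoint; the controller hostname and admin password are placeholders you must supply, and -k skips certificate verification for lab use only.

$ curl -k -X PATCH https://<controller_server_name>/api/v2/settings/system/ \
    -u admin:<password> \
    -H "Content-Type: application/json" \
    -d '{"REMOTE_HOST_HEADERS": ["HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST"]}'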

Known proxies

When automation controller is configured with REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST'], it assumes that the value of X-Forwarded-For has originated from the proxy/load balancer sitting in front of automation controller. If automation controller is reachable without use of the proxy/load balancer, or if the proxy does not validate the header, the value of X-Forwarded-For can be falsified to fake the originating IP addresses. Using HTTP_X_FORWARDED_FOR in the REMOTE_HOST_HEADERS setting poses a vulnerability.

To avoid this, you can configure a list of known proxies that are allowed using the PROXY_IP_ALLOWED_LIST field in the settings menu on your automation controller. Requests from load balancers and hosts that are not on the known proxies list are rejected.

Configuring known proxies

To configure a list of known proxies for your automation controller, add the proxy IP addresses to the PROXY_IP_ALLOWED_LIST field in the settings page for your automation controller.

Procedure
  1. On your automation controller, navigate to menu:Settings[Miscellaneous System].

  2. In the PROXY_IP_ALLOWED_LIST field, enter IP addresses that are allowed to connect to your automation controller, following the syntax in the example below:

    Example PROXY_IP_ALLOWED_LIST entry
    [
      "example1.proxy.com:8080",
      "example2.proxy.com:8080"
    ]
Important
  • PROXY_IP_ALLOWED_LIST requires that the proxies in the list properly sanitize header input and correctly set an X-Forwarded-For value equal to the real source IP of the client. Automation controller can rely on the IP addresses and hostnames in PROXY_IP_ALLOWED_LIST to provide non-spoofed values for the X-Forwarded-For field.

  • Do not configure HTTP_X_FORWARDED_FOR as an item in `REMOTE_HOST_HEADERS` unless all of the following conditions are satisfied:

    • You are using a proxied environment with SSL termination;

    • The proxy provides sanitization or validation of the X-Forwarded-For header to prevent client spoofing;

    • /etc/tower/conf.d/remote_host_headers.py defines PROXY_IP_ALLOWED_LIST that contains only the originating IP addresses of trusted proxies or load balancers.

Configuring a reverse proxy

You can support a reverse proxy server configuration by adding HTTP_X_FORWARDED_FOR to the REMOTE_HOST_HEADERS field in your automation controller settings. The X-Forwarded-For (XFF) HTTP header field identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.

Procedure
  1. On your automation controller, navigate to menu:Settings[Miscellaneous System].

  2. In the REMOTE_HOST_HEADERS field, enter the following values:

    [
      "HTTP_X_FORWARDED_FOR",
      "REMOTE_ADDR",
      "REMOTE_HOST"
    ]
  3. Add the lines below to /etc/tower/conf.d/custom.py to ensure the application uses the correct headers:

USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
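Changes made to files under /etc/tower/conf.d/ generally take effect only after the controller services restart. A minimal example, using the same restart command referenced later in this guide; run it on each node you changed:

# automation-controller-service restart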

Enable sticky sessions

By default, an Application Load Balancer routes each request independently to a registered target based on the chosen load-balancing algorithm. To avoid authentication errors when running multiple instances of automation hub behind a load balancer, you must enable sticky sessions. Enabling sticky sessions sets a custom application cookie that matches the cookie configured on the load balancer to enable stickiness. This custom cookie can include any of the cookie attributes required by the application.

Additional resources

Disclaimer: Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

Upgrading to Red Hat Ansible Automation Platform 2.4

To upgrade your Red Hat Ansible Automation Platform, start by reviewing planning information to ensure a successful upgrade. You can then download the desired version of the Ansible Automation Platform installer, configure the inventory file in the installation bundle to reflect your environment, and then run the installer.

Ansible Automation Platform upgrade planning

Before you begin the upgrade process, review the following considerations to plan and prepare your Ansible Automation Platform deployment:

Automation controller

  • Even if you have a valid license from a previous version, you must provide your credentials or a subscription manifest upon upgrading to the latest version of automation controller.

  • If you need to upgrade Red Hat Enterprise Linux and automation controller, you must first back up and restore your automation controller data.

  • Clustered upgrades require special attention to instance and instance groups before upgrading.

Automation hub

  • When upgrading to Ansible Automation Platform 2.4, you can either add an existing automation hub API token or generate a new one and invalidate any existing tokens.

Additional resources

Choosing and obtaining a Red Hat Ansible Automation Platform installer

Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the scenarios below and determine which Red Hat Ansible Automation Platform installer meets your needs.

Note

A valid Red Hat customer account is required to access Red Hat Ansible Automation Platform installer downloads on the Red Hat Customer Portal.

Installing with internet access

Choose the Red Hat Ansible Automation Platform (AAP) installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your AAP installer.

Tarball install

  1. Navigate to the Red Hat Ansible Automation Platform download page.

  2. Click btn:[Download Now] for the Ansible Automation Platform <latest-version> Setup.

  3. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz

RPM install

  1. Install Ansible Automation Platform Installer Package

    v.2.4 for RHEL 8 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer

    v.2.4 for RHEL 9 for x86-64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer
Note
The dnf install command enables the repository, which is disabled by default.

When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory.

Installing without internet access

Use the Red Hat Ansible Automation Platform (AAP) Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive.

  1. Navigate to the Red Hat Ansible Automation Platform download page.

  2. Click btn:[Download Now] for the Ansible Automation Platform <latest-version> Setup Bundle.

  3. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz

Setting up the inventory file

Before upgrading your Red Hat Ansible Automation Platform installation, edit the inventory file so that it matches your desired configuration. You can keep the same parameters from your existing Ansible Automation Platform deployment or you can modify the parameters to match any changes to your environment.

Procedure
  1. Navigate to the installation program directory.

    Bundled installer
    $ cd ansible-automation-platform-setup-bundle-2.4-1-x86_64
    Online installer
    $ cd ansible-automation-platform-setup-2.4-1
  2. Open the inventory file for editing.

  3. Modify the inventory file to provision new nodes, deprovision nodes or groups, and import or generate automation hub API tokens.

    You can use the same inventory file from an existing Ansible Automation Platform 2.1 installation if there are no changes to the environment.

    Note

    Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub from a different node. Do not use localhost. If localhost is used, the upgrade will be stopped as part of preflight checks.

Provisioning new nodes in a cluster
  • Add new nodes alongside existing nodes in the inventory file as follows:

    [controller]
    clusternode1.example.com
    clusternode2.example.com
    clusternode3.example.com
    
    [all:vars]
    admin_password='password'
    
    pg_host=''
    pg_port=''
    
    pg_database='<database_name>'
    pg_username='<your_username>'
    pg_password='<your_password>'
Deprovisioning nodes or groups in a cluster
  • Append node_state=deprovision to the node or group within the inventory file.

Importing and generating API tokens

When upgrading from Red Hat Ansible Automation Platform 2.0 or earlier to Red Hat Ansible Automation Platform 2.1 or later, you can use your existing automation hub API token or generate a new token. In the inventory file, edit one of the following fields before running the Red Hat Ansible Automation Platform installer setup script setup.sh:

  • Import an existing API token with the automationhub_api_token flag as follows:

    automationhub_api_token=<api_token>
  • Generate a new API token, and invalidate any existing tokens, with the generate_automationhub_token flag as follows:

    generate_automationhub_token=True

Running the Red Hat Ansible Automation Platform installer setup script

You can run the setup script once you have finished updating the inventory file.

Procedure
  1. Run the setup.sh script

    $ ./setup.sh

The installation will begin.

Procedure
  1. With the login information provided after your installation completed, open a web browser and log in to the automation controller by navigating to its server URL at: https://<CONTROLLER_SERVER_NAME>/

  2. After you have accessed the controller user interface (UI), use the credentials specified during the installation process to log in. The default username is admin. The password for admin is the value specified for admin_password in your inventory file.

  3. Edit these defaults by selecting Users from the navigation panel:

    • Click the btn:[More Actions] icon next to the desired user.

    • Click btn:[Edit].

    • Edit the required details and click btn:[Save].

Deprovisioning individual nodes or groups

You can deprovision automation mesh nodes and instance groups using the Ansible Automation Platform installer. The procedures in this section describe how to deprovision specific nodes or entire groups, with example inventory files for each procedure.

Deprovisioning individual nodes using the installer

You can deprovision nodes from your automation mesh using the Ansible Automation Platform installer. Edit the inventory file to mark the nodes to deprovision, then run the installer. Running the installer also removes all configuration files and logs attached to the node.

Note

You can deprovision any of your inventory’s hosts except for the first host specified in the [automationcontroller] group.

Procedure
  • Append node_state=deprovision to nodes in the installer file you want to deprovision.

Example

This example inventory file deprovisions two nodes from an automation mesh configuration.

[automationcontroller]
126-addr.tatu.home ansible_host=192.168.111.126  node_type=control
121-addr.tatu.home ansible_host=192.168.111.121  node_type=hybrid  routable_hostname=121-addr.tatu.home
115-addr.tatu.home ansible_host=192.168.111.115  node_type=hybrid node_state=deprovision

[automationcontroller:vars]
peers=connected_nodes

[execution_nodes]
110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928
108-addr.tatu.home ansible_host=192.168.111.108 receptor_listener_port=29182 node_state=deprovision
100-addr.tatu.home ansible_host=192.168.111.100 peers=110-addr.tatu.home node_type=hop

Deprovisioning isolated nodes

You have the option to manually remove any isolated nodes using the awx-manage deprovisioning utility.

Warning
Use the deprovisioning command to remove only isolated nodes that have not migrated to execution nodes. To deprovision execution nodes from your automation mesh architecture, use the installer method instead.
Procedure
  1. Shut down the instance:

    $ automation-controller-service stop
  2. Run the deprovision command from another instance, replacing host_name with the name of the node as listed in the inventory file:

    $ awx-manage deprovision_instance --hostname=<host_name>

Deprovisioning groups using the installer

You can deprovision entire groups from your automation mesh using the Ansible Automation Platform installer. Running the installer will remove all configuration files and logs attached to the nodes in the group.

Note

You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group.

Procedure
  • Add node_state=deprovision to the [group:vars] associated with the group you wish to deprovision.

Example
[execution_nodes]
execution-node-1.example.com peers=execution-node-2.example.com
execution-node-2.example.com peers=execution-node-3.example.com
execution-node-3.example.com peers=execution-node-4.example.com
execution-node-4.example.com peers=execution-node-5.example.com
execution-node-5.example.com peers=execution-node-6.example.com
execution-node-6.example.com peers=execution-node-7.example.com
execution-node-7.example.com

[execution_nodes:vars]
node_state=deprovision

Deprovisioning isolated instance groups

You have the option to manually remove any isolated instance groups using the awx-manage deprovisioning utility.

Warning
Use the deprovisioning command to remove only isolated instance groups. To deprovision instance groups from your automation mesh architecture, use the installer method instead.
Procedure
  • Run the following command, replacing <name> with the name of the instance group:

    $ awx-manage unregister_queue --queuename=<name>

Automation controller overview

Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform makes it possible for users across an organization to share, vet, and manage automation content by means of a simple, powerful, and agentless technical implementation. IT managers can provide guidelines on how automation is applied to individual teams. Meanwhile, automation developers retain the freedom to write tasks that use existing knowledge, without the operational overhead of conforming to complex tools and frameworks. It is a more secure and stable foundation for deploying end-to-end automation solutions, from hybrid cloud to the edge.

Ansible Automation Platform includes automation controller, which enables users to define, operate, scale, and delegate automation across their enterprise.

Real-time playbook output and exploration

Automation controller enables you to watch playbooks run in real time, seeing each host as it checks in. You can go back and explore the results for specific tasks and hosts in great detail; search for specific plays or hosts and see just those results, or locate errors that need to be corrected.

“Push Button” automation

Automation controller enables you to access your favorite projects and re-trigger execution from the web interface. Automation controller asks for input variables, prompts for your credentials, starts and monitors jobs, and displays results and host history.

Simplified role-based access control and auditing

Automation controller enables you to:

  • Grant permissions to perform a specific task (such as to view, create, or modify a file) to different teams or explicit users through role-based access control (RBAC).

  • Keep some projects private, while enabling some users to edit inventories, and others to run playbooks against certain systems, either in check (dry run) or live mode.

  • Enable certain users to use credentials without exposing the credentials to them.

Regardless of what you do, automation controller records the history of operations and who made them, including objects edited and jobs launched.

If you want to give any user or team permissions to use a job template, you can assign permissions directly on the job template. Credentials are full objects in the automation controller RBAC system, and can be assigned to multiple users or teams for use.

Automation controller includes an Auditor type, who can see all aspects of the system's automation but has no permission to run or change automation, for those who need a system-level auditor. This can also be useful for a service account that scrapes automation information from the REST API. For more information, see Role-Based Access Controls.

Cloud and autoscaling flexibility

Automation controller features a powerful optional provisioning callback feature that enables nodes to request configuration on demand. This is an ideal solution for a cloud auto-scaling scenario, integrating with provisioning servers like Cobbler, or when dealing with managed systems with unpredictable uptimes. It requires no management software to be installed on remote nodes, the callback solution can be triggered by a call to curl or wget, and can be embedded in init scripts, kickstarts, or preseeds. Access is controlled such that only machines listed in the inventory can request configuration.
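As an illustration, a newly provisioned machine can request its own configuration with a single request to a job template's callback URL. This is a sketch only; the controller hostname, job template ID, and host config key are placeholder values for your environment, and -k skips certificate verification for lab use only.

$ curl -s -k \
    --data "host_config_key=<host_config_key>" \
    https://<controller_server_name>/api/v2/job_templates/<job_template_id>/callback/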

The ideal RESTful API

The automation controller REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser enables API exploration from the API root at http://<server name>/api/, showing off every resource and relation. Everything that can be done in the user interface can be done in the API.
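For example, you can explore the API root and check service availability from the command line. A minimal sketch, assuming basic authentication with the admin account and a self-signed certificate (-k):

$ curl -k -u admin:<password> https://<server name>/api/v2/ping/
$ curl -k -u admin:<password> https://<server name>/api/v2/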

Backup and restore

Ansible Automation Platform can back up and restore your system or systems, making it easy for you to back up and replicate your instance as required.

Ansible Galaxy integration

By including an Ansible Galaxy requirements.yml file in your project directory, automation controller automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. For more information, see Ansible Galaxy Support.
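As an illustration, a minimal roles requirements file might look like the following. This is a sketch only; the role names are examples from Ansible Galaxy, and in a controller project the file is typically placed at roles/requirements.yml (collections are listed in collections/requirements.yml).

$ cat <<'EOF' > roles/requirements.yml
---
# Example roles fetched from Ansible Galaxy when the project syncs
- src: geerlingguy.nginx
- src: geerlingguy.firewall
EOF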

Inventory support for OpenStack

Dynamic inventory support is available for OpenStack. This enables you to easily target any of the virtual machines or images running in your OpenStack cloud.

Remote command execution

When you want to perform a simple task on a few hosts, whether adding a single user, updating a single security vulnerability, or restarting a misbehaving service, automation controller includes remote command execution. Any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory, enabling you to manage your systems quickly and easily. Plus, it is all backed by an RBAC engine and detailed audit logging, removing any questions regarding who has done what to what machines.

System tracking

You can collect facts using the fact caching feature. For more information, see Fact Caching.

Integrated notifications

Automation controller enables you to easily keep track of the status of your automation.

You can configure the following notifications:

  • stackable notifications for job templates, projects, or entire organizations

  • different notifications for job start, job success, job failure, and job approval (for workflow nodes)

The following notification sources are supported:

  • Email

  • Grafana

  • IRC

  • Mattermost

  • PagerDuty

  • Rocket.Chat

  • Slack

  • Twilio

  • Webhook (post to an arbitrary webhook, for integration into other tools)

You can also customize notification messages for each of the preceding notification types.

Red Hat Satellite integration

Dynamic inventory sources for Red Hat Satellite 6 are supported.

Red Hat Insights integration

Automation controller supports integration with Red Hat Insights, enabling Insights playbooks to be used as an Ansible Automation Platform project.

User Interface

The user interface is organized with intuitive navigational elements. Information is displayed at-a-glance, so you can find and use the automation you need. Compact and expanded viewing modes show and hide information as required, and built-in attributes make it easy to sort.

Custom Virtual Environments

Custom Ansible environment support enables you to have different Ansible environments and specify custom paths for different teams and jobs.

Authentication enhancements

Automation controller supports:

  • LDAP

  • SAML

  • token-based authentication

LDAP and SAML support enable you to integrate your enterprise account information in a more flexible manner.

Token-based authentication permits authentication of third-party tools and services with automation controller through integrated OAuth 2 token support.

Cluster management

Run-time management of cluster groups enables configurable scaling.

Container platform support

Ansible Automation Platform is available as a containerized pod service for Red Hat OpenShift Container Platform that can be scaled up and down as required.

Workflow enhancements

To model your complex provisioning, deployment, and orchestration workflows, you can use automation controller expanded workflows in several ways:

  • Inventory overrides for Workflows: You can override an inventory across a workflow at workflow definition time, or at launch time. Automation controller enables you to define your application deployment workflows, and then re-use them in multiple environments.

  • Convergence nodes for Workflows: When modeling complex processes, you must sometimes wait for multiple steps to finish before proceeding. Automation controller workflows can replicate this; workflow steps can wait for any number of previous workflow steps to complete properly before proceeding.

  • Workflow Nesting: You can re-use individual workflows as components of a larger workflow. Examples include combining provisioning and application deployment workflows into a single workflow.

  • Workflow Pause and Approval: You can build workflows containing approval nodes that require user intervention. This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the next step in the workflow.

Job distribution

Automation controller offers the ability to take a fact gathering or configuration job running across thousands of machines and divide it into slices that can be distributed across your automation controller cluster for increased reliability, faster job completion, and improved cluster use. If you need to change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate, automation controller provides the means.

Support for deployment in a FIPS-enabled environment

Automation controller deploys and runs in restricted modes such as FIPS.

Limit the number of hosts per organization

Many large organizations have instances shared among many organizations. To ensure that one organization cannot use all the licensed hosts, this feature enables superusers to set a specified upper limit on how many licensed hosts can be allocated to each organization. The automation controller algorithm factors in changes to the limit for an organization and the total number of hosts across all organizations. Inventory updates fail if an inventory synchronization brings an organization out of compliance with the policy. Additionally, superusers are able to over-allocate their licenses, with a warning.

Inventory plugins

Beginning with Ansible 2.9, the following inventory plugins are used from upstream collections:

  • amazon.aws.aws_ec2

  • community.vmware.vmware_vm_inventory

  • azure.azcollection.azure_rm

  • google.cloud.gcp_compute

  • theforeman.foreman.foreman

  • openstack.cloud.openstack

  • ovirt.ovirt.ovirt

  • awx.awx.tower

Secret management system

With a secret management system, external credentials are stored and supplied for use in automation controller so you need not provide them directly.

Automation hub integration

Automation hub acts as a content provider for automation controller, requiring both an automation controller deployment and an automation hub deployment running alongside each other.

Installing Ansible Automation Platform Operator from the OpenShift Container Platform CLI

Use these instructions to install the Ansible Automation Platform Operator on Red Hat OpenShift Container Platform from the OpenShift Container Platform command-line interface (CLI) using the oc command.

Prerequisites

  • Access to Red Hat OpenShift Container Platform using an account with operator installation permissions.

  • The OpenShift Container Platform CLI oc command is installed on your local system. Refer to Installing the OpenShift CLI in the Red Hat OpenShift Container Platform product documentation for further information.

Subscribing a namespace to an operator using the OpenShift Container Platform CLI

Use this procedure to subscribe a namespace to an operator.

Procedure
  1. Create a project for the operator:

    oc new-project ansible-automation-platform
  2. Create a file called sub.yaml.

  3. Add the following YAML code to the sub.yaml file.

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
      name: ansible-automation-platform
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ansible-automation-platform-operator
      namespace: ansible-automation-platform
    spec:
      targetNamespaces:
        - ansible-automation-platform
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ansible-automation-platform
      namespace: ansible-automation-platform
    spec:
      channel: 'stable-2.4'
      installPlanApproval: Automatic
      name: ansible-automation-platform-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    ---
    apiVersion: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    metadata:
      name: example
      namespace: ansible-automation-platform
    spec:
      replicas: 1

    This file creates a Subscription object called ansible-automation-platform that subscribes the ansible-automation-platform namespace to the ansible-automation-platform-operator operator.

    It then creates an AutomationController object called example in the ansible-automation-platform namespace.

    To change the Automation controller name from example, edit the name field in the kind: AutomationController section of sub.yaml, replacing example with the name you want to use (shown as <automation_controller_name> in the following snippet):

    apiVersion: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    metadata:
      name: <automation_controller_name>
      namespace: ansible-automation-platform
  4. Run the oc apply command to create the objects specified in the sub.yaml file:

    oc apply -f sub.yaml

To verify that the namespace has been successfully subscribed to the ansible-automation-platform-operator operator, run the oc get subs command:

$ oc get subs -n ansible-automation-platform

For further information about subscribing namespaces to operators, see Installing from OperatorHub using the CLI in the Red Hat OpenShift Container Platform Operators guide.

You can use the OpenShift Container Platform CLI to fetch the web address and the password of the Automation controller that you created.

Fetching Automation controller login details from the OpenShift Container Platform CLI

To log in to the Automation controller, you need the web address and the password.

Fetching the automation controller web address

A Red Hat OpenShift Container Platform route exposes a service at a host name, so that external clients can reach it by name. When you created the automation controller instance, a route was created for it. The route inherits the name that you assigned to the automation controller object in the YAML file.

Use the following command to fetch the routes:

oc get routes -n <controller_namespace>

In the following example, the example automation controller is running in the ansible-automation-platform namespace.

$ oc get routes -n ansible-automation-platform

NAME      HOST/PORT                                              PATH   SERVICES          PORT   TERMINATION     WILDCARD
example   example-ansible-automation-platform.apps-crc.testing          example-service   http   edge/Redirect   None

The address for the automation controller instance is example-ansible-automation-platform.apps-crc.testing.

Fetching the automation controller password

The YAML block for the automation controller instance in sub.yaml assigns values to the name and admin_user keys. Use these values in the following command to fetch the password for the automation controller instance.

oc get secret/<controller_name>-<admin_user>-password -o yaml

The default value for admin_user is admin. Modify the command if you changed the admin username in sub.yaml.

The following example retrieves the password for an automation controller object called example:

oc get secret/example-admin-password -o yaml

The password for the automation controller instance is listed in the metadata field in the output:

$ oc get secret/example-admin-password -o yaml

apiVersion: v1
data:
  password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Secret","metadata":{"labels":{"app.kubernetes.io/component":"automationcontroller","app.kubernetes.io/managed-by":"automationcontroller-operator","app.kubernetes.io/name":"example","app.kubernetes.io/operator-version":"","app.kubernetes.io/part-of":"example"},"name":"example-admin-password","namespace":"ansible-automation-platform"},"stringData":{"password":"88TG88TG88TG88TG88TG88TG88TG88TG"}}'
  creationTimestamp: "2021-11-03T00:02:24Z"
  labels:
    app.kubernetes.io/component: automationcontroller
    app.kubernetes.io/managed-by: automationcontroller-operator
    app.kubernetes.io/name: example
    app.kubernetes.io/operator-version: ""
    app.kubernetes.io/part-of: example
  name: example-admin-password
  namespace: ansible-automation-platform
  resourceVersion: "185185"
  uid: 39393939-5252-4242-b929-665f665f665f

For this example, the password is 88TG88TG88TG88TG88TG88TG88TG88TG.
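Alternatively, you can extract and decode only the password value in one step. A minimal sketch, assuming the same example secret name and namespace:

$ oc get secret/example-admin-password -n ansible-automation-platform -o jsonpath='{.data.password}' | base64 --decode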

Additional resources

Configuring Ansible automation controller on OpenShift Container Platform

During a Kubernetes upgrade, automation controller must be running.

Minimizing downtime during OpenShift Container Platform upgrade

Make the following configuration changes in automation controller to minimize downtime during the upgrade.

Prerequisites
  • Ansible Automation Platform 2.4

  • Ansible automation controller 4.4

  • OpenShift Container Platform

    • > 4.10.42

    • > 4.11.16

    • > 4.12.0

  • High availability (HA) deployment of Postgres

  • Multiple worker nodes that automation controller pods can be scheduled on

Procedure
  1. Enable RECEPTOR_KUBE_SUPPORT_RECONNECT in AutomationController specification:

    apiVersion: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    metadata:
      ...
    spec:
      ...
      ee_extra_env: |
        - name: RECEPTOR_KUBE_SUPPORT_RECONNECT
          value: enabled
  2. Enable the graceful termination feature in AutomationController specification:

    termination_grace_period_seconds: <time to wait for job to finish>
  3. Configure podAntiAffinity for web and task pod to spread out the deployment in AutomationController specification:

    task_affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - awx-task
              topologyKey: topology.kubernetes.io/zone
            weight: 100
    web_affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - awx-web
              topologyKey: topology.kubernetes.io/zone
            weight: 100
  4. Configure PodDisruptionBudget in OpenShift Container Platform:

    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: automationcontroller-job-pods
    spec:
      maxUnavailable: 0
      selector:
        matchExpressions:
          - key: ansible-awx-job-id
            operator: Exists
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: automationcontroller-web-pods
    spec:
      minAvailable: 1
      selector:
        matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
              - <automationcontroller_instance_name>-web
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: automationcontroller-task-pods
    spec:
      minAvailable: 1
      selector:
        matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
              - <automationcontroller_instance_name>-task

Backup and recovery of Red Hat Ansible Automation Platform

To safeguard against unexpected data loss and application errors, it is critical that you perform periodic backups of your Red Hat Ansible Automation Platform deployment. In addition to data loss prevention, backups allow you to fall back to a different deployment state.

About backup and recovery

Red Hat recommends backing up deployments of Red Hat Ansible Automation Platform in your Red Hat OpenShift Container Platform environment to prevent data loss.

A backup resource of your Red Hat Ansible Automation Platform deployment includes the following:

  • Custom deployment of specific values in the spec section of the Ansible Automation Platform custom resource object

  • Backup of the PostgreSQL database

  • secret_key, admin_password, and broadcast_websocket secrets

  • Database configuration

Note

Be sure to secure your backup resources because they can include sensitive information.

Backup recommendations

Recovering from data loss requires that you plan for and create backup resources of your Red Hat Ansible Automation Platform deployments on a regular basis. At a minimum, Red Hat recommends backing up deployments of Red Hat Ansible Automation Platform under the following circumstances:

  • Before upgrading your Red Hat Ansible Automation Platform deployments

  • Before upgrading your OpenShift cluster

  • Once per week. This is particularly important if your environment is configured for automatic upgrades.
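For operator-based deployments, you create a backup by applying a backup custom resource. The following is a minimal sketch, assuming the automationcontroller.ansible.com/v1beta1 API, a namespace of ansible-automation-platform, and a controller deployment named example; adjust the names for your environment and verify the spec fields against your operator version.

$ cat <<'EOF' | oc apply -f -
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerBackup
metadata:
  name: controller-backup-1
  namespace: ansible-automation-platform
spec:
  # Name of the AutomationController deployment to back up
  deployment_name: example
EOF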

Encrypting plaintext passwords in automation controller configuration files

Automation controller configuration files store passwords in plain text. A user with access to the /etc/tower/conf.d/ directory can view the passwords used to access the database. Access to the directories is controlled with permissions, so they are protected, but some security findings deem this protection to be inadequate. The solution is to encrypt the passwords individually.

Creating PostgreSQL password hashes

Procedure
  1. On your automation controller node, run the following:

    # awx-manage shell_plus
  2. Then run the following from the python prompt:

    >>> from awx.main.utils import encrypt_value, get_encryption_key
    >>> postgres_secret = encrypt_value('$POSTGRES_PASS')
    >>> print(postgres_secret)
    Note

    Replace the $POSTGRES_PASS variable with the actual plain text password you wish to encrypt.

    The output should resemble the following:

    $encrypted$UTF8$AESCBC$Z0FBQUFBQmtLdGNRWXFjZGtkV1ZBR3hkNGVVbFFIU3hhY21UT081eXFkR09aUWZLcG9TSmpndmZYQXFyRHVFQ3ZYSE15OUFuM1RHZHBqTFU3S0MyNEo2Y2JWUURSYktsdmc9PQ==
  3. Copy the full values of these hashes and save them.

    • The hash value begins with $encrypted$ and must be copied in full, including that prefix, not just the string of characters that follows it, as shown in the following example:

      $encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjJyNDJRVTRKaFRIR09Ib2U5TGdaYVRfcXFXRjlmdmpZNjdoZVpEZ21QRWViMmNDOGJaM0dPeHN2b194NUxvQ1M5X3dSc1gxQ29TdDBKRkljWHc9PQ==

      Note that the $*_PASS values are already in plain text in your inventory file.

These steps supply the hash values that replace the plain text passwords within the automation controller configuration files.

Encrypting the Postgres password

The following procedure replaces the plain text passwords with encrypted values. Perform the following steps on each node in the cluster:

Procedure
  1. Edit /etc/tower/conf.d/postgres.py using:

    $ vim /etc/tower/conf.d/postgres.py
  2. Add the following line to the top of the file.

    from awx.main.utils import decrypt_value, get_encryption_key
  3. Remove the password value listed after 'PASSWORD': and replace it with the following line, replacing the supplied value of $encrypted.. with your own hash value:

    decrypt_value(get_encryption_key('value'),'$encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjJyNDJRVTRKaFRIR09Ib2U5TGdaYVRfcXFXRjlmdmpZNjdoZVpEZ21QRWViMmNDOGJaM0dPeHN2b194NUxvQ1M5X3dSc1gxQ29TdDBKRkljWHc9PQ=='),
    Note

    The hash value in this step is the output value of postgres_secret.

  4. The full postgres.py resembles the following:

    # Ansible Automation platform controller database settings.
    from awx.main.utils import decrypt_value, get_encryption_key

    DATABASES = {
        'default': {
            'ATOMIC_REQUESTS': True,
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'awx',
            'USER': 'awx',
            'PASSWORD': decrypt_value(get_encryption_key('value'),'$encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjJyNDJRVTRKaFRIR09Ib2U5TGdaYVRfcXFXRjlmdmpZNjdoZVpEZ21QRWViMmNDOGJaM0dPeHN2b194NUxvQ1M5X3dSc1gxQ29TdDBKRkljWHc9PQ=='),
            'HOST': '127.0.0.1',
            'PORT': 5432,
        }
    }

Restarting automation controller services

Procedure
  1. When encryption is completed on all nodes, perform a restart of services across the cluster using:

    # automation-controller-service restart
  2. Navigate to the UI, and verify you are able to run jobs across all nodes.

Deploying Event-Driven Ansible controller with Ansible Automation Platform Operator on OpenShift Container Platform

Event-Driven Ansible controller is the interface for event-driven automation and introduces automated resolution of IT requests. This component helps you connect to sources of events and acts on those events using rulebooks. When you deploy Event-Driven Ansible controller, you can automate decision making, use numerous event sources, implement event-driven automation within and across multiple IT use cases, and achieve more efficient service delivery.

Use the following instructions to install Event-Driven Ansible with your Ansible Automation Platform Operator on OpenShift Container Platform.

Prerequisites
  • You have installed Ansible Automation Platform Operator on OpenShift Container Platform.

  • You have installed and configured automation controller.

Procedure
  1. Select menu:Operators[Installed Operators].

  2. Locate and select your installation of Ansible Automation Platform.

  3. Under Provided APIs, locate the Event-Driven Ansible modal and click Create instance.

    This takes you to the Form View to customize your installation.

  4. Specify your controller URL.

    If you deployed automation controller in OpenShift as well, you can find the URL on the Routes page.

    Note

    This is the only required customization, but you can customize other options using the UI form or directly in the YAML configuration tab, if desired.

  5. Click btn:[Create]. This deploys Event-Driven Ansible controller in the namespace you specified.

    After a couple of minutes, when the installation is marked as Successful, you can find the URL for the Event-Driven Ansible UI on the Routes page in the OpenShift UI.

  6. From the navigation panel, select menu:Networking[Routes] to find the new Route URL that has been created for you.

    Routes are listed according to the name of your custom resource.

  7. Click the new URL to navigate to Event-Driven Ansible in the browser.

  8. From the navigation panel, select menu:Workloads[Secrets] and locate the Admin Password k8s secret that was created for you, unless you specified a custom one.

    Secrets are listed according to the name of your custom resource and appended with -admin-password.

    Note

    You can use the password value in the secret to log in to the Event-Driven Ansible controller UI. The default user is admin.

Migrating data to Ansible Automation Platform 2.4

For platform administrators looking to complete an upgrade to the Ansible Automation Platform 2.4, there may be additional steps needed to migrate data to a new instance:

Migrating from legacy virtual environments (venvs) to automation execution environments

Ansible Automation Platform 2.4 moves you away from custom Python virtual environments (venvs) in favor of automation execution environments: containerized images that package the components needed to execute and scale your Ansible automation. This includes Ansible Core, Ansible Content Collections, Python dependencies, Red Hat Enterprise Linux UBI 8, and any additional package dependencies.

To migrate your venvs to execution environments, first use the awx-manage command to list and export the venvs from your original instance, and then use ansible-builder to create execution environments, as sketched below.
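A minimal sketch of the first step, assuming the awx-manage virtual environment subcommands documented for this migration are available on your original controller node; the venv path is a placeholder for one of the paths returned by the list command.

# awx-manage list_custom_venvs
# awx-manage export_custom_venv /opt/my-envs/custom-venv/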

Migrating to Ansible Engine 2.9 images using Ansible Builder

To migrate Ansible Engine 2.9 images for use with Ansible Automation Platform 2.4, the ansible-builder tool automates the process of rebuilding images (including their custom plugins and dependencies) for use with automation execution environments. A minimal example follows.
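The following sketch builds a new execution environment image with ansible-builder. The definition file contents, requirements file names, and image tag are example values; adjust them to match the dependencies exported from your environment.

$ cat <<'EOF' > execution-environment.yml
---
version: 1
dependencies:
  # Collections and Python packages exported from the original environment
  galaxy: requirements.yml
  python: requirements.txt
EOF
$ ansible-builder build --file execution-environment.yml --tag my_ee:latest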

Additional resources

For more information on using Ansible Builder to build execution environments, see Creating and Consuming Execution Environments.

Migrating to Ansible Core 2.13

When upgrading to Ansible Core 2.13, you must update your playbooks, plugins, and other parts of your Ansible infrastructure so that they are supported by the latest version of Ansible Core. For instructions on updating your Ansible content for Ansible Core 2.13 compatibility, see the Ansible-core 2.13 Porting Guide.

Migrating isolated nodes to execution nodes

Upgrading from version 1.x to the latest version of the Red Hat Ansible Automation Platform requires platform administrators to migrate data from isolated legacy nodes to execution nodes. This migration is necessary to deploy the automation mesh.

This guide explains how to perform a side-by-side migration. This ensures that the data on your original automation environment remains untouched during the migration process.

The migration process involves the following steps:

  1. Verify upgrade configurations.

  2. Backup original instance.

  3. Deploy new instance for a side-by-side upgrade.

  4. Recreate instance groups in the new instance using automation controller.

  5. Restore original backup to new instance.

  6. Set up execution nodes and upgrade instance to Red Hat Ansible Automation Platform 2.4.

  7. Configure upgraded controller instance.

Prerequisites for upgrading Ansible Automation Platform

Before you begin to upgrade Ansible Automation Platform, ensure your environment meets the following node and configuration requirements.

Node requirements

The following specifications are required for the nodes involved in the Ansible Automation Platform upgrade process:

  • 16 GB of RAM for controller nodes, database nodes, execution nodes, and hop nodes.

  • 4 CPUs for controller nodes, database nodes, execution nodes, and hop nodes.

  • 150 GB+ disk space for database node.

  • 40 GB+ disk space for non-database nodes.

  • DHCP reservations use infinite leases to deploy the cluster with static IP addresses.

  • DNS records for all nodes.

  • Red Hat Enterprise Linux 8 or later 64-bit (x86) installed for all nodes.

  • Chrony configured for all nodes.

  • Python 3.9 or later for all content dependencies.

Automation controller configuration requirements

The following automation controller configurations are required before you proceed with the Ansible Automation Platform upgrade process:

Configuring NTP server using Chrony

Each Ansible Automation Platform node in the cluster must have access to an NTP server. Use chronyd to synchronize the system clock with NTP servers. This ensures that cluster nodes that use SSL certificates requiring validation do not fail because the date and time between nodes are out of sync.

This is required for all nodes used in the upgraded Ansible Automation Platform cluster:

  1. Install chrony:

    # dnf install chrony --assumeyes
  2. Open /etc/chrony.conf using a text editor.

  3. Locate the public server pool section and modify it to include the appropriate NTP server addresses. Only one server is required, but three are recommended. Add the 'iburst' option to speed up the time it takes to properly sync with the servers:

    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server <ntp-server-address> iburst
  4. Save changes within the /etc/chrony.conf file.

  5. Start and enable the chronyd daemon:

    # systemctl --now enable chronyd.service
  6. Verify the chronyd daemon status:

    # systemctl status chronyd.service
Attaching Red Hat subscription on all nodes

Red Hat Ansible Automation Platform requires you to have valid subscriptions attached to all nodes. You can verify that your current node has a Red Hat subscription by running the following command:

# subscription-manager list --consumed

If there is no Red Hat subscription attached to the node, see Attaching your Ansible Automation Platform subscription for more information.
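
For example, registering a node and attaching a subscription typically looks like the following (the pool ID shown is a placeholder):

# subscription-manager register
# subscription-manager list --available --all
# subscription-manager attach --pool=<pool_id>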

Creating non-root user with sudo privileges

Before you upgrade Ansible Automation Platform, it is recommended to create a non-root user with sudo privileges for the deployment process. This user is used for:

  • SSH connectivity.

  • Passwordless authentication during installation.

  • Privilege escalation (sudo) permissions.

The following example uses ansible to name this user. On all nodes used in the upgraded Ansible Automation Platform cluster, create a non-root user named ansible and generate an ssh key:

  1. Create a non-root user:

    # useradd ansible
  2. Set a password for your user:

    # passwd ansible (1)
    Changing password for ansible.
    Old Password:
    New Password:
    Retype New Password:
    1. Replace ansible with the non-root user from step 1, if using a different name

  3. Generate an ssh key as the user:

    $ ssh-keygen -t rsa
  4. Disable password requirements when using sudo:

    # echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ansible
Copying SSH keys to all nodes

With the ansible user created, copy the ssh key to all the nodes used in the upgraded Ansible Automation Platform cluster. This ensures that when the Ansible Automation Platform installation runs, it can ssh to all the nodes without a password:

$ ssh-copy-id ansible@node-1.example.com
Note
If you are running within a cloud provider, you might instead need to create an ~/.ssh/authorized_keys file containing the public key for the ansible user on all your nodes, and set the permissions on the authorized_keys file so that only the owner (ansible) has read and write access (permissions 600).
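
For clusters with many nodes, you can repeat the copy for each host; for example, a minimal loop over a hypothetical hosts.txt file that lists one node FQDN per line:

$ for node in $(cat hosts.txt); do ssh-copy-id ansible@"$node"; done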
Configuring firewall settings

Configure the firewall settings on all the nodes used in the upgraded Ansible Automation Platform cluster to permit access to the appropriate services and ports for a successful Ansible Automation Platform upgrade. For Red Hat Enterprise Linux 8 or later, enable the firewalld daemon to enable the access needed for all nodes:

  1. Install the firewalld package:

    # dnf install firewalld --assumeyes
  2. Start the firewalld service:

    # systemctl start firewalld
  3. Enable the firewalld service:

    # systemctl enable --now firewalld

Ansible Automation Platform configuration requirements

The following Ansible Automation Platform configurations are required before you proceed with the Ansible Automation Platform upgrade process:

Configuring firewall settings for execution and hop nodes

After upgrading your Red Hat Ansible Automation Platform instance, add the automation mesh port on the mesh nodes (execution and hop nodes) to enable automation mesh functionality. The default port used for the mesh networks on all nodes is 27199/tcp. You can configure the mesh network to use a different port by specifying receptor_listener_port as the variable for each node within your inventory file, as in the example below.
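
For example, an illustrative inventory snippet that overrides the default mesh port on a per-node basis might look like this (the host names and port value are placeholders):

[execution_nodes]
execution-node-1.example.com receptor_listener_port=29182
hop-node-1.example.com node_type=hop receptor_listener_port=29182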

Within your hop and execution nodes, set the firewalld port to be used for installation.

  1. Ensure that firewalld is running:

    $ sudo systemctl status firewalld
  2. Add the firewalld port on the node (for example, port 27199):

    $ sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp
  3. Reload firewalld:

    $ sudo firewall-cmd --reload
  4. Confirm that the port is open:

    $ sudo firewall-cmd --list-ports

Back up your Ansible Automation Platform instance

Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment:

  1. Navigate to your ansible-tower-setup-latest directory.

  2. Run the ./setup.sh script following the example below:

    $ ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' -e @credentials.yml -b (1)(2)
    1. backup_dir specifies a directory to save your backup to.

    2. @credentials.yml passes the password variables and their values encrypted via ansible-vault.

With a successful backup, a backup file is created at /ansible/mybackup/tower-backup-latest.tar.gz.

This backup will be necessary later to migrate content from your old instance to the new one.

Deploy a new instance for a side-by-side upgrade

To proceed with the side-by-side upgrade process, deploy a second instance of Ansible Tower 3.8.x with the same instance group configurations. This new instance will receive the content and configuration from your original instance, and will later be upgraded to Red Hat Ansible Automation Platform 2.4.

Deploy a new instance of Ansible Tower

To deploy a new Ansible Tower instance, do the following:

  1. Download the Tower installer version that matches your original Tower instance by navigating to the Ansible Tower installer page.

  2. Navigate to the installer, then open the inventory file using a text editor to configure the inventory file for a Tower installation:

    1. In addition to any Tower configurations, remove any fields containing isolated_group or instance_group.

      Note
      For more information about installing Tower using the Ansible Automation Platform installer, see the Ansible Automation Platform Installation Guide for your specific installation scenario.
  3. Run the setup.sh script to begin the installation.

Once the new instance is installed, configure the Tower settings to match the instance groups from your original Tower instance.

Recreate instance groups in the new instance

To recreate your instance groups in the new instance, do the following:

Note
Make note of all instance groups from your original Tower instance. You will need to recreate these groups in your new instance.
  1. Log in to your new instance of Tower.

  2. In the navigation pane, select menu:Administration[Instance groups].

  3. Click btn:[Create instance group].

  4. Enter a Name that matches an instance group from your original instance, then click btn:[Save].

  5. Repeat until all instance groups from your original instance have been recreated.

Restore backup to new instance

Running the ./setup.sh script with the restore_backup_file flag migrates content from the backup file of your original 1.x instance to the new instance. This effectively migrates all job histories, templates, and other Ansible Automation Platform related content.

Procedure
  1. Run the following command:

    $ ./setup.sh -r -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -- --ask-vault-pass (1)(2)(3)
    1. restore_backup_file specifies the location of the Ansible Automation Platform backup database

    2. use_compression is set to True due to compression being used during the backup process

    3. -r sets the restore database option to True

  2. Log in to your new RHEL 8 Tower 3.8 instance to verify whether the content from your original instance has been restored:

    1. Navigate to menu:Administration[Instance groups]. The recreated instance groups should now contain the Total Jobs from your original instance.

    2. Using the side navigation panel, check that your content has been imported from your original instance, including Jobs, Templates, Inventories, Credentials, and Users.

You now have a new instance of Ansible Tower with all the Ansible content from your original instance.

You will upgrade this new instance to Ansible Automation Platform 2.4 so that you keep all your previous data without overwriting your original instance.

Upgrading to Ansible Automation Platform 2.4

To upgrade your instance of Ansible Tower to Ansible Automation Platform 2.4, copy the inventory file from your original Tower instance to your new Tower instance and run the installer. The Red Hat Ansible Automation Platform installer detects a pre-2.4 inventory file and offers an upgraded inventory file to continue with the upgrade process:

  1. Download the latest installer for Red Hat Ansible Automation Platform from the Red Hat Ansible Automation Platform download page.

  2. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-<latest_version>.tar.gz
  3. Navigate into your Ansible Automation Platform installation directory:

    $ cd ansible-automation-platform-setup-<latest_version>/
  4. Copy the inventory file from your original instance into the directory of the latest installer:

    $ cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup-<latest_version>
  5. Run the setup.sh script:

    $ ./setup.sh

    The setup script pauses and indicates that a "pre-2.x" inventory file was detected and offers a new file called inventory.new.ini that allows you to continue with the upgrade.

  6. Open inventory.new.ini with a text editor.

    Note
    When you run the setup script, the installer modifies a few fields from your original inventory file, such as renaming [tower] to [automationcontroller].
  7. Modify the newly generated inventory.new.ini file to configure your automation mesh by assigning relevant variables, nodes, and relevant node-to-node peer connections:

    Note

    The design of your automation mesh topology depends on the automation needs of your environment. It is beyond the scope of this document to provide designs for all possible scenarios. The following is one example automation mesh design.

    Example inventory file with a standard control plane consisting of three nodes utilizing hop nodes:
    [automationcontroller]
    control-plane-1.example.com
    control-plane-2.example.com
    control-plane-3.example.com
    
    [automationcontroller:vars]
    node_type=control (1)
    peers=execution_nodes (2)
    
    
    [execution_nodes]
    execution-node-1.example.com peers=execution-node-2.example.com
    execution-node-2.example.com peers=execution-node-3.example.com
    execution-node-3.example.com peers=execution-node-4.example.com
    execution-node-4.example.com peers=execution-node-5.example.com node_type=hop
    execution-node-5.example.com peers=execution-node-6.example.com node_type=hop (3)
    execution-node-6.example.com peers=execution-node-7.example.com
    execution-node-7.example.com
    
    [execution_nodes:vars]
    node_type=execution
    1. Specifies a control node that runs project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes.

    2. Specifies peer relationships for node-to-node connections in the [execution_nodes] group.

    3. Specifies hop nodes that route traffic to other execution nodes. Hop nodes cannot execute automation.

  8. Import or generate an automation hub API token.

    • Import an existing API token with the automationhub_api_token flag:

      automationhub_api_token=<api_token>
    • Generate a new API token, and invalidate any existing tokens, by setting the generate_automationhub_token flag to True:

      generate_automationhub_token=True
  9. Once you have finished configuring your inventory.new.ini for automation mesh, run the setup script using inventory.new.ini:

    $ ./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass
  10. Once the installation completes, verify that your Ansible Automation Platform has been installed successfully by logging in to the Ansible Automation Platform dashboard UI across all automation controller nodes.

Additional resources

Configuring your upgraded Ansible Automation Platform

Configuring automation controller instance groups

After upgrading your Red Hat Ansible Automation Platform instance, associate your original instances with their corresponding instance groups by configuring settings in the automation controller UI:

  1. Log into the new Controller instance.

  2. Content from the old instance, such as credentials, jobs, and inventories, should now be visible on your controller instance.

  3. Navigate to menu:Administration[Instance Groups].

  4. Associate execution nodes by clicking on an instance group, then click the Instances tab.

  5. Click btn:[Associate]. Select the node(s) to associate to this instance group, then click btn:[Save].

  6. You can also modify the default instance group to disassociate your new execution nodes.

Converting playbooks for AAP 2

With Ansible Automation Platform 2 and its containerized execution environments, the usage of localhost has been altered. In previous versions of Ansible Automation Platform, a job would run against localhost, which translated into running on the underlying automation controller host. This could be used to store data and persistent artifacts.

With Ansible Automation Platform 2, localhost means you are running inside a container, which is ephemeral in nature. Localhost is no longer tied to a particular host, and with portable execution environments, this means it can run anywhere with the right environment and software prerequisites already embedded into the execution environment container.

Persisting data from automation runs

Using the local automation controller filesystem for this purpose is counterproductive, because it ties the data to that host. In a multi-node cluster, a job can land on a different host each time, which causes issues if you create workflows that depend on each other and on directories created by earlier runs. For example, if a directory was created on only one node and another node runs the playbook, the results are inconsistent.

The solution is to use some form of shared storage, such as Amazon S3 or Gist, or a role that rsyncs data to your data endpoint. A sketch of one approach follows.
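
As a sketch of the shared-storage approach, a task that uploads a generated artifact to S3 might look like the following; this assumes the amazon.aws collection and AWS credentials are available, and the bucket, object, and source paths are hypothetical:

- name: Persist job artifact to shared storage
  amazon.aws.s3_object:
    bucket: my-automation-artifacts    # hypothetical bucket name
    object: "results/{{ inventory_hostname }}.json"
    src: /tmp/results.json             # file produced earlier in the run
    mode: put
  delegate_to: localhost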

You also have the option of injecting data or configuration into a container at runtime. This can be achieved by using the automation controller's isolated job paths option.

This provides a way to mount directories and files into an execution environment at runtime. This is achieved through the automation mesh, utilizing ansible-runner to inject them into a Podman container to launch the automation. What follows are some of the use cases for using isolated job paths:

  • Providing SSL certificates at runtime, rather than baking them into an execution environment.

  • Passing runtime configuration data, such as SSH configuration settings, or anything else you want to provide and use during automation.

  • Reading and writing to files used before, during and after automation runs.

There are caveats to utilization:

  • The volume mount has to pre-exist on all nodes capable of automation execution (so hybrid control plane nodes and all execution nodes).

  • Where SELinux is enabled (the Ansible Automation Platform default), beware of file permissions.

    • This is important because rootless Podman is used on non-OpenShift-based installs.

The caveats need to be carefully observed. It is highly recommended to read up on rootless Podman and the Podman volume mount runtime options, the [:OPTIONS] part of the isolated job paths, as this is what is used inside Ansible Automation Platform 2.

Converting playbook examples

Examples

This example uses a shared directory called /mydata that we want to read files from and write files to during a job run. Remember that this directory must already exist on the execution node used for the automation run.

You will target the aape1.local execution node to run this job, because the underlying host already has this directory in place.

[awx@aape1 ~]$ ls -la /mydata/
total 4
drwxr-xr-x.  2 awx  awx   41 Apr 28 09:27 .
dr-xr-xr-x. 19 root root 258 Apr 11 15:16 ..
-rw-r--r--.  1 awx  awx   33 Apr 11 12:34 file_read
-rw-r--r--.  1 awx  awx    0 Apr 28 09:27 file_write

You will use a simple playbook that sleeps during the run, giving you time to access the running container and understand the process, and that also demonstrates reading from and writing to files.

# vim:ft=ansible:
- hosts: all
  gather_facts: false
  ignore_errors: yes
  vars:
    period: 120
    myfile: /mydata/file
  tasks:
    - name: Collect only selected facts
      ansible.builtin.setup:
        filter:
          - 'ansible_distribution'
          - 'ansible_machine_id'
          - 'ansible_memtotal_mb'
          - 'ansible_memfree_mb'
    - name: "I'm feeling real sleepy..."
      ansible.builtin.wait_for:
        timeout: "{{ period }}"
      delegate_to: localhost
    - ansible.builtin.debug:
        msg: "Isolated paths mounted into execution node: {{ AWX_ISOLATIONS_PATHS }}"
    - name: "Read pre-existing file..."
      ansible.builtin.debug:
        msg: "{{ lookup('file', '{{ myfile }}_read') }}"
    - name: "Write to a new file..."
      ansible.builtin.copy:
        dest: "{{ myfile }}_write"
        content: |
          This is the file I've just written to.

    - name: "Read written out file..."
      ansible.builtin.debug:
        msg: "{{ lookup('file', '{{ myfile }}_write') }}"

In Ansible Automation Platform 2, from the Settings menu, select menu:Job Settings[Jobs].

Paths to expose isolated jobs:

[
"/mydata:/mydata:rw"
]

The volume mount is mapped with the same name in the container and has read-write capability. This will get used when you launch the job template.

Set Prompt on launch for extra_vars so you can adjust the sleep duration for each run. The default is 30 seconds.

Once the job is launched and the wait_for module starts the sleep, you can go onto the execution node and look at what is running.

While the job is sleeping, run the following command on the execution node to open a shell inside the running execution environment container:

$ podman exec -it $(podman ps -q) /bin/bash
bash-4.4#

You are now inside the running execution environment container.

Looking at the permissions, you will see that awx has become 'root'. This is not root as in the superuser; rootless Podman maps the user into a kernel user namespace, similar to a sandbox. To learn more, see How does rootless Podman work? and the shadow-utils documentation.

bash-4.4# ls -la /mydata/
total 4
drwxr-xr-x. 2 root root 41 Apr 28 09:27 .
dr-xr-xr-x. 1 root root 77 Apr 28 09:40 ..
-rw-r--r--. 1 root root 33 Apr 11 12:34 file_read
-rw-r--r--. 1 root root  0 Apr 28 09:27 file_write

The results show that this job failed. To understand why, examine the remaining output.

TASK [Read pre-existing file...] ****************************** 10:50:12
ok: [localhost] => {
    "msg": "This is the file I am reading in."
}

TASK [Write to a new file...] ********************************* 10:50:12
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write'
fatal: [localhost]: FAILED! => {"changed": false, "checksum": "9f576o85d584287a3516ee8b3385cc6f69bf9ce", "msg": "Unable to make b'/root/.ansible/tmp/ansible-tmp-1651139412.9808054-40-91081834383738/source' into /mydata/file_write, failed final rename from b'/mydata/.ansible_tmpazyqyqdrfile_write': [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write"}
...ignoring

TASK [Read written out file...] ****************************** 10:50:13
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /mydata/file_write. Could not locate file in lookup: /mydata/file_write"}
...ignoring

The job failed even though :rw is set, so the process should have write capability. The process was able to read the existing file, but not write a new one. This is due to SELinux protection, which requires proper labels to be placed on the volume content mounted into the container. If the label is missing, SELinux may prevent the process from running inside the container. Labels set by the OS are not changed by Podman. See the Podman documentation for more information.

This is a common misinterpretation. Ansible Automation Platform sets the default to :z, which tells Podman to relabel file objects on the shared volumes. So you can either add :z explicitly or leave the options off entirely, as in the following example.

Paths to expose isolated jobs:

[
   "/mydata:/mydata"
]
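
Alternatively, the same result can be achieved by adding the :z option explicitly:

[
   "/mydata:/mydata:z"
]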

The playbook will now work as expected:

PLAY [all] **************************************************** 11:05:52
TASK [I'm feeling real sleepy...] ***************************** 11:05:52
ok: [localhost]
TASK [Read pre-existing file...] ****************************** 11:05:57
ok: [localhost] => {
    "msg": "This is the file I'm reading in."
}
TASK [Write to a new file...] ********************************** 11:05:57
ok: [localhost]
TASK [Read written out file...] ******************************** 11:05:58
ok: [localhost] => {
    "msg": "This is the file I've just written to."
}

Back on the underlying execution node host, we have the newly written out contents.

Note
If you are using container groups to launch automation jobs inside Red Hat OpenShift, you can also tell Ansible Automation Platform 2 to expose the same paths to that environment, but you must first toggle the default to On under Settings.

Once enabled, this injects the paths as volumeMounts and volumes inside the pod specification used for execution. It looks like this:

apiVersion: v1
kind: Pod
spec:
  containers:
  - image: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8
    args:
      - ansible-runner
      - worker
      - --private-data-dir=/runner
    volumeMounts:
    - mountPath: /mnt2
      name: volume-0
      readOnly: true
    - mountPath: /mnt3
      name: volume-1
      readOnly: true
    - mountPath: /mnt4
      name: volume-2
      readOnly: true
  volumes:
  - hostPath:
      path: /mnt2
      type: ""
    name: volume-0
  - hostPath:
      path: /mnt3
      type: ""
    name: volume-1
  - hostPath:
      path: /mnt4
      type: ""
    name: volume-2

Storage inside the running container is using the overlay file system. Any modifications inside the running container are destroyed after the job completes, much like a tmpfs being unmounted.

Configuring automation controller websocket connections

You can configure automation controller to align the websocket configuration with your NGINX or load balancer configuration.

Websocket configuration for automation controller

Automation controller nodes are interconnected through websockets to distribute all websocket-emitted messages throughout your system. This configuration setup enables any browser client websocket to subscribe to any job that might be running on any automation controller node. Websocket clients are not routed to specific automation controller nodes. Instead, any automation controller node can handle any websocket request and each automation controller node must know about all websocket messages destined for all clients.

You can configure websockets at /etc/tower/conf.d/websocket_config.py on all of your automation controller nodes, and the changes take effect after the service restarts.

Automation controller automatically handles discovery of other automation controller nodes through the Instance record in the database.

Important

Your automation controller nodes are designed to broadcast websocket traffic across a private, trusted subnet (and not the open Internet). Therefore, if you turn off HTTPS for websocket broadcasting, the websocket traffic, composed mostly of Ansible playbook stdout, is sent unencrypted between automation controller nodes.

Configuring automatic discovery of other automation controller nodes

You can configure websocket connections to enable automation controller to automatically handle discovery of other automation controller nodes through the Instance record in the database.

  • Edit automation controller websocket information for port and protocol, and confirm whether to verify certificates with True or False when establishing the websocket connections.

    BROADCAST_WEBSOCKET_PROTOCOL = 'http'
    BROADCAST_WEBSOCKET_PORT = 80
    BROADCAST_WEBSOCKET_VERIFY_CERT = False
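
After editing the file on each node, restart the automation controller services so the change takes effect; for example (the exact service control command can vary by installation):

# automation-controller-service restart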

Working with job templates

A job template combines an Ansible playbook from a project and the settings required to launch it. Job templates are useful to execute the same job many times. Job templates also encourage the reuse of Ansible playbook content and collaboration between teams. For more information, see Job Templates in the Automation Controller User Guide.

Getting started with job templates

As part of the initial setup, a Demo Job Template is created for you.

Procedure
  1. To review existing templates, select Templates from the navigation panel.

  2. Click btn:[Demo Job Template] to view its details.

Job templates

Editing a job template

As part of the initial setup, you can leave the default Demo Job Template as it is, but you can edit it later.

Procedure
  1. Open the template to edit it by using one of these methods:

    • Click menu:Details[Edit].

    • From the navigation panel, select menu:Templates[Edit Template] next to the template name and edit the appropriate details.

  2. Save your changes.

    Job templates
  3. To exit after saving and return to the Templates list view, use the breadcrumb navigation links or click btn:[Cancel]. Clicking btn:[Save] does not exit the Details dialog.

Running a job template

A benefit of automation controller is the push-button deployment of Ansible playbooks. You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line. In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line.
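
As an illustration, a job template captures roughly what you might otherwise type on the command line; the playbook, inventory, and variable file names below are hypothetical:

$ ansible-playbook site.yml \
    -i inventory \
    --limit webservers \
    -e @extra_vars.yml \
    --forks 10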

Procedure
  • From the navigation panel, select menu:Templates[Launch] next to the job template.

Launch template

The initial job launch generates a status page, which updates automatically using automation controller’s Live Event feature, until the job is complete.

For more information on the job results, see Jobs in the Automation Controller User Guide.

Additional resources

To learn more about these automation controller features or to learn about administration tasks and the controller API, see the following documentation sets:

Installing the Red Hat Ansible Automation Platform operator on Red Hat OpenShift Container Platform

Prerequisites
Procedure
  1. Log in to Red Hat OpenShift Container Platform.

  2. Navigate to menu:Operators[OperatorHub].

  3. Search for the Red Hat Ansible Automation Platform operator and click btn:[Install].

  4. Select an Update Channel:

    • stable-2.x: installs a namespace-scoped operator, which limits deployments of automation hub and automation controller instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and utilizes fewer resources because it only monitors a single namespace.

    • stable-2.x-cluster-scoped: deploys automation hub and automation controller across multiple namespaces in the cluster and requires administrator privileges for all namespaces in the cluster.

  5. Select Installation Mode, Installed Namespace, and Approval Strategy.

  6. Click btn:[Install].

The installation process will begin. When installation is complete, a modal will appear notifying you that the Red Hat Ansible Automation Platform operator is installed in the specified namespace.

  • Click btn:[View Operator] to view your newly installed Red Hat Ansible Automation Platform operator.

Choosing and obtaining a Red Hat Ansible Automation Platform installer

Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the scenarios below and determine which Red Hat Ansible Automation Platform installer meets your needs.

Installing with internet access

Choose the Red Hat Ansible Automation Platform (AAP) installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your AAP installer.

Tarball install
  1. Navigate to the Red Hat Ansible Automation Platform download page.

  2. Click btn:[Download Now] for the Ansible Automation Platform <latest-version> Setup.

  3. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz
RPM install
  1. Install Ansible Automation Platform Installer Package

    v.2.4 for RHEL 8 for x86_64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer

    v.2.4 for RHEL 9 for x86-64

    $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer
Note
The dnf install command enables the repository, because the repository is disabled by default.

When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory.

Installing without internet access

Use the Red Hat Ansible Automation Platform (AAP) Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive.

Procedure
  1. Navigate to the Red Hat Ansible Automation Platform download page.

  2. Click btn:[Download Now] for the Ansible Automation Platform <latest-version> Setup Bundle.

  3. Extract the files:

    $ tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz

Planning your Red Hat Ansible Automation Platform installation

Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift. Use this guide to plan your Red Hat Ansible Automation Platform installation on Red Hat Enterprise Linux.

To install Red Hat Ansible Automation Platform on your Red Hat OpenShift Container Platform environment, see Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform.

Ansible content migration

If you are migrating from an earlier ansible-core version to ansible-core 2.13, consider reviewing the Ansible core porting guides to familiarize yourself with the changes and updates between each version. When reviewing the Ansible core porting guides, ensure that you select the latest version of ansible-core or devel, which is located in the top left column of the guide.

For a list of fully supported and certified Ansible Content Collections, see Ansible Automation hub on console.redhat.com.

Installing Ansible collections

As part of the migration from Ansible 2.9 to more recent versions, you need to find and download the collections that include the modules you have been using. Once you find that list of collections, you can use one of the following options to include your collections locally:

  1. Download and install the Collection into your runtime or execution environments using ansible-builder.

  2. Update the requirements.yml file in your automation controller project to install the required roles and collections. This way, every time you sync the project in automation controller, the roles and collections are downloaded, as shown in the example below.

Note
In many cases, the upstream and downstream collections might be the same, but always download your certified collections from automation hub.
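
A minimal requirements.yml sketch follows; the collection and role names and the version constraint are illustrative only:

---
collections:
  - name: ansible.posix
  - name: community.general
    version: ">=5.0.0"
roles:
  - name: geerlingguy.nginx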

Migrating your Ansible playbooks and roles to Core 2.13

When you are migrating from non-collection-based content to collection-based content, you should use the Fully Qualified Collection Names (FQCN) in playbooks and roles to avoid unexpected behavior.

Example playbook with FQCN:

- name: get some info
  amazon.aws.ec2_vpc_net_info:
    region: "{{ec2_region}}"
  register: all_the_info
  delegate_to: localhost
  run_once: true

If you are using ansible-core modules and are not calling a module from a different collection, you should still use the FQCN, for example ansible.builtin.copy.

Example module with FQCN:

- name: copy file with owner and permissions
  ansible.builtin.copy:
    src: /srv/myfiles/foo.conf
    dest: /etc/foo.conf
    owner: foo
    group: foo
    mode: '0644'


Network ports and protocols

Red Hat Ansible Automation Platform (AAP) uses a number of ports to communicate with its services. These ports must be open and available for incoming connection to the Red Hat Ansible Automation Platform server in order for it to work. Ensure that these ports are available and are not being blocked by the server firewall.

The following architectural diagram is an example of a fully deployed Ansible Automation Platform with all possible components.

Ansible Automation Platform Architectural Diagram

The following tables provide the default Red Hat Ansible Automation Platform destination ports required for each application.

Note
The default destination ports and installer inventory variables listed below are configurable. If you choose to configure them to suit your environment, you might experience a change in behavior.
Table 22. PostgreSQL
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Remote access during installation
5432 | TCP | Postgres | Inbound and Outbound | pg_port | Default port. ALLOW connections from controller(s) to database port

Table 23. Automation controller
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Installation
80 | TCP | HTTP | Inbound | nginx_http_port | UI/API
443 | TCP | HTTPS | Inbound | nginx_https_port | UI/API
5432 | TCP | PostgreSQL | Inbound and Outbound | pg_port | Open only if the internal database is used along with another component; otherwise, this port should not be open. Hybrid mode in a cluster
27199 | TCP | Receptor | Inbound and Outbound | receptor_listener_port | ALLOW receptor listener port across all controllers for mandatory and automatic control plane clustering

Table 24. Hop Nodes
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Installation
27199 | TCP | Receptor | Inbound and Outbound | receptor_listener_port | Mesh. ALLOW connection from controller(s) to Receptor port

Table 25. Execution Nodes
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Installation
27199 | TCP | Receptor | Inbound and Outbound | receptor_listener_port | Mesh - nodes directly peered to controllers, no hop nodes involved; 27199 is bi-directional for the execution nodes. ALLOW connections from controller(s) to Receptor port (non-hop connected nodes). ALLOW connections from hop node(s) to Receptor port (if relayed through hop nodes)

Table 26. Control Nodes
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Installation
27199 | TCP | Receptor | Inbound and Outbound | receptor_listener_port | Mesh - nodes directly peered to controllers, direct nodes involved; 27199 is bi-directional for execution nodes. ENABLE connections from controller(s) to Receptor port for non-hop connected nodes. ENABLE connections from hop node(s) to Receptor port if relayed through hop nodes
443 | TCP | Podman | Inbound | nginx_https_port | UI/API

Table 27. Hybrid Nodes
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Installation
27199 | TCP | Receptor | Inbound and Outbound | receptor_listener_port | Mesh - nodes directly peered to controllers, no hop nodes involved; 27199 is bi-directional for the execution nodes. ENABLE connections from controller(s) to Receptor port for non-hop connected nodes. ENABLE connections from hop node(s) to Receptor port if relayed through hop nodes
443 | TCP | Podman | Inbound | nginx_https_port | UI/API

Table 28. Automation hub
Port | Protocol | Service | Direction | Installer Inventory Variable | Required for
22 | TCP | SSH | Inbound and Outbound | ansible_port | Installation
80 | TCP | HTTP | Inbound | Fixed value | User interface
443 | TCP | HTTPS | Inbound | Fixed value | User interface
5432 | TCP | PostgreSQL | Inbound and Outbound | automationhub_pg_port | Open only if the internal database is used along with another component; otherwise, this port should not be open

Table 29. Red Hat Insights for Red Hat Ansible Automation Platform
URL | Required for
http://api.access.redhat.com:443 | General account services, subscriptions
https://cert-api.access.redhat.com:443 | Insights data upload
https://cert.console.redhat.com:443 | Inventory upload and Cloud Connector connection
https://console.redhat.com | Access to Insights dashboard

Table 30. Automation Hub
URL | Required for
https://console.redhat.com:443 | General account services, subscriptions
https://catalog.redhat.com | Indexing execution environments
https://sso.redhat.com:443 | TCP
https://automation-hub-prd.s3.amazonaws.com https://automation-hub-prd.s3.us-east-2.amazonaws.com/ | Firewall access
https://galaxy.ansible.com | Ansible Community curated Ansible content
https://ansible-galaxy.s3.amazonaws.com |
https://registry.redhat.io:443 | Access to container images provided by Red Hat and partners
https://cert.console.redhat.com:443 | Red Hat and partner curated Ansible Collections

Table 31. Execution Environments (EE)
URL | Required for
https://registry.redhat.io:443 | Access to container images provided by Red Hat and partners
cdn.quay.io:443 | Access to container images provided by Red Hat and partners
cdn01.quay.io:443 | Access to container images provided by Red Hat and partners
cdn02.quay.io:443 | Access to container images provided by Red Hat and partners
cdn03.quay.io:443 | Access to container images provided by Red Hat and partners

Important

Image manifests and filesystem blobs are served directly from registry.redhat.io. However, from 1 May 2023, filesystem blobs are served from quay.io instead. To avoid problems pulling container images, you must enable outbound connections to the listed quay.io hostnames.

This change should be made to any firewall configuration that specifically enables outbound connections to registry.redhat.io.

Use the hostnames instead of IP addresses when configuring firewall rules.

After making this change, you can continue to pull images from registry.redhat.io. You do not require a quay.io login, or need to interact with the quay.io registry directly in any way to continue pulling Red Hat container images.

For more information, see the article here

Managing usability analytics and data collection from automation controller

You can change how you participate in usability analytics and data collection from automation controller by opting out or changing your settings in the automation controller user interface.

Usability analytics and data collection

Usability data collection is included with automation controller to collect data to better understand how automation controller users specifically interact with automation controller, to help enhance future releases, and to continue streamlining your user experience.

Only users who install a trial of automation controller or a fresh installation of automation controller are opted in to this data collection.

Additional resources

Controlling data collection from automation controller

You can control how automation controller collects data by setting your participation level in the User Interface tab in the Settings menu.

Procedure
  1. Log in to your automation controller.

  2. Navigate to menu:Settings[User Interface].

  3. Select the desired level of data collection from the User Analytics Tracking State drop-down list:

    • Off: Prevents any data collection.

    • Anonymous: Enables data collection without your specific user data.

    • Detailed: Enables data collection including your specific user data.

  4. Click btn:[Save] to apply the settings or btn:[Cancel] to discard the changes.

Managing your Ansible automation controller subscription

Before you can use automation controller, you must have a valid subscription, which authorizes its use.

Obtaining an authorized Ansible automation controller subscription

If you already have a subscription to a Red Hat product, you can acquire an automation controller subscription through that subscription. If not, you can request a trial subscription.

Procedure
  • If you already have a Red Hat Ansible Automation Platform subscription, use your Red Hat customer credentials when you launch the automation controller to access your subscription information. See Importing a subscription.

  • If you have a non-Ansible Red Hat or Satellite subscription, access automation controller with one of these methods:

Additional resources

If you have issues with your subscription, contact your Sales Account Manager or Red Hat Customer Service at: https://access.redhat.com/support/contact/customerService/.

Importing a subscription

After you have obtained an authorized Ansible Automation Platform subscription, you must first import it into the automation controller system before you can begin using it.

Prerequisites
Procedure
  1. Launch controller for the first time. The Subscription Management screen is displayed.

    Subscription Management
  2. Retrieve and import your subscription by completing either of the following steps:

    1. If you have obtained a subscription manifest, upload it by navigating to the location where the file is saved (the subscription manifest is the complete .zip file, and not only its component parts).

      Note

      If the Browse option in the subscription manifest option is disabled, clear the username and password fields to enable it.

      The subscription metadata is then retrieved from the RHSM/Satellite API, or from the manifest provided. If multiple subscription counts were applied in a single installation, automation controller combines the counts but uses the earliest expiration date as the expiry (at which point you must refresh your subscription).

    2. If you are using your Red Hat customer credentials, enter your username and password on the license page. Use your Satellite username and password if your automation controller cluster nodes are registered to Satellite with Subscription Manager. After you enter your credentials, click btn:[Get Subscriptions].

      Automation controller retrieves your configured subscription service. Then, it prompts you to choose the subscription that you want to run and applies that metadata to automation controller. You can log in over time and retrieve new subscriptions if you have renewed.

  3. Click btn:[Next] to proceed to Tracking and Insights. Tracking and insights collect data to help Red Hat improve the product and deliver a better user experience. For more information about data collection, see Usability Analytics and Data Collection. This option is checked by default, but you can opt out of any of the following:

    1. User analytics. Collects data from the controller UI.

    2. Insights Analytics. Provides a high level analysis of your automation with automation controller. It helps you to identify trends and anomalous use of the controller. For opt-in of Automation Analytics to be effective, your instance of automation controller must be running on Red Hat Enterprise Linux. For more information, see the Automation Analytics section.

      Note

      You can change your analytics data collection preferences at any time, as described in the Usability Analytics and Data Collection section.

  4. After you have specified your tracking and Insights preferences, click btn:[Next] to proceed to the End User Agreement.

  5. Review and check the I agree to the End User License Agreement checkbox and click btn:[Submit].

After your subscription is accepted, automation controller displays the subscription details and opens the Dashboard. To return to the Subscription settings screen from the Dashboard, select menu:Settings[Subscription settings] from the btn:[Subscription] option in the navigation panel.

Subscription Details
Troubleshooting

When your subscription expires (you can check this in the Subscription details of the Subscription settings window), you must renew it in automation controller by one of the preceding two methods.

If you encounter the "Error fetching licenses" message, check that the Satellite user has the proper permissions. The automation controller administrator requires these permissions to apply a subscription.

The Satellite username and password are used to query the Satellite API for existing subscriptions. From the Satellite API, automation controller receives metadata about those subscriptions, then filters through it to find valid subscriptions that you can apply. These are then displayed as valid subscription options in the UI.

The following Satellite roles grant proper access:

  • Custom with view_subscriptions and view_organizations filter

  • Viewer

  • Administrator

  • Organization Administrator

  • Manager

Use the Custom role for your automation controller integration, as it is the most restrictive. For more information, see the Satellite documentation on managing users and roles.

Note

The System Administrator role is not equivalent to the Administrator user checkbox, and does not provide sufficient permissions to access the subscriptions API page.

Troubleshooting: Keeping your subscription in compliance

Your subscription has two possible statuses:

  • Compliant: Indicates that your subscription is appropriate for the number of hosts that you have automated within your subscription count.

  • Out of compliance: Indicates that you have exceeded the number of hosts in your subscription.

Subscription out of compliance

Host metric utilities

Automation controller provides a way to generate a CSV output of the host metric data and host metric summary through the Command Line Interface (CLI) and to soft delete hosts in bulk through the API.

Disconnected installation

Ansible Automation Platform installation on disconnected RHEL

Install Ansible Automation Platform (AAP) automation controller and a private automation hub, with an installer-managed database located on the automation controller, in an environment without an internet connection.

Prerequisites

To install AAP on a disconnected network, complete the following prerequisites:

  1. Create a subscription manifest.

  2. Download the AAP setup bundle.

  3. Create DNS records for the automation controller and private automation hub servers.

Note
The setup bundle includes additional components that make installing AAP easier in a disconnected environment. These include the AAP RPMs and the default execution environment (EE) images.

System Requirements

Hardware requirements are documented in the Automation Platform Installation Guide. Reference the "Red Hat Ansible Automation Platform Installation Guide" in the AAP Product Documentation for your version of AAP.

RPM Source

RPM dependencies for AAP that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must obtain access to the BaseOS and AppStream repositories.

  • Satellite is the recommended method from Red Hat to synchronize repositories

  • reposync - Makes full copies of the required RPM repositories and hosts them on the disconnected network

  • RHEL Binary DVD - Use the RPMs available on the RHEL 8 Binary DVD

Note
The RHEL Binary DVD method requires the DVD for supported versions of RHEL, including version 8.6 or higher. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported.

Synchronizing RPM repositories by using reposync

To perform a reposync, you need a RHEL host that has access to the Internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server.

Procedure
  1. Attach the BaseOS and AppStream required repositories:

    # subscription-manager repos \
        --enable rhel-8-for-x86_64-baseos-rpms \
        --enable rhel-8-for-x86_64-appstream-rpms
  2. Perform the reposync:

    # dnf install yum-utils
    # reposync -m --download-metadata --gpgcheck \
        -p /path/to/download
    1. Make certain that you use reposync with --download-metadata and without --newest-only. See [RHEL 8] Reposync.

    2. If you do not use --newest-only, the downloaded repositories are approximately 90 GB.

    3. If you use --newest-only, the downloaded repositories are approximately 14 GB.

  3. If you plan to use Red Hat Single Sign-On (RHSSO), sync these repositories:

    1. jb-eap-7.3-for-rhel-8-x86_64-rpms

    2. rh-sso-7.4-for-rhel-8-x86_64-rpms

    After the reposync is completed, your repositories are ready to use with a web server.

  4. Move the repositories to your disconnected network.
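
For example, to stage the synchronized content for transfer and unpack it on the disconnected web server host (the archive name and paths are illustrative):

    $ tar -czf rhel-8-repos.tar.gz -C /path/to/download .
    (transfer the archive across the disconnected boundary by your approved mechanism)
    $ tar -xzf rhel-8-repos.tar.gz -C /path/to/repos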

Creating a new web server to host repositories

If you do not have an existing web server to host your repositories, create one with the synced repositories.

Procedure

Use the following steps if creating a new web server.

  1. Install prerequisites:

    $ sudo dnf install httpd
  2. Configure httpd to serve the repo directory:

    /etc/httpd/conf.d/repository.conf
    
    DocumentRoot '/path/to/repos'
    
    <LocationMatch "^/+$">
        Options -Indexes
        ErrorDocument 403 /.noindex.html
    </LocationMatch>
    
    <Directory '/path/to/repos'>
        Options All Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
  3. Ensure that the directory is readable by the apache user:

    $ sudo chown -R apache /path/to/repos
  4. Configure SELinux:

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
    $ sudo restorecon -ir /path/to/repos
  5. Enable httpd:

    $ sudo systemctl enable --now httpd.service
  6. Open firewall:

    $ sudo firewall-cmd --zone=public --add-service=http \
        --add-service=https --permanent
    $ sudo firewall-cmd --reload
  7. On the automation controller and automation hub, add a repository file at /etc/yum.repos.d/local.repo. Add the optional repositories if needed:

    [Local-BaseOS]
    name=Local BaseOS
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    
    [Local-AppStream]
    name=Local AppStream
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
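
To confirm that each node can use the new repositories, you can clear the dnf cache and list the enabled repositories:

    $ sudo dnf clean all
    $ sudo dnf repolist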

Accessing RPM Repositories for Locally Mounted DVD

If you are going to access the repositories from the DVD, it is necessary to set up a local repository. This section shows how to do that.

Procedure
  1. Mount DVD or ISO

    1. DVD

      # mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
    2. ISO

      # mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
  2. Create yum repo file at /etc/yum.repos.d/dvd.repo

    [dvd-BaseOS]
    name=DVD for RHEL - BaseOS
    baseurl=file:///media/rheldvd/BaseOS
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    
    [dvd-AppStream]
    name=DVD for RHEL - AppStream
    baseurl=file:///media/rheldvd/AppStream
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  3. Import the gpg key

    # rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release
Note
If the key is not imported, you will see an error similar to the following:
# Curl error (6): Couldn't resolve host name for
https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host:
www.redhat.com]

Adding a Subscription Manifest to AAP without an Internet connection

In order to add a subscription to AAP without an Internet connection, create and import a subscription manifest.

Procedure
  1. Login to access.redhat.com.

  2. Navigate to menu:Subscriptions[Subscriptions].

  3. Click btn:[Subscription Allocations].

  4. Click btn:[Create New subscription allocation].

  5. Name the new subscription allocation.

  6. Select menu:Satellite 6.8[Satellite 6.8] as the type.

  7. Click btn:[Create]. The Details tab will open for your subscription allocation.

  8. Click btn:[Subscriptions] tab.

  9. Click btn:[Add Subscription].

  10. Find your AAP subscription. In the Entitlements box, add the number of entitlements you want to assign to your environment. A single entitlement is needed for each node that will be managed by AAP: server, network device, and so on.

  11. Click btn:[Submit].

  12. Click btn:[Export Manifest].

  13. This downloads a file named manifest_<allocation name>_<date>.zip that can be imported into automation controller after installation.

Installing the AAP Setup Bundle

The “bundle” version is strongly recommended for disconnected installations as it comes with the RPM content for AAP as well as the default execution environment images that will be uploaded to your Private Automation Hub during the installation process.

Downloading the Setup Bundle

Procedure
  1. Download the AAP setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and clicking btn:[Download Now] for the Ansible Automation Platform 2.4 Setup Bundle.

Installing the Setup Bundle

Download the setup bundle to the automation controller. From the controller, untar the bundle, edit the inventory file, and run the setup.

  1. Untar the bundle

    $ tar xvf \
       ansible-automation-platform-setup-bundle-2.4-1.tar.gz
    $ cd ansible-automation-platform-setup-bundle-2.4-1
  2. Edit the inventory file to include the required options

    1. automationcontroller group

    2. automationhub group

    3. admin_password

    4. pg_password

    5. automationhub_admin_password

    6. automationhub_pg_host, automationhub_pg_port

    7. automationhub_pg_password

      Example Inventory

      [automationcontroller]
      automationcontroller.example.org ansible_connection=local
      
      [automationcontroller:vars]
      peers=execution_nodes
      
      [automationhub]
      automationhub.example.org
      
      [all:vars]
      admin_password='password123'
      
      pg_database='awx'
      pg_username='awx'
      pg_password='dbpassword123'
      
      receptor_listener_port=27199
      
      automationhub_admin_password='hubpassword123'
      
      automationhub_pg_host='automationcontroller.example.org'
      automationhub_pg_port='5432'
      
      automationhub_pg_database='automationhub'
      automationhub_pg_username='automationhub'
      automationhub_pg_password='dbpassword123'
      automationhub_pg_sslmode='prefer'
      Note
      The inventory should be kept intact after installation since it is used for backup, restore, and upgrade functions. Consider keeping a backup copy in a secure location, given that the inventory file contains passwords.
  3. Run the AAP setup bundle executable as the root user

    $ sudo -i
    # cd /path/to/ansible-automation-platform-setup-bundle-2.4-1
    # ./setup.sh
  4. Once installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the Automation controller node that was specified in the installation inventory file.

  5. Log in with the administrator credentials specified in the installation inventory file.
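
If the web interface is not reachable after installation, you can check that the controller services started; for example, the automation-controller-service wrapper script (if present in your AAP version) reports their status:

$ sudo automation-controller-service status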

Completing Post Installation Tasks

Adding Controller Subscription

Procedure
  1. Navigate to the FQDN of the automation controller. Log in with the username admin and the password you specified as admin_password in your inventory file.

  2. Click btn:[Browse] and select the manifest.zip you created earlier.

  3. Click btn:[Next].

  4. Uncheck btn:[User analytics] and btn:[Automation analytics]. These rely on an Internet connection and should be turned off.

  5. Click btn:[Next].

  6. Read the End User License Agreement and click btn:[Submit] if you agree.

Updating the CA trust store

Self-Signed Certificates

By default, AAP hub and controller are installed using self-signed certificates. This creates an issue where the controller does not trust the hub’s certificate and will not download execution environment images from the hub. The solution is to import the hub’s CA certificate as a trusted certificate on the controller. You can use SCP or copy and paste the certificate contents from one file into another to perform this action. The following steps are based on a KB article found at https://access.redhat.com/solutions/6707451.

Copying the root certificate on the private automation hub to the automation controller using secure copy (SCP)

If SSH is available as the root user between the controller and the private automation hub, use SCP to copy the root certificate on the private automation hub to the controller and run update-ca-trust on the controller to update the CA trust store.

On the Automation controller

$ sudo -i
# scp <hub_fqdn>:/etc/pulp/certs/root.crt \
    /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust
Copying and Pasting

If SSH is unavailable as root between the private automation hub and the controller, copy the contents of the file /etc/pulp/certs/root.crt on the private automation hub and paste it into a new file on the controller called /etc/pki/ca-trust/source/anchors/automationhub-root.crt. After the new file is created, run the command update-ca-trust in order to update the CA trust store with the new certificate.

On the Private automation hub

$ sudo -i
# cat /etc/pulp/certs/root.crt
(copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and
'END CERTIFICATE')

On the Automation controller

$ sudo -i
# vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt
(paste the contents of the root.crt file from the private automation hub into the new file and write to disk)
# update-ca-trust
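
To confirm that the controller now trusts the hub certificate, you can make a test TLS connection from the controller; if the CA import worked, curl reports an HTTP status code instead of a certificate verification error (the endpoint is shown only as an example):

# curl -sS -o /dev/null -w '%{http_code}\n' https://<hub_fqdn>/api/galaxy/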

Importing Collections into Private Automation Hub

You can download collection tarball files from the following sources:

Downloading collection from Red Hat Automation Hub

This section gives instructions on how to download a collection from Red Hat Automation Hub. If the collection has dependencies, they will also need to be downloaded and installed.

Procedure
  1. Navigate to https://console.redhat.com/ansible/automation-hub/ and login with your Red Hat credentials.

  2. Click on the collection you wish to download.

  3. Click btn:[Download tarball].

  4. To verify if a collection has dependencies, click the Dependencies tab.

  5. Download any dependencies needed for this collection.

Creating Collection Namespace

The namespace of the collection must exist for the import to be successful. You can find the namespace name by looking at the first part of the collection tarball filename. For example, the namespace of the collection ansible-netcommon-3.0.0.tar.gz is ansible.

Procedure
  1. Login to private automation hub web console.

  2. Navigate to menu:Collections[Namespaces].

  3. Click btn:[Create].

  4. Provide the namespace name.

  5. Click btn:[Create].

Importing the collection tarball with GUI

  1. Login to private automation hub web console.

  2. Navigate to menu:Collections[Namespaces].

  3. Click btn:[View collections] for the namespace you are importing the collection into.

  4. Click btn:[Upload collection].

  5. Click the folder icon and select the tarball of the collection.

  6. Click btn:[Upload].

This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported.

Importing the collection tarball using ansible-galaxy via CLI

You can import collections into the private automation hub by using the command-line interface rather than the GUI.

  1. Copy the collection tarballs to the private automation hub.

  2. Log in to the private automation hub server via SSH.

  3. Add the self-signed root CA cert to the trust store on the automation hub.

    # cp /etc/pulp/certs/root.crt \
        /etc/pki/ca-trust/source/anchors/automationhub-root.crt
    # update-ca-trust
  4. Update the /etc/ansible/ansible.cfg file with your hub configuration. Use either a token or a username and password for authentication.

    [galaxy]
    server_list = private_hub
    
    [galaxy_server.private_hub]
    url=https://<hub_fqdn>/api/galaxy/
    token=<token_from_private_hub>
  5. Import the collection using the ansible-galaxy command.

$ ansible-galaxy collection publish <collection_tarball>
Note
Create the namespace that the collection belongs to in advance, or publishing the collection will fail.
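
Step 4 notes that a username and password can be used instead of a token; a minimal sketch of that alternative server section in ansible.cfg (values are placeholders):

    [galaxy_server.private_hub]
    url=https://<hub_fqdn>/api/galaxy/
    username=<hub_username>
    password=<hub_password>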

Approving the Imported Collection

After you have imported collections with either the GUI or the CLI method, you must approve them by using the GUI. After they are approved, they are available for use.

Procedure
  1. Login to private automation hub web console.

  2. Go to menu:Collections[Approval].

  3. Click btn:[Approve] for the collection you wish to approve.

  4. The collection is now available for use in your private automation hub.

Note
The collection is added to the "Published" repository regardless of its source.
Import any dependencies for the collection by using these same steps.

Recommended collections depend on your use case. Ansible and Red Hat provide these collections.

Custom Execution Environments

Use the ansible-builder program to create custom execution environment images. For disconnected environments, custom EE images can be built in the following ways:

  • Build an EE image on an internet-facing system and import it to the disconnected environment

  • Build an EE image entirely on the disconnected environment with some modifications to the normal process of using ansible-builder

  • Create a minimal base container image that includes all of the necessary modifications for a disconnected environment, then build custom EE images from the base container image

Transferring a Custom EE Image Across a Disconnected Boundary

A custom execution environment image can be built on an internet-facing machine using the existing documentation. Once an execution environment has been created, it is available in the local podman image cache. To transfer the custom EE image across a disconnected boundary, first save the image:

  1. Save the image:

$ podman image save localhost/custom-ee:latest | gzip > custom-ee-latest.tar.gz

Transfer the file across the disconnected boundary by using an existing mechanism, such as sneakernet or a one-way data diode. After the image is available on the disconnected side, import it into the local podman cache, tag it, and push it to the disconnected hub:

$ podman image load -i custom-ee-latest.tar.gz
$ podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest
$ podman login <hub_fqdn> --tls-verify=false
$ podman push <hub_fqdn>/custom-ee:latest

Building an Execution Environment in a Disconnected Environment

When building a custom execution environment, the ansible-builder tool defaults to downloading the following requirements from the internet:

  • Ansible Galaxy (galaxy.ansible.com) or Automation Hub (console.redhat.com) for any collections added to the EE image.

  • PyPI (pypi.org) for any python packages required as collection dependencies.

  • The UBI repositories (cdn.redhat.com) for updating any UBI-based EE images.

    • The RHEL repositories might also be needed to meet certain collection requirements.

  • registry.redhat.io for access to the ansible-builder-rhel8 container image.

Building an EE image in a disconnected environment requires some subset of these resources to be mirrored or otherwise made available on the disconnected network. See Importing Collections into Private Automation Hub for information on importing collections from Galaxy or Automation Hub into a private automation hub.

Mirrored PyPI content, once transferred into the high-side network, can be made available by using a web server or an artifact repository such as Nexus.
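
The build procedure later in this section copies a pip.conf file into the EE build context; a minimal sketch of that file, assuming a hypothetical internal mirror URL, might look like the following:

$ cat pip.conf
[global]
index-url = https://<pypi_mirror_fqdn>/repository/pypi/simple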

The UBI repositories can be mirrored on the low-side using a tool like reposync, imported to the disconnected environment, and made available from Satellite or a simple web server (since the content is freely redistributable).

The ansible-builder-rhel8 container image can be imported into a private automation hub in the same way a custom EE can be imported. See Transferring a Custom EE Image Across a Disconnected Boundary for details, substituting registry.redhat.io/ansible-automation-platform-21/ansible-builder-rhel8 for localhost/custom-ee. This makes the ansible-builder-rhel8 image available in the private automation hub registry along with the default EE images.

Once all of the prerequisites are available on the high-side network, ansible-builder and podman can be used to create a custom execution environment image.

Installing the ansible-builder RPM

Procedure
  1. On a RHEL system, install the ansible-builder RPM. This can be done in one of several ways:

    1. Subscribe the RHEL box to a Satellite on the disconnected network.

    2. Attach the Ansible Automation Platform subscription and enable the AAP repo.

    3. Install the ansible-builder RPM.

      Note

      This is preferred if a Satellite exists because the EE images can use RHEL content from the Satellite if the underlying build host is registered.

  2. Unarchive the AAP setup bundle.

  3. Install the ansible-builder RPM and its dependencies from the included content:

    $ tar -xzvf ansible-automation-platform-setup-bundle-2.4-1.tar.gz
    $ cd ansible-automation-platform-setup-bundle-2.4/bundle/el8/repos/
    $ sudo yum install ansible-builder-1.0.1-2.el8ap.noarch.rpm \
        python38-requirements-parser-0.2.0-3.el8ap.noarch.rpm
  4. Create a directory for your custom EE build artifacts.

    $ mkdir custom-ee
    $ cd custom-ee/
  5. Create an execution-environment.yml file that defines the requirements for your custom EE following the documentation at https://ansible-builder.readthedocs.io/en/stable/definition/. Override the EE_BASE_IMAGE and EE_BUILDER_IMAGE variables to point to the EEs available in your private automation hub.

    $ cat execution-environment.yml
    ---
    version: 1
    build_arg_defaults:
      EE_BASE_IMAGE: '<hub_fqdn>/ee-supported-rhel8:latest'
      EE_BUILDER_IMAGE: '<hub_fqdn>/ansible-builder-rhel8:latest'
    
    dependencies:
      python: requirements.txt
      galaxy: requirements.yml
  6. Create an ansible.cfg file that points to your private automation hub and contains credentials that allow uploading, such as an admin user token.

    $ cat ansible.cfg
    [galaxy]
    server_list = private_hub
    
    [galaxy_server.private_hub]
    url=https://<hub_fqdn>/api/galaxy/
    token=<admin_token>
  7. Create a ubi.repo file that points to your disconnected UBI repo mirror (this could be your Satellite if the UBI content is hosted there).

    This is an example output where reposync was used to mirror the UBI repos.

    $ cat ubi.repo
    [ubi-8-baseos]
    name = Red Hat Universal Base Image 8 (RPMs) - BaseOS
    baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos
    enabled = 1
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    gpgcheck = 1
    
    [ubi-8-appstream]
    name = Red Hat Universal Base Image 8 (RPMs) - AppStream
    baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream
    enabled = 1
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    gpgcheck = 1
  8. Add the CA certificate used to sign the private automation hub web server certificate.

    1. For self-signed certificates (the installer default), make a copy of the file /etc/pulp/certs/root.crt from your private automation hub and name it hub-root.crt.

    2. If an internal certificate authority was used to request and sign the private automation hub web server certificate, make a copy of that CA certificate called hub-root.crt.

  9. Create your python requirements.txt and ansible collection requirements.yml with the content needed for your custom EE image. Note that any collections you require should already be uploaded into your private automation hub.

  10. Use ansible-builder to create the context directory used to build the EE image.

    $ ansible-builder create
    Complete! The build context can be found at: /home/cloud-user/custom-ee/context
    $ ls -1F
    ansible.cfg
    context/
    execution-environment.yml
    hub-root.crt
    pip.conf
    requirements.txt
    requirements.yml
    ubi.repo
  11. Copy the files used to override the internet-facing defaults into the context directory.

    $ cp ansible.cfg hub-root.crt pip.conf ubi.repo context/
  12. Edit the file context/Containerfile and add the following modifications.

    1. In the first EE_BASE_IMAGE build section, add the ansible.cfg and hub-root.crt files and run the update-ca-trust command.

    2. In the EE_BUILDER_IMAGE build section, add the ubi.repo and pip.conf files.

    3. In the final EE_BASE_IMAGE build section, add the ubi.repo and pip.conf files.

      $ cat context/Containerfile
      ARG EE_BASE_IMAGE=<hub_fqdn>/ee-supported-rhel8:latest
      ARG EE_BUILDER_IMAGE=<hub_fqdn>/ansible-builder-rhel8:latest
      
      FROM $EE_BASE_IMAGE as galaxy
      ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS=
      USER root
      
      ADD _build /build
      WORKDIR /build
      
      # this section added
      ADD ansible.cfg /etc/ansible/ansible.cfg
      ADD hub-root.crt /etc/pki/ca-trust/source/anchors/hub-root.crt
      RUN update-ca-trust
      # end additions
      RUN ansible-galaxy role install -r requirements.yml \
          --roles-path /usr/share/ansible/roles
      RUN ansible-galaxy collection install \
          $ANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml \
          --collections-path /usr/share/ansible/collections
      
      FROM $EE_BUILDER_IMAGE as builder
      
      COPY --from=galaxy /usr/share/ansible /usr/share/ansible
      
      ADD _build/requirements.txt requirements.txt
      RUN ansible-builder introspect --sanitize \
          --user-pip=requirements.txt \
          --write-bindep=/tmp/src/bindep.txt \
          --write-pip=/tmp/src/requirements.txt
      # this section added
      ADD ubi.repo /etc/yum.repos.d/ubi.repo
      ADD pip.conf /etc/pip.conf
      # end additions
      RUN assemble
      
      FROM $EE_BASE_IMAGE
      USER root
      
      COPY --from=galaxy /usr/share/ansible /usr/share/ansible
      # this section added
      ADD ubi.repo /etc/yum.repos.d/ubi.repo
      ADD pip.conf /etc/pip.conf
      # end additions
      
      COPY --from=builder /output/ /output/
      RUN /output/install-from-bindep && rm -rf /output/wheels
  13. Create the EE image in the local podman cache using the podman command.

    $ podman build -f context/Containerfile \
        -t <hub_fqdn>/custom-ee:latest
  14. Once the custom EE image builds successfully, push it to the private automation hub.

    $ podman push <hub_fqdn>/custom-ee:latest

Workflow for upgrading between minor AAP releases

To upgrade between minor releases of AAP 2, use this general workflow.

Procedure
  1. Download and unarchive the latest AAP 2 setup bundle.

  2. Take a backup of the existing installation.

  3. Copy the existing installation inventory file into the new setup bundle directory.

  4. Run ./setup.sh to upgrade the installation.

For example, to upgrade from version 2.2.0-7 to 2.3-1.2, make sure that both setup bundles are on the initial controller node where the installation occurred:

$ ls -1F
ansible-automation-platform-setup-bundle-2.2.0-7/
ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz
ansible-automation-platform-setup-bundle-2.3-1.2/
ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz

Back up the 2.2.0-7 installation:

$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ sudo ./setup.sh -b
$ cd ..

Copy the 2.2.0-7 inventory file into the 2.3-1.2 bundle directory:

$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/
$ cd ..

Upgrade from 2.2.0-7 to 2.3-1.2 with the setup.sh script:

$ cd ansible-automation-platform-setup-bundle-2.3-1.2
$ sudo ./setup.sh

Use automation controller’s search tool for search and filter capabilities across multiple functions. An expandable "cheat sheet" is available from the Advanced option of the Name menu in the search field.

From there, use the combination of Set Type, Key, and Lookup type to filter.


Tips for searching in automation controller

These searching tips assume that you are not searching hosts. Most of this section still applies to hosts but with some subtle differences.

  • The typical syntax of a search consists of a field (left-hand side) and a value (right-hand side).

  • A colon is used to separate the field that you want to search from the value.

  • If the search has no colon (see example 3), it is treated as a simple string search where ?search=foobar is sent.

The following are examples of syntax used for searching:

  1. name:localhost In this example, the string before the colon represents the field that you want to search on. If that string does not match something from Fields or Related Fields then it is treated the same way as in Example 3 (string search). The string after the colon is the string that you want to search for within the name attribute.

  2. organization.name:Default This example shows a Related Field Search. The period in organization.name separates the model from the field. Depending on how deep or complex the search is, you can have multiple periods in that part of the query.

  3. foobar This is a simple string (key term) search that finds all instances of the search term using an icontains search against the name and description fields. If you use a space between terms, for example foo bar, then results that contain both terms are returned. If the terms are wrapped in quotes, for example, "foo bar", automation controller searches for the string with the terms appearing together.

Specific name searches search against the API name. For example, Management job in the user interface is system_job in the API.

  4. organization:Default This example shows a Related Field search without specifying a field to go along with the organization. This is supported by the API and is analogous to a simple string search, but it is carried out against the organization (an icontains search against both the name and description fields).

Values for search fields

To find values for certain fields, refer to the API endpoint for extensive options and their valid values. For example, if you want to search against the type field of /api/v2/jobs, you can find the values by performing an OPTIONS request to /api/v2/jobs and looking for entries in the API for "type". Additionally, you can view the related searches by scrolling to the bottom of each screen. In the example for /api/v2/jobs, the related search shows:

"related_search_fields": [
       "modified_by__search",
       "project__search",
       "project_update__search",
       "credentials__search",
       "unified_job_template__search",
       "created_by__search",
       "inventory__search",
       "labels__search",
       "schedule__search",
       "webhook_credential__search",
       "job_template__search",
       "job_events__search",
       "dependent_jobs__search",
       "launch_config__search",
       "unifiedjob_ptr__search",
       "notifications__search",
       "unified_job_node__search",
       "instance_group__search",
       "hosts__search",
       "job_host_summaries__search"

The values for Fields come from the keys in a GET request. url, related, and summary_fields are not used. The values for Related Fields also come from the OPTIONS response, but from a different attribute. Related Fields is populated by taking all the values from related_search_fields and stripping off the __search from the end.

Any search that does not start with a value from Fields or a value from the Related Fields, is treated as a generic string search. Searching for localhost, for example, results in the UI sending ?search=localhost as a query parameter to the API endpoint. This is a shortcut for an icontains search on the name and description fields.

Searching a Related Field requires you to start the search string with the Related Field. The following example describes how to search using values from the Related Field, organization.

The left-hand side of the search string must start with organization, for example, organization:Default. Depending on the related field, you can provide more specific direction for the search by providing secondary and tertiary fields. An example of this is to specify that you want to search for all job templates that use a project matching a certain name. The syntax on this would look like: job_template.project.name:"A Project".

Note

This query executes against the unified_job_templates endpoint which is why it starts with job_template. If you were searching against the job_templates endpoint, then you would not need the job_template portion of the query.

Other search considerations

Be aware of the following issues when searching in automation controller:

  • There is currently no supported syntax for OR queries. All search terms are *AND*ed in the query parameters.

  • The left-hand portion of a search parameter can be wrapped in quotes to support searching for strings with spaces. For more information, see Tips for searching.

  • Currently, the values in the Fields are direct attributes expected to be returned in a GET request. Whenever you search against one of the values, automation controller carries out an __icontains search. So, for example, name:localhost sends back ?name__icontains=localhost. Automation controller currently performs this search for every Field value, even id.

Sort

Where applicable, use the arrows in each column to sort by ascending order. The following is an example from the schedules list:

sort arrow

The direction of the arrow indicates the sort order of the column.

Supported Inventory plugin templates

After upgrading to version 4.x, existing configurations are migrated to the new format, which produces backwards-compatible inventory output. Use the following templates to aid in migrating your inventories to the new style inventory plugin output.

Amazon Web Services EC2

compose:
  ansible_host: public_ip_address
  ec2_account_id: owner_id
  ec2_ami_launch_index: ami_launch_index | string
  ec2_architecture: architecture
  ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list))
  ec2_client_token: client_token
  ec2_dns_name: public_dns_name
  ec2_ebs_optimized: ebs_optimized
  ec2_eventsSet: events | default("")
  ec2_group_name: placement.group_name
  ec2_hypervisor: hypervisor
  ec2_id: instance_id
  ec2_image_id: image_id
  ec2_instance_profile: iam_instance_profile | default("")
  ec2_instance_type: instance_type
  ec2_ip_address: public_ip_address
  ec2_kernel: kernel_id | default("")
  ec2_key_name: key_name
  ec2_launch_time: launch_time | regex_replace(" ", "T") | regex_replace("(\+)(\d\d):(\d)(\d)$", ".\g<2>\g<3>Z")
  ec2_monitored: monitoring.state in ['enabled', 'pending']
  ec2_monitoring_state: monitoring.state
  ec2_persistent: persistent | default(false)
  ec2_placement: placement.availability_zone
  ec2_platform: platform | default("")
  ec2_private_dns_name: private_dns_name
  ec2_private_ip_address: private_ip_address
  ec2_public_dns_name: public_dns_name
  ec2_ramdisk: ramdisk_id | default("")
  ec2_reason: state_transition_reason
  ec2_region: placement.region
  ec2_requester_id: requester_id | default("")
  ec2_root_device_name: root_device_name
  ec2_root_device_type: root_device_type
  ec2_security_group_ids: security_groups | map(attribute='group_id') | list |  join(',')
  ec2_security_group_names: security_groups | map(attribute='group_name') | list |  join(',')
  ec2_sourceDestCheck: source_dest_check | default(false) | lower | string
  ec2_spot_instance_request_id: spot_instance_request_id | default("")
  ec2_state: state.name
  ec2_state_code: state.code
  ec2_state_reason: state_reason.message if state_reason is defined else ""
  ec2_subnet_id: subnet_id | default("")
  ec2_tag_Name: tags.Name
  ec2_virtualization_type: virtualization_type
  ec2_vpc_id: vpc_id | default("")
filters:
  instance-state-name:
  - running
groups:
  ec2: true
hostnames:
  - network-interface.addresses.association.public-ip
  - dns-name
  - private-dns-name
keyed_groups:
  - key: image_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: images
    prefix: ''
    separator: ''
  - key: placement.availability_zone
    parent_group: zones
    prefix: ''
    separator: ''
  - key: ec2_account_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: accounts
    prefix: ''
    separator: ''
  - key: ec2_state | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: instance_states
    prefix: instance_state
  - key: platform | default("undefined") | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: platforms
    prefix: platform
  - key: instance_type | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: types
    prefix: type
  - key: key_name | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: keys
    prefix: key
  - key: placement.region
    parent_group: regions
    prefix: ''
    separator: ''
  - key: security_groups | map(attribute="group_name") | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
    parent_group: security_groups
    prefix: security_group
  - key: dict(tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list | zip(tags.values()
      | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list))
    parent_group: tags
    prefix: tag
  - key: tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
    parent_group: tags
    prefix: tag
  - key: vpc_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: vpcs
    prefix: vpc_id
  - key: placement.availability_zone
    parent_group: '{{ placement.region }}'
    prefix: ''
    separator: ''
plugin: amazon.aws.aws_ec2
use_contrib_script_compatible_sanitization: true

Google Compute Engine

auth_kind: serviceaccount
compose:
  ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gce_description: description if description else None
  gce_id: id
  gce_image: image
  gce_machine_type: machineType
  gce_metadata: metadata.get("items", []) | items2dict(key_name="key", value_name="value")
  gce_name: name
  gce_network: networkInterfaces[0].network.name
  gce_private_ip: networkInterfaces[0].networkIP
  gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None)
  gce_status: status
  gce_subnetwork: networkInterfaces[0].subnetwork.name
  gce_tags: tags.get("items", [])
  gce_zone: zone
hostnames:
- name
- public_ip
- private_ip
keyed_groups:
- key: gce_subnetwork
  prefix: network
- key: gce_private_ip
  prefix: ''
  separator: ''
- key: gce_public_ip
  prefix: ''
  separator: ''
- key: machineType
  prefix: ''
  separator: ''
- key: zone
  prefix: ''
  separator: ''
- key: gce_tags
  prefix: tag
- key: status | lower
  prefix: status
- key: image
  prefix: ''
  separator: ''
plugin: google.cloud.gcp_compute
retrieve_image_info: true
use_contrib_script_compatible_sanitization: true

Microsoft Azure Resource Manager

conditional_groups:
  azure: true
default_host_filters: []
fail_on_template_errors: false
hostvar_expressions:
  computer_name: name
  private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None
  provisioning_state: provisioning_state | title
  public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None
  public_ip_id: public_ip_id if public_ip_id is defined else None
  public_ip_name: public_ip_name if public_ip_name is defined else None
  tags: tags if tags else None
  type: resource_type
keyed_groups:
- key: location
  prefix: ''
  separator: ''
- key: tags.keys() | list if tags else []
  prefix: ''
  separator: ''
- key: security_group
  prefix: ''
  separator: ''
- key: resource_group
  prefix: ''
  separator: ''
- key: os_disk.operating_system_type
  prefix: ''
  separator: ''
- key: dict(tags.keys() | map("regex_replace", "^(.*)$", "\1_") | list | zip(tags.values() | list)) if tags else []
  prefix: ''
  separator: ''
plain_host_names: true
plugin: azure.azcollection.azure_rm
use_contrib_script_compatible_sanitization: true

VMware vCenter

compose:
  ansible_host: guest.ipAddress
  ansible_ssh_host: guest.ipAddress
  ansible_uuid: 99999999 | random | to_uuid
  availablefield: availableField
  configissue: configIssue
  configstatus: configStatus
  customvalue: customValue
  effectiverole: effectiveRole
  guestheartbeatstatus: guestHeartbeatStatus
  layoutex: layoutEx
  overallstatus: overallStatus
  parentvapp: parentVApp
  recenttask: recentTask
  resourcepool: resourcePool
  rootsnapshot: rootSnapshot
  triggeredalarmstate: triggeredAlarmState
filters:
- runtime.powerState == "poweredOn"
keyed_groups:
- key: config.guestId
  prefix: ''
  separator: ''
- key: '"templates" if config.template else "guests"'
  prefix: ''
  separator: ''
plugin: community.vmware.vmware_vm_inventory
properties:
- availableField
- configIssue
- configStatus
- customValue
- datastore
- effectiveRole
- guestHeartbeatStatus
- layout
- layoutEx
- name
- network
- overallStatus
- parentVApp
- permission
- recentTask
- resourcePool
- rootSnapshot
- snapshot
- triggeredAlarmState
- value
- capability
- config
- guest
- runtime
- storage
- summary
strict: false
with_nested_properties: true

Red Hat Satellite 6

group_prefix: foreman_
keyed_groups:
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')
  prefix: foreman_environment_
  separator: ''
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_location_
  separator: ''
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_organization_
  separator: ''
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_lifecycle_environment_
  separator: ''
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_content_view_
  separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
validate_certs: false
want_facts: true
want_hostcollections: false
want_params: true

OpenStack

expand_hostvars: true
fail_on_errors: true
inventory_hostname: uuid
plugin: openstack.cloud.openstack

Red Hat Virtualization

compose:
  ansible_host: (devices.values() | list)[0][0] if devices else None
keyed_groups:
- key: cluster
  prefix: cluster
  separator: _
- key: status
  prefix: status
  separator: _
- key: tags
  prefix: tag
  separator: _
ovirt_hostname_preference:
- name
- fqdn
ovirt_insecure: false
plugin: ovirt.ovirt.ovirt

Red Hat Ansible Automation Platform

include_metadata: true
inventory_id: <inventory_id or url_quoted_named_url>
plugin: awx.awx.tower
validate_certs: <true or false>

Setting up automation mesh

Configure the Ansible Automation Platform installer to set up automation mesh for your Ansible environment. Perform additional tasks to customize your installation, such as importing a Certificate Authority (CA) certificate.

automation mesh Installation

You use the Ansible Automation Platform installation program to set up automation mesh or to upgrade to automation mesh. To provide Ansible Automation Platform with details about the nodes, groups, and peer relationships in your mesh network, you define them in the inventory file in the installer bundle.

Importing a Certificate Authority (CA) certificate

A Certificate Authority (CA) verifies and signs individual node certificates in an automation mesh environment. You can provide your own CA by specifying the path to the certificate and the private RSA key file in the inventory file of your Red Hat Ansible Automation Platform installer.

Note
The Ansible Automation Platform installation program generates a CA if you do not provide one.
Procedure
  1. Open the inventory file for editing.

  2. Add the mesh_ca_keyfile variable and specify the full path to the private RSA key (.key).

  3. Add the mesh_ca_certfile variable and specify the full path to the CA certificate file (.crt).

  4. Save the changes to the inventory file.

Example
[all:vars]
mesh_ca_keyfile=/tmp/<mesh_CA>.key
mesh_ca_certfile=/tmp/<mesh_CA>.crt

With the CA files added to the inventory file, run the installation program to apply the CA. This process copies the CA to the /etc/receptor/tls/ca/ directory on each control and execution node in your mesh network.
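
To confirm which CA was distributed, you can inspect the certificate that the installer copied to a control or execution node; the exact file name under /etc/receptor/tls/ca/ can vary, so the path below is a placeholder:

$ sudo openssl x509 -in /etc/receptor/tls/ca/<ca_certificate_file> -noout -subject -enddate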

Firewall policy management with Ansible security automation

As a security operator, you can use Ansible security automation to manage multiple firewall policies. Create and delete firewall rules to block or unblock a source IP address from accessing a destination IP address.

About firewall policy management

An organization’s network firewall is the first line of defense against an attack and a vital component for maintaining a secure environment. As a security operator, you construct and manage secure networks to ensure that your firewall only allows inbound and outbound network traffic defined by your organization’s firewall policies. A firewall policy consists of security rules that protect the network against harmful incoming and outgoing traffic.

Managing multiple firewall rules across various products and vendors can be both challenging and time consuming for security teams. Manual workflow processes that involve complex tasks can result in errors and ultimately cause delays in investigating an application’s suspicious behavior or stopping an ongoing attack on a server. When every solution in a security portfolio is automated through the same language, both security analysts and operators can perform a series of actions across various products in a fraction of the time. This automated process maximizes the overall efficiency of the security team.

Ansible security automation interacts with a wide variety of security technologies from a range of vendors. Ansible enables security teams to manage different products, interfaces, and workflows in a unified way to produce a successful deployment. For example, your security team can automate tasks such as blocking and unblocking IP and URLs on supported technologies such as enterprise firewalls.

Automate firewall rules

Ansible security automation enables you to automate firewall policies that require a series of actions across various products. You can use an Ansible role, such as the acl_manager role, to manage your Access Control Lists (ACLs) for many firewall devices, for example to block or unblock an IP address or URL. Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content in roles, you can easily reuse them and share them with other users.

The lab environment below is a simplified example of a real-world enterprise security architecture, which can be more complex and include additional vendor-specific tools. This is a typical incident response scenario where you receive an intrusion alert and immediately execute a playbook with the acl_manager role that blocks the attacker’s IP address.

Your entire team can use Ansible security automation to address investigations, threat hunting, and incident response all on one platform. Red Hat Ansible Automation Platform provides you with certified content collections that are easy to consume and reuse within your security team.

Simplified security lab environment
Additional resources

For more information on Ansible roles, see roles on docs.ansible.com.

Creating a new firewall rule

Use the acl_manager role to create a new firewall rule for blocking a source IP address from accessing a destination IP address.

Prerequisites
  • You have installed Ansible 2.9 or later

  • You have access to the Check Point Management server to enforce the new policies

Procedure
  1. Install the acl_manager role using the ansible-galaxy command.

    $ ansible-galaxy install ansible_security.acl_manager
  2. Create a new playbook and set the following parameters: for example, the source object, the destination object, the access rule between the two objects, and the actual firewall you are managing, such as Check Point:

    - name: block IP address
      hosts: checkpoint
      connection: httpapi
    
      tasks:
        - include_role:
            name: acl_manager
            tasks_from: block_ip
          vars:
            source_ip: 172.17.13.98
            destination_ip: 192.168.0.10
            ansible_network_os: checkpoint
  3. Run the playbook $ ansible-navigator run --ee false <playbook.yml>.

    Playbook with new firewall rule
Verification

You have created a new firewall rule that blocks a source IP address from accessing a destination IP address. Access the MGMT server and verify that the new security policy has been created.

Additional resources

For more information on installing roles, see Installing roles from Galaxy.

Deleting a firewall rule

Use the acl_manager role to delete a security rule.

Prerequisites
  • You have installed Ansible 2.9 or later

  • You have access to the firewall MGMT servers to enforce the new policies

Procedure
  1. Install the acl_manager role using the ansible-galaxy command:

    $ ansible-galaxy install ansible_security.acl_manager
  2. Using the CLI, create a new playbook with the acl_manager role and set the parameters, for example, the source object, the destination object, and the access rule between the two objects:

    - name: delete block list entry
      hosts: checkpoint
      connection: httpapi

      tasks:
        - include_role:
            name: acl_manager
            tasks_from: unblock_ip
          vars:
            source_ip: 192.168.0.10
            destination_ip: 192.168.0.11
            ansible_network_os: checkpoint
  3. Run the playbook $ ansible-navigator run --ee false <playbook.yml>:

    Playbook with deleted firewall rule
Verification

You have deleted the firewall rule. Access the MGMT server and verify that the new security policy has been removed.

Additional resources

For more information on installing roles, see Installing roles from Galaxy.

Automating Network Intrusion Detection and Prevention Systems (IDPS) with Ansible

You can use Ansible to automate your Intrusion Detection and Prevention System (IDPS). For the purpose of this guide, we use Snort as the IDPS. Use Ansible automation hub to consume content collections, such as tasks, roles, and modules to create automated workflows.

Requirements and prerequisites

Before you begin automating your IDPS with Ansible, ensure that you have the proper installations and configurations necessary to successfully manage your IDPS.

  • You have installed Ansible 2.9 or later.

  • SSH connection and keys are configured.

  • IDPS software (Snort) is installed and configured.

  • You have access to the IDPS server (Snort) to enforce new policies.

Verifying your IDPS installation

To verify that Snort has been configured successfully, call it via sudo and ask for the version:

  $ sudo snort --version

   ,,_     -*> Snort! <*-
  o"  )~   Version 2.9.13 GRE (Build 15013)
  ""    By Martin Roesch & The Snort Team: http://www.snort.org/contact#team
        Copyright (C) 2014-2019 Cisco and/or its affiliates. All rights reserved.
        Copyright (C) 1998-2013 Sourcefire, Inc., et al.
        Using libpcap version 1.5.3
        Using PCRE version: 8.32 2012-11-30
        Using ZLIB version: 1.2.7

Verify that the service is actively running via sudo systemctl:

$ sudo systemctl status snort
● snort.service - Snort service
   Loaded: loaded (/etc/systemd/system/snort.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-08-26 17:06:10 UTC; 1s ago
  Main PID: 17217 (snort)
   CGroup: /system.slice/snort.service
           └─17217 /usr/sbin/snort -u root -g root -c /etc/snort/snort.conf -i eth0 -p -R 1 --pid-path=/var/run/snort --no-interface-pidfile --nolock-pidfile
[...]

If the Snort service is not actively running, restart it with systemctl restart snort and recheck the status.

Once you confirm the service is actively running, exit the Snort server by simultaneously pressing CTRL and D, or by typing exit on the command line. All further interaction will be done through Ansible from the Ansible control host.

Automating your IDPS rules with Ansible

To automate your IDPS, use the ids_rule role to create and change Snort rules. Snort uses a rule-based language that analyzes your network traffic and compares it against the given rule set.

The following lab environment demonstrates what an Ansible security automation integration would look like. A machine called “Attacker” simulates a potential attack pattern on the target machine on which the IDPS is running.

Keep in mind that a real world setup will feature other vendors and technologies.

Sample Ansible security automation integration

Creating a new IDPS rule

Use the ids_rule role to manage your rules and signatures for IDPS. For example, you can set a new rule that looks for a certain pattern aligning with a previous attack on your firewall.

Note

Currently, the ids_rule role only supports Snort IDPS.

Prerequisites
  • You need root privileges to make any changes on the Snort server.

Procedure
  1. Install the ids_rule role using the ansible-galaxy command:

    $ ansible-galaxy install ansible_security.ids_rule
  2. Create a new playbook file titled add_snort_rule.yml. Set the following parameters:

    - name: Add Snort rule
      hosts: snort
  3. Add the become flag to ensure that Ansible handles privilege escalation.

    - name: Add Snort rule
      hosts: snort
      become: true
  4. Specify the name of your IDPS provider by adding the following variables:

    - name: Add Snort rule
      hosts: snort
      become: true
    
      vars:
        ids_provider: snort
  5. Add the following tasks and task-specific variables, such as the rule, the Snort rules file, and the state of the rule (present or absent), to the playbook:

    - name: Add Snort rule
      hosts: snort
      become: true
    
      vars:
        ids_provider: snort
    
      tasks:
        -  name: Add snort password attack rule
           include_role:
             name: "ansible_security.ids_rule"
           vars:
             ids_rule: 'alert tcp any any -> any any (msg:"Attempted /etc/passwd Attack"; uricontent:"/etc/passwd"; classtype:attempted-user; sid:99000004; priority:1; rev:1;)'
             ids_rules_file: '/etc/snort/rules/local.rules'
             ids_rule_state: present

    Tasks are components that make changes on the target machine. Since you are using a role that defines these tasks, the include_role is the only entry you need.

    The ids_rules_file variable specifies a defined location for the local.rules file, while the ids_rule_state variable indicates that the rule should be created if it does not already exist.

  6. Run the playbook by executing the following command:

    $ ansible-navigator run add_snort_rule.yml --mode stdout

    Once you run the playbook, all of the tasks are executed and your new rule is created. The playbook output shows the PLAY, TASK, RUNNING HANDLER, and PLAY RECAP sections.

Verification

To verify that your IDPS rules were successfully created, SSH to the Snort server and view the content of the /etc/snort/rules/local.rules file.
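
For example, the rule added by the playbook above uses Snort ID (sid) 99000004, so a quick check on the Snort server might be:

$ sudo grep 'sid:99000004' /etc/snort/rules/local.rules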