Source: Red Hat Blog
In a previous article, I demonstrated how to configure the Automatic Certificate Management Environment (ACME) feature included in the Identity Management (IdM) Dogtag Certificate Authority (CA). Specifically, I covered the installation of IdM with random serial numbers and how to enable the ACME service and expired certificate pruning. This article explains the management of ACME (currently a technology preview) with IdM and Red Hat Enterprise Linux (RHEL) clients. Currently, mod_md is the only ACME client implementation completely supported and provided by Red Hat. For this article,
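As a minimal sketch of the server-side steps (assuming an IdM deployment where the Dogtag CA is already installed, and noting that exact option names may vary by RHEL release), the ACME service is managed with the ipa-acme-manage utility on an IdM CA server:

    # Enable the ACME responder on this IdM CA server (technology preview)
    ipa-acme-manage enable

    # Confirm whether ACME is currently enabled
    ipa-acme-manage status

    # Turn on pruning of expired certificates (the exact pruning options are an
    # assumption here; check ipa-acme-manage(1) on your release)
    ipa-acme-manage pruning --enable

Client-side enrollment with mod_md is configured separately, in the Apache HTTP Server configuration of each RHEL client.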
Red Hat Advanced Cluster Security for Kubernetes and Red Hat Advanced Cluster Security for Kubernetes Cloud Service versions 4.6 are now available. This update lays the foundation for a future based on policy as code and improves the UI to make it easier for users to find what they need. The significant changes in this version can be found here, but the highlights are:

- Violations Management UX improvements
- ACS Scanner v4 adopts Red Hat CSAF/VEX
- NVD CVSS scores for all CVEs (when available)
- Compliance reporting
- ACSCS PCI DSS 4.0.0 compliance
- Red Hat Advanced Cluster Management for Kubernetes GlobalHub
It’s easy to tick the checkboxes on a compliance checklist with the mindset that your system is protected and not exposed to risk. If it were this simple, why would we continue to invest billions of dollars in developing security controls and software development lifecycle (SDL) practices that help harden software and minimize risk? What is the value in configuring services, tuning firewalls, and enforcing access policies, only to accept a risk rating for a vulnerability mapped directly to a base score that seemingly ignores all of that work? This contradictory model of focusing on security features
This is the first in a series of articles in which we will share how confidential computing (a set of hardware and software technologies designed to protect data in use) can be integrated into the Red Hat OpenShift cluster. Our goal is to enhance data security so that all data processed by workloads running on OpenShift can remain confidential at every stage. In this article, we focus on the public cloud and examine how confidential computing with OpenShift can effectively address the trust issues associated with cloud environments. Confidential computing removes some of the barriers that high
As organizations look to modernize their applications, they are also looking for a more secure and easier-to-use application platform. Along with this move toward modernization, there is a noticeable shift away from managing long-lived credentials in favor of short-term, limited-privilege mechanisms that do not require active management. This has led to the rapid adoption of managed identities in Microsoft Azure, and our customers expect the same from their application platforms, such as Azure Red Hat OpenShift (ARO) – a fully managed, turnkey application platform that allows organizations to
Now that large language models (LLMs) and LLM systems are flourishing, it’s important to reflect on their security, the risks affecting them, and the security controls that reduce those risks to acceptable levels. First of all, let’s differentiate between LLMs and LLM systems. This difference is key when analyzing the risks and the countermeasures that need to be applied. An LLM is an algorithm designed to analyze data, identify patterns, and make predictions based on that data. An LLM system is a piece of software composed of artificial intelligence (AI) components, which includes an LLM along
Security is important in enterprise scenarios, where core business applications need to run seamlessly but are often connected to the external world, where they are vulnerable to attack. Malware, unauthorized access to files, and execution of unverified code are just some examples of how system security can be compromised, not only through the exploitation of known bugs and vulnerabilities but also through the lack of appropriate countermeasures. Red Hat Enterprise Linux (RHEL) can help, as it provides tools and services that natively support the process of system hardening to help make your system more secure
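As one illustrative starting point (assuming a RHEL 9 system and the scap-security-guide content; profile IDs and datastream paths differ between releases), an OpenSCAP scan can show how far a system is from a chosen hardening baseline:

    # Install the scanner and the SCAP Security Guide content
    dnf install -y openscap-scanner scap-security-guide

    # Evaluate the system against the CIS profile and write an HTML report
    oscap xccdf eval \
        --profile xccdf_org.ssgproject.content_profile_cis \
        --report /tmp/hardening-report.html \
        /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml

The resulting report lists failed rules along with remediation guidance that can then be applied selectively.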
As the use of artificial intelligence (AI) workloads across the industry becomes ubiquitous, the risks of using AI models are also increasing, with new unauthorized personas potentially accessing those models. AI models are now key assets for organizations, requiring large investments in training and inferencing, which largely rely on accelerated computing (GPUs). When we talk about protecting those models in Kubernetes environments, we look at protecting data at rest (storage), data in transit (networking), and data in use. Our focus here will be on data in use, by leveraging confidential computing
OpenSSL is a popular cryptographic toolkit with more than 20 years of history. For a long time, the only way to extend it was by using an "engine", which defines how a cryptographic algorithm is computed. Engines could include hardware devices and even new algorithms not included in the main library, but as OpenSSL evolved it became evident that the engine API was limiting. A new pluggable system, called a "provider", was introduced.

What is a provider?

A provider, in OpenSSL terms, is a unit of code that provides one or more implementations of cryptographic operations, making new algorithms available
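As a quick illustration (assuming OpenSSL 3.x, where providers replaced engines as the extension mechanism), the openssl command line can show which providers are loaded and pull in additional ones for a single invocation:

    # List the providers loaded in the default library context
    openssl list -providers

    # Load the legacy provider alongside the default one for this command only,
    # making older algorithms (for example MD4) visible again
    openssl list -digest-algorithms -provider legacy -provider default

Providers can also be activated persistently through the provider sections of openssl.cnf rather than per command.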
When using the public cloud, there are always challenges that need to be overcome. Organizations lose some control over how security is handled and who can access the elements that, in most cases, are the core of the company's business. Additionally, some of those elements are governed by local laws and regulations. This is especially true in the Financial Services and Insurance industry (FSI), where regulations are gradually increasing in scope. For example, in the EU, the emerging Digital Operational Resilience Act (DORA) now includes the protection and handling of data while it is ex