What Is CVE? Common Vulnerabilities and Exposures Explained (https://s7280.pcdn.co/cve-common-vulnerabilities-exposures/)

Common Vulnerabilities and Exposures, often known simply as CVE, is a list of publicly disclosed computer system security flaws. CVE is a public resource that is free for download and use. This list helps IT teams prioritize their security efforts, share information, and proactively address areas of exposure or vulnerability. Doing so makes systems and networks more secure and helps to prevent damaging cyberattacks. A basic understanding of what the CVE Project is and how CVE works can help organizations better take advantage of and contribute to this resource.

(This article is part of our Security & Compliance Guide.)

The background of CVE

CVE was created in 1999, at a time when most cybersecurity tools used their own databases and their own names for vulnerabilities. Because the available products varied so widely, it was hard to figure out when different databases were referring to the same issue. This led to gaps in security coverage and made it difficult to build any real interoperability between different databases and tools.

To address these issues, CVE was developed to provide common, standardized identifiers. This made it possible for IT professionals to share information about vulnerabilities and work together to identify and resolve them. As a result, CVE has become the industry standard for identifying vulnerabilities and exposures, and it is supported by the CVE Numbering Authorities, the CVE Board, and many industry-leading products and services.

At its core, CVE provides reference points so that different products and services can communicate. This leads to interoperability and better security coverage. Further, it creates a basis for evaluating services, tools, and databases.

The CVE is maintained by the MITRE Corporation, a non-profit organization that manages federally funded research and development centers supporting U.S. government agencies. MITRE is responsible for maintaining the CVE dictionary and public website. The project is funded by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).

Who leads CVE efforts

Much of the success of the CVE Project’s efforts has come from the fact that it has been a collaborative effort by the international cybersecurity community. This has enabled the list to be comprehensive, which, in turn, has led to more people using services and products that are compatible with CVE. The key players making contributions to the CVE are the CVE Numbering Authority, the CVE Board, and the CVE Sponsor.

The CVE Numbering Authority (CNA) assigns CVE identification numbers. CNAs are given a block of CVE numbers to hold in reserve and to assign as issues are discovered. There are generally about 100 CNAs, and this group includes vulnerability researchers; vendors and projects; national and industry CERTs; and bug bounty programs.

The CVE Board is tasked with ensuring that the CVE Program meets the global cybersecurity community’s vulnerability identification needs. It oversees the CVE, provides input about the CVE strategic direction, and advocates on behalf of the CVE. The CVE Board includes cyber-security organizations, commercial security tool vendors, members of academia and research institutions, members of government departments and agencies, and security experts.

The CVE Sponsor is the United States Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), which is responsible for funding the CVE Project.

The basics of CVE in cybersecurity

CVE consists of a list of entries, each of which has an identification number, a description, and a public reference. Each CVE lists a specific vulnerability or exposure. Per the CVE site, a vulnerability is defined as a mistake in software code that gives attackers direct access to a system or network. This type of access allows an attacker to become a super-user or system administrator with full privileges. In contrast, an exposure is a mistake that gives an attacker indirect access to a system or network. This type of access allows an attacker to collect customer information to sell.

Broadly speaking, the CVE Project creates a system for identifying and organizing vulnerabilities and exposures. The first step for creating a CVE listing is identifying a vulnerability or exposure. Next, the vulnerability will be assigned a CVE identification number by the CNA. The CNA then writes a description of the issue and provides references. Finally, the completed CVE entry is added to the CVE list and posted to the CVE website.

CVE offers a single, unique identifier for each specific exposure or vulnerability. It’s worth noting that it’s more like a dictionary than a database. The description for each entry is brief and does not include technical data, information about specific impacts, or information about fixes. Instead, that information is found in other databases, for example, the U.S. National Vulnerability Database (NVD) or CERT/CC Vulnerability Notes Database.

Understanding CVE identifiers

When referring to CVE, people usually refer to a specific identification number. These common identifiers, referred to as CVEs, CVE IDs, or CVE numbers, allow for consistency when discussing or sharing information about specific vulnerabilities. CVE identifiers can be issued by CNAs or directly by MITRE. Thousands of CVE IDs are assigned each year, and a single complex project, like an operating system, can have hundreds of CVEs.
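
CVE identifiers follow a simple, predictable format: the literal prefix CVE, the year the ID was assigned, and a sequence number of four or more digits (for example, CVE-2021-44228). As a small illustration, here is a minimal Python sketch that validates and splits an ID; the helper name is ours, not part of any official CVE tooling.

import re

# CVE IDs look like CVE-YYYY-NNNN..., where the sequence number has 4 or more digits.
CVE_PATTERN = re.compile(r"^CVE-(?P<year>\d{4})-(?P<sequence>\d{4,})$")

def parse_cve_id(cve_id: str) -> dict:
    # Return the year and sequence number of a well-formed CVE ID, or raise ValueError.
    match = CVE_PATTERN.match(cve_id.strip().upper())
    if not match:
        raise ValueError(f"Not a valid CVE identifier: {cve_id!r}")
    return {"year": int(match.group("year")), "sequence": match.group("sequence")}

print(parse_cve_id("CVE-2021-44228"))  # {'year': 2021, 'sequence': '44228'}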

Vulnerabilities or exposures in need of a CVE identifier can be identified by anyone – a researcher, vendor, or even a savvy user. In fact, to encourage the disclosure of flaws, some vendors even offer “bug bounties.” That said, not all flaws are assigned a CVE. To be assigned a CVE ID, the issue must be:

  • Independently fixable, meaning that it can be resolved independently of other bugs
  • Acknowledged by the software or hardware vendor OR documented with a vulnerability report
  • Affecting only one codebase. If a flaw is affecting more than one product, each is given its own CVE ID.

It’s worth noting that, to ensure that information in the CVE list is not exploited by cyberattackers, a CVE ID is sometimes assigned before a public security advisory is issued. To reduce the risk of attacks, newly identified vulnerabilities are often kept secret until a fix has been developed and tested.

Benefits of CVEs

Creating CVEs benefits the cybersecurity community in a number of ways:

  • Standardize identification. By creating a unique identifier for each vulnerability, cybersecurity professionals have a clear and consistent way to track these issues across tools, platforms, and organizations.
  • Better communication. CVEs eliminate the confusion that occurs when people from multiple organizations discuss specific vulnerabilities.
  • Improve interoperability. Because CVEs are widely trusted and used worldwide, they support compatibility across tools.
  • Support collaboration and information sharing. Users and vendors can work together, simplifying the reporting of vulnerabilities, managing patches, and streamlining updates.
  • Encourage proactive security. Organizations can continuously monitor vulnerabilities with access to CVEs and vendors have a clear incentive to disclose and fix issues.
  • Evaluate security coverage. CVEs are useful in studying attack patterns and can be integrated with threat intelligence to help organizations become more aware and more prepared.
  • Prioritize vulnerabilities. Vulnerabilities can be scored using the Common Vulnerability Scoring System (CVSS), so organizations can focus resources on critical risks first.
  • Streamline access to information. With vulnerabilities in a centralized repository, users can search for and find them, reducing the time it takes to get information about threats.
  • Support automation. Users can automate scanning for vulnerabilities and managing patches, saving time and effort.

The future of CVEs

The CVE Project is a great resource for all IT organizations to use. It’s especially important for researchers and product developers to utilize CVE entries and to use products and services that are compatible with CVE. Additionally, it’s important to always be looking for vulnerabilities in software and to share any that your organization finds when using open-source software. Further, it’s key to communicate about vulnerabilities internally and externally to help prevent attacks and to efficiently resolve issues.

While CVE entries are a great resource, it’s key to analyze all entries that apply to products your organization uses. Not all issues apply in all situations, and whenever they are applicable, it’s necessary to conduct vulnerability management in order to prioritize risks. The Common Vulnerability Scoring System (CVSS) is a popular way to determine how severe a vulnerability is and, subsequently, to prioritize cybersecurity efforts. The CVSS provides open standards to assign a number, or rating, to a vulnerability. These numbers range from 0.0 to 10.0, and the higher the number, the greater the severity. Using the CVSS or a similar system is a key aspect of vulnerability management and can help to effectively focus cybersecurity efforts.
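
For illustration, the short Python sketch below maps a CVSS v3.x base score to its qualitative severity band; the band boundaries follow the CVSS v3.x specification, and the example score is arbitrary.

def cvss_severity(score: float) -> str:
    # CVSS v3.x qualitative severity ratings: None, Low, Medium, High, Critical.
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # "Critical" -- a candidate to remediate first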

What Is GRC? Governance, Risk, and Compliance Explained (https://www.bmc.com/blogs/grc-governance-risk-compliance/)

Any organization seeking to meet its business objectives continues to face a myriad of challenges owing to the ever-changing complexity of the business environment:

  • Regulation (e.g., SOX, HIPAA, GDPR, PCI-DSS)
  • People (diversity, millennials, skills gap, etc.)
  • Technology (IoT, AI)
  • Processes
  • Many more aspects

For this reason, there is an increasing need for enterprises to put in place mechanics to ensure that the business can successfully ride the wave of these complexities. GRC—Governance, Risk, and Compliance—is one of the most important elements any organization must put in place to achieve its strategic objectives and meet the needs of stakeholders.

What is GRC?

GRC as an acronym stands for governance, risk, and compliance, but the term GRC means much more than that. The OCEG (formerly known as the “Open Compliance and Ethics Group”) states that the term GRC was first referenced as early as 2003 and was later mentioned in a peer-reviewed paper by its co-founder in 2007.

The OCEG views GRC as a well-coordinated and integrated collection of all the capabilities necessary to support principled performance at every level of the organization. These capabilities include:

  • The work done by internal audit, compliance, risk, legal, finance, IT, HR
  • The work done by the lines of business, the executive suite, and the board itself
  • The outsourced work done by third parties and external stakeholders

Principled Performance refers to a point of view and approach to business that helps organizations reliably achieve objectives while addressing uncertainty and acting with integrity.

GRC Break Down

When broken down, the constituent elements can be defined from ITIL® 4 and explained as follows:

Governance

The means by which an organization is directed and controlled. In GRC, governance is necessary for setting direction (through strategy and policy), monitoring performance and controls, and evaluating outcomes.

Risk

A possible event that could cause harm or loss or make it more difficult to achieve objectives. In GRC, risk management ensures that the organization identifies, analyses, and controls risk that can derail the achievement of strategic objectives.

Compliance

The act of ensuring that a standard or set of guidelines is followed, or that proper, consistent accounting or other practices are being employed. In GRC, compliance ensures that depending on the context, the organization takes measures and implements controls to assure that compliance requirements are met consistently.

Drivers for GRC

Without a doubt, the biggest driver for GRC is regulation. While traditional industries such as banking, insurance, healthcare, and telecoms have borne the brunt of regulation in the past, today’s digital age is fueling a rise in regulation that touches all entities, large or small.

Use of data, particularly personally identifiable information, has huge business potential as well as risk of abuse. Therefore, governments and international agencies are paying closer attention to how digital businesses manage data. The rise in cyber-attacks, which expose personal data, as well as growing awareness by individuals and civil rights organizations, have shed new light on how companies manage information and technology through processes, people, and culture.

Benefits of GRC framework

According to CIO.com, benefits of GRC include:

  • Improved decision-making
  • More optimal IT investments
  • Elimination of silos
  • Reduced fragmentation among divisions and departments

A collective approach is the best bet for any organization seeking to get to grips with the ever-changing regulatory landscape. When GRC is done right across the whole organization, the right people get the right information at the right time, and the right objectives and controls are established, then OCEG states that we can expect reductions in cost, less duplication of effort, and less disruption to operations.

The organization can also benefit through better decision-making agility and confidence, as well as sustained, reliable performance, and delivery of value.

The GRC approach

As has been stated before, GRC is best implemented in a holistic manner that encompasses the entire organization. This does not necessarily mean that an umbrella unit is required for coordination, even though that might work for certain types of entities. The OCEG has defined an open source approach called the GRC Capability Model (also called the Red Book) that integrates the various sub-disciplines of governance, risk, audit, compliance, ethics/culture and IT into a unified approach. The Capability Model is made up of four components:

  • LEARN about the organization context, culture and key stakeholders to inform objectives, strategy and actions.
  • ALIGN strategy with objectives, and actions with strategy, by using effective decision-making that addresses values, opportunities, threats and requirements.
  • PERFORM actions that promote and reward things that are desirable, prevent and remediate things that are undesirable, and detect when something happens as soon as possible.
  • REVIEW the design and operating effectiveness of the strategy and actions, as well as the ongoing appropriateness of objectives to improve the organization.

These components outline an iterative continuous improvement process to achieve principled performance and are further decomposed into elements which are then supported by practices, actions and controls. The actions and controls are classified in three types, from which organizations can select a mix depending on their context:

  • Proactive
  • Detective
  • Responsive

GRC Capability Model – Element View (Source: OCEG Red Book)

GRC use cases

Organizations use GRC to integrate processes and tools to manage risks, meet compliance demands, and serve their own objectives. Here are typical examples of uses:

  • Establishing Policies and Practices
    • A GRC framework helps organizations establish policies and practices to minimize compliance risk.
    • IT and security GRC solutions leverage timely information on data, infrastructure, and applications (virtual, mobile, cloud).
  • Improving Efficiency
    • Centralizing issues into one framework eliminates duplicate efforts.
    • GRC creates a “single source of truth” to provide consistent and up-to-date information to everyone.
  • Streamlining GRC Activities
    • Monitoring compliance, risks, and governance can be automated to reduce manual work.
    • Many tasks can be systematized to save time and reduce errors.
  • Managing Financial and AI-Driven Models
    • GRC guides model development, validation, and use.
    • It makes it easier to catalog and manage all models in use.
    • GRC ensures models are in compliance with applicable regulations.
    • GRC provides guidelines and standards for how organizations can use AI ethically.
  • Risk Assessment and Reduction
    • Organizations can get ahead with prevention, using the framework to identify risks.
    • GRC facilitates creating scenarios to analyze and formulating proactive protections to prevent problems.
  • Support for Companies with Compliance Failures
    • GRC can help organizations track and analyze incidents to identify root causes, and provides an audit trail.
    • The framework helps with impact assessments, incident response, and corrective actions.
    • GRC provides support in case of future failures.
  • Improving Compliance
    • GRC helps organizations identify areas where they are non-compliant and vulnerable.
    • It supports proactive reporting.
    • GRC contributes to creating a culture of compliance.
  • Better Policies and Management
    • Organizations can standardize their policies and apply them consistently.
    • It is easier to respond to regulatory changes quickly, even automatically.
    • Companies can make faster, more informed decisions.

GRC solutions

In order to address the needs of GRC, many organizations are turning to technology solutions. These solutions enable leadership to monitor GRC across the enterprise by ensuring that business processes and information technology continue to align with the organization’s governance, risk, and compliance requirements. Capabilities include:

  • Risk management (logging, analysis, and management)
  • Document management
  • Audit management
  • Reporting
  • Analytics

However, having a GRC tool alone isn’t enough to guarantee effective GRC. Technology doesn’t have ethics—people do. Hence GRC must be addressed from a people and process perspective, even before technology is considered.

That said, technology is a very good enabler in reducing the “compliance” overhead that comes with gathering and managing records required to prove that the organization is meeting GRC requirements, without overburdening employees who should be focused on generating value instead.

Challenges of implementing GRC systems

Despite the many advantages, implementing GRC systems can be difficult for some organizations. To smooth adoption and get full value from the framework, consider how to address issues such as:

  • Change Management
    • Though GRC supports making good decisions in a business environment that is changing quickly, some organizations resist change or lack the agility to act in response to new insights.
  • Data Management
    • GRC makes it possible to break down data silos, so that data from any part of the organization can be shared across the organization. However, eliminating duplicate data and dealing with data management issues are very real challenges.
  • Lack of a Comprehensive GRC Framework
    • Some organizations struggle to seamlessly integrate GRC across business activities, leading to a fragmented GRC framework.
  • Ethical Culture Development
    • The collaborative nature of GRC and the need for sharing and transparency can challenge the status quo of some organizations. Clear leadership and recognition for ethical behavior are keys to establishing a new culture.
  • Inadequate Technology and Other Resources
    • Legacy IT systems may not have the flexibility or the potential to scale, slowing adoption and creating more manual work.
    • Implementation requires an investment of time, talent, and money, which may not be possible in some organizations.
  • Insufficient Training
    • Your organization may not have personnel skilled in GRC implementation and its issues. This can lead to resistance to adopting GRC and to poor implementation.

Additional resources

Explore more on this topic with the BMC Security & Compliance Blog and our Guide to Security & Compliance.

AWS Glue Crawler: A Complete Setup Guide (https://www.bmc.com/blogs/amazon-glue-crawler/)

In this tutorial, we discuss what a crawler is in Amazon Web Services (AWS), and show you how to make your own Amazon Glue crawler.

A fully managed service from Amazon, AWS Glue handles data operations like ETL (extract, transform, load) to get the data prepared and loaded for analytics activities. Glue can crawl S3, DynamoDB, and JDBC data sources.

What is a crawler?

A crawler is an automated software app that “crawls” through the content of web pages, following the links on those pages to find and crawl additional pages. Sometimes also called spiders or bots, these software programs crawl with a purpose. They may crawl through web pages to scan and index content for search engines. They may also extract large sets of data for content aggregation, identifying trends, conducting sentiment analysis, or monitoring feature sets and prices. Internet archiving sites use crawlers to make an historical record of web pages for future reference. Crawlers are also useful for link checking and SEO audits.

What is the AWS Glue crawler?

The AWS Glue crawler is a tool from Amazon that automates how you discover, catalog, and organize data scraped from various sources. For example, the AWS Glue crawler can crawl data stored in Amazon Simple Storage Service (S3), relational databases reachable over JDBC, and other data sources such as DynamoDB and MongoDB.

The Amazon Glue crawler scans and catalogs data to detect changes in the structure of data or schema. It keeps the data catalog up to date and feeds extract, transform, and load (ETL) operations for fast processing and analytics. It also integrates with AWS analytics services. The AWS Glue crawler reads data from sources such as Amazon S3 buckets—scalable, secure, and durable storage containers for large volumes of files and data—and populates the Glue Data Catalog with the accompanying metadata.

Learn more about the innovation of AWS cloud databases.

Understanding the AWS Glue architecture and workflow

The AWS Glue architecture and workflow starts with connecting to and crawling a data store, which could be a bucket like Amazon S3, or a database like Amazon RDS, Amazon Redshift, Amazon DynamoDB, or JDBC.

Next, AWS Glue determines the structure and format of the data using classifiers, which are sets of rules that identify Glue data types, schemas, file formats, and a variety of types of metadata.

Lastly, AWS Glue writes the metadata into a centralized repository called a data catalog. This contains information about the source of the data, the data schema, and other data descriptions.
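
If you prefer to script this workflow rather than click through the console, the sketch below shows roughly how the pieces fit together using boto3, the AWS SDK for Python. The crawler name, IAM role, catalog database, and S3 path are placeholder assumptions, not values from a real account; the console walkthrough that follows achieves the same result interactively.

import boto3

glue = boto3.client("glue", region_name="eu-west-1")

# Point a crawler at an S3 path (the data store) and name the Data Catalog
# database that should receive the table metadata the crawler discovers.
glue.create_crawler(
    Name="movies-crawler",                                    # hypothetical crawler name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",    # placeholder IAM role
    DatabaseName="movies_db",                                 # catalog database to populate
    Targets={"S3Targets": [{"Path": "s3://movieswalker/"}]},  # placeholder S3 path
)

# Run the crawler; when it finishes, the inferred schemas appear as catalog tables.
glue.start_crawler(Name="movies-crawler")
print(glue.get_crawler(Name="movies-crawler")["Crawler"]["State"])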

How to create a crawler in AWS Glue

Now that we understand the key components of the Amazon Glue crawler—from the architecture, data stores, and data catalog, to the interaction between each component—let’s discuss how to create a crawler in AWS Glue.

1. Download sample JSON data

We need some sample data. Because we want to show how to join data in Glue, we need to have two data sets that have a common element.

In our AWS Glue crawler example, we’re using data from IMDB. We have selected a small subset (24 records) of that data and put it into JSON format. (Specifically, they have been formatted to load into DynamoDB, which we will do later.)

One file has the description of a movie or TV series. The other has ratings on that series or movie. Since the data is in two files, it is necessary to join that data in order to get ratings by title. Glue can do that.

Download these two JSON data files:

  • Download title data here.
  • Download ratings data here.

2. Upload the data to Amazon S3

Create the S3 bucket using the Amazon AWS command line client. (Don’t forget to run aws configure to store your private key and secret on your computer so you can access Amazon AWS.)

Below we create the movieswalker bucket and use the titles and ratings prefixes (folders) inside it. The reason for this is that Glue will create a separate table schema for each data set if we keep the data under separate paths.

(Your top-level bucket name must be unique across all of Amazon. That’s an Amazon requirement, since you refer to the bucket by URL. No two customers can have the same URL.)

aws s3 mb s3://movieswalker

(S3 buckets cannot be nested, so there is no need to create the titles and ratings prefixes explicitly; they are created automatically when the files are copied into them in the next step.)

Then copy the title basics and ratings files to their respective prefixes.

aws s3 cp 100.basics.json s3://movieswalker/titles

aws s3 cp 100.ratings.tsv.json s3://movieswalker/ratings

3. Configure the crawler in Glue

Log into the Glue console for your AWS region. (Mine is European West.)

Then go to the crawler screen and add a crawler:

Next, pick a data store. A better name would be data source, since we are pulling data from there and cataloging its metadata in Glue.

Then pick the top-level movieswalker bucket we created above.

Notice that the data store can be S3, DynamoDB, or JDBC.

Then start the crawler. When it’s done you can look at the logs.

If you get the error below, it’s an S3 permissions issue. You can make the objects public just for the purposes of this tutorial if you don’t want to dig into IAM policies. In this case, I got the error because I uploaded the files as the Amazon root user but tried to access them with a user created in IAM.

ERROR : Error Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 16BA170244C85551; S3 Extended Request ID: y/JBUpMqsdtf/vnugyFZp8k/DK2cr2hldoXP2JY19NkD39xiTEFp/R8M+UkdO5X1SjrYXuJOnXA=) retrieving file at s3://movieswalker/100.basics.json. Tables created did not infer schemas from this file.

View the crawler log. Here you can see each step of the process.

4. View tables created in AWS Glue

Here are the tables created in Glue.

If you click on them you can see the schema.

It has these properties. The item of interest to note here is that the table metadata is stored in a Hive-compatible format: the Glue Data Catalog is compatible with the Apache Hive metastore, so Hadoop-ecosystem tools (such as Hive, Spark, Athena, and EMR) can query these tables, while the data itself still lives in S3.
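
You can also read the same table definitions programmatically. Here is a minimal boto3 sketch, assuming the catalog database is named movies_db (a placeholder, as is the region):

import boto3

glue = boto3.client("glue", region_name="eu-west-1")

# List the tables the crawler created and print each inferred column and type.
for table in glue.get_tables(DatabaseName="movies_db")["TableList"]:
    print(table["Name"])
    for column in table["StorageDescriptor"]["Columns"]:
        print(f"  {column['Name']}: {column['Type']}")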

AWS Glue crawler tutorial: Key steps summarized

  1. Upload data to Amazon S3.
  2. Create and configure a crawler with the right access permissions, identifying the data source and path.
  3. Configure the classifiers that your crawler will use to interpret the data structure.
  4. Specify an output location in the data catalog. You are now ready to trigger and run the crawler.

Using AWS Glue is a great way to automate data discovery, querying, and processing. To learn more about other AWS management tools, visit our blog at AWS Management Tools.

Introduction to Enterprise Security (https://www.bmc.com/blogs/enterprise-security/)

The attack surface of any enterprise has expanded significantly in recent years. Traditionally, organizations were responsible for securing data stored on on-premises servers and leveraged state-of-the-art security solutions to protect against cyber-attacks. These threats were usually motivated by financial or political gain. Today, businesses connect technologies to reach a wider user base, collaborate with vendors, and support a distributed workforce in geographically disparate locations—so the risk is higher than ever before.

The growing attack surface requires defense systems that go beyond traditional cybersecurity measures. Let’s take a look at the meaning of enterprise IT security with examples, why enterprise cybersecurity is important, and the measures you can take to enhance yours.

(This article is part of our Security and Compliance Guide.)

What is enterprise security?

Enterprise security includes the strategies, techniques, and processes used to secure information and IT assets against unauthorized access and risks that may infringe on the confidentiality, integrity, or availability of these systems. Building on the traditional cybersecurity premise of protecting digital assets on the local front, enterprise-level security extends to the security of data in transit across the connected network, servers, and end users.

Enterprise cybersecurity systems encompass the technology, people, and processes involved in maintaining a secure environment for digital assets. Because it encompasses the enterprise, this security has additional focus on the legal and cultural requirements of securing data assets that belong to an organization’s user base.

Why is enterprise IT security important?

Enterprise IT security is crucial no matter what size an organization is. No organization is immune to the threats that are growing in number and variety. Cybercrimes involve disrupting operations, theft, data breaches, ransomware, and more. Consider these examples of enterprise security threats:

  • Email is the top vector for cybersecurity threats, playing a critical role in social engineering and breaching systems.
  • Various types of phishing attacks, in which attackers pose as a legitimate organization through seemingly legitimate communications, account for 74% of exploits.
  • Malware attacks are on the rise, costing businesses on average $4.91 million.
  • DDoS attacks can disrupt enterprise websites, affecting public- and private-facing content.
  • On top of the costs of a cyberattack are the costs of compliance, with global regulations and the fines levied for companies that suffer a cyberattack while not being in compliance.

Protecting sensitive data, ensuring business continuity, staying compliant with laws and regulations, and maintaining consumer trust require strong enterprise IT security. Technology plays a role, but just as important are processes and the education of workers about the risks and their role in protecting enterprise IT.

Measures taken to enhance enterprise cybersecurity

The following trends are forcing organizations to enhance their cybersecurity solutions for enterprise:

The cloud

The proliferation of cloud computing has enabled organizations of all sizes to take advantage of high-end and scalable hardware resources as an operating expense. As a result, they have been able to expand their business across global markets—but that comes with a significant caveat: a vast volume of data now sits outside the company’s own infrastructure, yet companies remain responsible for securing those data assets.

The trouble with cloud-stored data is significant. The data is not locally hosted. Enterprises do not control the cloud computing resources that store their data. With limited visibility and control into cloud hardware, they must rely on cloud vendors for their first line of defense in enterprise information security.

The IoT

The influx of connected devices, the Internet of Things (IoT), allows businesses to extend their service offerings and achieve operational excellence. IoT has enabled organizations to automate manual processes, reduce human error, and pursue new business models.

The growing ecosystem of IoT networks also brings key enterprise web security challenges:

  • The number of potentially vulnerable devices connecting to the corporate network has increased dramatically.
  • Attackers now have more pathways to exploit, as most IoT devices can offer limited security defense at the physical layer of network endpoints.

The drive for data

More data means more insights. Organizations rely on insightful information to deliver the right services to the right customer. Computing resources and data intelligence solutions are widely available and affordable. End users are willing to share some personal information in exchange for a useful service. This brings tremendous opportunities for enterprises to produce data-driven products and business strategies that guarantee high returns on investments.

At the same time, these companies are responsible for securing user information that must be leveraged only for the allowed purposes and within ethical bounds of the modern digital world, showing a need for enterprise information security measures.

Privacy awareness and regulations

Governments around the world have recognized the need for stringent privacy regulations in response to growing cyber-security risks to end-users. In 2013, all 3 billion Yahoo user accounts were hacked, eventually resulting in a data breach settlement of $117.5 million. More importantly, the company has since lost billions of dollars in market cap as internet users have largely adopted alternatives. The lost brand reputation has been irrecoverable and was caused by a large-scale data leak that took place long before it was discovered and made public.

More recently, governments have enforced compliance measures that force businesses to reshape and enhance their enterprise security capabilities, along with heavy fines for compliance failure. GDPR in the EU is a prime example.

Best practices for enterprise security systems

Enterprise security therefore involves security measures across all aspects of the organization. It ranges from backend cloud networks to IoT endpoints at the network edge. It is driven by the proliferation of data-intensive business operations and services, and heavily mandated by stringent global regulations. Internet users are increasingly privacy-aware and keep their distance from organizations that fail to guarantee the security of their personal information.

The threats come both from within the enterprise, such as human error or disgruntled employees, and from external cyber-attackers. The following best practices can help your organization improve security capabilities across all these fronts:

  • Protect the data at rest and in transit. Identify data assets that must be encrypted and develop a security strategy around them. Encryption should scale across your network and secure data workloads in dynamic and distributed cloud environments. Monitor the performance of your encryption implementations.
  • Establish strong Identity and Access Management controls. Use the principle of least privilege that allows users only the limited necessary access to perform their job. Limiting user access reduces the risk of data leaks and network intrusions via human error or malicious intent.
  • Enact a strong disaster recovery and risk mitigation plan. A well-defined plan should include responsibilities and workflows for orderly and successful disaster recovery protocols. Update this plan regularly to combat growing cyber threats and changing workforce landscapes.
  • Educate your employees on cybersecurity measures. The workforce can behave as a strong first line of defense against cyber threats that target the human element. On the other hand, employees lacking security awareness can serve as weak links in the security chain that’s otherwise equipped with advanced enterprise cybersecurity solutions.
  • Manage endpoint security with technologies that monitor network performance continuously for anomalous data traffic. Ensure that IoT devices are properly configured and operate on up to date firmware.
  • Involve senior management in developing the enterprise security strategy. Cyber threats should not be treated as or relegated to an “IT only” domain—they are a business problem that must become a business activity. Security expertise should span the executive level where the necessary risk management decisions must take place. The board and executive management should understand the legal, financial, cultural, and technology-related implications of their enterprise security systems decisions, and consider outsourcing complex enterprise security services such as mainframe security services.
Streamlining BMC AMI Data in CI/CD Pipelines with Jenkins, Azure DevOps, and GitHub Actions (https://www.bmc.com/blogs/streamlining-bmc-ami-data-in-ci-cd-pipelines-with-jenkins-azure-devops-github-actions/)

Delivering fast, reliable, and high-quality applications begins with a strong foundation in database management. Tasks such as SQL performance optimization and schema updates have traditionally been handled by database administrators (DBAs), creating bottlenecks that slow down development cycles. But what if developers could manage these tasks, resolving issues earlier and accelerating delivery?

BMC AMI SQL Performance for Db2® and BMC AMI DevOps for Db2® are purpose-built solutions that enable this shift. Developers can tackle critical database tasks by automating SQL performance testing and schema management while ensuring they meet performance and reliability standards. This approach allows teams to “shift left” by addressing potential issues early in the pipeline, promoting faster and more efficient development cycles.

Seamless integration into CI/CD pipelines

To maximize their effectiveness, BMC AMI SQL Performance for Db2® and BMC AMI DevOps for Db2® integrate seamlessly with popular continuous integration and continuous delivery (CI/CD) tools like Jenkins, Azure DevOps, and GitHub Actions, with GitLab support arriving in January 2025 and more to come. These integrations let teams embed database testing, validation, and deployment tasks directly into their development workflows, allowing developers to harness the full capabilities of BMC AMI Data and align database tasks with application delivery for a smoother, fully streamlined process. For example, these AMI Data solutions can be leveraged for DevOps purposes within these tools in a variety of ways, including but not limited to:

  • Jenkins: Automates SQL performance testing and schema validation through a dedicated plugin.
  • Azure DevOps: Employs a universal connector to include database updates within DevOps workflows.
  • GitHub Actions: Incorporates database testing and deployment steps into GitHub-based pipelines.

BMC AMI DevX: Supercharging database and DevOps orchestration

BMC AMI DevX can also act as an orchestration engine, unifying BMC AMI Data solutions within CI/CD platforms to automate database and DevOps tasks in a single pipeline. DevX equips developers with powerful tools, including the shift-left capable BMC AMI SQL Performance for Db2® and BMC AMI DevOps for Db2®, enabling them to take on tasks traditionally handled by DBAs, such as identifying and resolving SQL performance issues or validating and deploying schema changes. By automating these steps earlier in the process, DevX accelerates workflows, fosters seamless collaboration between developers and DBAs, saves time, boosts efficiency, and delivers high-quality outcomes.

Real-world examples

1. Automating SQL performance testing:

With BMC AMI Data, a Jenkins pipeline can include automated SQL performance testing, allowing developers to identify inefficient queries early in the development cycle—without waiting for a DBA. By addressing performance issues sooner, teams can avoid costly delays and ensure their applications run smoothly in production.

2. Managing schema changes:

In an Azure DevOps pipeline, BMC AMI Data simplifies schema management by automatically validating and deploying updates while ensuring all dependencies are met. Developers can independently manage schema changes, reducing the need for DBA involvement and eliminating bottlenecks.

3. Streamlining with GitHub Actions:

Utilizing GitHub Actions, teams can embed SQL performance testing and schema validation steps into their workflows directly within their GitHub repository. By storing schema files, SQL scripts, and other database-related resources in a Git repository, GitHub Actions can automate these processes to ensure database changes are monitored, validated, and optimized alongside application updates. This integration creates a unified pipeline, saving time and minimizing risk by streamlining version control and automated checks.

Empowering developers, simplifying collaboration

BMC AMI Data and its integrations with Jenkins, Azure DevOps, and GitHub Actions shift database management tasks left, giving control to developers. With BMC AMI DevX as the orchestration engine, developers can collaborate with DBAs earlier, resolve issues faster, and streamline the entire database and DevOps lifecycle.

Conclusion

BMC AMI Data redefines how database and DevOps workflows operate by automating SQL performance testing and schema management. With seamless integration into CI/CD pipelines and orchestration from BMC AMI DevX, teams can break down silos, accelerate development, and deliver more reliable applications.

Ready to revolutionize your database and DevOps processes? Explore how BMC AMI solutions can transform your workflows today.

Learn more in this e-book: Driving down database development dollars

Lewin’s 3 Stage Model of Change Explained (https://www.bmc.com/blogs/lewin-three-stage-model-change/)

Change behavior—how humans accept, embrace, and perform change—is the core of modern change management. ITSM frameworks incorporate various approaches to change management, but one started it all: Kurt Lewin’s 3 Stage Model of Change.

Lewin’s Model of Change

Lewin’s model was developed by Kurt Lewin, a pioneering psychologist in the field of social and organizational psychology. The model is based on three steps, or stages, that together make up the change process. These are:

  • Unfreeze – Prepare an organization for coming change by communicating why change is needed, ideally breaking down mindsets and behaviors that resist change.
  • Change or Transition – During the actual change, organizations implement new behaviors, processes, and beliefs.
  • Refreeze – Once the change is done, an organization solidifies it to reinforce new behaviors and processes, rewarding new approaches and preventing a return to the old.

Though initially popular, Lewin’s model is criticized in current ITSM thinking for being too simplistic and abstract to manage change in a practical way. In today’s fast-moving, complex, and dynamic landscape of enterprise IT, the three-step model provides limited actionable guidance.

Still, understanding these steps provides an essential view into change management, so let’s have a look.



Understanding Lewin’s Change Model

A leader in change management, Kurt Lewin was a German-American social psychologist in the early 20th century. Among the first to research group dynamics and organizational development, Lewin developed the 3 Stage Model of Change in order to evaluate two areas:

  • The change process in organizational environments
  • How the status-quo could be challenged to realize effective changes

Lewin proposed that the behavior of any individual in response to a proposed change is a function of group behavior. Any interaction or force affecting the group structure also affects the individual’s behavior and capacity to change. Therefore, the group environment, or ‘field’, must be considered in the change process.

The 3 Stage Model of Change describes status-quo as the present situation, but a change process—a proposed change—should then evolve into a future desired state. To understand group behavior, and hence the behavior of individual group members during the change process, we must evaluate the totality and complexity of the field. This is also known as Field Theory, which is widely used to develop change models including Lewin’s 3 Stage Model.

Lewin’s 3 Stages of Change

Let’s look at how Lewin’s three-step model describes the nature of change, its implementation, and common challenges:

Step 1: Unfreeze

Lewin identifies human behavior, with respect to change, as a quasi-stationary equilibrium state. This state is a mindset, a mental and physical capacity that can almost, but never completely, be reached, so the mind can keep evolving without ever fully attaining it. For example, a contagious disease can spread rapidly in a population and resist initial measures to contain the escalation. Eventually, through medical advancement, the disease can be treated and virtually disappear from the population.

Lewin argues that change follows similar resistance, but group forces (the field) prevent individuals from embracing this change. Therefore, we must agitate the equilibrium state in order to instigate a behavior that is open to change. Lewin suggests that an emotional stir-up may disturb the group dynamics and forces associated with self-righteousness among the individual group members. Certainly, there are a variety of ways to shake up the present status-quo, and you’ll want to consider whether you need change in an individual or, as in a company, amongst a group of people.

Let’s consider the process of preparing a meal. The first change, before anything else can happen, is to “unfreeze” foods—preparing them for change, whether they’re frozen and require thawing, or raw food requiring washing. Lewin’s 3 Stage Model holds that human change follows a similar philosophy, so you must first unfreeze the status-quo before you can implement organizational change.

Though not formally part of Lewin’s model, actions within this Unfreeze stage may include:

  • Determining what needs to change.
    • Survey your company.
    • Understand why change is necessary.
  • Ensuring support from management and the C-suite.
    • Talk with stakeholders to obtain support.
    • Frame your issue as one that positively impacts the entire company.
  • Creating the need for change.
    • Market a compelling message about why change is best.
    • Communicate the change using your long-term vision.

Step 2: Change

Once you’ve “unfrozen” the status quo, you may begin to implement your change. Organizational change in particular is notoriously complex, so executing a well-planned change process does not guarantee predictable results. Therefore, you must prepare a variety of change options, from the planned change process to trial-and-error. With each attempt at change, examine what worked, what didn’t, what parts were resistant, etc.

During this evaluation process, there are two important drivers of successful and long-term effectiveness of the change implementation process: information flow and leadership.

  • Information flow refers to sharing information across multiple levels of the organizational hierarchy, making available a variety of skills and expertise, and coordinating problem solving across the company.
  • Leadership is defined as the influence of certain individuals in the group to achieve common goals. A well-planned change process requires defining a vision and motivation.

The iterative approach is also necessary to sustain a change. According to Lewin, a change left without adequate reinforcement may be short-lived and therefore fail to meet the objectives of a change process.

During the Change phase, companies should:

  • Communicate widely and clearly about the planned implementation, benefits, and who is affected. Answer questions, clarify misunderstandings, and dispel rumors.
  • Promote and empower action. Encourage employees to get involved proactively with the change, and support managers in providing daily and weekly direction to staff.
  • Involve others as much as possible. These easy wins can accumulate into larger wins, and working with more people can help you navigate various stakeholders.

Step 3: Refreeze

The purpose of the final step—refreezing—is to sustain the change you’ve enacted. The goal is for the people involved to consider this new state as the new status-quo, so they no longer resist forces that are trying to implement the change. The group norms, activities, strategies, and processes are transformed per the new state.

Without appropriate steps that sustain and reinforce the change, the previously dominant behavior tends to reassert itself. You’ll need to consider both formal and informal mechanisms to implement and freeze these new changes. Consider one or more steps or actions that can be strong enough to counter the cumulative effect of all resistive forces to the change—these stronger steps help ensure the new change will prevail and become “the new normal”.

In the Refreeze phase, companies should do the following:

  • Tie the new changes into the culture by identifying change supports and change barriers.
  • Develop and promote ways to sustain the change long-term. Consider:
    • Ensuring leadership and management support and adapting organizational structure when necessary.
    • Establishing feedback processes.
    • Creating a rewards system.
  • Offer training, support, and communication for both the short- and long-term. Promote both formal and informal methods, and remember the various ways that employees learn.
  • Celebrate success!

Lewin’s Change Model and Modern Organizations

While Lewin’s Change Model has been called simplistic, a straightforward approach can often be more effective. With an understanding of the clear steps above, you can make positive and lasting changes as your organization evolves.

Because Lewin’s Model of Change is intuitive and makes it easy to understand how changes occur individually and as a group, you can concentrate on moving forward effectively. When more complex modern change management frameworks are not working, returning to these fundamentals about social behavior in light of change can help you move forward. Models may change, but human nature does not.

AWS ECS vs. EKS: What’s the Difference and How to Choose? (https://www.bmc.com/blogs/aws-ecs-vs-eks/)

The increased popularity of containerized applications has illustrated the need for proper container orchestration platforms to support applications at scale.

Containers need to be managed throughout their lifecycle, and many products have been created to fulfill this need. These container orchestration products range from open-source solutions such as Kubernetes and Rancher to provider-specific implementations such as:

  • Amazon Elastic Container Service (ECS)
  • Azure Kubernetes Service (AKS)
  • Elastic Kubernetes Service (EKS)

All these different platforms come with their unique advantages and disadvantages. Amazon itself offers an extensive array of container management services and associated tools like the ECS mentioned above, EKS, AWS Fargate, and the newest option, EKS Anywhere.

AWS users need to evaluate these solutions carefully before selecting the right container management platform for their needs—and we’re here to help!

(This tutorial is part of our AWS Guide.)

How container management works

A container is a lightweight, stand-alone, portable, and executable package that includes everything required to run an application, from the application itself to all the configurations, dependencies, system libraries, etc. This containerization greatly simplifies the development and deployment of applications. However, running containers at scale takes more than the containers themselves: they must be scheduled onto hosts, scaled, networked, given storage, and monitored throughout their lifecycle.

While containers encapsulate the application itself, the container management or orchestration platform provides these supporting facilities throughout the lifecycle of the container.

ECS and EKS are the primary offerings by AWS that aim to provide this container management functionality. In the following sections, we will see what exactly these two offerings bring to the table.

ECS vs. EKS vs. Fargate

What is Amazon Elastic Container Service (ECS)?

The Elastic Container Service can be construed as a simplified version of Kubernetes—but that’s misleading. The Elastic Container Service is an AWS-opinionated, fully managed container orchestration service. ECS is built with simplicity in mind without sacrificing management features. It easily integrates with AWS services such as AWS Application/Network load balancers and CloudWatch.

Amazon Elastic Container Service uses its scheduler to determine:

  • Where a container is run
  • The number of copies started
  • How resources are allocated

As shown in the following image, ECS follows a simple, easily understood model. Each application in your stack (API, Thumb, Web) is defined as a service in ECS and schedules (runs) tasks (instances) on one or more underlying hosts that meet the resource requirements defined for each service.

(Image: Elastic Container Service)

This model is relatively simple to understand and implement for containerized workloads as it closely resembles a traditional server-based implementation. Thus, migrating applications to ECS becomes a simple task that only requires containerizing the application, pushing the container image to the Amazon Elastic Container Registry (ECR), and then defining the service that runs the image in ECS.

Most teams can easily adapt to such a workflow. ECS also provides simple yet functional management and monitoring tools that suit most needs.
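
As a rough sketch of that service/task model in code (not an excerpt from any real project), the boto3 calls below register a task definition and create a service from it; the cluster name, image URI, and other identifiers are hypothetical.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Describe the container to run: image, memory, and port mapping.
task_def = ecs.register_task_definition(
    family="web",
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # placeholder ECR image
        "memory": 512,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Create a service that keeps two copies (tasks) of the container running
# on the EC2 container instances registered to the cluster.
ecs.create_service(
    cluster="demo-cluster",            # placeholder cluster
    serviceName="web",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="EC2",
)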

What is Elastic Kubernetes Service (EKS)?

The Elastic Kubernetes Service is essentially a fully managed Kubernetes Cluster. The primary difference between ECS and EKS is how they handle services such as networking and support components.

  • ECS relies on AWS-provided services like ALB, Route 53, etc.
  • EKS handles all these mechanisms internally, just as in any standard Kubernetes cluster.

The Elastic Kubernetes Service provides all the features and flexibility of Kubernetes while leveraging the managed nature of the service. However, all these advantages come with the increased complexity of the overall application architecture.

EKS introduces the Kubernetes concept of Pods to deploy and manage containers, while ECS deploys individual containers directly. Pods can contain one or more containers with a shared resource pool, and they provide far more flexibility and fine-grained control over the components within a service. The image below shows that all the supporting services (e.g., proxy, service discovery) needed to run containers sit inside the Kubernetes cluster.

Let's assume that our Thumb service is a combination of three separate components.

Kubernetes allows us to run these three separate components as distinct containers within a single Pod that makes up the Thumb service.

Kubernetes

Containers within a Pod run co-located with one another. They have easy access to each other and can share resources like storage without relying on complex configurations or external services. All of this makes it possible to build more complex application architectures with EKS.
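To make the Pod concept concrete, here is a minimal sketch using the official Kubernetes Python client that runs two co-located containers in a single Pod. The container names and image URIs are hypothetical stand-ins for the components of a service like Thumb, not anything defined in this article.

```python
# Minimal sketch of a multi-container Pod using the Kubernetes Python client.
# Container names and images are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # e.g. after `aws eks update-kubeconfig --name <cluster>`

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="thumb", labels={"app": "thumb"}),
    spec=client.V1PodSpec(
        containers=[
            # Main application container.
            client.V1Container(name="thumb-api", image="example.com/thumb-api:latest"),
            # Helper container co-located in the same Pod; it shares the Pod's
            # network namespace and can share volumes with thumb-api.
            client.V1Container(name="thumb-resizer", image="example.com/thumb-resizer:latest"),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

In practice you would usually wrap a Pod spec like this in a Deployment so Kubernetes can manage replicas and rollouts, but the sketch shows the key idea: several containers deployed and scheduled together as one unit.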

Additionally, EKS enables users to tap into the wider Kubernetes ecosystem and use add-ons like:

  • Calico, a network policy provider
  • Romana, a Layer 3 networking solution
  • CoreDNS, a flexible DNS service
  • Many other third-party add-ons and integrations

Since EKS is based on Kubernetes, users have the flexibility to move their workloads between different Kubernetes clusters without being vendor-locked into a specific provider or platform.

What is Fargate?

Even with managed services, servers still exist, and users can decide which types of compute options to use with ECS or EKS.

AWS Fargate is a serverless, pay-as-you-go compute engine that allows you to focus on building applications without managing servers. This means that AWS will take over the management of the underlying server without requiring users to create a server, install software, and keep it up to date. With AWS Fargate, you only need to create a cluster and add a workload—then, AWS will automatically add pre-configured servers to match your workload requirements.
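As a rough illustration of "create a cluster and add a workload," the sketch below uses boto3 to create a Fargate-backed cluster and launch a task on it. The cluster name, task definition, subnet, and security group IDs are hypothetical placeholders; the task definition is assumed to use the awsvpc network mode that Fargate requires.

```python
# Minimal sketch of running a workload on Fargate with boto3.
# All identifiers below are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Create a cluster backed by the serverless Fargate capacity providers.
ecs.create_cluster(
    clusterName="fargate-demo",
    capacityProviders=["FARGATE", "FARGATE_SPOT"],
)

# Launch a task; AWS provisions and manages the underlying compute.
ecs.run_task(
    cluster="fargate-demo",
    taskDefinition="thumb-fargate",  # a previously registered awsvpc task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```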

Fargate is the better solution in most cases. Because it charges only for the resources a workload actually uses, it usually costs no more than self-managed servers and often saves money: there is no unused capacity to pay for, unlike self-managed servers, which must be shut down manually to avoid paying for idle time.

However, there are some notable exceptions where Fargate may not be the right choice:

  • Fargate cannot be used in highly regulated environments with strict security and compliance requirements, because users lose access to the underlying servers, which they may need to control to meet those requirements. Additionally, Fargate does not support “dedicated tenancy” hosting requirements.
  • Fargate only supports the awsvpc networking mode, which may not be suitable when deep control over the networking layer is required.
  • Fargate automatically allocates resources depending on the workload, with limited control over the exact mechanism. This automatic resource allocation can lead to unexpected cost increases, especially in R&D environments where many workloads are tested. Self-managed servers with capacity limits may be the better solution for these kinds of scenarios.

What about EKS Anywhere?

EKS Anywhere extends the functionality of EKS by allowing users to create and operate Kubernetes clusters on customer-managed infrastructure with default configurations. It provides the necessary tools to manage the cluster using the EKS console.

EKS Anywhere is built upon the Amazon EKS Distro and provides the necessary, up-to-date software to run Kubernetes on your own infrastructure. Moreover, it provides a far more reliable Kubernetes platform than a fully self-managed Kubernetes cluster.

EKS Anywhere is also an excellent option for powering hybrid cloud architectures while maintaining operational consistency between the cloud and on-premises environments. In addition, it is an ideal solution for keeping data on on-premises infrastructure where data sovereignty is a primary concern, while still leveraging AWS to manage the application architecture and delivery.

Key Differences between ECS and EKS

While ECS and EKS are both helpful tools for container management, they do have differences. To help you evaluate the strengths and benefits of choosing one over the other, we gathered details together to help you more easily compare the two.

Platform Integration:

  • ECS: Seamlessly integrates with AWS services like Elastic Load Balancer, AWS Fargate, and Amazon RDS, simplifying deployment and management.
  • EKS: Based on Kubernetes, it offers flexibility to run applications both on-premises and in the cloud, but requires more effort to integrate with AWS services.

Scalability:

  • ECS: Automatically scales applications based on demand and supports both Fargate and EC2 launch types for flexibility.
  • EKS: Requires manual configuration of autoscaling groups and Kubernetes’ Horizontal Pod Autoscaler, making it more complex to manage (see the sketch below).
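For illustration, a minimal sketch of the Horizontal Pod Autoscaler side of that configuration, using the Kubernetes Python client, might look like the following; the target Deployment name and the scaling thresholds are hypothetical, and node-level scaling (autoscaling groups or the Cluster Autoscaler) would still be configured separately.

```python
# Minimal Horizontal Pod Autoscaler sketch for an EKS workload, using the
# official Kubernetes Python client. Names and thresholds are hypothetical.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="thumb-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="thumb"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above ~70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```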

Security:

  • ECS: Utilizes AWS’s robust security features, including IAM roles, security groups, and VPC network isolation.
  • EKS: Leverages Kubernetes’ native security features like RBAC, Network Policies, and Secret Management, which require in-depth Kubernetes knowledge.

Pricing Models:

  • ECS: Charges based on resources like vCPU and memory consumed by containers, with no additional service charges.
  • EKS: Charges a flat fee per cluster plus resource consumption costs, which makes it potentially more expensive than ECS for larger projects with many clusters.

Complexity:

  • ECS: Is easier to set up and manage, with seamless integration into AWS tools, reducing operational complexity.
  • EKS: Requires a deep understanding of Kubernetes, offering more control but increasing management complexity.

Choosing Amazon ECS vs. EKS: Which is right for you?

EKS is undoubtedly the more powerful platform. However, it does not mean EKS is the de facto choice for any workload. ECS is still suitable for many workloads with its simplicity and feature set.

When to use ECS

  • ECS is much simpler to get started with and has a lower learning curve. Small organizations or teams with limited resources will find ECS the better option for managing their container workloads, compared to the overhead associated with Kubernetes.
  • Tighter AWS integrations allow users to use already familiar resources like ALB, NLB, Route 53, etc., to manage the application architectures. It helps them to get the application up and running quickly.
  • ECS can be a stepping stone toward Kubernetes. Rather than adopting EKS all at once, users can use ECS to implement a containerization strategy and move their workloads into a managed service with less up-front investment.

When to use EKS

On the other hand, ECS can sometimes be too simple with limited configuration options. This is where EKS shines. It offers far more features and integrations to build and manage workloads at any scale.

  • Pods may not be required for many workloads, but they offer fine-grained control over container placement and resource sharing, which can be invaluable in service-based architectures.
  • EKS offers far more flexibility in managing the underlying resources, with the ability to run on EC2, Fargate, and even on-premises via EKS Anywhere.
  • EKS provides the ability to use any public and private container repositories.
  • Monitoring and management tools of ECS are limited to the ones provided by AWS. While they are sufficient for most use cases, EKS allows greater management and monitoring capabilities both via built-in Kubernetes tools and readily available external integrations.

All in all, the choice of the platform comes down to specific user needs. Both options have their pros and cons, and any of them can be the right choice depending on the workload.

To sum up, it’s better to go with EKS if you are familiar with Kubernetes and want the flexibility and features it provides. On the other hand, try ECS first if you are just starting out with containers or want a simpler solution.

Related reading

]]>
What Is a Cloud Service Provider? CSPs Explained https://www.bmc.com/blogs/csp-cloud-service-providers/ Thu, 19 Dec 2024 00:00:15 +0000 https://www.bmc.com/blogs/?p=50333 The cloud has become the primary vehicle for IT infrastructure for many companies. Initial skepticism about handing over control to another party has largely melted away. Today’s cloud-first approach is being driven by its many benefits, including the quick deployment of IT infrastructure without the need for a large upfront capital investment and the ability […]]]>

The cloud has become the primary vehicle for IT infrastructure for many companies. Initial skepticism about handing over control to another party has largely melted away. Today’s cloud-first approach is being driven by its many benefits, including the quick deployment of IT infrastructure without the need for a large upfront capital investment and the ability to support remote workers.

Spending on cloud services grew 20.4% in 2024 to total $675.4 billion, up from $561 billion in 2023, according to Gartner. The ability for companies to use new technologies like AI, new software engineering methods like containerization, and more flexible DevOps approaches like Agile is driving this growth.

What is a CSP in cloud computing?

In cloud computing, CSP stands for cloud service provider. For your company to access public cloud services, you need to engage a CSP. In simple terms, cloud service providers make cloud services available to consumers.

According to the NIST Cloud Computing Reference Architecture (NIST SP 500-292), a CSP is a person, organization, or entity responsible for making a cloud service available to interested parties.

(Looking instead for CSPs as in communication service providers? That is a different topic, not covered here.)

Benefits of CSPs

The main advantages that cloud-based services providers offer over traditional hosting services include:

  • Immediate access to more services. A wide variety of capabilities and integrations.
  • Usage-based pricing. Billing based on time use or capacity.
  • Global scale. Coverage across most of the developed world.
  • Scalability. It is quick and easy to right-size your IT resources, adding when business demands are higher or scaling down during slower times.
  • Flexibility. Use the resources you need when you need them, adding and subtracting capabilities depending on demand.
  • Mobility. Workers can access IT services any place they have a network connection—even when on the move.
  • Disaster recovery. With failover redundancy, your services can move from a physical location suffering a power outage or other disruption to another location far from the problem, without missing a beat.
  • Security and compliance. Large commercial cloud computing providers have security and compliance protections that are likely far beyond what a smaller company can acquire.
  • Automatic updates and maintenance. With teams of experts on hand and direct access to the most innovative cloud networking service providers, you will have the latest tech, with the assurance that it is well-maintained.
  • Resource optimization. You will be able to make the most efficient use of right-sized resources, with access to powerful monitoring tools to minimize waste and maximize application performance.

Five major players dominate the public cloud infrastructure market, holding nearly 82% of the market share, per Gartner in 2023.

Gartner Market Share

(View availability regions and zones for major cloud computing providers.)

What do CSPs provide?

What are the cloud services that CSPs offer? CSP IT cloud solutions are broadly grouped into three main categories:

  • Infrastructure as a Service (IaaS). This includes the physical computing resources underlying the cloud service, such as servers, networks, storage, and hosting infrastructure, which are made available through a set of service interfaces and computing-resource abstractions such as virtual machines. The customer makes selections based on the desired computing resources, such as processing, memory, capacity, and bandwidth, and manages the upper layers themselves (a minimal provisioning sketch follows this list).
  • Platform as a Service (PaaS). This includes the computing infrastructure for the platform and runs the cloud software that provides the components of the platform, such as the runtime software execution stack, databases, and other middleware components. The customer makes selections based on the requirements of the applications they intend to install and manage by themselves.
  • Software as a Service (SaaS). Here, the CSP deploys, configures, maintains, and updates the operation of the software applications on a cloud infrastructure, and provides an interface for the customer to access the applications. The customer only has some limited administrative control and customization capabilities.
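To illustrate the IaaS self-service model described in the first bullet, here is a minimal boto3 sketch in which the customer picks the compute resources (instance size and storage) and the provider supplies the underlying infrastructure. The AMI ID and sizes are hypothetical placeholders.

```python
# Minimal IaaS self-service sketch: the customer selects processing,
# memory, and capacity; the CSP provides the underlying infrastructure.
# The AMI ID below is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # the processing/memory selection
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[             # the storage capacity selection
        {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 20}}
    ],
)
print(response["Instances"][0]["InstanceId"])
```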

(Read our full IaaS, PaaS, SaaS explainer.)

However, these groupings do not adequately capture many of the capabilities that CSPs are currently providing, including containers (CaaS), serverless computing (FaaS), storage, edge computing, private cloud, and AI/ML platforms.

The table below outlines examples of the many cloud service solutions from some of the major cloud providers:

Sample of the many offerings

NIST SP 500-292 defines five major activities that CSPs perform:

Cloud Provider

  • Service deployment. The operation of cloud infrastructure by the CSP on behalf of the consumer based on the following deployment models: public cloud, private cloud, hybrid cloud, or community cloud.
  • Service orchestration. The activities involved in the arrangement, coordination, and management of computing resources through the composition of system components to provide cloud services to consumers.
  • Cloud service management. The service-related functions that are necessary for the management and operation of those services required by or proposed for cloud consumers, including business support, provisioning and configuration, and facilitating data portability and interoperability.
  • Security. Implementing security controls across the entire cloud service architecture and working with consumers and third parties to reduce attack surfaces, tackle vulnerabilities, and limit the impact of threats.
  • Privacy. Protecting the assured, proper, and consistent collection, processing, communication, use, and disposition of consumer personal information (PI) and personally identifiable information (PII) stored in the cloud.

Challenges faced by cloud services providers

The last two activities, security and privacy, are intrinsically tied to the CSP’s biggest challenge—governance.

Trust is invariably tied to security and privacy, as any organization that entrusts its data to a third party expects that measures have been put in place to ensure confidentiality, integrity, and availability are always guaranteed. The increasingly difficult legal and regulatory requirements on data privacy, especially concerning location and sharing, mean that cloud services providers must be on their toes to ensure compliance while at the same time dealing with new and evolving security threats.

CSPs must also grapple with the challenge of optimizing their capacity to meet customer demand while maintaining service uptime to the highest degree possible. For the major players, any significant downtime comes with media scrutiny owing to the services that are consumed on a global scale.

Additionally, knowledge management is a significant issue, as CSPs require employees with technical competencies that are expensive and hard to replace.

(Explore cloud governance best practices.)

How to choose a cloud computing provider

Consider the following factors when evaluating your cloud computing options.

  • Cost: To get the best deal, consider the details of the provider’s cost model. Most use a per-use approach, but you may find variations that can materially affect the price you will pay.
  • Tools and features: Make sure your provider offers the functionality you need today, including security and data management, along with the ability to support your future needs.
  • Ease of deployment and change management: Consider each cloud provider’s process for deploying, managing, and upgrading your applications and services. Access to APIs that make it easy to connect and build is invaluable.
  • Physical location of the servers: Security and availability depend on where the cloud services are housed. Make sure the provider can meet regulations for secure data storage.
  • Reliability: Typical SLAs specify service levels, such as percentage of uptime and compensation should they fail to deliver on that guarantee. Watch out for loopholes, such as excluding or discounting outages that are just a few minutes long. Depending on your industry, even short outages can severely affect your business.
  • Security: Your business may require adherence to various security frameworks. Make sure your provider can meet those certifications and can support compliance in the long term. Pay special attention to their risk management processes and plans, and their data backup operations.
  • Strategic fit: You will want to choose a cloud network provider with the tools and capabilities that support your immediate and longer-term goals.
  • Reporting: Look for providers with controls and management tools that make it easy to monitor performance and usage.

What does the future hold for cloud-based service providers?

Cloud computing continues to grow and even accelerate. Many types of companies, not just technology firms, are using the cloud, with many taking a multi-cloud approach. The Flexera 2024 State of the Cloud Report confirmed that:

  • 89% of enterprises surveyed had a multi-cloud strategy.
  • 73% take a hybrid approach.
  • 85% are experimenting with or using generative AI public cloud services.
  • 59% prioritize cloud cost optimization.

The benefits of the cloud have been proven over time, with fewer organizations fearing the drawbacks. CSPs have increasingly drawn the attention of regulators seeking to balance benefits against risks. Given the growing importance of cloud services, as well as complex social, economic, security, and service issues, it is important to consider the implications beyond the simple business arrangements between CSPs and their customers.

Related reading

]]>
What is Code Refactoring? How Refactoring Resolves Technical Debt https://www.bmc.com/blogs/code-refactoring-explained/ Mon, 16 Dec 2024 00:00:25 +0000 http://www.bmc.com/blogs/?p=12021 We’ve all been there before: it’s time to add one last function into your program for the next release, but you don’t have the time to make it just right – organized, well-structured, and aligned with the rest of the code. Instead, you have to add the functionality in a bit of a haphazard way, […]]]>

We’ve all been there before: it’s time to add one last function into your program for the next release, but you don’t have the time to make it just right – organized, well-structured, and aligned with the rest of the code.

Instead, you have to add the functionality in a bit of a haphazard way, just to get it done, so you do. Luckily, the release goes well, the function is added smoothly, and it’s onto the next sprint of work.

But what happens to that code that isn’t the cleanest, clearest, or best it can be? We’ve talked in previous articles about technical debt – the idea that certain work gets delayed during software development in order to deliver on time. Such a short-term solution works for now but isn’t the best for the software in the long run. This work then turns into “debt” because it will eventually need to be dealt with.

In this article, we are looking at code refactoring as a way to reduce technical debt.

Code refactoring meaning

Code refactoring is defined as the process of restructuring computer code without changing or adding to its external behavior and functionality.

There are many ways to go about refactoring, but it most often involves applying a series of standardized, basic actions, sometimes known as micro-refactorings. These changes to existing source code preserve the software’s behavior and functionality because each change is so small that it is unlikely to introduce any new errors.

The importance of code refactoring

At first, its purpose may seem a little superfluous – sure, code refactoring improves the nonfunctional attributes of the software, which is nice, but what’s the point if it isn’t helping the overall functionality?

Experts say that the goal of code refactoring is to turn dirty code into clean code, which reduces a project’s overall technical debt.

Dirty code is an informal term that refers to any code that is hard to maintain and update, and even more difficult to understand and translate. Dirty code is typically the result of deadlines that occur during development – the need to add or update functionality as required, even if its backend appearance isn’t all that it could or should be. You can often find dirty code by its code smell, as it were.

This is the idea behind technical debt: if code is as clean as possible, it is much easier to change and improve in later iterations – so that your future self and other future programmers who work with the code can appreciate its organization. When dirty code isn’t cleaned up, it can snowball, slowing down future improvements because developers will have to spend extra time understanding and tracking the code before they can change it.

Some types of dirty code include:

  • Code, methods, or classes that have grown so large that they are too unwieldy to manipulate easily
  • Incomplete or incorrect application of object-oriented programming principles
  • Superfluous coupling between components
  • Areas where a single desired change requires edits in multiple places before it works correctly
  • Any code that is unnecessary and can be removed without harming the overall functionality

Clean code, on the other hand, is much easier to read, understand, and maintain, thereby easing future software development and increasing the likelihood of a quality product in shorter time.

When to refactor

Knowing the right time to refactor isn’t too tricky. If you’re the developer, you already know where you may have cut corners in your code in order to create the functionality you needed.

If you’re part of a team that’s sharing a project, it may be harder to prioritize refactoring, so here are some tips:

  • Refactor in accordance with the Rule of 3:
    • The first time you are doing something, just get it done, even if it’s with dirty code, so the software functions as needed.
    • The second time you’re doing a similar change, you can do it again the same way – you’ll know it a little better, so you may be speedier but the code still won’t be perfectly clean.
    • When you encounter this change for the third time, start refactoring.
  • Refactor during code review. This is the last chance to clean up code before it is live. Try doing a two-person review so you can fix quick, low-hanging fruit and then better gauge which difficult code change areas are worth the time.
  • Refactor during regularly-scheduled intervals. This doesn’t have to mean dedicating a whole day to it, but rather add it in to your routine – spending the last hour of a workday on refactoring. (Bonus: proactively refactoring means your manager and your team don’t have to carve out additional time for it.)

Some refactoring purists maintain that refactoring should not occur while you’re addressing bugs, as the purpose of refactoring isn’t to improve functionality. But, cleaner code inherently equates to fewer bugs, as bugs are often the result of dirty code. By cleaning code – whether in dedicating refactoring sessions or while addressing bugs – you’ll mitigate bugs before they become problems.

Ways of refactoring code

There are several ways to refactor code, but the best approach is taking one step at a time and testing after each change. Testing ensures that the key functionality stays, but the code is improved predictably and safely – so that no errors are introduced while you’re restructuring the code.
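One way to put this into practice is to pin down the current behavior with a small test before touching the code, then re-run the test after every micro-refactoring. The sketch below uses pytest, with a hypothetical parse_price function standing in for the code being refactored.

```python
# Minimal sketch of testing after each refactoring step, using pytest.
# `parse_price` and the `pricing` module are hypothetical stand-ins for
# the code under refactoring; the test pins down its current behavior.
import pytest

from pricing import parse_price


def test_parse_price_behavior_is_preserved():
    # Expectations that every refactoring step must keep passing.
    assert parse_price("$19.99") == 19.99
    assert parse_price("19.99") == 19.99
    with pytest.raises(ValueError):
        parse_price("not a price")
```

Run the suite (for example, `pytest -q`) after each small change; if it still passes, the step preserved behavior and you can safely move on to the next one.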

Refactoring used to be a manual process, but new refactoring tools for common languages mean you can speed up the process a little bit. Still, it can be helpful to understand what the tool is actually doing, and if you’re in a less common language, great refactoring tools may not be available.

Which techniques to employ often depends on the problems in your code. Here are some common refactoring techniques (a short before-and-after sketch follows this list):

  • Composing methods to streamline code, remove duplication, and make future changes easier.
  • Simplifying conditional expressions, which tend to become unnecessarily complex over time, and simplifying method calls so they are easier to understand, which improves the interfaces used for class interaction.
  • Moving features between objects in order to better distribute functionality among classes. This can include safely moving functionality, creating new classes, and hiding implementation details.
  • Organizing data to improve handling and class associations so that classes are reusable and portable.
  • Improving generalization.
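As a small, self-contained illustration of two of these techniques (composing methods and simplifying conditional expressions), the hypothetical sketch below restructures a shipping-cost function without changing what it returns.

```python
# Before: a nested conditional that is hard to read and extend.
def shipping_cost_before(order_total, is_member, country):
    if country == "US":
        if is_member:
            cost = 0
        else:
            if order_total > 100:
                cost = 0
            else:
                cost = 7.99
    else:
        if is_member:
            cost = 9.99
        else:
            cost = 19.99
    return cost


# After: guard clauses and an extracted helper make the same rules explicit.
def _qualifies_for_free_shipping(order_total, is_member):
    return is_member or order_total > 100


def shipping_cost_after(order_total, is_member, country):
    if country != "US":
        return 9.99 if is_member else 19.99
    if _qualifies_for_free_shipping(order_total, is_member):
        return 0
    return 7.99
```

Both versions return the same result for every input; only the structure changed, which is the essence of refactoring.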

Code refactoring checklist

What is a good stopping point for clean code? This checklist can help you determine when your code is clean:

  • It is obvious to other programmers. This can be as simple as creating clearer structures for naming, classes, and methods, or improving more sophisticated algorithms.
  • It contains no duplication. The chance for human error increases every time you have to double-up on changes.
  • It contains minimal moving parts, such as the number of classes. Less to remember means less to maintain and less to clean up.
  • It passes all tests. Code that passes only most of its tests is still dirty.
  • It is easier to maintain. You’ll spend less time on future improvements.

The benefits of refactoring

The main benefit of refactoring is to clean up dirty code and thereby reduce technical debt. Cleaner code is easier to read, for the original developer as well as for other developers who may work on it in the future, which makes it easier to maintain and to add new features. Less complicated code also leads to better source-code maintenance and a more expressive internal architecture.

Clean code also means that design elements and code modules can be reused – if code works well and is clean, it can become the basis for code elsewhere.

Refactoring can also help developers better understand code and design decisions. Both beginner-level and more advanced programmers can benefit from seeing how others have worked inside and built up the code as software functionality increased and shifted. This can encourage the sense of collective ownership – that one developer doesn’t own the code, but the whole team is responsible for it.

The act of refactoring – changing tiny pieces of code with no front-end purpose – may seem unimportant when compared to higher priority tasks. But the cumulative effect from such changes is significant and can lead to a better-functioning team and approach to programming.

]]>
Integrating Purpose: How the Private Sector Can Be a Force for Good https://www.bmc.com/blogs/integrating-purpose-force-for-good/ Fri, 13 Dec 2024 15:14:27 +0000 https://www.bmc.com/blogs/?p=54348 The holiday season is a time for reflection, connection, and giving back. At BMC, our annual Season of Giving Campaign embodies this spirit through a global initiative that includes volunteerism, fundraising, and providing 100,000 meals to communities in need. This effort is dedicated to honoring the employees, clients, and partners who make up our ecosystem. […]]]>

The holiday season is a time for reflection, connection, and giving back. At BMC, our annual Season of Giving Campaign embodies this spirit through a global initiative that includes volunteerism, fundraising, and providing 100,000 meals to communities in need. This effort is dedicated to honoring the employees, clients, and partners who make up our ecosystem.

This commitment to giving is part of a larger vision: using the power of the private sector to address the world’s most pressing challenges. The issues of poverty, hunger, inequality, and climate change may seem daunting, but they also present an opportunity for collaboration and innovation. While governments, nonprofits, and communities play essential roles, the private sector has the unique ability to accelerate progress. With the right tools, platforms, and networks, we can empower solutions that drive meaningful change and create lasting impact.

At BMC, we view this as a chance to make a difference. We’re on a journey to embed purpose into our business model, leveraging our expertise to address global challenges in a way that is scalable and impactful. By partnering with nonprofits and investing in transformative projects, we’re discovering how the private sector can be a catalyst for solutions that not only address immediate needs but also open pathways to a better future for all.

Aligning purpose with expertise

Corporate citizenship is most impactful when it aligns with what a company does best. At BMC we focus our tech expertise on:

  • Advancing digital accessibility
    We ensure technology is inclusive and accessible, from web accessibility initiatives to device drives that redistribute technology to those who need it most. Through partnerships with organizations like DV Safe Phone, Medic Mobile, and Compudopt, we help place devices in the hands of those who can use them to improve their lives.
  • Accelerating tech nonprofits
    We fundraise for and support Fast Forward, an accelerator for nonprofit technology enterprises that provides them with the funding, mentorship, and resources to solve critical issues in education, healthcare, and environmental sustainability.
  • Empowering communities through skills
    We encourage our employees to use their expertise for good by volunteering as digital literacy tutors, mentors, and career coaches with organizations like Robotex India, Raspberry Pi Foundation, Humsafar Trust, and the Joy Education Foundation. Many also contribute by providing language translation for humanitarian efforts through platforms like Tarjimly.

While we continually strive to expand and improve these initiatives, each step forward strengthens our commitment to integrating purpose into our business model and creates meaningful opportunities for personal and professional growth within our team.

Driving long-term impact through purpose

The holiday season often inspires acts of generosity, but sustainable change requires a long-term commitment. At BMC, we focus on scalable innovations that create lasting ripple effects, such as:

  • Empowering nonprofits to develop apps that connect underserved communities to healthcare resources.
  • Supporting tools that bridge educational gaps in remote areas, ensuring access to quality learning opportunities.
  • Encouraging open-source solutions that address environmental challenges like climate monitoring and sustainable farming.

For those looking to make a meaningful impact, I recommend exploring Fast Forward’s directory of tech nonprofits to find organizations that align with your mission. Platforms like GlobalGiving host thousands of grassroots projects worldwide, from building computer labs for children to funding STEM coding camps. These opportunities allow businesses to contribute to systemic change while advancing their values.

The holiday season reminds us of the power of connection and collaboration. As members of the private sector, we have the opportunity to shape the world—not just through the products and services we create, but also through the purpose we embrace.

From all of us at BMC, we wish you a joyful holiday season and encourage you to make this year a turning point for purpose-driven impact.

#CSR

]]>