Cloud Migration Best Practices - Part 6

This blog series discusses the best practices employed by our technical team when engaging in a cloud migration project. The following content has been adapted from a Cloud Migration Whitepaper authored by Google.

Testing: Evaluate how applications perform in the cloud

Testing your applications in the cloud before you officially migrate them is an important way to save time and mitigate risk. It gives enterprises the opportunity to see how applications perform in the cloud and to make the appropriate adjustments before going live. As mentioned previously, some migration solutions provide a way to run clones of live environments in the cloud, so you can do realistic testing without impacting the data or uptime of the live system.
While testing in the cloud, identify the key managed services you should be using from the cloud provider (e.g., Database as a Service (DBaaS), DNS services, backup). Review all the cloud environment prerequisites for supporting the migrated workloads, such as networking (e.g., subnets, services), security, and surrounding services.
In some cases, especially early in a migration project, it’s useful to run a proof-of-concept test for some of the applications you plan to migrate. These pilot projects will help you get a feel for the migration process. They also help validate two key migration metrics: the resources and capacity your application requires, and your cloud vendor’s capabilities and potential limitations (e.g., number of VMs, storage types and sizes, and network bandwidth).

The more testing you do up front, the smoother the migration will be. We advise running tests to validate:

  • Application functionality, performance, and costs when running in the cloud
  • Migration solution features and functionality

Ultimately, this testing and right-sizing will help you capture the right configurations (settings, security controls, replacement of legacy firewalls, etc.), perfect your migration processes, and develop a baseline for what your deployment will cost in the cloud.
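As an illustration, the right-sizing step can be sketched as picking the smallest instance shape that covers measured peak usage plus headroom. This is a minimal sketch: the catalog names, prices, and the 20% headroom factor are made-up placeholders, not real vendor pricing.

```python
# Hypothetical instance catalog: (name, vCPUs, RAM in GB, monthly USD).
# Shapes and prices are illustrative placeholders, not real vendor pricing.
CATALOG = [
    ("small-2", 2, 8, 50.0),
    ("medium-4", 4, 16, 100.0),
    ("large-8", 8, 32, 200.0),
]

def rightsize(peak_vcpus: float, peak_ram_gb: float, headroom: float = 1.2):
    """Pick the cheapest catalog shape that covers peak usage plus headroom."""
    need_cpu = peak_vcpus * headroom
    need_ram = peak_ram_gb * headroom
    fitting = [c for c in CATALOG if c[1] >= need_cpu and c[2] >= need_ram]
    if not fitting:
        raise ValueError("no catalog shape fits; consider splitting the workload")
    return min(fitting, key=lambda c: c[3])

def monthly_baseline(workloads):
    """Sum the monthly cost of the chosen shape for each measured workload."""
    return sum(rightsize(cpu, ram)[3] for cpu, ram in workloads)
```

For example, a workload peaking at 3 vCPUs and 10 GB of RAM needs 3.6 vCPUs and 12 GB after headroom, so the sketch selects the medium-4 shape; summing the chosen shapes gives a first cost baseline to refine during testing.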

Contact us today for your FREE Cloud Migration Consultation!

Cloud Migration Best Practices - Part 5

This blog series discusses the best practices employed by our technical team when engaging in a cloud migration project. The following content has been adapted from a Cloud Migration Whitepaper authored by Google.

Migration solutions

There are two primary architectures for cloud migration solutions today: replication-based and streaming-based.
Replication-based migration tools are typically re-purposed disaster recovery tools that essentially “copy and paste” applications and data into the cloud. Example steps from a replication-based solution include:

  • Install an agent on the source and/or destination systems
  • Replicate some or all of the dataset, which can take anywhere from hours to weeks depending on network bandwidth and the solution’s transfer optimizations, if any

Streaming-based migration solutions are typically a more effective approach for live and/or production applications, especially when you don’t want to wait until all the data is moved before you can test or begin running your app. The streaming approach moves just an initial subset of critical data into the cloud so that your application can begin running in the cloud within minutes. Then, in the background, your migration solution continues to upload data into the cloud and keeps the on-premises data synchronized with any changes made in the cloud. This can save tens or hundreds of hours during a migration project, often making streaming-based solutions significantly faster than replication-based ones.
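The streaming approach described above can be sketched as a toy model, with in-memory dicts standing in for on-prem and cloud storage. This is only an illustration of the two phases (critical subset first, background sync after), not a real migration tool:

```python
def streaming_migrate(source: dict, critical_keys: set) -> dict:
    """Toy streaming migration: copy the critical subset first so the app
    can start in the cloud, then bring over the rest in the background."""
    cloud = {k: source[k] for k in critical_keys}  # app can start after this
    for k, v in source.items():                    # background sync of the rest
        cloud.setdefault(k, v)
    return cloud

def sync_back(cloud: dict, onprem: dict):
    """Keep the on-prem copy synchronized with changes made in the cloud."""
    changed = {k: v for k, v in cloud.items() if onprem.get(k) != v}
    onprem.update(changed)
    return sorted(changed)  # keys that had to be propagated back
```

The key design point is that the application becomes runnable as soon as the critical subset lands, while `sync_back` models the reverse synchronization that keeps on-prem data consistent with changes made in the cloud during the migration window.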

Ideally, you should have answers to the following questions so that you are clear about which features and functionality you consider important for the applications you want to migrate.

  1. Agents: Many replication-based architectures require installing agents in each application and/or in your cloud target. Is this true for the cloud service you’ve chosen? Will you need access to each application’s systems? This installation and removal can add time and complexity. If you’re moving a lot of applications, an agentless solution may be a better fit.
  2. Testing: Does the solution offer a way for you to test applications before they are migrated, without taking production and/or live systems offline? Without the need to transfer entire data sets to the cloud first? Can you change cloud instances on the fly to test different configurations?
  3. Rightsizing: Will you get analytics-based recommendations for how to map on-premises instances to cloud instance types, optimized for either performance or cost?
  4. Migrating Apps and data: Does the system handle just the data migration or can it also handle moving the application? Can the application run in the cloud while migration takes place? How much downtime will there be? Is it up front, predictable, and/or short? How will the system support multi-tier applications that require orchestrated shutdown and restart and systems being moved in a specific order?
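The orchestration concern in question 4 (multi-tier systems that must move in a specific order) is naturally a topological-sort problem over declared dependencies. A minimal sketch, assuming a hypothetical three-tier app and Python 3.9+:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def move_order(depends_on: dict) -> list:
    """Return an order in which tiers can be brought up in the cloud such
    that every tier starts only after the tiers it depends on."""
    return list(TopologicalSorter(depends_on).static_order())

# Hypothetical three-tier app: web needs the app tier, which needs the database.
tiers = {"web": {"app"}, "app": {"db"}, "db": set()}
```

Here `move_order(tiers)` yields the database first and the web tier last; reversing the list gives a safe shutdown order for the on-prem side.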

Cloud Migration Best Practices - Part 4

This blog series discusses the best practices employed by our technical team when engaging in a cloud migration project. These guidelines have been developed over the years, based on hundreds of workloads migrated to Azure and GCP.

Here are a few of the application-related questions and details to ask and identify during the planning phase of a cloud migration:

  • Application owner
  • Brief description
  • Application type
  • Hosted location (Private, Public, On-prem)
  • Hosted model (IaaS, PaaS, SaaS)
  • Required compliance
  • Required SLA
  • Application monitoring
  • Maintenance window allowed
  • Maintenance window length
  • Maintenance window schedule start
  • Change management process and lead time
  • Change freeze window(s)
  • Application business priority
  • Offline business impact
  • Maximum allowed offline time
  • Migration risk
  • Rollback plan
  • Cutover strategy and executor
  • Recovery time objective
  • Recovery point objective
  • Backup requirements
  • Party responsible for backups
  • GCP backup plan
  • Disaster recovery plan
  • Host name
  • Components (DB, caching, proxy, LB, etc.)
  • External dependencies
  • Required licenses
  • Shared services
  • External download data size and frequency
  • External upload data size and frequency
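To keep this checklist machine-readable across hundreds of workloads, each application can be captured as a structured record. The sketch below trims the checklist to a few fields; the field names and the validation rule are our own shorthand, not part of any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AppRecord:
    """A trimmed, machine-readable slice of the planning checklist above."""
    name: str
    owner: str
    hosted_model: str            # IaaS, PaaS, or SaaS
    business_priority: int       # 1 = highest
    max_offline_minutes: int     # maximum allowed offline time
    rto_minutes: int             # recovery time objective
    rpo_minutes: int             # recovery point objective
    external_dependencies: list = field(default_factory=list)

    def validate(self) -> list:
        """Flag inconsistencies worth raising with the app owner."""
        issues = []
        if self.hosted_model not in {"IaaS", "PaaS", "SaaS"}:
            issues.append("unknown hosted model")
        if self.rto_minutes > self.max_offline_minutes:
            issues.append("RTO exceeds the allowed offline time")
        return issues
```

Collecting records like this during planning makes later steps (wave grouping, effort estimation) a matter of querying data rather than re-interviewing owners.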

Cloud Migration Best Practices - Part 3

This blog series discusses the best practices employed by our technical team when engaging in a cloud migration project. These guidelines have been developed over the years, based on hundreds of workloads migrated to Azure and GCP.

Planning Phase: Building the foundations

The planning phase is designed to build the foundational Cloud “landing zone” and to pilot and validate the migration approach while aligning on the long-term roadmap.

We’ll help group workloads into migration waves and build a detailed plan for those first workloads. The plan is necessarily less detailed for later waves and will be iterative, building and maintaining a pipeline of workloads that is ready for migration.

Some key activities that are performed during this phase:

  • Build cloud foundations
  • Define agile process
  • Determine the governance
  • Schedule migration groups
  • Pilot migration
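Scheduling migration groups can be sketched as ordering workloads by risk and business priority, then chunking them into fixed-size waves. The sorting key and wave size here are illustrative choices, not a prescribed method:

```python
def schedule_waves(workloads: list, wave_size: int = 2) -> list:
    """Order workloads so low-risk, high-priority items land in the first
    waves, then chunk the ordered list into fixed-size waves."""
    ordered = sorted(workloads, key=lambda w: (w["risk"], -w["priority"]))
    return [ordered[i:i + wave_size] for i in range(0, len(ordered), wave_size)]
```

In practice the first wave doubles as the pilot migration from the activity list above, so it is worth biasing the key toward workloads that are cheap to roll back.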

Cloud Migration Best Practices - Part 2

This blog series discusses the best practices employed by our technical team when engaging in a cloud migration project. These guidelines have been developed over the years, based on hundreds of workloads migrated to Azure and GCP.

Discovery Phase

The discovery phase helps uncover the existing workloads that will need to be migrated and the information necessary to determine migration type, level of effort, and application groups.

The goal here is to understand what the customer has and what they want to do with it. We’ll typically look to gather inventory data for the whole estate in one go, but then build a backlog by business unit, data centre location, or technology type and gather the business-level detail.

This phase is the starting point of any migration journey – but we often find customers want this as a standalone service.

As part of this discovery phase, the following outputs will be created:

  • Workloads grouped
  • First-mover workloads identified
  • TCO/ROI analysis
  • High-level effort estimations
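The TCO/ROI output above boils down to comparing capex-plus-opex on-prem against recurring cloud spend over the same horizon. A minimal sketch; every figure below is a placeholder, not a benchmark:

```python
def onprem_tco(hardware: float, power_and_space: float,
               ops_per_year: float, years: int) -> float:
    """Crude on-prem total cost of ownership: hardware capex up front,
    plus recurring facilities and operations costs."""
    return hardware + (power_and_space + ops_per_year) * years

def cloud_tco(monthly_spend: float, ops_per_year: float, years: int) -> float:
    """Crude cloud TCO: recurring consumption spend plus operations,
    with no hardware capex."""
    return monthly_spend * 12 * years + ops_per_year * years
```

A real analysis would add migration cost, discount rates, and committed-use pricing, but even this crude shape is enough to rank first-mover candidates by expected savings.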

Cloud Migration Best Practices - Part 1

This blog series discusses the best practices employed by our technical team when engaging in a cloud migration project. These guidelines have been developed over the years, based on hundreds of workloads migrated to Azure and GCP.

First mover identification

We’ll look for first movers (workloads that can be moved first) by aggregating the data from the automated inventory tools and from the interviews with app owners.

Here are a few factors to take into consideration when deciding what you can move to the cloud first. Good first movers:

  • Have high business value but are not mission critical
  • Are not POCs
  • Are not edge cases
  • Can be used to build a knowledge base
  • Are managed by central teams
  • Have a supportive app or line-of-business owner who likes spearheading new and innovative projects
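These criteria can be scored mechanically once the inventory and interview data are in. The equal weighting and 0.8 threshold below are arbitrary illustrative choices:

```python
# One boolean per first-mover criterion above; equal weights are an
# arbitrary choice made for this sketch.
CRITERIA = ["high_value", "not_mission_critical", "not_poc", "not_edge_case",
            "builds_knowledge", "central_team", "supportive_owner"]

def first_mover_score(workload: dict) -> float:
    """Fraction of first-mover criteria a workload satisfies (0.0 to 1.0)."""
    return sum(bool(workload.get(c)) for c in CRITERIA) / len(CRITERIA)

def pick_first_movers(workloads: list, threshold: float = 0.8) -> list:
    """Names of workloads meeting enough criteria, best candidates first."""
    scored = [(first_mover_score(w), w["name"]) for w in workloads]
    return [name for score, name in sorted(scored, reverse=True)
            if score >= threshold]
```

The score is only a tiebreaker; the interviews remain the authority on whether an owner is actually willing to go first.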

Google Cloud Security Requirements - Part 4

This blog series consists of a detailed set of cloud security requirements that can be used by any organization that wants to implement cloud services securely. The requirements below are cloud agnostic and can be applied to any public cloud, or even to private clouds.

The sub-domains and requirement descriptions are mapped to ISO 27002:2013 controls and standards. The overall theme of this blog series covers the cloud security controls stated in the NIST 800-53 series.

Domain: Vulnerability and Threat Management

Sub-Domain | Req. Name | Req. Description
Governance and Operating Model | Policy | A policy that includes responsibilities related to threat and vulnerability management, reporting, rating criteria, remediation timelines, and escalation/exception processes shall be established
Governance and Operating Model | Asset Inventory | An asset inventory (including physical systems, virtual systems, and sensitive information) in scope for vulnerability and threat management shall be maintained. A list of technologies able to monitor for vulnerability impacts shall be maintained
Reporting and Analysis | Integration with Risk Management | Vulnerabilities shall be analyzed for their impact on identified risks. Technical vulnerabilities shall be aligned with / inform risks in the risk register and the effectiveness of controls
Reporting and Analysis | Patch Management | A patch management strategy and process shall be defined that outlines recurring patch management activities, defines acceptable implementation timelines, requires back-out procedures, tests patches for operational and security implications before deployment, has an exception process for not implementing patches, and has a defined emergency patching process
Vulnerability Testing | Cloud Testing | Regularly scheduled, recurring security testing of the cloud environment shall be conducted. Testing shall follow the cloud provider’s process and guidelines. Client shall require the cloud provider to regularly conduct assessments and remediation and to provide attestation of such to Client, and shall review provider attestation on a regular basis. Client shall:
  a. Periodically monitor third parties’ compliance with security requirements
  b. Supervise and monitor outsourced software development
  c. Periodically monitor and review the services, reports, and records provided by third parties
Vulnerability Testing | Code Scanning | Code reviews/scanning to identify potential security issues shall be conducted
Vulnerability Testing | Pre-deployment Testing | Security testing shall be conducted before deployment of changes to code or the environment
Vulnerability Testing | Database Testing | Regularly scheduled reviews of database security shall be conducted
Vulnerability Testing | Penetration Testing (Internal) | Regularly scheduled penetration testing of the perimeter and public-facing environment shall be conducted
Vulnerability Testing | Application Testing | Application reviews/testing to identify potential security issues shall be conducted
Vulnerability Testing | Tools and Techniques | Vulnerability scanning tools and techniques shall be deployed to:
  a. Promote interoperability among tools and accommodate the virtualization technologies used
  b. Automate parts of the vulnerability management process by using standards for enumerating platforms, software flaws, and improper configurations; formatting and making transparent checklists and test procedures; measuring vulnerability impact; and readily updating the list of information system vulnerabilities scanned
  c. Analyze vulnerability scan reports and results from security control assessments. Remediate legitimate high-risk vulnerabilities within 30 days and moderate-risk vulnerabilities within 90 days, in accordance with an organizational assessment of risk
  d. Share information obtained from the vulnerability scanning process and security control assessments with designated personnel throughout the organization to help eliminate similar vulnerabilities in other information systems (i.e., systemic weaknesses or deficiencies)
Threat Intelligence | Vulnerability Monitoring of Assets | Regularly scheduled reviews of the vulnerability management program and of identified vulnerabilities shall be conducted
Threat Intelligence | Collection and Dissemination of Alerts | Information system security alerts, advisories, and directives shall be received from designated external organizations and from GCP. Security alerts, advisories, and directives shall be disseminated to all staff with system administration, monitoring, and/or security responsibilities, and security directives shall be implemented in accordance with established time frames. Client shall establish and execute a plan for communicating how, if, and when Client is remediating security issues affecting each customer, or with appropriate regulatory entities as needed. Appropriate contacts with special interest groups, relevant authorities, and other specialist security forums and professional associations shall be maintained
Technical Requirement | OS Support | Unix/Linux/BSD, Cisco IOS, Junos, and Windows scanning shall be supported
Technical Requirement | Data Protection | Data shall be stored and transmitted securely
Technical Requirement | Data Access | Access to scanning data shall be restricted to those with a need for it
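The 30/90-day remediation windows in the Tools and Techniques requirement lend themselves to an automated check. A minimal sketch, with hypothetical finding records and the two SLA windows taken directly from the requirement text:

```python
from datetime import date, timedelta

# Remediation windows from the requirement above:
# 30 days for high-risk findings, 90 days for moderate-risk findings.
SLA_DAYS = {"high": 30, "moderate": 90}

def overdue(findings: list, today: date) -> list:
    """Return IDs of open findings whose remediation window has elapsed."""
    late = []
    for f in findings:
        deadline = f["found"] + timedelta(days=SLA_DAYS[f["severity"]])
        if today > deadline:
            late.append(f["id"])
    return late
```

Feeding scanner output through a check like this is one way to make the "regularly scheduled reviews" requirement concrete instead of relying on ad hoc spreadsheet triage.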

Are you ready to audit and secure your Google Cloud environment? Contact our security specialists today!

Google Cloud Security Requirements - Part 3

This blog series consists of a detailed set of cloud security requirements that can be used by any organization that wants to implement cloud services securely. The requirements below are cloud agnostic and can be applied to any public cloud, or even to private clouds.

The sub-domains and requirement descriptions are mapped to ISO 27002:2013 controls and standards. The overall theme of this blog series covers the cloud security controls stated in the NIST 800-53 series.

Domain: Data Protection

Sub-Domain | Req. Name | Req. Description
Privacy Processes and Procedures | User Notification | Privacy policies and procedures, and the purposes for which personal information is collected, used, retained, and disclosed, shall be documented
Privacy Processes and Procedures | Third-Party Usage | Policies and procedures shall be in place that include:
  a. Disclosing personal information to third parties only for the purposes identified in the notice and with the implicit or explicit consent of the employees and customers
  b. Having procedures in place to evaluate that the third parties have effective controls to meet the terms of the agreement, instructions, or requirements
  c. Taking remedial action in response to misuse of personal information by a third party to whom Client has transferred such information
Data Identification and Classification | Data Ownership & Inventory | Appropriate ownership shall be assigned to data, and procedures shall be established to classify, monitor, and update data in accordance with its classification policies. Policies and procedures shall be in place to inventory, document, and maintain data flows to ascertain any regulatory or statutory impact and to address any other business risks associated with the data
Data Protection and Monitoring | Handling Procedures | Procedures for labeling, handling, and protecting the confidentiality and integrity of personal information, test data, production data, and data involved in online transactions, to prevent contract disputes and compromise of data, shall be established. Mechanisms for label inheritance shall be implemented for objects that act as aggregate containers for data
Data Protection and Monitoring | Leakage Mitigation | Areas where potential information leakage can occur shall be identified, and appropriate controls to mitigate it shall be implemented
Data Protection and Monitoring | DLP System | A Data Loss Prevention (DLP) system to monitor user interactions with data, analyze data traffic over the network, and scan and inspect enterprise data repositories to identify sensitive content shall be implemented. The DLP system shall integrate with:
  a. HTTP/HTTPS proxy server – for HTTP and HTTPS blocking
  b. DLP SMTP agent (Message Transfer Agent [MTA]) – for blocking emails containing sensitive data
  c. Security Information and Event Management (SIEM) solution – for real-time security alerting and analysis
Data Protection and Monitoring | Database Activity Monitoring | Database Activity Monitoring (DAM) tools to monitor and audit all access to sensitive data across heterogeneous database platforms shall be deployed
Data Protection and Monitoring | File Integrity | File integrity/activity monitoring tools to monitor files of all types and detect changes that can lead to an increased risk of data compromise shall be deployed
Cryptographic Controls | Encryption Policies | Policies and procedures for the use of strong encryption protocols (e.g., AES-256) for protection of sensitive data in storage (e.g., file servers, databases, and end-user workstations) and data in transmission (e.g., system interfaces, over public networks, and electronic messaging), as per applicable legal, statutory, and regulatory compliance obligations, shall be established
Cryptographic Controls | Key Management | Key management shall include:
  a. Establishing policies and procedures for the management of cryptographic keys in the cryptosystem
  b. Assigning ownership to keys
  c. Preventing storage of keys in the cloud
  d. Implementing segregation of duties between the responsibilities of key management and key usage
Cryptographic Controls | Key Rotation | Automatic key rotation for customer-managed keys shall be enabled
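The key rotation requirement can be monitored with a simple age check against the key inventory. A minimal sketch: the 90-day period is an assumed policy value, and the dict of key creation dates stands in for whatever your key management system reports:

```python
from datetime import date, timedelta

def keys_due_for_rotation(keys: dict, today: date,
                          max_age_days: int = 90) -> list:
    """Return names of keys older than the rotation period. The 90-day
    default is an assumption; use whatever your key policy mandates."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, created in keys.items() if created < cutoff)
```

In a managed cloud KMS, rotation is normally configured on the key itself; a check like this is a belt-and-braces audit that the configuration actually took effect everywhere.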

Google Cloud Security Requirements - Part 2

This blog series consists of a detailed set of cloud security requirements that can be used by any organization that wants to implement cloud services securely. The requirements below are cloud agnostic and can be applied to any public cloud, or even to private clouds.

The sub-domains and requirement descriptions are mapped to ISO 27002:2013 controls and standards. The overall theme of this blog series covers the cloud security controls stated in the NIST 800-53 series.

Domain: DevSecOps and CI/CD

Sub-Domain | Req. Name | Req. Description
Governance | Application Risk Categorization | All applications shall be categorized by risk. Risk can be categorized as internal, external, or strategic (e.g., weak cryptographic standards can get the app compromised in production, so this can be marked as high risk)
Construction | Third-Party Components | Any third-party components used in any software development cycle shall be documented
Verification | Automated Code Analysis Tools – Security | Automated code analysis tools with specific components for monitoring security issues shall be used
Verification | Penetration Testing | Penetration tests shall be performed prior to release to production
Deployment | Third-Party Component Security Updates | Third-party software components’ websites shall be regularly reviewed for any security-related updates
Deployment | Patch Management Process | A single process shall be used for applying upgrades and patches to applications
Deployment | Operational Environment Automation | Software engineering shall use automated tools to evaluate operational environment and application-specific health
Deployment | Security Alerts and Errors | Security-related alerts and error conditions shall be monitored for all released applications
Deployment | Change Management Process | A common change management process shall be used, and all software engineers shall be trained on the process
Deployment | Secure Code Signing | All released code shall be securely signed using a single, consistent process
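Production code signing normally uses asymmetric signatures issued through a KMS or signing service, but the sign-then-verify shape of the requirement can be illustrated with the standard library alone. A minimal sketch using an HMAC tag as a stand-in for a real signature:

```python
import hashlib
import hmac

def sign(artifact: bytes, key: bytes) -> str:
    """Produce a tag for a release artifact. Real pipelines would use
    asymmetric signing via a KMS; HMAC keeps this sketch stdlib-only."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify(artifact: bytes, key: bytes, tag: str) -> bool:
    """Constant-time check that the artifact matches its tag."""
    return hmac.compare_digest(sign(artifact, key), tag)
```

The single consistent process the requirement asks for amounts to signing every artifact at build time and refusing to deploy anything whose `verify` check fails.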

Are you ready to audit and secure your cloud environment? Contact our security specialists today!