Security of the Google Cloud Platform: Security Services Available by Default

Google security principles:

 

  • Shared responsibility: Security of the cloud and security in the cloud
  • Defense in depth, at scale, by default
  • Strong cryptographic identity and provenance
  • Transparency – Reduce the unverifiable trust surface
  • Customers get the capabilities they need to build secure apps and businesses in the cloud, easily and effectively
These principles are backed by a defense-in-depth stack of security services, from usage down to hardware:

  • Usage: audit logging, Safe Browsing API, BeyondCorp, Security Key Enforcement
  • Operations: compliance and certifications, live migration for infrastructure maintenance and patching, threat analysis and intelligence, open-source forensics tools, anomaly detection (infrastructure), incident response (infrastructure)
  • Deployment: Google services TLS encryption with perfect forward secrecy, certificate authority, free and automatic certificates, DDoS mitigation (PaaS and SaaS)
  • Application: peer code review and static analysis (infrastructure SDLC), source code/image provenance (infrastructure), binary authorization (infrastructure code), WAF (PaaS and SaaS use cases), IDS/IPS (PaaS and SaaS use cases), web application scanner (Google services)
  • Network: infrastructure RPC encryption in transit between data centres, DNS, global private network, Andromeda SDN controller, Jupiter data center network, B4 SDN network
  • Storage: encryption at rest, logging, identity and access management, global at-scale key management service
  • OS and IPC: hardened KVM hypervisor, authentication for each host and each job, curated host images, encryption of inter-service communications
  • Boot: trusted boot, cryptographic credentials
  • Hardware: purpose-built chips, servers, storage, network, and data centers

Confused, overwhelmed? Our GCP Security Audit Service can save you hundreds of hours of effort!

Payment Card Industry (PCI) on GCP: part 2

We continue our previous blog post about PCI DSS compliance on GCP with examples for more PCI requirements:

PCI Requirement 5: Protect all systems against malware and regularly update anti-virus software or programs

Cloud Security Command Center helps security teams gather data, identify threats, and act on them before they result in business damage or loss. It offers deep insight into application and data risk so that you can quickly mitigate threats to your cloud resources and evaluate overall health. With Cloud Security Command Center, you can view and monitor an inventory of your cloud assets, scan storage systems for sensitive data, detect common web vulnerabilities, and review access rights to your critical resources, all from a single, centralized dashboard. It can help you comply with several requirements, including sections 5 and 6.6.

GCP Cloud Security Command Center
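
As a quick illustration, here is a minimal sketch of pulling active findings out of Security Command Center with its Python client library; the organization ID is a hypothetical placeholder you would replace with your own.

```python
# Minimal sketch: list active Security Command Center findings.
# The organization ID below is a hypothetical placeholder.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# "sources/-" asks for findings from all sources in the organization.
all_sources = "organizations/123456789012/sources/-"

findings = client.list_findings(
    request={"parent": all_sources, "filter": 'state="ACTIVE"'}
)
for result in findings:
    finding = result.finding
    print(finding.category, finding.resource_name, finding.event_time)
```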

PCI Requirement 6: Develop and maintain secure systems and applications

PCI Requirement 6.2 requires that all systems be patched with the latest security patches within one month of release. A robust image pipeline can help with this by creating new images that include the patches and deprecating images that aren’t patched, so that old and obsolete images aren’t used inadvertently.

  • Use image families to make sure your automation and users use the latest image
  • Use state flags to mark images as DEPRECATED, OBSOLETE, DELETED
GCP PCI Image life cycle
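
A hedged sketch of the state-flag idea above, using the Compute Engine Python client; the project and image names are hypothetical placeholders.

```python
# Sketch: mark an old image DEPRECATED and point users at its replacement.
# Project and image names are hypothetical placeholders.
from google.cloud import compute_v1

def deprecate_image(project: str, old_image: str, new_image: str) -> None:
    client = compute_v1.ImagesClient()
    status = compute_v1.DeprecationStatus(
        state="DEPRECATED",  # later stages: OBSOLETE, DELETED
        replacement=f"projects/{project}/global/images/{new_image}",
    )
    client.deprecate(
        project=project,
        image=old_image,
        deprecation_status_resource=status,
    )

deprecate_image("my-pci-project", "app-image-v1", "app-image-v2")
```

Because automation resolves images through the family, the deprecated image drops out of use as soon as its replacement is published.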

PCI Requirement 7: Restrict access to cardholder data by business need to know

PCI Requirements:

7.1 Limit access to system components and cardholder data to only those individuals whose job requires such access.

7.1.1 Define access needs for each role, including: 

• System components and data resources that each role needs to access for their job function 

• Level of privilege required (for example, user, administrator, etc.) for accessing resources.

7.1.2 Restrict access to privileged user IDs to least privileges necessary to perform job responsibilities.

7.1.3 Assign access based on individual personnel’s job classification and function.

7.2 Establish an access control system(s) for systems components that restricts access based on a user’s need to know, and is set to “deny all” unless specifically allowed. 

This access control system(s) must include the following:

7.2.1 Coverage of all system components.

7.2.2 Assignment of privileges to individuals based on job classification and function.

7.2.3 Default “deny all” setting.

Once access needs for each job function are defined, custom roles can be created to provide granular control over the exact permissions needed to access system components and data resources.

  • Create groups based on job functions, and assign custom roles to those groups
  • Job function groups can be nested in job classification groups
  • Custom roles can be defined at the organizational level

  • Review available permissions and their purpose through the API Explorer (search for the product)
  • Combine predefined roles
  • Combine individual permissions
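
As a sketch of what this looks like in practice, the snippet below creates an organization-level custom role through the IAM API; the organization ID, role ID, and permission list are illustrative placeholders, not a recommendation of specific permissions.

```python
# Sketch: create an organization-level custom role for a job function.
# Organization ID, role ID, and permissions are illustrative only.
import google.auth
from googleapiclient import discovery

credentials, _ = google.auth.default()
iam = discovery.build("iam", "v1", credentials=credentials)

role = iam.organizations().roles().create(
    parent="organizations/123456789012",
    body={
        "roleId": "pciCdeAuditor",
        "role": {
            "title": "PCI CDE Auditor",
            "description": "Read-only access for PCI Requirement 7 reviews",
            "includedPermissions": [
                "storage.buckets.get",
                "logging.logEntries.list",
            ],
            "stage": "GA",
        },
    },
).execute()
print(role["name"])
```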

PCI Requirement 7: continuing…

Understanding IAM core principles is key to implementing separation of duties and least privilege for PCI compliance, as defined in Requirement 7:

  • Users are typically humans – think AD/LDAP users
  • Service accounts are typically “robot accounts” assigned to a service with only the permissions that service/robot needs to do its job
  • Groups are a collection of users 
  • IAM roles are a set of permissions 

Reading the IAM hierarchy from top to bottom:

  • Users can be part of groups
  • Service accounts can be part of groups
  • Users and groups can be granted rights to service accounts via IAM roles
  • Through the IAM roles, they can be granted access to resources
  • Service accounts are also resources: grant the Service Account User role to allow a user to run operations as the service account (see the sketch below)
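
A hedged sketch of that last point: granting a group the Service Account User role on one specific service account by editing the IAM policy of the service account resource itself (all project, account, and group names are placeholders).

```python
# Sketch: let a group run operations as one specific service account
# by granting roles/iam.serviceAccountUser on that service account.
# Project, account, and group names are placeholders.
import google.auth
from googleapiclient import discovery

credentials, _ = google.auth.default()
iam = discovery.build("iam", "v1", credentials=credentials)

sa = ("projects/my-pci-project/serviceAccounts/"
      "deployer@my-pci-project.iam.gserviceaccount.com")

policy = iam.projects().serviceAccounts().getIamPolicy(resource=sa).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/iam.serviceAccountUser",
    "members": ["group:pci-operators@example.com"],
})
iam.projects().serviceAccounts().setIamPolicy(
    resource=sa, body={"policy": policy}
).execute()
```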

PCI Requirement 8: Identify and authenticate access to system components

Requirement 8 covers identity and authentication management, and GCP can help customers implement this both in GCP and in the customer’s own applications.

  • Cloud Identity provides identity and authentication to GCP.
  • Customers can leverage Cloud Identity-Aware Proxy (Cloud IAP) and other Google tools to implement identity and authentication on their applications.
  • Google websites and properties use robust public key technologies to encrypt data in transit: 2048-bit RSA or P-256 ECDSA SSL certificates issued by a trusted authority (currently the Google Internet Authority G2).
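
For the second bullet, here is a minimal sketch of how an application behind Cloud IAP can verify the signed identity header, following the pattern from Google’s IAP documentation; the audience string is a placeholder for your own backend’s audience value.

```python
# Sketch: verify the JWT that Cloud IAP attaches to proxied requests.
# The expected_audience value is a placeholder.
from google.auth.transport import requests
from google.oauth2 import id_token

def validate_iap_jwt(iap_jwt: str, expected_audience: str):
    """Return (user_id, email) if the IAP-signed JWT is valid."""
    decoded = id_token.verify_token(
        iap_jwt,
        requests.Request(),
        audience=expected_audience,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return decoded["sub"], decoded["email"]

# The JWT arrives in the x-goog-iap-jwt-assertion request header.
```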

PCI Requirements:

8.1 Define and implement policies and procedures to ensure proper user identification management for non-consumer users and administrators on all system components as follows:

8.1.1 Assign all users a unique ID before allowing them to access system components or cardholder data.

8.1.2 Control addition, deletion, and modification of user IDs, credentials, and other identifier objects.

8.1.3 Immediately revoke access for any terminated users.

8.1.4 Remove/disable inactive user accounts within 90 days.

8.1.5 Manage IDs used by third parties to access, support, or maintain system components via remote access as follows: 

• Enabled only during the time period needed and disabled when not in use. 

• Monitored when in use.

8.2 In addition to assigning a unique ID, ensure proper user-authentication management for non-consumer users and administrators on all system components by employing at least one of the following methods to authenticate all users: 

• Something you know, such as a password or passphrase 

• Something you have, such as a token device or smart card 

• Something you are, such as a biometric.

8.2.1 Using strong cryptography, render all authentication credentials (such as passwords/phrases) unreadable during transmission and storage on all system components.

8.2.3 Passwords/passphrases must meet the following: 

• Require a minimum length of at least seven characters. 

• Contain both numeric and alphabetic characters. 

Alternatively, the passwords/ passphrases must have complexity and strength at least equivalent to the parameters specified above.

8.2.6 Set passwords/passphrases for first-time use and upon reset to a unique value for each user, and change immediately after the first use.

8.3 Secure all individual non-console administrative access and all remote access to the CDE using multi-factor authentication. 

Note: Multi-factor authentication requires that a minimum of two of the three authentication methods (see Requirement 8.2 for descriptions of authentication methods) be used for authentication. Using one factor twice (for example, using two separate passwords) is not considered multi-factor authentication.

8.3.1 Incorporate multi-factor authentication for all non-console access into the CDE for personnel with administrative access.

8.3.2 Incorporate multi-factor authentication for all remote network access (both user and administrator, and including third-party access for support or maintenance) originating from outside the entity’s network.

PCI Requirement 10: Track and monitor all access to network resources and cardholder data

PCI Requirement 10 covers tracking and monitoring all access to network resources and cardholder data. There are multiple logging options available in GCP to help you achieve this.

Stackdriver Logging collects data from many sources including Google Cloud Platform, VM instances running Stackdriver Logging agent (FluentD agent), and user applications.

Effective log entries answer the following questions:

  • Who or what acted?
  • Where did they do it?
  • When did they do it?

Audit logs are available for most GCP resources and services.

Audit logging generates two types of logs:

  • Admin Activity 
  • Data Access
GCP PCI Log Collection
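
As an illustration, here is a small sketch that pulls recent Admin Activity audit log entries with the Python logging client; the project ID is a placeholder.

```python
# Sketch: read Admin Activity audit log entries for a project.
# The project ID is a placeholder.
from google.cloud import logging

client = logging.Client(project="my-pci-project")

log_filter = (
    'logName="projects/my-pci-project/logs/'
    'cloudaudit.googleapis.com%2Factivity"'
)
for entry in client.list_entries(filter_=log_filter, page_size=50):
    # Each entry records who acted, where, and when.
    print(entry.timestamp, entry.payload)
```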

Contact our GCP Security experts for a FREE GCP Security consultation, today!

Payment Card Industry (PCI) on GCP

GCP adheres to the PCI DSS requirements set forth for a level 1 Service Provider. GCP is required to be compliant with PCI DSS and all applicable requirements that directly apply to a service provider.

As of Nov 11, 2020, here is the list of 93 GCP services that ARE in scope for PCI DSS:

Access Approval
Access Context Manager
Access Transparency
Apigee Edge
AI Platform Data Labeling
AI Platform Notebooks
AI Platform Training and Prediction
App Engine
AutoML Natural Language
AutoML Tables
AutoML Translation
AutoML Video
AutoML Vision
BigQuery
BigQuery Data Transfer Service
Cloud Asset Inventory
Cloud Bigtable
Cloud Billing API
Cloud Build
Cloud CDN
Cloud Composer
Cloud Console
Cloud Console App
Cloud Data Fusion
Cloud Data Loss Prevention
Cloud Deployment Manager
Cloud DNS
Cloud Endpoints
Cloud Filestore
Cloud Functions
Cloud Healthcare
Cloud HSM
Cloud Interconnect
Cloud Key Management Service
Cloud Life Sciences (formerly Google Genomics)
Cloud Load Balancing
Cloud NAT (Network Address Translation)
Cloud Natural Language API
Cloud Router
Cloud Run (fully managed)
Cloud Run for Anthos
Cloud SDK
Cloud Shell
Cloud Source Repositories
Cloud Spanner
Cloud SQL
Cloud Storage
Cloud Translation
Cloud Vision
Cloud VPN
Compute Engine
Container Registry
Data Catalog
Dataflow
Datalab
Dataproc
Datastore
Dialogflow
Event Threat Detection
Firestore
GCP Marketplace
GKE Hub
Google Cloud Armor
Google Cloud Identity-Aware Proxy
Google Kubernetes Engine
Identity and Access Management (IAM)
Identity Platform
IoT Core
Managed Service for Microsoft Active Directory (AD)
Memorystore
Network Service Tiers
Orbitera
Persistent Disk
Pub/Sub
Resource Manager API
Security Command Center
Service Consumer Management
Service Control
Service Management
Speech-to-Text
Stackdriver Debugger
Stackdriver Error Reporting
Stackdriver Logging
Stackdriver Trace
Storage Transfer Service
Talent Solution
Text-to-Speech
Traffic Director
Transfer Appliance
Video Intelligence API
Virtual Private Cloud (VPC)
VPC Service Controls
Web Security Scanner

PCI examples on GCP:

PCI Requirement 1: Install and maintain a firewall configuration to protect cardholder data

GCP PCI

PCI Requirement 2: Do not use vendor-supplied defaults

The PCI DSS contains a set of rules that describe how to set up machines that are part of a payment-processing architecture. These rules can be implemented in several ways, but Packer from HashiCorp offers an easy way to automate baking images.

Baking images helps meet the following requirements (among others):

2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards. 

2.2.1 Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. 

2.2.2  Enable only necessary services, protocols, daemons, etc., as required for the function of the system. 

2.2.5 Remove all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers.

Image baking

  • Base image – OS or hardened image from CIS with unnecessary packages removed
  • Core – packages and libraries needed for all instances (security, monitoring, language specific packages)
  • Application – application code
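
As a sketch of what the bake step can look like with Packer’s googlecompute builder (a config template, not a definitive implementation; the project, image, and package names are placeholders and the hardening commands are illustrative only):

```json
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "my-pci-project",
    "source_image_family": "debian-10",
    "zone": "us-central1-a",
    "image_name": "pci-base-{{timestamp}}",
    "image_family": "pci-base",
    "ssh_username": "packer"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update && sudo apt-get -y upgrade",
      "sudo apt-get -y purge telnetd rsh-server || true"
    ]
  }]
}
```

Each bake publishes a fresh, hardened image into the pci-base family, so instances never start from a vendor-default configuration.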

PCI Requirement 3: Protect stored cardholder data

PCI Requirement 3 talks about encryption at rest, and there are multiple options on GCP to accomplish this.

Google encrypts data at rest by default, with no configuration required by customers. In some cases, customers may want additional control over encryption, and for that reason Google offers two additional key management options. In the middle option, customers may choose to use customer-managed encryption keys (CMEK) with Google Cloud Key Management Service (KMS). You can define access controls on encryption keys, establish rotation policies, and gather additional logging of encryption/decryption activities. In both the default and the customer-managed case, Google remains the root of trust for encryption/decryption activities. On the right-hand side, customers may choose to use customer-supplied encryption keys (CSEK) in some Google services, such as Google Cloud Storage, in which case Google is no longer in the root of trust. Using CSEK comes with the added risk of data loss, as Google cannot help you decrypt data if you lose your encryption keys. Furthermore, customers do not have to choose only one key management option: you can use the default encryption for most of your workloads, which meets regulatory requirements, and add additional control for select applications.

GCP PCI Example 2
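
To make the CMEK option concrete, here is a hedged sketch of setting a Cloud KMS key as the default encryption key on a Cloud Storage bucket; the project, bucket, and key names are placeholders.

```python
# Sketch: use a customer-managed KMS key (CMEK) as the default
# encryption key for a Cloud Storage bucket. Names are placeholders.
from google.cloud import storage

client = storage.Client(project="my-pci-project")
bucket = client.get_bucket("pci-archive")

bucket.default_kms_key_name = (
    "projects/my-pci-project/locations/us-east1/"
    "keyRings/pci-ring/cryptoKeys/storage-key"
)
bucket.patch()  # new objects are now encrypted with this key by default
```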

Data Loss Prevention API can be used to sanitize PCI data

Requirement 3.4 stipulates that a PAN must be unreadable anywhere it is stored. While Google automatically offers encryption at rest, it doesn’t automatically perform the one-way hashing, truncation, or tokenization that the rules also require. Use Cloud Storage with the DLP API for truncation or tokenization, or use KMS for strong cryptography with managed keys.

GCP PCI DLP
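
A minimal sketch of that idea, using the DLP API’s deidentify call to mask detected card numbers; the project ID and sample text are placeholders.

```python
# Sketch: mask credit card numbers in free text with the DLP API.
# Project ID and input text are placeholders.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()

response = dlp.deidentify_content(
    request={
        "parent": "projects/my-pci-project",
        "inspect_config": {
            "info_types": [{"name": "CREDIT_CARD_NUMBER"}]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [{
                    "primitive_transformation": {
                        "character_mask_config": {"masking_character": "#"}
                    }
                }]
            }
        },
        "item": {"value": "Customer card: 4111 1111 1111 1111"},
    }
)
print(response.item.value)  # card digits come back replaced with '#'
```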

PCI Requirement 4: Encrypt transmission of cardholder data across open, public networks

PCI Requirement 4 talks about encrypting transmission of cardholder data across open, public networks.

By default, any data sent to a Google Cloud service is encrypted using TLS from the user to the front end (using BoringSSL).

Once inside Google, what happens to your data? 

  • Google encrypts and authenticates all data in transit at one or more network layers when data moves outside physical boundaries not controlled by Google or on behalf of Google.
  • Data in transit inside a physical boundary controlled by or on behalf of Google is generally authenticated but not necessarily encrypted.

You can also take advantage of HTTPS load balancing to encrypt incoming customer traffic. Istio can be used to secure traffic between VMs. Cloud VPN can be used to establish a secure VPN tunnel between your on-premises environment and your payment-processing environment.

GCP PCI Encryption
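
One related, hedged example: you can attach a modern TLS baseline to an HTTPS load balancer by creating an SSL policy; in the sketch below the project and policy names are placeholders.

```python
# Sketch: create an SSL policy enforcing TLS 1.2+ with a modern cipher
# profile for HTTPS load balancing. Names are placeholders.
from google.cloud import compute_v1

client = compute_v1.SslPoliciesClient()
policy = compute_v1.SslPolicy(
    name="pci-frontend-tls",
    min_tls_version="TLS_1_2",
    profile="MODERN",
)
operation = client.insert(
    project="my-pci-project", ssl_policy_resource=policy
)
```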

Contact our GCP Security experts for a FREE GCP Security consultation, today!

Auditing the security of your GCP environment

There are plenty of tools out there that can perform a security audit of your environment, but none of them apply Google’s own security best practices.

We are trying to change that by providing a simple, customizable approach to performing a security audit for all the projects that your organization has.

From the customer side, all you need to do is provide us with an export of your cloud asset inventory; we will then apply Google’s security best practices as documented in the Cloud Foundation Toolkit project. The end result is a comprehensive security report that identifies the security areas that need your attention.
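
For reference, producing that export is a single call with the Cloud Asset Inventory client; a hedged sketch, where the organization ID and destination bucket are placeholders:

```python
# Sketch: export an organization's cloud asset inventory to Cloud Storage.
# Organization ID and destination bucket are placeholders.
from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()

operation = client.export_assets(
    request={
        "parent": "organizations/123456789012",
        "output_config": {
            "gcs_destination": {"uri": "gs://my-audit-bucket/assets.json"}
        },
    }
)
operation.result()  # blocks until the export finishes
```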

The topics covered under this security audit:

  1. Resource management
    • GCP org hierarchy
    • Environments & resource isolation
    • Resource provisioning
    • Organization policies
  2. Identity, Authentication, and Authorization
    • User & group management
    • Administrative roles
    • Authentication
    • Assigning IAM roles
    • Service Accounts
  3. Network security
    • VPC architecture
    • Firewall rules
    • Network logging
    • VPC Service Controls
    • DDoS and WAF
    • Identity Aware Proxy
  4. VM security
    • VM identities
    • Remote access
    • Image management
  5. GKE security
    • GKE cluster provisioning
    • Secure cluster default configurations
  6. Data security
    • Encryption key management
    • Cloud Storage security
    • BigQuery security
    • CloudSQL security
    • Data Loss Prevention
  7. Security Operations
    • Logging
    • Monitoring
    • Policy scanning

Here is a sample of the summary of those recommendations:

Recommendations organized by priority:

Section                                     | High | Med | Low | Total
Cloud Resource Management                   |   2  |  0  |  0  |   5
Identity, Authentication, and Authorization |   4  |  5  |  0  |   9
Network Security                            |   3  | 10  |  0  |  14
Virtual Machine Security                    |   3  |  2  |  0  |   5
GKE Security                                |   2  |  5  |  3  |  13
Data Security                               |   9  |  2  |  0  |  11
Security Operations                         |   0  |  2  |  0  |   2

Part of our security report will include specific security recommendations for the areas that are marked as High Priority for you.

Here is an example of such specific security recommendations:

Use an organizational structure that is based on your business structure, typically grouped by Cloud IAM permission and organization policy inheritance.

Use folders to apply Cloud IAM permissions and organization policies. For example, the folder structure can reflect environments such as development and production, where more restrictive policies and limited Cloud IAM access are granted to the production environment.

Avoid extensive use of folder level IAM permissions, but instead, apply permissions at a project or resource level.

If you are interested in learning more, sign up for your FREE trial of our Google Cloud Platform (GCP) security audit service, today!

Microsoft 365 Security Audit

If you ever wonder how secure your Microsoft/Office 365 environment is, there are lots of online resources that can guide you through performing your own security audit. Unfortunately, most of those resources lack a standardized approach, and of course they don’t take into consideration YOUR particular needs.

Consider this: there are over 60 security controls that you can audit in any Microsoft/Office 365 environment, and over 10 different security-oriented services that can be configured in the Office admin portal.

As part of our security services, we perform the following tasks:

  • Initial security assessment of your Microsoft 365 services: a comprehensive security assessment against the CIS Benchmark security framework (over 50 security checks that have zero or very limited impact when implemented). The report includes recommendations for Exchange Online, SharePoint Online, OneDrive for Business, Skype/Teams, Azure Active Directory, and mobile devices.
  • We schedule a planning session to identify all the Microsoft 365 security features that make sense for your business.
  • We configure all the security features identified above.
  • We provide a security training/education session for the whole team, explaining in non-technical terms the security best practices (phishing, privacy, and protecting your own computer) that should be followed by everyone.

Not convinced yet? Read the following short presentation.

Turn to NovaQuantum for the expertise you need to help you safeguard your business data.  We know IT security and we know Microsoft 365. We can help you control and manage access to sensitive information, protect company data across devices, and guard against unsafe attachments, suspicious links, and other cyber threats.

You can expect guidance, recommendations, and best practices to keep your business data safe from both internal and external threats with a simple, cost-effective solution.

Book your FREE Office/Microsoft 365 security consultation with us, today!

Security defaults for Microsoft/Office 365 subscriptions

As a managed cloud security company, we often get asked why “my environment” is not secure by default, as designed by Microsoft. This question is even more relevant in the context of a Microsoft/Office 365 environment: the majority of small and medium businesses that use Office 365 probably do not have a dedicated security department that is well versed in cloud security. To answer the question above, let’s take a look at the following example.

Here is what the Secure Score looks like using all the default settings as provided by Microsoft (your own results might vary, as you might have different options/features enabled in your own subscription). This sample subscription uses the Office 365 E3 plan.

Let’s take this example one step further: we’ve audited the same subscription using the CIS Microsoft 365 Foundations Benchmark version 1.2.0 framework. We tried to be practical, so we used only the E3 Level 1 profile from this framework. Items in this profile apply to customer deployments of Microsoft 365 with an E3 license and intend to:

  • be practical and prudent
  • provide a clear security benefit; and
  • not inhibit the utility of the technology beyond acceptable means.

What we’ve discovered was a bit sobering:

  • In total, 44 of the security controls audited had a “Fail” mark
  • Only 8 security controls had a “Pass” mark
  • Account / Authentication section : 7 Failed , 4 Passed
  • Application Permissions section: 2 Failed , 0 Passed
  • Data Management section: 4 Failed , 0 Passed
  • Email Security / Exchange Online section: 9 Failed , 2 Passed
  • Auditing section: 10 Failed , 2 Passed
  • Storage section: 3 Failed , 0 Passed
  • Mobile Device Management section: 9 Failed , 0 Passed

Obviously, by now you have started to form a well-informed opinion about the quality of the default security settings of Microsoft/Office 365.

I don’t think anyone can answer our initial question very clearly (maybe someone from Microsoft can), but we can show unequivocally that, even without an in-depth review of the business requirements as they relate to the security of data in the cloud, there is plenty of opportunity to improve the security of ANY Microsoft/Office 365 environment.

I would strongly advise all businesses using Office 365 to perform an in-depth review of their security settings to make sure their business data is secure in the cloud. Remember that even on the Software-as-a-Service platform that Microsoft/Office 365 offers, the security of the data falls upon YOU as a responsibility, not on Microsoft. You are given a multitude of security controls that can be enabled and configured, but in the end you need to analyze them and make sure they meet your particular business requirements.

Run your business from anywhere, with peace of mind
NovaQuantum can help you in this endeavour: book your FREE Office/Microsoft 365 security consultation with us, today!

Cloud Migration best practices -part 6

This blog series discusses the best practices employed by our technical team when they engage in a cloud migration project. The following content has been adapted from a Cloud Migration Whitepaper authored by Google.

Testing: Evaluate how applications perform in the cloud

Testing your applications in the cloud before you officially migrate them is an important way to save time and mitigate risk. It gives enterprises the opportunity to easily see how applications perform in the cloud and to make the appropriate adjustments before going live. As mentioned previously, some migration solutions provide a way to run clones of live environments in the cloud so you can do realistic testing without impacting the data or uptime of the live system.

While testing in the cloud, identify the key managed services you should be using from the cloud provider (e.g., Database as a Service (DBaaS), DNS services, backup). Review all the cloud environment prerequisites for supporting the migrated workloads, such as networking (e.g., subnets, services), security, and surrounding services.

In some cases, especially early in a migration project, it’s useful to run a proof-of-concept test for some of the applications you plan to migrate. These pilot projects will help you get a feel for the migration process. They also help validate two key migration metrics: the resources and capacity your application requires, and your cloud vendor’s capabilities and potential limitations (e.g., number of VMs, storage types and sizes, and network bandwidth).

The more testing done in the beginning, the smoother the migration will be. We advise running tests to validate:

  • Application functionality, performance, and costs when running in the cloud
  • Migration solution features and functionality

Ultimately, this testing and right-sizing will help you capture the right configurations (settings, security controls, replacement of legacy firewalls, etc.), perfect your migration processes, and develop a baseline for what your deployment will cost in the cloud.

Contact us today for your FREE Cloud Migration Consultation!

Cloud Migration best practices -part 5

This blog series discusses the best practices employed by our technical team when they engage in a cloud migration project. The following content has been adapted from a Cloud Migration Whitepaper authored by Google.

Migration solutions

There are two primary architectures for cloud migration solutions that exist today: replication-based and streaming-based.

Replication-based migration tools are typically re-purposed disaster recovery tools that essentially “copy and paste” applications and data into the cloud. Example steps from a replication-based solution include:

  • Install an agent on the source and/or destination systems
  • Replicate some or all of the dataset, which can take between hours and weeks depending on network bandwidth and the solution’s transfer optimizations, if any

Streaming-based migration solutions are typically a more effective approach for live and/or production applications, especially when you don’t want to wait until all the data is moved before you can test or begin running your app. The streaming approach moves just an initial subset of critical data into the cloud so that your application can begin running in the cloud within minutes. Then, in the background, your migration solution continues to upload data into the cloud and keeps the on-premises data synchronized with any changes made in the cloud. This can save tens or hundreds of hours during a migration project, often making streaming-based solutions significantly faster than replication-based ones.

Ideally, it’s important to have answers to the following questions so that you are clear about which features and functionality you consider important for the applications you want to migrate.

  1. Agents: Many Replication-based architectures require installing agents in each application and/or in your cloud target. Is this true for the cloud service you’ve chosen? Will you need access to each application’s systems? This installation and removal can add time and complexity. If you’re moving a lot of applications, an agent-less solution may be a better fit.
  2. Testing: Does the solution offer a way for you to test applications before they are migrated, without taking production and/or live systems offline? Without the need to transfer entire data sets to the cloud first? Can you change cloud instances on the fly to test different configurations?
  3. Rightsizing: Will you get analytics-based recommendations for how to map on-premises instances to cloud instance types, optimized for either performance or cost?
  4. Migrating apps and data: Does the system handle just the data migration, or can it also handle moving the application? Can the application run in the cloud while migration takes place? How much downtime will there be? Is it up front, predictable, and/or short? How will the system support multi-tier applications that require orchestrated shutdown and restart, with systems being moved in a specific order?

Contact us today for your FREE Cloud Migration Consultation!