As you probably know by now, we are in the multi-cloud era. Companies have more and more cloud-native workloads to run, meaning the landscape had to expand in order to cater to the rising demand.

Google Cloud Platform (GCP) is a well-known contender that arrived later than the pioneers AWS and Azure, but it caught up with them very quickly. While compute is the tip of the iceberg when it comes to the cloud, storage takes up a good chunk of software architecture and data-management considerations. Companies need options for how to store their data in the cloud.

BDRSuite supports Google Cloud Object Storage, Azure Blob Storage, and AWS S3. Like on-premises storage solutions, Google Cloud offers different storage types, including block, file (NFS), and object storage, the latter being arguably the most flexible.

With Google Cloud Object Storage, you can upload and retrieve any file via REST API calls, which is a huge benefit for enterprise use cases like BDRSuite. It can expand dynamically and indefinitely, with individual objects growing up to the terabyte scale. Similarly to AWS, objects are stored in buckets. When creating a bucket, you have access to many of the same parameters as with Amazon S3, and BDRSuite is adding support for it.
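
As a quick illustration, here is a minimal sketch of uploading and retrieving an object with the google-cloud-storage Python client; the bucket name and file paths are placeholders, and credentials are assumed to come from Application Default Credentials:

```python
from google.cloud import storage

# The client picks up Application Default Credentials from the environment
client = storage.Client()
bucket = client.bucket("my-backup-bucket")  # placeholder bucket name

# Upload a local file as an object in the bucket
blob = bucket.blob("backups/vm01-2024-01-01.vbk")
blob.upload_from_filename("/tmp/vm01-2024-01-01.vbk")

# Retrieve it again with a simple download call
blob.download_to_filename("/tmp/vm01-restored.vbk")
```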

Tiered storage (Storage classes)

Similarly to VMware vSAN’s storage profiles, Google Cloud Storage offers several storage classes to select a performance tier. A storage class is a piece of metadata that can affect performance, availability, and pricing. Each bucket has a default storage class that will be applied unless an object stored in this bucket is configured otherwise.

Different types of data have different requirements in terms of performance. Archive storage is almost never accessed, while more traditional workloads issue reads and writes on a regular basis. For that reason, different tiers of storage allow users to select slower storage for cold data and fast storage for hot data.


“Google Cloud Storage – storage classes”

Google Cloud Storage offers four storage classes to address these use cases:

  • Standard storage: 99.99% SLA. This type of storage is recommended for hot data that is only stored for short periods of time, as it is also the most expensive
  • Nearline storage: 99.95% SLA. This type of storage is recommended for data that can afford lower availability and that is accessed about once per month at most. This is typically ideal for backup scenarios
  • Coldline storage: 99.95% SLA. Very low-cost storage for infrequently accessed data (at most once per quarter)
  • Archive storage: 99.95% SLA. Archive is the lowest tier, aimed at cold data that you access less than once a year. It is extremely cheap to store but has higher costs for data access and operations. Use cases include data stored for legal or regulatory reasons, or disaster recovery. Note that this data is still available within milliseconds when you need it
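
To make the default-versus-per-object behavior described above concrete, here is a minimal sketch with the google-cloud-storage Python client; the bucket and object names are placeholders:

```python
from google.cloud import storage

client = storage.Client()

# Set the bucket's default storage class: new objects inherit it
# unless they specify their own class
bucket = client.get_bucket("my-backup-bucket")
bucket.storage_class = "NEARLINE"
bucket.patch()

# Per-object override: upload this particular object as Coldline
blob = bucket.blob("archives/2023-q1.tar.gz")
blob.storage_class = "COLDLINE"
blob.upload_from_filename("/tmp/2023-q1.tar.gz")

# Move an existing object down to Archive by rewriting it in place
bucket.blob("archives/2019-full.tar.gz").update_storage_class("ARCHIVE")
```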

Scalability
One of the main benefits of the cloud is that you can break free from capacity planning, get unparalleled flexibility when you need more or less of a resource, and pay only for what you use. Let Google purchase exabytes of storage capacity so you can get away with the terabyte scale in your own data center.
In Google Cloud Storage, you have access to unlimited storage regardless of the storage class you select, and you can choose to store the data in a multi-region, dual-region, or single-region fashion for availability.
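
The location is chosen at bucket creation time. Here is a minimal sketch with the google-cloud-storage Python client, using placeholder bucket names:

```python
from google.cloud import storage

client = storage.Client()

# Multi-region bucket (e.g. the "EU" multi-region) for the highest availability
client.create_bucket("my-multiregion-bucket", location="EU")

# Regional bucket (e.g. "europe-west1") for lower cost and data locality
client.create_bucket("my-regional-bucket", location="europe-west1")

# Dual-region buckets use predefined dual-region codes such as "EUR4"
client.create_bucket("my-dualregion-bucket", location="EUR4")
```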

Security
Data stored in Google Cloud Storage is protected by mechanisms similar to those of AWS S3. You can set permissions at the bucket level or switch to fine-grained permissions to configure access at the object or directory level. You can, of course, set buckets as public or private and configure read/write access in a granular fashion for various users.

Note also that object versioning can be enabled to retain versions of an object over time when you overwrite it.
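
As an illustration, here is a minimal sketch of enabling versioning on a bucket and exposing a single object publicly with the google-cloud-storage Python client; the bucket and object names are placeholders:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-backup-bucket")

# Keep previous generations of objects when they are overwritten
bucket.versioning_enabled = True
bucket.patch()

# Fine-grained ACL example: make one object publicly readable
# (only possible when uniform bucket-level access is disabled)
blob = bucket.blob("public/whitepaper.pdf")
blob.make_public()
print(blob.public_url)
```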

Management
Google Cloud Storage can be managed via the web UI or via the API. For instance, gsutil is a useful CLI for managing it from the command line. Note that when you perform an operation in the UI, it will often show you the equivalent gsutil command.

The web UI is pretty efficient and self-explanatory, but if you leverage Google Cloud Storage in enterprise scenarios, you aren't likely to spend much time in it; you would rather access the service programmatically, either through the command line or via API calls made by third-party products like BDRSuite.
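
As an example of the API route, here is a minimal sketch of listing buckets and objects with the google-cloud-storage Python client, roughly what gsutil ls shows on the command line; the bucket name and prefix are placeholders:

```python
from google.cloud import storage

client = storage.Client()

# List all buckets in the current project
for bucket in client.list_buckets():
    print(bucket.name)

# List objects under a prefix in one bucket, with size and storage class
for blob in client.list_blobs("my-backup-bucket", prefix="backups/"):
    print(blob.name, blob.size, blob.storage_class)
```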

Pricing

As with other hyperscalers, pricing will very much depend on your needs and can vary wildly from one extreme to the other. Several variables come into play, and you can find exhaustive information in the documentation. In a nutshell, the cost will depend on:

Data storage:

  • Quantity of data stored
  • Storage class
  • Data locality (some regions are cheaper than others)
  • Availability (multi-region vs. dual-region vs. single-region)

Note: The storage classes other than Standard have a minimum storage duration; if you delete, replace, or move an object before that duration has elapsed, you will be charged as if it had been stored for the minimum duration.

Data processing:

  • Operation charges apply when you perform an operation on a bucket or object that changes it or retrieves information about it. Operations are classified into classes based on the type of API call. For instance, storage.buckets.list is a Class A operation and will set you back $0.05 per 10,000 operations on the Standard storage class, while storage.*AccessControls.list is a Class B operation, which costs $0.004 per 10,000. Other operations, such as storage.objects.delete, are free regardless of the storage class
  • Retrieval fees are charged when you read, copy, move, or rewrite object data or metadata stored in Nearline ($0.01/GB), Coldline ($0.02/GB), or Archive storage ($0.05/GB)
  • Inter-region replication will incur a per-GB fee for data protection
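
As a back-of-the-envelope illustration of the retrieval fees quoted above, the sketch below estimates the cost of reading a backup back from a cold tier; it ignores storage, operation, and network egress charges, which would come on top:

```python
# Per-GB retrieval fees quoted above for the cold storage classes
RETRIEVAL_FEE_PER_GB = {"NEARLINE": 0.01, "COLDLINE": 0.02, "ARCHIVE": 0.05}

def retrieval_cost_usd(gigabytes: float, storage_class: str) -> float:
    """Estimated retrieval fee for reading this much data from a cold tier."""
    return gigabytes * RETRIEVAL_FEE_PER_GB[storage_class]

# Restoring a 500 GB backup copy from Coldline: 500 * 0.02 = 10.0 USD
print(retrieval_cost_usd(500, "COLDLINE"))
```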

Network:

Finally, additional charges are incurred by network egress (data sent from Cloud Storage in HTTP responses), with different pricing levels applying according to the destination:

  • Network egress within Google Cloud: Egress to other buckets or Google Cloud services
  • Specialty network services: If egress uses specific Google Cloud network products like Cloud CDN, CDN Interconnect…
  • General network usage: Egress out of Google Cloud or between continents

Wrap up

Object storage has been around for a long time in the infrastructure world and has grown ever more popular on-premises thanks to the flexibility and granularity it offers compared to traditional block storage. Cloud providers make it easy to get quick access to object storage with unparalleled scalability, thanks to the “unlimited” capacity Google has in its data centers.

From an infrastructure perspective, object storage in the cloud is a great option for on-premises environments to quickly and efficiently store offsite copies of their backups and protect themselves against large-scale disasters. BDRSuite offers the possibility to copy backup data from the repository to several cloud providers, including Google Cloud Storage.

