GRC Viewpoint

4 Misconceptions About Data Security & Amazon S3

Given the flexibility and performance of managed cloud storage services, a growing number of businesses and government agencies are using cloud storage to ingest and share files as part of an application workflow or to build data lakes to cost-effectively collect, scale and analyze data.

In fact, Amazon Simple Storage Service (Amazon S3) has grown by a factor of 250,000 since its launch in 2006, from 800 million objects stored in its first year to more than 200 trillion objects stored today. That prolific growth makes cloud storage an attractive target for cybercriminals, and threats are common: when something becomes popular, attackers take notice.

Not only are hackers stealing sensitive data from unsecured storage, they are also penetrating environments through malicious code attached to the data that is uploaded to and downloaded from storage. In 2022, cloud malware delivery increased over the year before, and Amazon S3 was among the top five apps for malware downloads.

Businesses must have safeguards in place to protect their customers, their partners and themselves. This is why organizations need to demonstrate compliance with security frameworks like SOC 2, PCI DSS, and NIST SP 800-53, which require protection against malicious code.

Misconceptions vs Reality

Cloud computing is complex, so it’s no wonder there are some common misconceptions out there when it comes to data security and Amazon S3.

Misconception 1: We don’t have to worry about data security because our storage environment is secure by default

  • Reality: Although cloud computing is often more secure than on-premises computing, threats are common and human error is an ongoing issue. AWS provides the tooling, but it’s up to the cloud customer to manage Amazon S3 configurations such as access policies, user permissions and encryption.

Misconception 2: Scanning data for malware isn’t a necessity because data isn’t executable in Amazon S3

  • Reality: Storage can become a gateway for attacks, especially if data is ingested from third parties. A file may not execute while it sits in Amazon S3, but once it is downloaded and opened, embedded malware can activate and spread. That is why scanning data for malware is essential.

Misconception 3: AWS natively scans the data in Amazon S3 for malware 

  • Reality: As dictated by the AWS Shared Responsibility Model, it is the cloud customer’s responsibility to scan all files and objects for malware. AWS does not scan objects going into or out of storage for advanced threats.
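Teams often meet this responsibility with an event-driven pattern: an S3 ObjectCreated notification triggers a function that fetches the new object and scans it before anything downstream consumes it. The sketch below is illustrative only; the hash denylist stands in for a real antivirus engine, and `get_object` is a hypothetical abstraction over the storage client so the logic can run offline.

```python
import hashlib

# Stand-in for a real antivirus engine: a denylist of SHA-256 digests
# of known-malicious files. A production scanner would invoke a full
# AV engine (for example, from a Lambda function subscribed to
# s3:ObjectCreated events) rather than match hashes.
KNOWN_BAD_SHA256: set = set()

def scan_bytes(data: bytes) -> bool:
    """Return True if the payload's digest matches a known-bad hash."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256

def handle_object_created(get_object, bucket: str, key: str) -> str:
    """Scan a newly created object. 'get_object' abstracts the storage
    client (e.g. boto3's s3.get_object) so the decision logic is
    testable without AWS credentials."""
    body = get_object(bucket, key)
    return "quarantine" if scan_bytes(body) else "clean"
```

The key design point is that scanning happens before the object is served to any consumer, so an infected upload is quarantined instead of propagating through the workflow.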

Misconception 4: We can just use a CASB

  • Reality: A Cloud Access Security Broker (CASB) is great for controlling employee-based data transactions. However, a CASB is far less effective at controlling interactions with entities outside the organization. Moreover, CASBs are typically delivered as software-as-a-service, so data must leave the environment to be inspected for malicious files, and letting data leave the account can introduce additional security risks.

Start with Getting the Basics Under Control

Once an organization understands its responsibility for data security as it relates to storage, where should it begin?

Understand flow and usage 

Not all data is created equal, so not all data may need to be scanned for malware. To put appropriate safeguards in place, it’s important to assess the level of risk the data carries. In doing so, ask:

  • Where is data coming from?
  • Do we trust the source?
  • Where is the data going – who is using it and how?

For example, log files stored from internal sources likely don’t require scanning because they come from a trustworthy source. However, if a third party uploads a document that will be shared internally for processing, scanning is almost certainly warranted.
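That triage can be encoded as a simple policy: label each upload with its source and scan anything that is not explicitly trusted. The source labels below are purely illustrative assumptions; a real deployment might derive them from the bucket, key prefix, or the IAM principal that performed the upload.

```python
# Hypothetical trust labels for upload sources (assumed names, not a
# real taxonomy). Internal, well-controlled pipelines are exempted;
# everything else is scanned by default.
TRUSTED_SOURCES = {"internal-logs", "build-artifacts"}

def needs_scan(source: str) -> bool:
    """Data from an untrusted (e.g. third-party) source should be
    scanned before it is shared internally for processing."""
    return source not in TRUSTED_SOURCES
```

Defaulting to "scan unless trusted" keeps an unrecognized or newly added source on the safe side of the policy.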

Understand configurations and follow best practices

Configurations are not “set it and forget it”; they require monitoring and maintenance. Key components include:

  • Public or private access – determine which buckets should be private and which public based on the data they contain and how it will be used. Generally speaking, start with private access and open it up only as needed.
  • Enforce least privilege – grant users only the access they need to do their jobs, nothing more.
  • Encrypt data – understand the cloud provider’s settings (is data encrypted in transit? at rest?) and track updates that enhance security. For example, at the beginning of 2023 AWS began applying server-side encryption to all new objects in Amazon S3 by default.
  • Enable versioning – keeping variants of mission-critical data allows it to be restored if erroneously modified or deleted.
  • Turn on logging – tracking access requests assists with security audits.
  • Use MFA Delete or Object Lock – these settings help prevent the ransoming of data by inhibiting its deletion and manipulation.
  • Prune data when appropriate – compliance with regulations dictates that data should not be retained longer than it’s needed.
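Most of the items above map to specific Amazon S3 API calls. As a minimal sketch, the snippet below builds the request payloads you would pass to boto3’s put_public_access_block, put_bucket_versioning, put_bucket_encryption and put_bucket_lifecycle_configuration; the bucket name and the 365-day retention period are assumptions, and the code only constructs the payloads rather than calling AWS.

```python
BUCKET = "example-data-lake"  # hypothetical bucket name

# Start private: block every form of public access.
public_access_block = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Enable versioning so accidental changes can be rolled back.
versioning = {"Status": "Enabled"}

# Encrypt at rest with S3-managed keys (SSE-S3).
encryption = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    ]
}

# Prune data: expire all objects after an assumed 365-day retention.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3, these payloads would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_public_access_block(Bucket=BUCKET,
#       PublicAccessBlockConfiguration=public_access_block)
#   s3.put_bucket_versioning(Bucket=BUCKET,
#       VersioningConfiguration=versioning)
#   s3.put_bucket_encryption(Bucket=BUCKET,
#       ServerSideEncryptionConfiguration=encryption)
#   s3.put_bucket_lifecycle_configuration(Bucket=BUCKET,
#       LifecycleConfiguration=lifecycle)
```

Keeping these settings in code (or in infrastructure-as-code templates) also makes the "not set it and forget it" point concrete: the desired configuration is versioned and can be re-applied or audited on a schedule.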

Storage Needs Your Attention

Companies need to worry about their data just as much as they worry about infrastructure and access. Don’t let storage become a blind spot when it comes to data security and compliance. Use reliable tools to automate malware scanning and configuration monitoring. Record and report on findings – this proves data has been scanned for compliance and allows issues to be remediated. And don’t forget to document processes for repeatability.

By Ed Casmer, Founder and CTO at Cloud Storage Security

Ed is a 20-year technology veteran with deep experience in cloud security across the major cloud platform providers: AWS, Azure and GCP. Prior to Cloud Storage Security, he served as Cloud CTO for Symantec, where he led the team responsible for building solution offerings consumable in the cloud and managed Symantec’s relationships with the cloud platform providers from both a technical-integration and a go-to-market standpoint.

 
