AWS Certification Prep - A Cloud Guru Notes 1: IAM and S3

IAM

  • IAM is global (not tied to a specific region)
  • Key entities: Users, Groups, Roles, Policies (see the CLI sketch below)
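
A minimal AWS CLI sketch of how these pieces fit together; the user and group names are hypothetical, and the managed policy ARN is AWS's AmazonS3ReadOnlyAccess:

    # IAM is global, so no --region flag is needed
    # create a user and a group (names are placeholders)
    aws iam create-user --user-name alice
    aws iam create-group --group-name s3-readers
    aws iam add-user-to-group --user-name alice --group-name s3-readers
    # attach an AWS managed policy to the group
    aws iam attach-group-policy --group-name s3-readers \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess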

16 S3 101 and Storage

  • read the S3 FAQ **
  • S3 is object based; each object consists of
    • key
    • value
    • version ID
    • metadata
    • subresources (ACLs, torrent)
  • files can be from 0 bytes to 5 TB
  • unlimited storage
  • stored in buckets
  • enabling multipart upload makes uploading large files to S3 much faster (see the sketch below)
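
A rough sketch of multipart upload with the AWS CLI, which switches to multipart automatically above a size threshold; the bucket name, file name, and size values below are only illustrative:

    # set the size at which the CLI switches to multipart, and the part size
    aws configure set default.s3.multipart_threshold 64MB
    aws configure set default.s3.multipart_chunksize 16MB
    # large files are then split into parts and uploaded in parallel
    aws s3 cp ./big-backup.zip s3://my-example-bucket/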

    Tiers

  • S3 Standard: 99.99% availability, 99.999999999% (11 9s) durability


  • S3 pricing: you are charged for storage, requests, storage management, and data transfer

17 S3 Bucket


Exam Tips

  • bucket names are part of a universal namespace, so they must be globally unique
  • you receive an HTTP 200 status code after a successful upload
  • storage classes: S3 Standard, S3-IA, Reduced Redundancy Storage
  • Encryption
    • Client Side
    • Server Side
      • Amazon S3 Managed Keys (SSE-S3)
      • KMS (SSE-KMS)
      • Customer Provided Keys (SSE-C)
  • control access to buckets using either a bucket ACL or bucket policies (example below)
  • by default, buckets are private and so are all objects inside them
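
A short sketch of the encryption and access-control points above, using hypothetical bucket and file names; policy.json is assumed to contain a standard S3 bucket policy:

    # upload with S3-managed server-side encryption (SSE-S3)
    aws s3 cp report.pdf s3://my-example-bucket/ --sse AES256
    # or with a KMS-managed key (SSE-KMS)
    aws s3 cp report.pdf s3://my-example-bucket/ --sse aws:kms
    # control access with a bucket policy stored in a local JSON file
    aws s3api put-bucket-policy --bucket my-example-bucket --policy file://policy.json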

18 Versioning

  • versioning doc
  • a delete request places a delete marker on the object instead of removing it
  • every new version of a file is private, even when a new version replaces the old one
  • to remove a file completely, delete all of its versions

tips

  • stores all versions of an object (including all writes, even deletes)
  • integrates with lifecycle rules
  • versioning's MFA Delete capability adds an additional layer of security against accidental deletion
  • once enabled, versioning cannot be removed, only suspended (see the sketch below)
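
A sketch of the versioning workflow with the AWS CLI; the bucket name, object key, and version ID are placeholders:

    # enable versioning (afterwards it can only be suspended, not removed)
    aws s3api put-bucket-versioning --bucket my-example-bucket \
        --versioning-configuration Status=Enabled
    # suspend versioning later if needed
    aws s3api put-bucket-versioning --bucket my-example-bucket \
        --versioning-configuration Status=Suspended
    # list every version and delete marker in the bucket
    aws s3api list-object-versions --bucket my-example-bucket
    # deleting a file completely means deleting each of its versions by version ID
    aws s3api delete-object --bucket my-example-bucket --key photo.jpg \
        --version-id EXAMPLE_VERSION_ID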

19 Cross Region Replication

  • CRR Monitor
  • AWS CLI
  • source and destination buckets must be in different regions
  • versioning must be enabled on both buckets
  • replicate the whole bucket, or filter by prefix or tags (rule sketch at the end of this section)
  • existing files are not cloned automatically; only new and subsequently changed objects are replicated. To copy the existing objects, a tool such as the AWS CLI is required:

    aws s3 cp --recursive s3://aws-sy-version-bucket s3://aws-ire-replication
  • deletes are not replicated, for security reasons (delete markers are not replicated)
  • deleting individual versions or delete markers is not replicated either
  • P.S. replication runs from source to destination, not from destination back to source
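
A sketch of creating the replication rule itself; the IAM role ARN is a placeholder, the bucket names reuse the ones from the copy command above, and versioning must already be enabled on both buckets:

    # replication.json (replicate every object to the destination bucket):
    # {"Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    #  "Rules": [{"Status": "Enabled", "Prefix": "",
    #             "Destination": {"Bucket": "arn:aws:s3:::aws-ire-replication"}}]}
    aws s3api put-bucket-replication --bucket aws-sy-version-bucket \
        --replication-configuration file://replication.json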

20 Lifecycle Management S3-IA and Glacier

  • can be used in conjunction with versioning
  • can be applied to current versions and previous versions
  • the following actions can be done (sample configuration below):
    • transition to S3-IA (Infrequent Access) (objects must be at least 128 KB and 30 days old)
    • archive to Glacier (30 days after IA)
    • permanently delete
    • transition to One Zone-IA
    • transition to Intelligent-Tiering
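
A sample lifecycle configuration that follows the transitions above; the bucket name, rule ID, and day counts are illustrative:

    # lifecycle.json: move to S3-IA after 30 days, to Glacier 30 days later,
    # and permanently delete after one year
    # {"Rules": [{"ID": "archive-rule", "Status": "Enabled", "Filter": {"Prefix": ""},
    #   "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"},
    #                   {"Days": 60, "StorageClass": "GLACIER"}],
    #   "Expiration": {"Days": 365}}]}
    aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket \
        --lifecycle-configuration file://lifecycle.json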

21 Cloudfront

  • Edge Location: the location where content is cached
  • Origin: S3 bucket, EC2 instance, Elastic Load Balancer, Route 53 ... also non-AWS servers
  • Distribution: consists of a collection of edge locations
  • edge locations are not read only; you can put an object to them
  • objects are cached for the life of the TTL (time to live)
  • you can clear the cache (invalidate objects), but you will be charged (invalidation example at the end of this section)

    key terminology

  • Web Distribution: for websites
  • RTMP Distribution: for media streaming
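
A sketch of fronting an S3 origin with CloudFront and then invalidating (clearing) cached objects, which is the operation you are charged for; the bucket name and distribution ID are placeholders:

    # create a web distribution with an S3 bucket as the origin
    aws cloudfront create-distribution \
        --origin-domain-name my-example-bucket.s3.amazonaws.com \
        --default-root-object index.html
    # clear cached objects before the TTL expires (invalidations are billed)
    aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/*"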

23 Security and Encryption

Security

  • everything is private by default
  • bucket policies
  • access control lists
  • access logs (can be delivered to another bucket, even one in another account)

Encryption

  • In transit
    • SSL/TLS
  • Server-Side Encryption (SSE)
    • SSE-S3 (Amazon S3 managed keys)
    • SSE-KMS (AWS Key Management Service)
    • SSE-C (customer-provided keys)
  • Client-Side Encryption
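
A sketch of setting bucket-level defaults for the points above: default server-side encryption and access logging delivered to a separate bucket; all bucket names are placeholders:

    # apply SSE-S3 encryption by default to every new object
    aws s3api put-bucket-encryption --bucket my-example-bucket \
        --server-side-encryption-configuration \
        '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
    # deliver server access logs to a different bucket
    aws s3api put-bucket-logging --bucket my-example-bucket \
        --bucket-logging-status \
        '{"LoggingEnabled": {"TargetBucket": "my-log-bucket", "TargetPrefix": "s3-logs/"}}'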

24 Storage Gateway

4 types

  • File Gateway (NFS)
  • Volume Gateway (iSCSI)
    • Stored Volumes
    • Cached Volumes
  • Tape Gateway (VTL)


Exam tips

  • File Gateway (NFS): for flat files, stored directly on S3
  • Volume Gateway (iSCSI)
    • Stored Volumes: the entire dataset is stored on site and is asynchronously backed up to S3
    • Cached Volumes: the entire dataset is stored on S3, and frequently accessed data is cached on site
  • Tape Gateway (VTL): used for backup; works with popular backup applications like NetBackup, Backup Exec, Veeam etc.

25 Snowball

  • Snowball
  • Snowball Edge
  • Snowmobile (US only)
  • understand the Snow family and Import/Export

27 S3 Transfer Acceleration

it uses CloudFront edge locations to speed up uploads to S3 (see the sketch below)
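
A sketch of enabling Transfer Acceleration and pointing the CLI at the accelerate (edge) endpoint; the bucket and file names are placeholders:

    # enable transfer acceleration on the bucket
    aws s3api put-bucket-accelerate-configuration --bucket my-example-bucket \
        --accelerate-configuration Status=Enabled
    # make the CLI use the accelerate endpoint for S3 transfers
    aws configure set default.s3.use_accelerate_endpoint true
    aws s3 cp ./big-backup.zip s3://my-example-bucket/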


28 S3 Static Websites

  • serverless
  • very cheap, scales automatically, load balances automatically
  • static content only
  • you can use a bucket policy (JSON) to make all files public automatically (see the sketch below)
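
A sketch of the static website setup: enable website hosting on the bucket and attach a public-read bucket policy; the bucket name is a placeholder and policy.json holds the policy shown in the comments:

    # turn on static website hosting
    aws s3 website s3://my-example-bucket/ \
        --index-document index.html --error-document error.html
    # policy.json (makes every object publicly readable):
    # {"Version": "2012-10-17", "Statement": [{"Sid": "PublicRead", "Effect": "Allow",
    #   "Principal": "*", "Action": "s3:GetObject",
    #   "Resource": "arn:aws:s3:::my-example-bucket/*"}]}
    aws s3api put-bucket-policy --bucket my-example-bucket --policy file://policy.json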

Questions

  • One of your users is trying to upload a 7.5 GB file to S3; however, they keep getting the following error message: "Your proposed upload exceeds the maximum allowed object size." What is a possible solution for this?
    • design your application to use the Multipart Upload API
  • RRS: Reduced Redundancy Storage
  • RRS is for easily reproducible, non-critical data
  • bucket URL format: https://s3-eu-west-1.amazonaws.com/acloudguru1234
  • S3 has eventual consistency for which HTTP Methods? overwrite PUTS and DELETES
  • You work for a busy digital marketing company that currently stores its data on premises. They are looking to migrate to AWS S3 and to store their data in buckets. Each bucket will be named after their individual customers, followed by a random series of letters and numbers. Once written to S3 the data is rarely changed, as it has already been sent to the end customer for them to use as they see fit. However, on some occasions, customers may need certain files updated quickly, and this may be for work that has been done months or even years ago. You would need to be able to access this data immediately to make changes in that case, but you must also keep your storage costs extremely low. The data is not easily reproducible if lost. Which S3 storage class should you choose to minimise costs and to maximise retrieval times? S3-IA
  • You work for a health insurance company who collects large amounts of documents regarding patients health records. This data will be used usually only once when assessing a customer and will then need to be securely stored for a period of 7 years. In some rare cases you may need to retrieve this data within 24 hours of a claim being lodged. Which storage solution would best suit this scenario? You need to keep your costs as low as possible. Glacier.
  • You run a popular photo sharing website that is based on S3. You generate revenue from your website via paid-for adverts; however, you have discovered that other websites are linking directly to the images on your site, and not to the HTML pages that serve the content. This means that people are not seeing your adverts, and every time a request is made to S3 to serve an image it is costing your business money. How could you resolve this issue? Use signed URLs with expiry dates (see the sketch below)
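
A sketch of the signed-URL answer: a pre-signed URL generated with the CLI that stops working after the given number of seconds; the bucket and key are placeholders:

    # generate a URL for a private object that expires after one hour
    aws s3 presign s3://my-example-bucket/images/photo.jpg --expires-in 3600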

RRS

Amazon S3 Reduced Redundancy Storage

Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage.

Reduced Redundancy Storage is:

Backed with the Amazon S3 Service Level Agreement for availability.
Designed to provide **99.99% durability and 99.99% availability** of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects.
Designed to sustain the loss of data in a single facility.
