AWS Direct Connect (DX)

AWS Direct Connect (DX) provides the ability to establish a dedicated network connection from sites such as data centers, offices, or colocation environments to AWS. It links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable.

One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to public AWS services (for example, Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path.

Direct Connect provides a more consistent network experience than internet-based connections, at bandwidths ranging from 50 Mbps to 10 Gbps on a single connection. It allows you to create resilient connections to AWS because you have full control over the network path and network providers between your remote networks and AWS.

AWS Direct Connect requires physical connectivity between the AWS network and your network. This process involves ordering connections, receiving an LOA-CFA (Letter of Authorization – Connecting Facility Assignment), ordering cross connects, and configuring VLANs and BGP.

AWS Direct Connect offers the following benefits:

1) Security: If you choose to monitor on-premises communication, you can span ports or install tools that monitor traffic across a particular VRF. You can place firewalls in line to meet internal security requirements. You can also control communication by enforcing which IP addresses are allowed to communicate across specific VLANs.

2) Traffic engineering: You have greater ability to define and control how data moves into and out of your AWS environment. You can define complex BGP routing rules, filter traffic paths, and move data from one VPC to another. You also have the ability to define which data flows through which VRF. This is particularly important if you need to satisfy specific compliance requirements for data in transit.

3) Traffic isolation: You can satisfy compliance requirements that call for data segregation. You also have the ability to define a public and a private VRF across the same Direct Connect connection, and monitor specific data flows for security and billing requirements.

– Direct Connect (DX) has the following requirements:

1) 802.1Q VLANs across a 1 Gbps or 10 Gbps Ethernet connection:- (802.1Q is an Ethernet standard that enables Virtual Local Area Networks (VLANs) on an Ethernet network. It adds a VLAN tag to the header of an Ethernet frame to define membership of a particular VLAN).

2) BGP and BGP MD5 authentication:- (Border Gateway Protocol (BGP) is a routing protocol used to exchange network routing and reachability information, either within the same autonomous system (iBGP) or between different autonomous systems (eBGP)).

3) Your network must use single-mode fiber with a 1000BASE-LX (1310 nm) transceiver for 1 gigabit Ethernet or a 10GBASE-LR (1310 nm) transceiver for 10 gigabit Ethernet.

4) Auto-negotiation for the port must be disabled. Port speed and full-duplex mode must be configured manually.

5) BFD (Optional):- (Bidirectional forwarding detection (BFD) is a network fault detection protocol that provides fast failure detection times, which facilitates faster re-convergence for dynamic routing protocols. It is a mechanism used to support fast failover of connections in the event of a failure in the forwarding path between two routers. If a failover occurs, then BFD notifies the associated routing protocols to recalculate available routes).

– AWS Direct Connect is billed based on port hours for the connection and data transfer outbound from AWS. The data transfer rates are lower than standard internet data transfer out rates.
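The billing model above can be sketched as a quick back-of-the-envelope calculation. The rates and the 730-hour month below are illustrative assumptions, not actual AWS prices:

```python
# Rough Direct Connect monthly cost model. The port-hour and data
# transfer rates are illustrative assumptions -- check current AWS
# pricing for real numbers.

def dx_monthly_cost(port_rate_per_hour, gb_out, dto_rate_per_gb, hours=730):
    """Port hours are billed whether or not traffic flows; only
    outbound data transfer from AWS is metered."""
    return hours * port_rate_per_hour + gb_out * dto_rate_per_gb

# e.g. a 1 Gbps port at a hypothetical $0.30/hour, pushing 10 TB
# out at a hypothetical $0.02/GB:
cost = dx_monthly_cost(0.30, 10_000, 0.02)
print(round(cost, 2))  # 419.0
```

The port-hour part is fixed; only the outbound transfer part scales with usage, which is why Direct Connect tends to pay off for steady, high-volume traffic.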



A VIF is a configuration consisting primarily of an 802.1Q VLAN and the options for an associated BGP session. It contains all the configuration parameters required for both the AWS end of a connection and your end of the connection. AWS Direct Connect supports two types of VIFs:
– Public VIFs
– Private VIFs


1. Public VIFs: Public Virtual interfaces enable your network to reach all of the AWS public IP addresses for the AWS region with which your AWS Direct Connect connection is associated.

Public VIFs are typically used to enable direct network access to services that are not reachable via a private IP address within your own VPC. These include Amazon S3, Amazon DynamoDB and Amazon SQS.

2. Private VIFs: Private Virtual Interfaces enable your network to reach resources that have been provisioned within your VPC via their private IP address. A Private VIF is associated with the VGW for your VPC to enable this connectivity.

Private VIFs are used to enable direct network access to services that are reachable via an IP address within your own VPC. These include Amazon EC2, Amazon RDS and Amazon Redshift.



– A Direct Connect gateway enables you to combine private VIFs with multiple VGWs in the local region or in remote regions. You can use this feature to establish connectivity from an AWS Direct Connect location in one geographical zone to an AWS region in a different geographical zone.

– You associate a Direct Connect gateway with the virtual private gateway for the VPC, and then create a private virtual interface for your AWS Direct Connect connection to the Direct Connect gateway.

– A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway in any public region and access it from all other public regions.

For more information about Direct Connect, check the re:Invent videos and the documentation below:

  1. AWS re:Invent 2017: Extending Data Centers to the Cloud: Connectivity Options and Co (NET301)
  2. AWS re:Invent 2017: Deep Dive: AWS Direct Connect and VPNs (NET403)
  3. AWS Direct Connect Documentation
  4. Amazon VPC Network Connectivity Options

Elastic Load Balancing (ELB) Types & Use Cases

A load balancer in AWS is a mechanism that automatically distributes traffic across multiple compute resources such as Amazon EC2 Instances (or potentially other targets). Using a load balancer increases the availability and fault tolerance of your applications.

A load balancer accepts incoming traffic from clients and routes requests to its registered targets (such as EC2 instances) in one or more Availability Zones.

The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets. (Note: you can configure health checks, which are used to monitor the health of the compute resources so that the load balancer can send requests only to the healthy ones.)

When the load balancer detects an unhealthy target, it stops routing traffic to that target, and then resumes routing traffic to that target when it detects that the target is healthy again. You configure your load balancer to accept incoming traffic by specifying one or more listeners.

A listener is a process that checks for connection requests. It is configured with a protocol and port number for connections from clients to the load balancer and a protocol and port number for connections from the load balancer to the targets.
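A minimal sketch of the listener and health-check behaviour described above. The class and method names are invented for illustration; this is a simulation of the semantics, not the ELB API:

```python
# Toy load balancer: track target health and route only to healthy
# targets, round-robin. Target IDs are made-up instance IDs.
import itertools

class Listener:
    def __init__(self, targets):
        self.health = {t: True for t in targets}
        self._rr = itertools.cycle(targets)

    def set_health(self, target, healthy):
        self.health[target] = healthy

    def route(self):
        """Return the next healthy target, round-robin."""
        for _ in range(len(self.health)):
            t = next(self._rr)
            if self.health[t]:
                return t
        raise RuntimeError("no healthy targets")

lb = Listener(["i-aaa", "i-bbb", "i-ccc"])
lb.set_health("i-bbb", False)        # failed its health check
picks = [lb.route() for _ in range(4)]
print(picks)  # ['i-aaa', 'i-ccc', 'i-aaa', 'i-ccc']
```

Once `i-bbb` passes health checks again, `set_health("i-bbb", True)` puts it back into rotation, mirroring the resume-on-healthy behaviour described above.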

You can either manage your own virtual load balancers on Amazon EC2 instances or leverage an AWS cloud service called Elastic Load Balancing.

Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones. ELB scales your load balancer as traffic to your application changes over time, and can scale to the vast majority of workloads automatically.

Elastic Load Balancing provides three different types of load balancers as of this writing: Classic Load Balancer, Application Load Balancer (ALB), and Network Load Balancer (NLB).


Classic Load Balancer: 

– Is the original Elastic Load Balancing offering, released prior to the Application Load Balancer and Network Load Balancer

– Using a Classic Load Balancer instead of an Application Load Balancer has the following benefits:
• Support for EC2-Classic
• Support for TCP and SSL listeners
• Support for sticky sessions using application-generated cookies

– Use case: Used when you want to load balance for Amazon EC2-Classic or need support for the proxy protocol

Application Load Balancer

– An Application Load Balancer operates at the application layer (Layer 7) of the OSI model.

– Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
• Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
• Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
• Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
• Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
• Improved load balancer performance

– Use Case: Used when you want to have support for path-based routing and host-based routing
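The path-based and host-based rules above can be mimicked with a few lines of matching logic. The rule shapes, hostnames, and target group names here are invented for illustration, and real ALB rules support more condition types than this sketch shows:

```python
# Toy version of ALB listener rules: each rule matches on the host
# header or the URL path and forwards to a target group; unmatched
# requests fall through to a default target group.
import fnmatch

rules = [
    {"host": "api.example.com", "target_group": "api-servers"},
    {"path": "/images/*",       "target_group": "image-service"},
    {"path": "/orders/*",       "target_group": "order-service"},
]
default_target_group = "web-servers"

def pick_target_group(host, path):
    for rule in rules:
        if "host" in rule and fnmatch.fnmatch(host, rule["host"]):
            return rule["target_group"]
        if "path" in rule and fnmatch.fnmatch(path, rule["path"]):
            return rule["target_group"]
    return default_target_group

print(pick_target_group("www.example.com", "/images/logo.png"))  # image-service
print(pick_target_group("api.example.com", "/v1/users"))         # api-servers
print(pick_target_group("www.example.com", "/"))                 # web-servers
```

This is what lets you run several small services behind one load balancer: the rules, not separate load balancers, decide which service handles each request.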

Network Load Balancer

– A Network Load Balancer operates at the transport layer (Layer 4) of the OSI model.

– Using a Network Load Balancer instead of a Classic Load Balancer has the following benefits:
• Ability to handle volatile workloads and scale to millions of requests per second.
• Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer.
• Support for registering targets by IP address, including targets outside the VPC for the load balancer.
• Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.

– Use Case: Used when you want support for static IP addresses for the load balancer, target registration by IP address, client IP pass-through when registering targets by instance ID, and volatile workloads that require scaling to millions of requests per second

For more information about Elastic Load Balancing and the load balancer types, check the re:Invent videos below:

– AWS re:Invent 2017: Elastic Load Balancing Deep Dive and Best Practices (NET402)

– AWS re:Invent 2017: Deep Dive into the New Network Load Balancer (NET304)

– AWS re:Invent 2016: Elastic Load Balancing Deep Dive and Best Practices (NET403)

VPC Peering vs. AWS PrivateLink (Use Cases)

In this short post, I briefly want to talk about the differences between VPC peering and AWS PrivateLink and their use cases.

Let’s do a quick 10,000-foot overview of both services.

>> VPC Peering <<

– A VPC peering connection allows two Amazon VPCs to communicate.

– It enables instances in either VPC to communicate with each other as if they were within the same private network.


– You can peer VPCs with other AWS accounts as well as with other VPCs in the same account.

– Peering connections are created through a request/accept protocol.

– Peering requires the owner of the requested VPC to accept the peering request, and both VPCs to configure routing over the peering connection.


– VPC peering connections do not support transitive routing.

– You also cannot create a peering connection between VPCs that have matching or overlapping CIDR blocks.


>> AWS PrivateLink <<

– AWS PrivateLink is a type of interface VPC endpoint for AWS services as well as customer and partner services. It also enables you to build your own VPC endpoint services.

– It provides access to services such as the Amazon EC2 API and the Elastic Load Balancing API.

– Each AWS PrivateLink connection is a relationship between a service consumer and a service provider.

– It allows you to access or share a service securely between VPCs or accounts, using a Network Load Balancer to create VPC endpoint services.

– It uses the Network Load Balancer to distribute traffic to a shared resource.


>>Differences And Use Cases <<

1) VPC peering is appropriate when there are many resources that should communicate between the peered VPCs. If there is a high degree of inter-VPC communication and the security and trust levels are similar, VPC peering is the main option.

– PrivateLink is better suited for VPC relationships that have different trust levels. It also reduces overall complexity if you only need to share one application.

2) VPC peering does not support overlapping VPC CIDR ranges.

– AWS PrivateLink supports overlapping CIDR ranges by applying source NAT from the consumer to the provider of the AWS PrivateLink.

3) VPC peering has scale limits: a VPC can only peer with up to 125 other VPCs.

– AWS PrivateLink scales to thousands of consumers per VPC.

4) AWS PrivateLink only allows the consumer to originate connections to the provider.

– If bidirectional communication is needed, VPC peering or a reciprocal AWS PrivateLink between the consumer and provider may be required.

5) AWS PrivateLink inherits the design considerations of a Network Load Balancer: for example, NLBs only support TCP, and connections from the consumer to the provider go through source NAT, which may prevent applications from identifying the consumer IP address.






Today, in preparation for my AWS Solutions Architect exam, I went through AWS application services. This article is solely about SQS, SWF, and their differences.


  • Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them.
  • A queue is a temporary repository for messages that are awaiting processing.
  • Using Amazon SQS, you can decouple the components of an application so they run independently, with Amazon SQS handling message management between components.
  • Any component of a distributed application can store messages in a failsafe queue. Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages programmatically using the SQS API.
  • The queue acts as a buffer between the component producing and saving data and the component receiving the data for processing. This means the queue resolves issues that arise if the producer is producing work faster than the consumer can process it, or if the producer or consumer is only intermittently connected to the network.
  • SQS is pull-based, not push-based.
  • Messages can be up to 256 KB in size.
  • Messages can be kept in the queue from 1 minute to 14 days. The default is 4 days.
  • Visibility timeout is the amount of time a message is invisible in the SQS queue after a reader picks it up.
  • Provided the job is processed before the visibility timeout expires, the message is then deleted from the queue. If the job is not processed within that time, the message becomes visible again and another reader can process it.
  • This could result in the same message being delivered twice.
  • The maximum visibility timeout is 12 hours.
  • SQS guarantees that your message will be processed at least once.
  • Amazon SQS long polling is a way to retrieve messages from your Amazon SQS queues. Regular short polling returns immediately, even if the message queue being polled is empty.
  • Long polling doesn’t return a response until a message arrives in the message queue, or the long poll times out.
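The visibility-timeout behaviour above can be sketched with a toy queue. This simulates the semantics only (it is not the SQS API), and the class name and clock mechanism are invented for illustration:

```python
# Toy queue: a received message becomes invisible until its timeout
# expires; if it isn't deleted in time it becomes visible again,
# which is why the same message can be delivered more than once.
class ToyQueue:
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}          # msg_id -> invisible_until (clock time)
        self.clock = 0

    def send(self, msg_id):
        self.messages[msg_id] = 0   # visible immediately

    def receive(self):
        for msg_id, invisible_until in self.messages.items():
            if self.clock >= invisible_until:
                # hide the message for the visibility-timeout window
                self.messages[msg_id] = self.clock + self.visibility_timeout
                return msg_id
        return None                 # short poll on an empty queue

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1")
assert q.receive() == "m1"   # first consumer picks it up
assert q.receive() is None   # invisible while being processed
q.clock = 31                 # consumer crashed; timeout expired
assert q.receive() == "m1"   # redelivered -> at-least-once delivery
q.delete("m1")               # successful processing deletes it
```

The crash-and-redeliver path at the end is exactly why consumers must be idempotent with standard queues.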

– There are two types of queues:

  • Standard Queues (Default)
  • FIFO Queues

>> Standard Queues

  • Amazon SQS offers standard as the default queue type. A standard queue lets you have a nearly unlimited number of transactions per second.
  • Standard queues guarantee that a message is delivered at least once. However, occasionally (because of the highly distributed architecture that allows high throughput) more than one copy of a message might be delivered, out of order. Standard queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they are sent.

>> FIFO Queues

  • The FIFO queue complements the standard queue. The most important features of this queue type are FIFO (first-in-first-out) delivery and exactly-once processing.
  • The order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Duplicates are not introduced into the queue. FIFO queues also support message groups that allow multiple ordered message groups within a single queue.
  • FIFO queues are limited to 300 transactions per second (TPS), but have all the capabilities of standard queues.
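Two of the FIFO properties above, deduplication and per-group ordering, in a toy sketch. This is a simulation of the semantics, not the SQS API, and the class, group, and message names are made up:

```python
# Toy FIFO queue: a repeated deduplication id is dropped, and
# messages within one message group come out strictly in send order.
from collections import OrderedDict, defaultdict

class ToyFifoQueue:
    def __init__(self):
        self.groups = defaultdict(OrderedDict)  # group -> dedup_id -> body

    def send(self, group_id, dedup_id, body):
        # setdefault ignores a duplicate deduplication id
        self.groups[group_id].setdefault(dedup_id, body)

    def receive(self, group_id):
        if self.groups[group_id]:
            _, body = self.groups[group_id].popitem(last=False)  # oldest first
            return body
        return None

q = ToyFifoQueue()
q.send("orders", "id-1", "create order 42")
q.send("orders", "id-1", "create order 42")   # duplicate: dropped
q.send("orders", "id-2", "ship order 42")
print(q.receive("orders"))  # create order 42
print(q.receive("orders"))  # ship order 42
print(q.receive("orders"))  # None -> drained, no duplicate was seen
```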

>> SWF

  • Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components.
  • Amazon SWF enables applications for a range of use cases, including media processing, web application back ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks.
  • Tasks represent invocations of various processing steps in an application, which can be performed by executable code, web service calls, human actions, and scripts.

>> SWF Workers

  • Workers are programs that interact with Amazon SWF to get tasks, process received tasks, and return the results.

>> SWF Decider

  • The decider is a program that controls the coordination of tasks, i.e., their ordering, concurrency, and scheduling according to the application logic.

>> SWF Workers & Deciders

  • The workers and the deciders can run on cloud infrastructure, such as Amazon EC2, or on machines behind firewalls.
  • Amazon SWF brokers the interactions between workers and the decider. It allows the decider to get consistent views into the progress of tasks and to initiate new tasks in an ongoing manner.
  • At the same time, Amazon SWF stores tasks, assigns them to workers when they are ready, and monitors their progress. It ensures that a task is assigned only once and is never duplicated. Since Amazon SWF maintains the application’s state durably, workers and deciders don’t have to keep track of execution state. They run independently and scale quickly.
  • Your workflow and activity types and the workflow execution itself are all scoped to a domain. Domains isolate a set of types, executions, and task lists from others within the same account.
  • You can register a domain by using the AWS Management Console or by using the RegisterDomain action in the Amazon SWF API.
  • The parameters are specified in JavaScript Object Notation (JSON) format.
  • The maximum workflow execution time is 1 year, and the value is always measured in seconds.


  • Amazon SWF presents a task-oriented API, whereas Amazon SQS offers a message-oriented API.
  • Amazon SWF ensures that a task is assigned only once and is never duplicated; with Amazon SQS, you need to handle duplicate messages and may also need to ensure that a message is processed only once.
  • Amazon SWF keeps track of all the tasks and events in an application; with Amazon SQS, you need to implement your own application-level tracking, especially if your application uses multiple queues.

VPC 101

I’m planning on taking my AWS Advanced Networking Specialty exam in October, and this is a key topic in the exam.

  • Think of a VPC as a virtual data center in the cloud.
  • Amazon Virtual Private Cloud lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define.
  • You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
  • You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your web servers that have access to the internet, and place your backend systems, such as databases or application servers, in a private subnet with no internet access.
  • You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.
  • Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and leverage the AWS cloud as an extension of your corporate data center.

>> What Can You Do With A VPC?

  • Launch instances into a subnet of your choosing
  • Assign custom IP address ranges in each subnet
  • Configure route tables between subnets
  • Create an internet gateway and attach it to your VPC
  • Much better security control over your AWS resources

>> VPC Peering

  • Allows you to connect one VPC with another via a direct network route using private IP addresses
  • Instances behave as if they were on the same private network.

Database 101

Today, in preparation for my AWS Certified Solutions Architect exam, I went through databases on AWS (which was quite new to me). Below are a few of the notes I created.

Relational databases are what most of us are used to. They have been around since the ’70s. Think of a traditional spreadsheet.

– Database, Tables, Row, Fields (Columns)

>> Relational Database Types

  • SQL Server
  • Oracle
  • MySQL
  • PostgreSQL
  • Aurora
  • MariaDB


>> DynamoDB

  • Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.
  • It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
  • Stored on SSD storage
  • Spread across 3 geographically distinct data centers
  • Eventually consistent reads (default)
  • Strongly consistent reads

>> Aurora

  • Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.
  • Amazon Aurora provides up to five times better performance than MySQL, at a price one tenth that of a commercial database, while delivering similar performance and availability.


  • DynamoDB offers “push button” scaling, meaning that you can scale your database on the fly, without any downtime.
  • RDS is not so easy: you usually have to use a bigger instance size or add a read replica.

AWS Route 53

Route 53 is AWS’s DNS service (named after port 53, the port the DNS protocol uses). But before going through it, here are a few points to understand:

  • DNS is used to convert human-friendly domain names into Internet Protocol (IP) addresses.
  • IP addresses are used by computers to identify each other on the network.
  • IP addresses commonly come in 2 different forms, IPv4 and IPv6.


Amazon Route 53 is a highly available and scalable cloud DNS web service that is designed to give developers and organizations an extremely reliable and cost effective way to route end users to internet applications.

The Following are the routing policies available in Route 53.

– Simple

– Weighted

– Latency

– Failover

– Geolocation


– Simple

  • This is the default routing policy when you create a new record set.
  • It is most commonly used when you have a single resource that performs a given function for your domain, for example one web server that serves content for the website.


– Weighted

  • Weighted routing policies let you split your traffic based on different weights assigned.
  • For example, you can set 10% of your traffic to go to us-east-1 and 90% to go to eu-west-1.
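A weighted split amounts to answering each DNS query with a record chosen with probability weight/total. A quick simulation (the region names and weights follow the 10%/90% example above):

```python
# Simulate weighted DNS answers: each record is returned with
# probability weight / total_weight. random.choices does exactly
# that weighted selection.
import random

records = {"us-east-1": 10, "eu-west-1": 90}

def resolve():
    endpoints = list(records)
    weights = [records[e] for e in endpoints]
    return random.choices(endpoints, weights=weights, k=1)[0]

random.seed(1)
sample = [resolve() for _ in range(1000)]
share = sample.count("eu-west-1") / len(sample)
print(round(share, 2))  # close to 0.9
```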


– Latency

  • Latency-based routing allows you to route your traffic based on the lowest network latency for your end user (i.e., which region will give them the fastest response time).
  • To use latency-based routing, you create a latency resource record set for the Amazon EC2 (or ELB) resource in each region that hosts your website.
  • When Amazon Route 53 receives a query for your site, it selects the latency resource record set for the region that gives the user the lowest latency. Route 53 then responds with the value associated with that resource record set.
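The selection step above reduces to taking the region with the lowest measured latency for the resolver. The latency numbers below are made up for illustration:

```python
# Pick the record set from the region with the lowest measured
# latency; a one-line min() over a latency table.
measured_latency_ms = {"us-east-1": 82, "eu-west-2": 14, "ap-southeast-1": 210}

def latency_answer(latencies):
    return min(latencies, key=latencies.get)

print(latency_answer(measured_latency_ms))  # eu-west-2
```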


– Failover

  • Failover routing policies are used when you want to create an active/passive set. For example, you may want your primary site in eu-west-2 and your secondary DR site in ap-southeast-2.
  • Route 53 will monitor the health of your primary site using a health check.
  • A health check monitors the health of your endpoints.


– Geolocation

  • Geolocation routing lets you choose where your traffic will be sent based on the geographic location of your users (i.e., the location from which DNS queries originate). For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for your European customers.
  • These servers may have the local language of your European customers, and all prices displayed in euros.


AWS Lambda

So today I learnt about an AWS compute service called Lambda.

So What Is Lambda?


  • AWS Lambda is a compute service where you can upload your code and create a Lambda function.
  • AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don’t have to worry about operating systems, scaling, etc.
  • No servers, continuous scaling.
  • You can use Lambda in the following ways:
  • As an event-driven compute service, where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.
  • As a compute service to run your code in response to HTTP requests using Amazon API Gateway, or API calls made using AWS SDKs.

>> How Is Lambda Priced?

Number of Requests

  • The first 1 million requests are free; $0.20 per 1 million requests thereafter.


Duration

  • Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 ms.
  • The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
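A worked example of the pricing rules above: round duration up to the nearest 100 ms, charge per GB-second, and add the request charge past the first million. This sketch ignores the free tier of GB-seconds, and the invocation figures are hypothetical:

```python
# Back-of-the-envelope Lambda monthly cost from the two published
# rates above: $0.20 per million requests (after 1M free) and
# $0.00001667 per GB-second.
import math

def lambda_monthly_cost(requests, avg_ms, memory_mb):
    billable_requests = max(0, requests - 1_000_000)
    request_cost = billable_requests / 1_000_000 * 0.20
    billed_ms = math.ceil(avg_ms / 100) * 100          # round up to 100 ms
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    duration_cost = gb_seconds * 0.00001667
    return request_cost + duration_cost

# e.g. 3M invocations, 120 ms average duration, 512 MB of memory:
print(round(lambda_monthly_cost(3_000_000, 120, 512), 2))  # 5.4
```

Note how the 120 ms average bills as 200 ms: the round-up to 100 ms can noticeably inflate the cost of very short functions.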


Elastic Block Storage (EBS) 101

So today I worked on an AWS service called EBS, which just feels like an external floppy disk or flash drive.

So What Is EBS ?


  • Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances, run a database, or use them in any other way you would use a block device.
  • Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component.

EBS Volume Types


  • General Purpose SSD (GP2)

– General Purpose, balances both price and performance

– Ratio of 3 IOPS per GB, up to a maximum of 10,000 IOPS (reached at volume sizes of 3,334 GB and above), with the ability to burst up to 3,000 IOPS for extended periods of time on smaller volumes

  • Provisioned IOPS SSD (IO1)

– Designed for I/O intensive applications such as large relational or NoSQL databases

– Use if you need more than 10,000 IOPS

– Can Provision Up to 20,000 IOPS per Volume

  • Throughput Optimized HDD (ST1)

– Big data

– Data warehouses

– Log processing

– Cannot be a boot volume

  • Cold HDD (SC1)

– Lowest cost storage for infrequently accessed workloads

– File Server

– Cannot be a boot volume

  • Magnetic (Standard)

– Lowest Cost per gigabyte of all EBS volume types that is bootable.

– Magnetic volumes are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important.
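The GP2 figures quoted above (3 IOPS per GB, the 10,000 IOPS cap, burst to 3,000) can be expressed as a small function. Treating burst as always available is a simplification for illustration; in practice burst is governed by a credit balance:

```python
# GP2 IOPS rule of thumb: baseline is 3 IOPS/GB capped at 10,000;
# small volumes can burst up to 3,000 IOPS.
def gp2_iops(size_gb):
    baseline = min(3 * size_gb, 10_000)
    return max(baseline, 3_000)  # burst floor for small volumes

print(gp2_iops(100))    # 3000  (baseline 300; burst covers the rest)
print(gp2_iops(2000))   # 6000
print(gp2_iops(3334))   # 10000 -> the 3,334 GB figure quoted above
```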

EC2 101

So today I worked on EC2, so far the most interesting topic, and to me the cornerstone of AWS.

So have fun going through this.

So What Is EC2?


  • Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud.
  • Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
  • Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.
  • Amazon EC2 provides developers the tools to build failure-resilient applications and isolate themselves from common failure scenarios.

>> EC2 Options

  • On Demand:- Allows you to pay a fixed rate by the hour (or by the second) with no commitment
  • Reserved:- Provides you with a capacity reservation and offers a significant discount on the hourly charge for an instance, with 1-year or 3-year terms
  • Spot:- Enables you to bid whatever price you want for instance capacity, providing even greater savings if your applications have flexible start and end times
  • Dedicated Hosts:- Physical EC2 servers dedicated for your use. Dedicated hosts can help you reduce costs by allowing you to use your existing server-bound software licenses.


>> Use Cases (On Demand)

  • Users that want the low cost and flexibility of Amazon EC2 without any upfront payment or long-term commitment
  • Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
  • Applications being developed or tested on Amazon EC2 for the first time

>> Use Cases (Reserved)

  • Applications with steady state or predictable usage
  • Applications that require reserved capacity
  • Users able to make upfront payments to reduce their total computing costs even further

Standard Reserved Instances (up to 75% off on demand)

Convertible Reserved Instances (up to 54% off on demand)

  • Capability to change the attributes of the RI as long as the exchange results in the creation of reserved instances of equal or greater value.

Scheduled Reserved Instances

  • Available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.

>> Use Cases (Spot)

  • Applications that have flexible start and end times.
  • Applications that are only feasible at very low compute prices
  • Users with urgent computing needs for large amounts of additional capacity

>> Use Cases (Dedicated)

  • Useful for regulatory requirements that may not support multi- tenant virtualization
  • Great for licensing which does not support multi-tenancy or cloud deployment
  • Can be purchased on-demand (hourly)
  • Can be purchased as a reservation for up to 70% off the on-demand price


>> EC2 Instance Types

1) D2 – Dense Storage (Use Case: File Servers, Data Warehousing, Hadoop)

2) R4 – Memory Optimized (Use Case: Memory-Intensive Apps, DBs)

3) M4 – General Purpose (Use Case: Application Servers)

4) C4 – Compute Optimized (Use Case: CPU-Intensive Apps, DBs)

5) G2 – Graphics Intensive (Use Case: Video Encoding, 3D Application, Streaming)

6) I2 – High Speed Storage (Use Case: NoSQL DBs, Data Warehousing)

7) F1 – Field Programmable Gate Array (Use Case: Hardware Acceleration for your code)

8) T2 – Lowest Cost, General Purpose (Use Case: Web Servers, Small DBs)

9) P2 – Graphics, General Purpose (Use Case: Machine Learning, Bitcoin Mining)

10) X1 – Memory Optimized (Use Case: SAP HANA, Apache Spark)



D – For Density

R – For RAM

M – Main Choice For General Purpose Apps

C- For Compute

G – Graphics

I – For IOPS

F – For FPGA

T – Cheap General Purpose (Think T2 Micro)

P – Graphics (Think Pics)

X – Extreme Memory