How I passed the HashiCorp Vault exam

Reading Time: 3 minutes

In this post, I will discuss how I prepared for and passed the HashiCorp Vault exam.

Note: I had only used Vault for a PoC with a limited scope before I started preparing for the exam, hence some concepts were really new to me.

This exam is very developer-focused; if you are not comfortable with development terms, you might have some difficulties, but it is not impossible.

It took me a total of two weeks to prepare for the exam. During those two weeks, I worked on hands-on activities and read the documentation, some of which I will cover in short videos in the coming days.

To prepare for this exam, I used the official study guide published by HashiCorp, which can be found here.

Vault Concepts

What is Vault – https://www.vaultproject.io/docs/what-is-vault

Watch the video from Mike Møller Nielsen

Intro to Vault (Armon)

11 fundamental concepts – https://www.vaultproject.io/docs/concepts – Read them and re-read them, as they are really important! Some core topics you should focus on:

Vault Fundamentals

Describe authentication methods:

Authentication – https://www.vaultproject.io/docs/auth

Concepts – https://www.vaultproject.io/docs/concepts/auth

AWS auth method – https://www.vaultproject.io/docs/auth/aws

Also, I recommend that you complete all the labs for “Authentication” as this is a major topic for the exam.

https://learn.hashicorp.com/collections/vault/auth-methods
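As a warm-up before those labs, enabling and exercising an auth method on a dev server only takes a few commands. A minimal sketch using userpass (the username, password, and policy are examples only):

# Enable the userpass auth method and create a user mapped to a policy
vault auth enable userpass
vault write auth/userpass/users/alice password="training" policies="default"

# Log in; Vault returns a token tied to that user's policies
vault login -method=userpass username=alice

# Review which auth methods are enabled and where they are mounted
vault auth list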

CLI and UI – Understand what the CLI commands do and review all of the CLI options.

CLI access to Vault – https://www.vaultproject.io/docs/commands/index.html

Vault UI – https://www.vaultproject.io/docs/configuration/ui

Also, review the Vault CLI console within the UI and understand why it differs from the binary CLI.
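To get fluent with the command structure and common flags, it helps to run a handful of everyday commands against a dev server. A short sketch (the address, paths, and values are examples only):

export VAULT_ADDR="http://127.0.0.1:8200"   # the CLI, UI, and API all talk to the same endpoint

vault status                           # seal status, HA mode, version
vault secrets list -detailed           # what is mounted and where
vault kv put secret/myapp/db user="app" pass="s3cr3t"
vault kv get -format=json secret/myapp/db
vault kv get -field=pass secret/myapp/db    # pull a single field for scripting

The web UI also ships with a built-in browser console that accepts a subset of these commands, which is worth trying so the differences from the binary CLI are clear.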

Vault Policies

This is a very important topic. I recommend that you start a dev-mode server, create some policies, create users with those policies attached, and play around with the concepts.

Knowing how policies work will help you answer questions quickly, as there are many questions related to policies. It is also important to understand what “*” does and what “+” does (see the sketch after the labs below). My recommendation is to go over the following labs:

https://learn.hashicorp.com/tutorials/vault/policies?in=vault/policies

https://learn.hashicorp.com/tutorials/vault/getting-started-policies?in=vault/getting-started

https://learn.hashicorp.com/tutorials/vault/policy-templating?in=vault/policies
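To make the wildcard behavior concrete, here is a small policy sketch to try on a dev server (the paths are examples only, not taken from the exam or the docs):

# "*" at the end of a path matches anything after it, including further
# path segments: team-a/app1, team-a/app1/config, and so on.
path "secret/data/team-a/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# "+" matches exactly one path segment: secret/data/team-a/config and
# secret/data/team-b/config match, but secret/data/team-a/app1/config does not.
path "secret/data/+/config" {
  capabilities = ["read"]
}

Load it with vault policy write team-policy policy.hcl, mint a token with vault token create -policy=team-policy, and check which paths that token can and cannot read.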

Tokens

You have to know tokens very well! They are the heart and soul of the Vault engine, so learning this and going through the hands-on labs will help you understand Vault really well.

Root Token – https://www.vaultproject.io/docs/concepts/tokens

Learn the difference between service and batch tokens, and between tokens with a parent and orphan tokens.

Also, understand how token leases and TTLs work, and what token accessors are.
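To make those distinctions concrete, here is a short dev-server sketch of creating and inspecting the different token types (the policy, TTLs, and placeholders are examples only):

# Service token (the default): renewable, persisted, has an accessor
vault token create -policy="default" -ttl=1h

# Orphan token: has no parent, so it outlives the token that created it
vault token create -policy="default" -orphan

# Batch token: lightweight, not renewable, not persisted, and has no accessor
vault token create -type=batch -policy="default" -ttl=20m

# Inspect a token, including its remaining TTL
vault token lookup <token>

# Work with accessors instead of the token value itself
vault list auth/token/accessors
vault token lookup -accessor <accessor>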

See the following video on auto-unseal and batch tokens:

Secrets Management

This is the core topic and you must know the ins and outs of it – understand how each secrets engine works and what the use case for each one is.

Review the following topics:

https://www.vaultproject.io/docs/secrets

https://www.vaultproject.io/docs/secrets/databases

https://www.vaultproject.io/docs/secrets/aws

Also, complete the secrets engines labs on HashiCorp Learn before the exam.
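It helps as well to walk a secret through its lifecycle on a dev server. A minimal sketch with the KV v2 engine (paths and values are examples only):

# Enable a KV v2 engine at a custom path
vault secrets enable -path=app-kv kv-v2

# Write, read, and version a secret
vault kv put app-kv/payments api_key="abc123"
vault kv get app-kv/payments
vault kv put app-kv/payments api_key="def456"    # creates version 2
vault kv get -version=1 app-kv/payments

# Dynamic engines (database, aws, ...) issue short-lived credentials instead
vault secrets enable database
vault secrets list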

Vault API

You will see questions on the Vault API. Review the following topics and understand how the token header is sent via curl:

https://www.vaultproject.io/docs/auth/approle.html

Understand when to use AppRole vs. other authentication methods.

https://learn.hashicorp.com/tutorials/vault/getting-started-apis
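As a sketch of how the token header and the AppRole flow fit together over the HTTP API (the role ID, secret ID, and paths below are placeholders):

export VAULT_ADDR="http://127.0.0.1:8200"

# 1. Log in with a role_id/secret_id pair; the login call itself needs no token header
curl --request POST \
     --data '{"role_id":"<role_id>","secret_id":"<secret_id>"}' \
     $VAULT_ADDR/v1/auth/approle/login

# 2. Take client_token from the response and send it as the X-Vault-Token header
curl --header "X-Vault-Token: <client_token>" \
     $VAULT_ADDR/v1/secret/data/myapp/config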

Watch this video from Mike Møller Nielsen – he explains how the API and curl work with response wrapping.

Vault Architecture

This is an important topic as well. You don’t necessarily have to create an HA Vault cluster, but it helps to understand how the deployment works. I used the following exercise to deploy Vault HA on AWS:

https://github.com/hashicorp/vault-guides/tree/master/operations/provision-vault/quick-start/terraform-aws (you must know how to use Terraform)
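Even if you do not stand up a full cluster, being able to read a server configuration file helps with the architecture questions. A minimal HA-style config sketch backed by Consul (all addresses and paths are placeholders, and TLS is disabled here only because it is a lab):

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1      # never do this outside a lab
}

api_addr     = "http://10.0.1.10:8200"
cluster_addr = "https://10.0.1.10:8201"
ui           = true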

Watch Bryan Krausen’s Vault HA video:

Overall, you must go through all the topics in the study guide here – the questions are drawn from a wide variety of topics, but the topics and items I have shared should get you comfortable with Vault.

Some tips for taking the exam:

  • Arrive 15 mins before the exam starts
  • Read the questions and answers carefully
  • If you don’t know the answer, mark the question and move on
  • You have 60 minutes to answer all the questions
  • Take Ned Bellavance’s Pluralsight course if you can here

Terraform Cloud Series – Part 4 (remote state)

Reading Time: 2 minutes

Continuing from where we left off, in this post I will discuss how to tap into a workspace’s state file.

In the previous post, we connected workspaces so that running the parent triggers the child workspace. In some cases, however, a stack needs to fetch data from another stack in order to cross-reference resource names, IDs, and so on; doing this makes the Terraform code more reusable and flexible.

Let’s look at an example of how to pull data from a remote state file stored in the Terraform cloud.

If we look at the execution flow in the previous post, we executed 1-poc-network and the stack trigger executed 2-poc-security-groups; but when 2-poc-security-groups executes, it requires the vpc_id created in 1-poc-network. So, let’s look at the code and break it down a bit.

module "vote_service_sg" {
  source = "terraform-aws-modules/security-group/aws"
  name        = "access-security-group"
  description = "Security group for user-service with custom ports open 
  within VPC, and PostgreSQL publicly open"
  vpc_id      = "VPC_ID" # --> VPC ID associating Security group to VPC
  ingress_cidr_blocks      = ["10.10.0.0/16","10.10.105.0/24","78.1.10.100/32"]
  ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8090
      protocol    = "tcp"
      description = "User-service ports"
      cidr_blocks = "10.10.0.0/16"
    },
    {
      rule        = "postgresql-tcp"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
  tags = var.default_tags
}

Looking at the vpc_id argument, notice that we have to provide the VPC ID every time this code is executed.

vpc_id      = "VPC_ID" # --> VPC ID associating Security group to VPC

If we were to turn this into a variable, it would work, but someone would still have to find the VPC ID and input the value; a lot of manual work!

What if we could fetch the data from the previous stack and let Terraform figure this out? We need to add the following data block to our Terraform stack:

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"
    }
  }
}

Let me explain how to interpret the remote state:

data "terraform_remote_state" "vpc" {
  backend = "remote"

The section above declares a remote state data source named “vpc” with a backend type of remote (Terraform Cloud).

  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"

In the section above, we set up the config that allows us to fetch the needed data from the remote state file. Notice that there are two required inputs:

  • organization
  • workspace name
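For the data source to have anything to return, the 1-poc-network workspace has to publish the value as a Terraform output. A sketch of what that might look like (the module name module.vpc is an assumption; exposing the whole module object is what makes the vpc_id.vpc_id reference later in this post work):

# In the 1-poc-network workspace
output "vpc_id" {
  description = "VPC object consumed by downstream workspaces"
  value       = module.vpc
}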

Now that we have our remote state set up, let’s change the module code to fetch the VPC ID from it:

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"
    }
  }
}

module "vote_service_sg" {
  source = "terraform-aws-modules/security-group/aws"
  name        = "access-security-group"
  description = "Security group for user-service with custom ports open 
  within VPC, and PostgreSQL publicly open"
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id
  ingress_cidr_blocks      = ["10.10.0.0/16","10.10.105.0/24","78.1.10.100/32"]
  ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8090
      protocol    = "tcp"
      description = "User-service ports"
      cidr_blocks = "10.10.0.0/16"
    },
    {
      rule        = "postgresql-tcp"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
  tags = var.default_tags
}

Notice that vpc_id now points to an output value read from the remote state of the 1-poc-network workspace.

data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id

As you can see, the code is now reusable, extracting output information from remote state instead of hard-coding it.

Using this method, we can create dependencies between our Terraform stacks and use remote state to extract the required attributes. I hope this helped you understand how the remote backend and remote state work; try it out yourself!

Terraform Cloud Series – Part 3 (Connect Workspace Trigger)

Reading Time: 4 minutes

In the previous posts, we covered how to get started with Terraform Cloud and set up VCS with our source repository. In this post, we will look at how to use the run “trigger” capability for dependent workspaces/stacks.

For the purpose of the demo, I will create the following resources using the trigger feature of TF Cloud, in the following stack order:

  • 1-poc-network
  • 2-poc-security-groups
  • 3-poc-buckets

1-poc-network

This creates the required network, i.e. VPC, subnets, IGW, and SG resources needed for AWS EC2 instances and other resources that require a network.

2-poc-security-groups

This creates an application-specific security group for the purpose of the demo.

3-poc-buckets

Additional resources needed to support the application i.e. S3 buckets, policies, etc.

Putting it all together, here is how the visualization looks:

Essentially, we are creating a job dependency, but my experience with the trigger has been mixed, as there seem to be a lot of limitations around checks and balances. In my opinion, the workflow above is good for a repeatable, idempotent process, where you don’t care if it is executed multiple times and you expect the same result regardless of the number of executions.

What I experienced is that if the parent job errors during the apply phase, TF Cloud will still trigger the downstream jobs; there seems to be no way to tell a downstream job that its parent failed. Regardless of these limitations, it is still a good feature for setting up simple chaining. If you need a more task-driven setup, in my opinion, GitLab CI/CD is a better tool.

Now let’s look at the workspaces and how to set up the trigger for child jobs:

If we look at the 1-poc-network workspace, under the run trigger settings we have the option to attach workspaces.

Note: Even if the run trigger is set up, a child job can still be executed on its own or via a VCS code commit.

Notice that I don’t have a trigger set up on the parent job; that is because the trigger is configured on 2-poc-security-groups and fires when 1-poc-network is executed! Yes, I know it is confusing; it took me by surprise too!

So, let’s look at the trigger properties for 2-poc-security-groups:

So basically we are saying that when the 1-poc-network job is executed, TF Cloud should also execute 2-poc-security-groups. Now let’s also look at 3-poc-buckets:

Now you get the idea of how the flow works! Also, if you are planning on taking the HashiCorp Terraform Associate exam, TF Cloud knowledge is a plus and will help you pass. I will do another post on the TF Associate exam.
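As a side note, if you prefer to keep this wiring in code rather than clicking through the UI, the tfe provider can manage run triggers too. A rough sketch, assuming the provider is already configured with an API token (the organization and workspace names are the ones used in this demo):

data "tfe_workspace" "network" {
  name         = "1-poc-network"
  organization = "securectl-poc"
}

data "tfe_workspace" "security_groups" {
  name         = "2-poc-security-groups"
  organization = "securectl-poc"
}

# Queue a run in 2-poc-security-groups whenever 1-poc-network runs
resource "tfe_run_trigger" "sg_after_network" {
  workspace_id  = data.tfe_workspace.security_groups.id
  sourceable_id = data.tfe_workspace.network.id
}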

Trigger the parent job

Now, let me trigger the job (in this case from a git commit); as soon as I commit, a Terraform run is scheduled and executed.

Notice that it picked up the trigger; TF Cloud will execute the dependent workspace after the apply completes for the source job.

Just like before, 2-poc-security-groups also detected the downstream trigger:

Notice that there is nothing to do here, as my bucket was already created. However, when I changed the bucket name in the repo, the job still executed independently.

Conclusion

The Terraform Cloud run trigger feature allows users to create stack dependencies when working with a large stack. It is a good method when you need multiple connected workspaces, or when you are changing dependent resources that require a complete teardown and re-create.

AWS Pricing Calculator (New)

Reading Time: 3 minutes

Recently I needed to create a quote for AWS infrastructure and noticed that AWS is switching from the “AWS Simple Monthly Calculator” to the “AWS Pricing Calculator” – so, let’s give it a try.

The process is pretty straightforward: you punch in some inputs and AWS generates a TCO estimate for your AWS kit. There is a bit of a learning curve, but it is not bad.

https://calculator.aws/#/addService


Once you click the URL, you start with a blank pricing sheet that lets you add services and simply input your requirements.

For instance, let’s say we need to provision 10 EC2 instances: simply click Configure and add your inputs.

There are two methods:

  • Quick estimate
  • Advanced estimate

For this demo, I am sticking with a quick estimate!

Check out this nice feature: I just plug in my numbers for “vCPUs” and “Memory” and AWS automatically suggests that I should use “r5a.8xlarge” – this is pretty nice, since I don’t have to scramble to figure out which instance shape to select for my use case.

Next, I need to define how many EC2 instances to add.


Great, but what about the pricing model? Not to worry! The new pricing calculator allows us to select a pricing model:

Another example with “Standard Reserved Instances”:

Next, we can add storage for EBS block volume:

Finally, we add the EC2 estimate to the overall pricing estimate and continue to work through additional resources.

Give it a try! It is free!

Attached is an example of the exported output from the Pricing Calculator:

Getting started with AWS Athena – Part 4

Reading Time: 2 minutes

In the previous blog (Part 3), I compared a basic workload on Athena against other query engines, both on-prem and cloud-based. In this post, we will dive a bit deeper and understand how the service works and how Amazon built Athena.

First, let’s understand the service flow. The figure below explains how the AWS Athena service works and how you can take cold data and run analytics on the dataset.

Athena flow

Let’s break down the entire flow:

  • When you create a table, the table metadata is stored in the metastore (indicated with the red arrow).
  • The table definition has a reference to where the data resides in the S3 bucket (blue pointers).
  • Athena also creates an S3 bucket to store service logs and query output (dotted line).
  • AWS Athena relies on the Presto in-memory query engine for fast query analytics.
  • The results can either be displayed on the Athena console or pushed to AWS QuickSight for slicing and dicing.
  • AWS QuickSight is a great way to understand, slice and dice data, and publish dashboards.

There are some limitations with AWS Athena, listed below:

Athena service limitations:

  • Parallel submits: 1
  • Parallel query executions: 5
  • Number of databases: 100
  • Tables per database: 100
  • Partitions per table: 20K
  • S3 bucket: a dedicated log bucket for service outputs

Conclusion

Again, AWS Athena is a good way to start learning about your data quality and data trends, and to convert raw data into dashboards with a few clicks.

In Part 5, I will touch more on AWS Athena + QuickSight and how data can quickly be turned into dashboards.

Hope this post helps you understand the AWS Athena workflow. Comments and questions are welcome!

Thanks!

Getting started with AWS Athena – Part 3

Reading Time: 3 minutes

In the previous blog (Part 2), I created two tables using JSON and CSV formats. In this post (Part 3), I will talk about how to explore a dataset and query large data with predicate filtering and some basic inner joins using Athena. I will also compare the performance with an on-prem Hadoop cluster and AWS EMR.

For this benchmark, I am comparing the following platforms:

  • AWS EMR (1 master, 4 cores [m3.xlarge])
  • On-Prem Hadoop cluster (4 nodes)
    • Hive
    • Impala
    • Hive+Spark
  • AWS Athena

First I need to set up my tables, again using a similar method to the previous blog; I simply generated my DDL and created an “external” table on top of my S3 dataset.

Before I create the tables, I should give readers some context about the dataset. I downloaded it from data.gov and I am using the “Consumer Complaint” dataset. For accessibility, I am providing the direct link to the dataset:

https://catalog.data.gov/dataset/consumer-complaint-database

Data.gov provides quite an extensive amount of open data which can be used for benchmarks and data discovery. I downloaded the CSV-formatted file and converted it to JSON. I did some testing with the JSON file, but the numbers don’t seem to be accurate, so I will not include them in this post for now.

DDL for Text Table


 CREATE EXTERNAL TABLE IF NOT EXISTS default.Consumer_Complaint_csv (
 `date_received` string,
 `product` string,
 `sub-product` string,
 `issue` string,
 `sub-issue` string,
 `consumer-complaint-narrative` string,
 `company-public-response` string,
 `company` string,
 `state` string,
 `zip_code` int,
 `tags` string,
 `consumer-consent-provided` string,
 `submitted_via` string,
 `date_sent` string,
 `company-response` string,
 `timely_response` string,
 `consumer-disputed` string,
 `complaint_id` string 
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
 'serialization.format' = ',',
 'field.delim' = ','
) LOCATION 's3://athena1s3bucket/csv/';

Great, I have my table created; now let’s execute a basic query to make sure I can access the data in the S3 bucket.

select count(*) from Consumer_Complaint_csv;

I created similar tables on AWS EMR and the on-prem Hadoop cluster, using the DDL below:

create external Table default.Consumer_Complaint_csv
(
 `date_received` string,
 `product` string,
 `sub-product` string,
 `issue` string,
 `sub-issue` string,
 `consumer-complaint-narrative` string,
 `company-public-response` string,
 `company` string,
 `state` string,
 `zip_code` int,
 `tags` string,
 `consumer-consent-provided` string,
 `submitted_via` string,
 `date_sent` string,
 `company-response` string,
 `timely_response` string,
 `consumer-disputed` string,
 complaint_id int
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
 'serialization.format' = ',',
 'field.delim' = ','
) LOCATION '/tmp/Consumer/csv' 
;

HDFS Cluster config:

Now that I have my tables created in all environments, let’s start executing some queries for benchmarking. I used the queries below to extract data; for consistency, the structure is the same across the board.

Q1  – Simple count

select count(*) from consumer_complaint_csv;

Q2 – Simple count with predicate filter

select count(*), zip_code From consumer_complaint_csv where state = 'TX' 
group by zip_code having count(*) > 20 ;

Q3 – Self inner join with predicate filter

select a.product, a.company , a.issue , count(*) as count_ttl
from default.Consumer_Complaint_csv a 
join default.Consumer_Complaint_csv b 
on (a.company = b.company)
where a.state = 'TX'
group by a.product, a.company , a.issue 
having count(*) > 50;

Q4 – Self inner join with predicate filter and in list

select a.product, a.company , a.issue , count(*) as count_ttl
from default.Consumer_Complaint_csv a 
join default.Consumer_Complaint_csv b 
on (a.company = b.company)
where a.state = 'TX'
and a.product in ('Debt collection','Credit card')
group by a.product, a.company , a.issue 
having count(*) > 50;

Looking at the queries, there is nothing fancy, just simple SQL. My goal here is to measure performance and see whether AWS Athena holds up to its promise. I’m sure I could get better performance with Parquet or ORC files, but the goal here is to see if the service works. I can say that I am impressed; not having to worry about what is under the hood or the infrastructure makes it a good tool.

Now let’s look at the numbers:

Benchmark results chart

Note: All the timings above are in seconds.

One thing to note is that the dataset size is 500 MB for Q3 and Q4 due to the self join; for Q1 and Q2 the dataset size is 263 MB.

Conclusion

On the performance front, it is not bad but not great. Keeping in mind that I don’t have to pay for the underlying infrastructure, only for my query executions, that’s bang for the buck!

Overall I like the performance and I will certainly leverage Athena for my future designs.

I am not saying one should ditch the Hadoop cluster or EMR and start using Athena for ongoing operations. I think Athena has its place in the toolkit; it can be a good starting point for data discovery and understanding data when you do not know its quality.

Hope this post helps you understand a bit more about AWS Athena as a service. Do give it a try yourself and let me know your thoughts and comments.

Thanks!

Getting started with AWS Athena – Part 2

Reading Time: 2 minutes

In the previous blog, I talked about how to get going with Athena as a service. In this post, we will create two types of tables:

  • A table on existing text data in an S3 bucket
  • A table on an S3 bucket with JSON-formatted data

From looking at the structure, AWS is effectively exposing S3 storage like HDFS external storage via HiveServer2 and the Hive metastore. So suppose you have a table in Hadoop and need to migrate the data to S3, or you are landing the data into S3 directly; features like Hive SerDes can transform your dataset on the fly when it is accessed from the S3 bucket. Interesting feature; now let’s see it in action.

I assume you already have some data in an S3 bucket, or have copied data from a source into one. I might do another post explaining how to extend your HDFS cluster with S3.

The first table that I will create is based on a comma-separated text file:

DDL for Text Table

CREATE EXTERNAL TABLE IF NOT EXISTS default.kinesis_stream_test (
 `trans_id` string,
 `trans_num` string,
 `source_ip` string,
 `kinesis_stream_event` string 
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
 'serialization.format' = ',',
 'field.delim' = ','
) LOCATION 's3://myS3bucket/kinesis_stream/text/'

The interesting thing to notice here is that I am not using the “create table” wizard; I can simply generate my DDL and create an external table on top of my data. All I need is my S3 location and my table is ready. I used a similar method for both the text dataset and the JSON-formatted table.


Sample dataset staged in S3 bucket:

ff6d3ee7-cd2b-4d62-8a5c-3e50ff96f120,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
92abd107-333b-4749-8b4c-64fa78a6f8d3,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
a67deb57-36c6-48cd-99b3-0cb89814c1bf,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
96dafd35-b8d1-4334-bb01-73796e823be5,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
2eec7e53-e696-4a93-a541-1f3f2757e804,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
b16e4fa4-8971-4dd9-9701-20a60f603618,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
1e8be437-66d8-4262-9409-5638f7305c33,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
b7345fde-73e5-4fb2-945d-7189af290ea9,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006
ac324869-aa5f-4c7b-865d-210ec79a14d8,201701210601,ip-172-31-36-113,AWS Kinesis stream test # 2017012106011484980006

DDL for JSON Table

CREATE EXTERNAL TABLE IF NOT EXISTS default.kinesis_stock_stream (
 `change` double,
 `price` double,
 `sector` string,
 `ticker_symbol` string 
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
 'serialization.format' = '1'
) LOCATION 's3://aeg-awsfirehose/2017/01/';
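To confirm the JSON SerDe is parsing the stream records, a quick aggregate query like the one below does the trick (the column names come from the DDL above; “change” is double-quoted because it is being used as an identifier):

SELECT ticker_symbol,
       count(*)                AS events,
       round(avg(price), 2)    AS avg_price,
       round(sum("change"), 2) AS net_change
FROM default.kinesis_stock_stream
GROUP BY ticker_symbol
ORDER BY events DESC
LIMIT 10;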

Getting Started with AWS Athena – Part 1

Reading Time: 3 minutes

At the last AWS re:Invent, AWS announced a new service called “Athena” (the Greek goddess of reason). There has been a lot of fuss about the AWS S3 service, and as I am getting more and more comfortable with the AWS platform, I thought I would put Athena to the test.

In this five-part series, we will do a deep dive into how the service works out of the box and with some customizations in the mix.

So, let’s get our hands in the mix

  • Login to your AWS console
  • Search for Athena service
  • Once on the landing page, you get the option to create an ELB table
  • You will get to a tutorial page; let’s use that for our initial table creation.

  • The wizard will ask you to create a new table

  • To keep the structure simple, I created a new metadata database for all my Athena tables
    • DB Name = my_athena_db
    • Table name = my_first_athena_table
    • S3 example location = s3://athena-examples-us-east-2/elb/plaintext

  • Next, since we are going to mine web logs, AWS provides the RegEx for us

  • Next, pre-populate the columns using the prebuilt method

  • Verify and validate that everything looks good

  • Execute “Run Query”

So, I created an external table using the S3 bucket; the DDL is below:

CREATE EXTERNAL TABLE IF NOT EXISTS my_athena_db.my_first_athena_table (
  `request_timestamp` string,
  `elb_name` string,
  `request_ip` string,
  `request_port` int,
  `backend_ip` string,
  `backend_port` int,
  `request_processing_time` double,
  `backend_processing_time` double,
  `client_response_time` double,
  `elb_response_code` string,
  `backend_response_code` string,
  `received_bytes` bigint,
  `sent_bytes` bigint,
  `request_verb` string,
  `url` string,
  `protocol` string,
  `user_agent` string,
  `ssl_cipher` string,
  `ssl_protocol` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1',
  'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*):([0-9]*) ([.0-9]*) ([.0-9]*) ([.0-9]*) (-|[0-9]*) (-|[0-9]*) ([-0-9]*) ([-0-9]*) \\\"([^ ]*) ([^ ]*) (- |[^ ]*)\\\" (\"[^\"]*\") ([A-Z0-9-]+) ([A-Za-z0-9.-]*)$'
) LOCATION 's3://athena-examples-us-west-2/elb/plaintext/';

Notice that the format above looks quite similar to a Hive SerDe table.

Now that I have my table created, let’s take a look at what we can do with it.

select count(*) from my_athena_db.my_first_athena_table;
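Beyond a simple count, Athena accepts ordinary ANSI SQL against the same table. For example, a quick breakdown by ELB response code using the columns from the DDL above:

SELECT elb_response_code,
       count(*) AS requests,
       round(avg(backend_processing_time), 3) AS avg_backend_seconds
FROM my_athena_db.my_first_athena_table
GROUP BY elb_response_code
ORDER BY requests DESC;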

Pretty cool! I just extracted data from an S3 bucket without having any database or Hadoop cluster. I don’t think performance will be stellar with Athena, but depending on the use case this service might be great and cost-effective.

Now that we are on the subject of cost: with Athena, you only pay for the queries that you execute. There is no separate compute or storage cost (storage charges still apply for S3).

Another thing that I noticed: the Athena front-end/UI looks very similar to Cloudera HUE, but I can’t find any documentation referencing that.

But from an offering standpoint, I think Athena will do great! Comments and feedback are welcome!

Note: Athena is currently only available in Northern Virginia, Ohio, and Oregon.