Archive of posts from 2018

AWS Certification SME

The AWS Certification SME program helps the AWS Certification team develop the certification exams. It's a complicated process with many steps, which I won't get into now. However, I have now done two workshops covering two different steps: an item-writing workshop back in November and now a standard-setting workshop.

The most interesting aspect is that fellow practitioners create the exams; the certification team provides people to facilitate, validate, and review the content.

The questions are designed to have you apply AWS experience and knowledge to situations. Someone asked whether labs could be a replacement; maybe running through a hundred labs would be the equivalent of real-world experience.

Taking the courses, reading all the FAQs and whitepapers, and watching all the 400-level re:Invent videos would be the minimum.

Master's in Computer Science

I completed my Master's in Computer Science through the Georgia Institute of Technology. It took 3.5 years and hundreds of hours each semester; normally I would take two classes per term. Saying it was hard would be an understatement. The hardest part was the hours lost to studying and away from my family. It was like having a second full-time job. I feel honored to be part of the 5th graduating class. I've met some great people along the way: my first group in CS6310, where only 3 of the original 5 made it through the class; the group from CS8803 Cyber-Physical Design and Analysis, who happened to be in the on-campus class and whom I had the privilege of meeting in person when I opened a retail store in Atlanta; and groups in other classes too. The work even included taking one class twice in order to graduate.

I know the program is not meant to be easy. It is the brainchild of Dr. Zvi Galil, who will retire later this year. He and the faculty created something special, along with the unbelievable amount of work my fellow students put in as TAs. I hope this program continues to educate the masses and continues to grow and develop. I know it has made a positive impact on my life and hopefully, with time, I'll find ways to give back to the program.

Starting New Position with AWS

Today I officially started with Amazon Web Services as a Senior Cloud Architect. The position is with Professional Services working with Strategic Accounts.

I am looking forward to helping AWS customers continue to build on their cloud journey.

AWS re:Invent 2018

Every year tens of thousands of AWS customers and prospective customers descend on Las Vegas. For those of us who don't make the trek, Amazon live-streams the daily keynotes, where AWS announces its newest products and changes. Each year I build a wish list before November, as AWS has a tendency to leak smaller items. This year my wish list for AWS was as follows:

  1. Mixing sizes and types in ASG - Announced
  2. DNS fixed for Collapsed AD - Announced
  3. Cross-region replication for Aurora PostgreSQL - Regions expanded; still waiting on cross-region replication to be announced 
  4. Lambda and more Lambda integrations  - Announced 
  5. AWS Config adding machine learning based on account.  
  6. Account level S3 bucket control - Partly Announced 
  7. 40Gbps Direct Connect 

There were a lot of announcements, far too many to recap; if you're interested in them all, go read the AWS News Blog. I do like to find two announcements which shocked me and two things that seem interesting.

The two items which shocked me were:

  1. DynamoDB added transactional support (ACID). This means someone could build an e-commerce or banking application requiring consistent transactions on DynamoDB.
  2. AWS Outposts and Amazon RDS on VMware allow you to deploy AWS on-premises, with AWS managing it for you. I can only assume this is to help with migrations, or with workloads so sensitive they can't move off-premises. It will be interesting to see how AWS manages storage capacity and compute resources, as many companies struggle with these, and how the management model will work. However, given the push to move away from traditional data centers, this seems to reverse course. It will be interesting to see how it plays out over the next year and what these services provide a company migrating to the cloud.
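As a sketch of what the new transactional API enables (table, key, and attribute names here are hypothetical), a payload like the following, passed to `aws dynamodb transact-write-items --transact-items file://items.json`, either applies both writes or neither:

```json
[
  {
    "Put": {
      "TableName": "Orders",
      "Item": {
        "OrderId": { "S": "1001" },
        "Status": { "S": "PLACED" }
      }
    }
  },
  {
    "Update": {
      "TableName": "Inventory",
      "Key": { "Sku": { "S": "widget-1" } },
      "UpdateExpression": "SET Quantity = Quantity - :one",
      "ConditionExpression": "Quantity >= :one",
      "ExpressionAttributeValues": { ":one": { "N": "1" } }
    }
  }
]
```

If the inventory condition fails, the order Put is rolled back as well, which is exactly the consistency an e-commerce checkout needs.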

One of my passions is security, so the two things which interested me are:

  • AWS Security Hub and AWS Control Tower - I consider these one thing, as they will be used in tandem. Control Tower will provide a secure landing zone for an organization, while AWS Security Hub will provide governance and monitoring of security.
  • The Arm processor in the A1 instances, which Amazon developed internally. Based on pricing, these instances seem to offer cost advantages over the existing instance types.

What did you find interesting, amusing or shocking?   What were you looking for which wasn’t announced? 

AWS Certified DevOps Engineer - Professional

I sat the AWS Certified DevOps Engineer - Professional exam this afternoon. The exam is hard, as it is scenario-based. Most of the questions asked you to pick the best deployment solution among CloudFormation, Elastic Beanstalk, and OpsWorks. Every one of those questions had two good answers; it came down to which was more correct based on keywords like cost, speed, redundancy, and rollback capability.

I did the course on acloud.guru and read a lot of AWS pages. At some point I will make a page of all the links I collected while studying for this exam.

The exam took me about two-thirds of the allotted time; I read fast and have a tendency to flag questions I don't know the answer to, then come back later and work through them. On this exam, I flagged 20 questions. Most of them I could figure out once I thought about them for a while. Flagging questions and going back helps manage the time.

Upon submission, I got the “Congratulations! You have successfully completed the AWS Certified DevOps Engineer - Professional…”

I got my score email very quickly:

Overall Score: 82%

Topic Level Scoring:

1.0 Continuous Delivery and Process Automation: 79%
2.0 Monitoring, Metrics, and Logging:  87%
3.0 Security, Governance, and Validation:  75%
4.0 High Availability and Elasticity:  91%

That makes it my 7th AWS certification.

Amazon Certification

Last week, I had the privilege of attending an Item Development Workshop for the Associate Architect exam. I participated as a Subject Matter Expert, as the certification program pulls both Amazonians and industry professionals together to develop questions. I'm not going to go into details about the workshop or share any content because of the NDA. I do want to share three observations from my time in the workshop:

  1. AWS holds certification, and the validity and value of its certifications, in immense regard. The program is designed to recognize those who have AWS knowledge; the certification is not about memorization but the ability to learn, understand, and apply.
  2. The AWS certification team is amazing.
  3. AWS people are very intelligent and have a deep understanding of both AWS and technology.

The workshop was a fascinating learning experience, and I hope to continue to participate as an SME in other workshops.

What Have you Containerized Today?

I was listening to the Architech podcast, where the question was asked, "Does everything today tie back to Kubernetes?" The more general version of the question is, "Does everything today tie back to containers?" The answer is quickly becoming yes. Something Google figured out years ago with its internal environment, where everything was containerized, is becoming mainstream.

To support this, Amazon now has three different container technologies and one in the works.

ECS is Amazon's first container offering: container orchestration which supports Docker containers.

Fargate for ECS is a managed offering of ECS where all you do is deploy Docker images and AWS owns the full management. More exciting, Fargate for EKS has been announced and is pending release; this will be fully managed Kubernetes.

EKS is the latest offering, GA'd in June. It is a fully managed control plane for Kubernetes. The worker nodes are EC2 instances you manage, which can run an Amazon Linux AMI or one you create.

Lately, I've been exploring EKS, so the next blog article will be how to get started on EKS.

In the meantime, what have you containerized today?

Cloud Native Application Security

A new study sponsored by Capsule8, Duo Security, and Signal Sciences was published about cloud native application security. Cloud native applications are applications built specifically for the cloud. The study is entitled The State of Cloud Native Security, and its observations and conclusions are interesting. What was surprising is the complete lack of discussion of moving from traditional SecOps to a SecDevOps model.

The other item, which I found shocking given all the recent breaches: page 22 shows that only 71% of the surveyed companies have a SecOps function.

Devops needs SecDevOps

DevOps is well defined and has a great definition on Wikipedia. We could argue all day about who is really doing DevOps (see this post for context). Let's assume there is an efficient and effective DevOps organization. If that is the case, DevOps requires a partner in security. Security needs to manage compliance, security governance, requirements, and risks, which requires functions in development, operations, and analysis. How can security keep up with effective DevOps? By building a DevOps organization for security, which we call SecDevOps. SecDevOps is about automating and monitoring compliance, security governance, requirements, and risks, and especially updating analysis as DevOps changes the environment.

As organizations support DevOps but don't seem as ready to support SecDevOps, how can security organizations evolve to support DevOps?

Future of Software

Open source has been around for decades, but the real initiatives started in 1998. Due to some recent experiences, I started pondering open source and the future of software.

I believe the future of software is open source, with a company wrapping enterprise support around it.

Take any open source software: if you need a feature, typically someone has built it. If they haven't, your team ends up creating it and contributing it back to the project. Open source has the power of community, versus a company with a product manager and deadlines to ship against a roadmap built by committee. That's too simple, though; open source has a product manager too, really a gatekeeper in most communities. They own accepting features and setting direction; in some cases it's the original developer, as with Linux, and sometimes it's a committee. At the end of the day, both commercial and open source software have a product owner.

It's an interesting paradox which creates two opposing questions. First, why isn't all software open source? Second, why would a company which has spent millions in development give the software away and charge for services?

The answer to the first question is: see question two. The answer to the second question is that giving away software is not financially viable after millions have been invested, unless a robust support model is funding the development of the software.

I have worked for many organizations whose IT budget was lean and agile; open source fit the minimal budget dollars. I have worked for other organizations whose budget is exceptionally robust and whose governance requires supported software.

Why not replace the license model with a support model, and allow me, or more importantly the community, access to the source code to contribute and drive innovation? Based on users, revenue, or some other metric, charge me for support or allow me to opt out. Seems like a reasonable future to me.

Data-safe Cloud...

Amazon recently released a presentation on the Data-safe Cloud. It appears to be based on some Gartner research and other data AWS collected. The presentation discusses 6 core benefits of a secure cloud.

  1. Inherit Strong Security and Compliance Controls
  2. Scale with Enhanced Visibility and Control
  3. Protect Your Privacy and Data
  4. Find Trusted Security Partners and Solutions
  5. Use Automation to Improve Security and Save Time
  6. Continually Improve with Security Features.  

I find this marketing material confusing at best; let's analyze what it is saying.

Point 1, Inherit Strong Security and Compliance Controls, references all the compliance certifications AWS achieves. However, it loses track of the shared responsibility model and doesn't even mention it until page 16. Amazon has exceptional compliance in place, which most data center operators or SaaS providers struggle to achieve. That does not mean my data or services running within the Amazon environment meet those compliance standards.

Points 2, 4, and 6 are not benefits of a secure cloud. They might be high-level objectives one uses to form a strategy for getting to a secure cloud.

Point 3 I don't even understand: the protection of privacy and data has to be the number one concern when building out workloads in the cloud or in private data centers. It's not a benefit of a secure cloud but a requirement.

For point 5, I am a big fan of automation and automating everything. Again, this is not a benefit of a secure cloud, but rather how to have a repeatable, secure process wrapped in automation, which leads to a secure cloud.

Given all the negative press around cloud and security, including the recent GoDaddy AWS S3 bucket exposure, Amazon should be publishing better content to move the security discussion forward.

Security as Code

One of the things I've been fascinated by of late is the concept of Security as Code. I've just started to read the book DevOpsSec by Jim Bird. One of the things the book talks about is injecting security into the CI/CD pipeline for applications: merging developers and security, as DevOps merged developers and operations. I've argued for years that DevOps is a lot of things, but fundamentally it was a way for operations to become part of the development process, which led to the automation of routine operational tasks and recovery. If we look at DevOpsSec, this assumes security is part of the development process, and I mean more than just standard code analysis using Veracode. What would it mean if security processes and recovery could be automated?

In Security Operations Centers (SOCs), people interpret security events and react. Over the last few years, many of the improvements in SOCs have come from AI and machine learning, reducing the head count required to operate a SOC. What if security operations were automated? Could code be generated based on security triggers and provided to the developer for review and incorporation into the next release?

We talk about infrastructure as code, where data can be used to create rules and infrastructure through automation. Obviously, on AWS you can install security-tool AMIs, Security Groups, and NACLs with CloudFormation. My thoughts go to firewall AMIs: appliances for external access. The access lists those appliances require are complex and demand enormous review and processing within an organization. Could access lists be constructed from a mapping of the code and automatically generated for review? Could a generated access list be compared against existing access lists to detect duplicates?

It’s definitely an interesting topic and hopefully evolves over the next few years. 

AWS Release T3 Instances

Earlier today AWS released T3 instances. There are a bunch of press releases on the topic. The performance is supposed to be 30% better than T2. Hopefully, in the next few days, independently published benchmarks will confirm whether the instances are 30% faster. In the interim, go to the Amazon pages for all the details on T3 instances. The cost is a few cents less; for example, a no-upfront reserved instance went from .17 cents to .15 cents moving from T2.small to T3.small in the US-WEST-2 region.

Before today, awsarch.io ran on T2 instances; to write this blog article, it was updated to T3 instances. AWS makes it easy to change instance type: just shut down the instance, and from the AWS console go to Instance Settings -> Change Instance Type, then select the appropriate T3 instance. It can be done via the AWS CLI as well.

Change Instance

T3 instances are EBS-optimized by default, and EBS optimization on T3 provides additional IOPS. Here is the link for the complete EBS-optimized information.

T3 EBS Optimized

The T3 instance uses an ENA adapter, so before starting your instance, enable ENA support through the AWS command line (substitute your own instance ID):

aws ec2 modify-instance-attribute --instance-id <instance-id> --ena-support

Lastly, I noticed the device names changed. Before the change to T3, the EBS volume devices in the Linux /dev directory were /dev/xvdf1, /dev/xvdf2, etc. Now the devices are /dev/nvme1n1p1, /dev/nvme1n1p2, etc. Something to keep in mind if you have additional volumes with mount points on the EC2 instance.
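One way to insulate your mounts from this renaming (the UUID and mount point below are placeholders) is to reference volumes in /etc/fstab by filesystem UUID, as reported by `blkid` or `lsblk -f`, instead of by device path:

```
# /etc/fstab entry: mounting by UUID survives the xvdf -> nvme rename
UUID=0a1b2c3d-1111-2222-3333-444455556666  /data  ext4  defaults,nofail  0  2
```

The `nofail` option also keeps the instance booting if the volume is ever detached.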

AWS Logging Solution

Amazon generates a lot of logs via VPC Flow Logs, CloudTrail, S3 access logs, and CloudWatch (see the end of this blog article for a full list). Additionally, there are OS, application, and web server logs. That is a lot of data, which provides valuable insight into your running AWS environment. What are you doing to manage these log files? What are you doing with them? How are you analyzing them?

There are a lot of logging solutions available that integrate with AWS. Honestly, I'm a big fan of Splunk and have set it up multiple times. However, I wanted to look at something else for this blog article: something open source and relatively low cost. This post explains how I set up Graylog. Graylog has no charges for the software, but you will be charged for the instance, Kinesis, SQS, and data storage. It's actually a good exercise to familiarize yourself with AWS services, especially for the SysOps exams.

Graylog provides great instructions. I followed the steps; remember to use their image, which is pre-built on Ubuntu. One difference with my setup: I didn't use a 4GB memory system. I picked a t2.small, which provides 1 vCPU and 2GB of memory, and I didn't notice performance issues. Remember to allow ports 443 and 9000 in the security groups and network ACLs. I prefer to run this over HTTPS, and it bugs me when you see NOT SECURE on HTTP, so I installed an SSL certificate. Here is how I did it:

  1. Create a DNS name 
  2. Get a free certificate 
  3. Install the certificate 

Now my instance is up, and I can log into the console. I want to get my AWS logs into Graylog. Doing this requires the logs to be sent to Kinesis or SQS. I am not going to explain the SQS setup, as there are plenty of resources for each specific AWS service, and the Graylog plugin describes how to do this. The Graylog plugin for CloudTrail, CloudWatch, and VPC Flow Logs is available on GitHub as the Graylog Plugin for AWS.

What about access logs? Graylog has the Graylog Collector Sidecar. I'm not going to rehash the installation instructions here, as Graylog has great documentation. If you are looking for something not covered here, it will be in the documentation or in their GitHub project.

What are you using as your log collection processing service on Amazon?  

List of AWS services generating logs:

  • Amazon S3 access logs
  • Amazon CloudFront access logs
  • Elastic Load Balancer (ELB) logs
  • Amazon Relational Database Service (RDS) logs
  • Amazon Elastic MapReduce (EMR) logs
  • Amazon Redshift logs
  • AWS Elastic Beanstalk logs
  • AWS OpsWorks logs
  • AWS Import/Export logs
  • AWS Data Pipeline logs
  • AWS CloudTrail logs

My Favorite Cloud Update for July

All three cloud platforms, AWS, Google Cloud, and Azure, release features all the time. However, Google Cloud took a major leap by giving developers a great tool through its IntelliJ integration. Google did a great job covering how it works on its Cloud Platform blog, which is worth reading.

I have used Eclipse since it was released; prior to that I used Emacs. However, for my master's program over the last 3 years, I have been using IntelliJ. It's become my go-to platform for coding because IntelliJ is easy to use, and my various class groups typically use it. JetBrains makes IntelliJ free for students, which is a great way to build a user base given its price tag.

Providing an easy-to-use tool with an existing user base was smart of Google Cloud, especially as it continues to close the gap with AWS.

Finally, I’m not a big fan of Cloud9 on AWS.   What do you think?   Are you an IntelliJ or Cloud9 user?

Providing 10Gbps and 40Gbps Ports but Less Throughput

A longtime issue with networking vendors is providing ports at one speed and throughput at another. I remember dealing with it back in 2005 with the first generation of Cisco ASAs, which primarily replaced the PIX firewall. Those firewalls provided 1Gbps ports, but the throughput the ASA could handle was about half that bandwidth.

Some marketing genius created the separate terms wire speed and throughput.

If you're curious about this, go look at the Cisco Firepower NGFW firewalls. The 4100 series has 40Gbps interfaces, but depending on the model, throughput is between 10Gbps and 24Gbps with FW+AVC+IPS turned on.

I have referenced several Cisco devices, but this is not an issue specific to Cisco. Take a look at Palo Alto Networks firewalls: the PA-52XX series has four 40Gbps ports but supports between 9Gbps and 30Gbps of throughput with full threat protection on.

The technology exists, so why aren't networking vendors able to provide wire-speed throughput between ports even with full inspection of traffic turned on? I would very much like to know your thoughts on this topic; please leave a comment.

HTTP Get or Post DDoS attacks

DDoS attacks are all too frequent on the internet. A DDoS attack sends more requests than can be processed. Many times, the requester's machine has been compromised to become part of a larger DDoS network. This article is not going to explain all the various types, as there is a whole list of them here. Let's discuss a particular type of DDoS attack designed to overwhelm your web server. The traffic appears to be legitimate requests using GET or POST: a GET for /index.html or any other page at 50 requests per minute, or a POST hitting your myApi.php and attempting to post data at 50-plus requests per minute.

This post focuses on some recommendations using AWS and other technologies to stop a recent HTTP DDoS attack. The first step is to identify the DDoS attack versus regular traffic. The second question is how to mitigate an HTTP DDoS attack.

Identifying a DDoS attack: the first step is to understand your existing traffic. If you normally see 2,000 requests per day and all of a sudden you have 2,000,000 requests in a morning, it's a good indication you're under a DDoS attack. The easiest way to identify this is to look at the access_log and pull it into a monitoring service like Splunk, AlienVault, Graylog, etc. From there, trend analysis in real time will show the issues. If the web servers are behind an ALB, make sure the ALB is logging requests and that those requests are analyzed instead of the web server access logs; the ALB still supports X-Forwarded-For, so the client IP can be passed through.
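As a quick first pass before reaching for Splunk or Graylog, a one-liner over the access log surfaces the top requesters; the log lines and IPs below are made-up samples:

```shell
# Hypothetical sample of combined-format access log lines.
cat > /tmp/access_log.sample <<'EOF'
203.0.113.7 - - [01/Aug/2018:10:00:01 +0000] "POST /myApi.php HTTP/1.1" 200 98
203.0.113.7 - - [01/Aug/2018:10:00:02 +0000] "POST /myApi.php HTTP/1.1" 200 98
203.0.113.7 - - [01/Aug/2018:10:00:03 +0000] "GET /index.html HTTP/1.1" 200 512
198.51.100.9 - - [01/Aug/2018:10:00:04 +0000] "GET /about.html HTTP/1.1" 200 256
EOF

# Count requests per client IP (field 1) and sort descending; a few IPs
# dwarfing the rest is a quick indicator of an HTTP GET/POST flood.
awk '{count[$1]++} END {for (ip in count) print count[ip], ip}' /tmp/access_log.sample | sort -rn
```

Behind an ALB, run the same count over the X-Forwarded-For value rather than the first field, since field 1 will just be the ALB's address.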

Preventing a DDoS attack: there is no way to truly prevent an HTTP DDoS attack. For this specific event, the following mitigation techniques were explored:

  1. AWS Shield, which provides advanced DDoS protections; paired with AWS WAF, there are rate-based rules to limit requests.

  2. The freeware route: Apache and NGINX both support rate limiting for specific IP addresses. In Apache, this is implemented by a number of modules; ModSecurity is usually at the top of the list, and a great configuration example, including X-Forwarded-For handling, is available on GitHub.

  3. An EC2 instance in front of the web server can run as a proxy. The proxy can be configured to suppress the traffic using ModSecurity or Marketplace offerings, including other WAF options.

  4. The ALB or CloudFront can deploy an AWS WAF.

  5. Lastly, the most expensive option is to deploy Auto Scaling groups to absorb all the traffic.
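For option 2, the ModSecurity approach can be sketched with per-IP collections; the rule IDs, threshold (50 requests), and 60-second window below are illustrative, not a vetted production rule set:

```apacheconf
# Track each client IP in a persistent collection.
SecAction "id:100001,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

# Deny with 429 once the per-IP counter passes 50 within the window.
SecRule IP:REQUESTS "@gt 50" "id:100002,phase:1,deny,status:429,log,msg:'Rate limit exceeded'"

# Increment the counter on every request; expire it after 60 seconds.
SecAction "id:100003,phase:5,nolog,pass,setvar:ip.requests=+1,expirevar:ip.requests=60"
```

Behind an ALB, the counter should key off the X-Forwarded-For client address rather than REMOTE_ADDR, or every request will appear to come from the load balancer.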

Please leave a comment if there are other options which should have been investigated.

To solve this specific issue, an AWS WAF was deployed on the ALB. One other thing to consider: make sure attacks can't bypass the ALB and hit the web servers directly. This is easily accomplished by allowing HTTP/HTTPS from anywhere only to the ALB, with the ALB and EC2 instances sharing a security group which allows HTTP/HTTPS only within that security group.

Cisco Press CCNP Route Books not aligned with CCNP Route Exam Blueprint

To my disappointment, having completely read the CCNP Routing and Switching ROUTE 300-101 Official Cert Guide and the Implementing Cisco IP Routing (ROUTE) Foundation Learning Guide (CCNP ROUTE 300-101) for the CCNP ROUTE exam, I found these books are not aligned with the exam blueprint.

Looking at the exam blueprint, topics like CHAPv2 and Frame Relay are still covered even though they are not used much anymore, yet CHAPv2 is not mentioned in either book. Secondly, technologies like IPsec VPN and MPLS, which are prevalent in deployments today, get little coverage in the books. Additionally, there are no real configuration examples for DMVPN.

Cisco Press claims these are the official certification guides for the exams, so it gives me great concern that the exam blueprint and the official certification guide are not in sync. Wendell Odom [https://www.certskills.com/], who wrote a number of the original certification guides, always did a great job matching the book to the exam blueprint and providing exercises to reinforce learning. He is no longer the author of the CCNP certification guides, as he focuses on CCNA Routing and Switching.

The last time I went through CCNP certification, I used the Cisco Press Exam Certification Guides and the Sybex CCNP books, which included exercises. Sybex no longer publishes CCNP books.

Before taking the test, I think I’ll find a lab workbook and execute the exercises on VIRL.

Starting a new position today

Starting a new position today as Consultant - Cloud Architect with Taos. Super excited for this opportunity.

I wanted a position as a solution architect working with the Cloud, so I couldn’t be more thrilled with the role.   I am looking forward to helping Taos customers adopt the cloud and a Cloud First Strategy.

It’s an amazing journey for me: Taos was the first company to offer me a position, as a Unix system administrator, when I graduated from Penn State some 18 years ago, and I passed on the offer and went to work for IBM.

I am really looking forward to working with the great people at Taos.

My Favorite Things About Amazon Well Architected Framework

Amazon released the AWS Well-Architected Framework to help customers architect solutions within AWS. The AWS certifications require detailed knowledge of the five whitepapers which make up the Well-Architected Framework. Given that I have recently completed six AWS certifications, I decided to write a blog post pulling my favorite lines from each paper.

Operational Excellence pillar - The whitepaper says on page 15, “When things fail you will want to ensure that your team, as well as your larger engineering community, learns from those failures.” It doesn’t say “If things fail”; it says “When things fail”, implying straight away that things are going to fail.

Security pillar - On page 18: “Data classification provides a way to categorize organizational data based on levels of sensitivity. This includes understanding what data types are available, where is the data located and access levels and protection of the data”. This, to me, sums up how security needs to be defined. Modern data security is not about firewalls, a hard outer shell, or malware detectors. It’s about protecting the data, based on its classification, from both internal actors (employees, contractors, vendors) and hostile actors.

Reliability pillar - The document is 45 pages long; the word “failure” appears 100 times and the word “fail” 33 times. The document is really about how to architect an AWS environment to respond to failure, and which portions of your environment, based on business requirements, should be over-engineered to withstand multiple failures.

Performance Efficiency pillar - On page 24 is the line, “When architectures perform badly this is normally because of a performance review process has not been put into place or is broken”. When I first read this line, I was perplexed: I immediately thought it implies a bad architecture can perform well if there is a performance review in place. Then I thought, when has a bad architecture ever performed well under load? Now I get the point it is trying to make.

Cost Optimization pillar - On page 2 is my favorite line from this whitepaper: “A cost-optimized system will fully utilize all resources, achieve an outcome at the lowest possible price point, and meet your functional requirements.” It made me immediately think back to before the cloud, when every solution had to factor in growth over the life of the hardware; it was part of the requirements. In the cloud you only need to support today’s capacity; if you need more capacity tomorrow, you just scale. This is one of the biggest benefits of cloud computing: no more guessing about capacity.

The Promises of Enterprise Data Warehouses Fulfilled with Big Data

Remember, back in the 1990s/2000s, data warehouses were all the rage. The idea was to take the data from all the transactional databases behind a company’s multiple e-Commerce, CRM, financials, lead generation, and ERP systems and merge them into one data platform. It was the dream; CIOs were ponying up big dollars for these because they thought it would solve finance, sales, and marketing’s most significant problems. It was even termed the Enterprise Data Warehouse, or EDW. A new EDW would take 18 months to deploy, as ETLs had to be written from the various systems and the data had to be normalized to work within the EDW. In some cases, the team made bad decisions about how to normalize the data, causing all types of future issues.

When the project finished, there would be this beautiful new data warehouse, and no one would be using it. The EDW needed a report writer to make fancy reports in a specialized tool like Cognos, Crystal Reports, Hyperion, SAS, etc. A meeting would be called to discuss data, with 12 people, and all 12 people would have different reports and numbers depending on the formulas in the report. Eventually, someone from Finance who was part of the analysis, budgeting, and forecasting group would learn the tool, become the go-to person, and work with the team from technology assigned to create reports.

Then Big Data came along. Big data even sounds better than Enterprise Data Warehouse, and frankly given the issues back in 1990s/2000s the branding to Big Data doesn’t have the same negative connotations.

Big Data isn’t a silver bullet, but it does a lot of things right. First and foremost, the data doesn’t require normalization; normalization is actually discouraged. Big Data absorbs the transactional database data, social feeds, eCommerce analytics, IoT sensor data, and a whole host of other data and puts it all in one repository. The person from finance has been replaced with a team of data scientists who are highly trained, develop analysis models, and extract data with statistical tools (the R programming language) and Natural Language Processing (NLP). The data scientists spend days poring over the data, extracting information, building models, rebuilding models, and looking for patterns. The data could be text, voice, video, images, social feeds, or transaction data, and the data scientist is looking for something interesting.

Big Data has huge impacts, as the benefits are immense. However, my favorite is predictive analytics. Predictive analytics tells you something’s likely behavior based on previous history and current data; it’s going to predict the future. Predictive analytics is all over retail, as you see it on sites as “Other Customers Bought” or recommendations based on your purchase history. Airlines use it to predict component failure on planes, investors use it to predict changes in stocks, and the list of industries using it goes on and on.

The cloud is a huge player in the Big Data space; Amazon, Google, and Azure all offer Hadoop and Spark as services. The best thing about the cloud is that when the data arrives in gigabytes or terabytes, the cloud provides the storage space for all of it. Lastly, given it’s in the cloud, it’s relatively easy to deploy a Big Data cluster, and hopefully, soon, AI in the cloud will replace the data scientists as well.

BGP Route Reflectors

Studying for the CCNP ROUTE 300-101 exam, I noticed there is no discussion of Border Gateway Protocol (BGP) route reflectors; the topic doesn’t even make the exam blueprint. Yet route reflectors are one of the most important elements for multi-homed, multi-location BGP. This blog post is not going to be a lesson in BGP, as there are plenty of resources that do a great job explaining the topic. Within an Autonomous System (AS), if there are multiple BGP routers, an iBGP full mesh is required; it’s a fancy way of saying all the BGP routers within an AS need to peer with each other. Take the example of a large company with Internet peering in New York, Atlanta, and San Francisco. If the company uses a single AS number, it has at least 3 BGP routers, and for business reasons the routers are redundant and dual-homed. That makes 6 BGP routers. Remember, the formula for a full mesh is N(N-1)/2, so 6 routers require 15 iBGP peering sessions. iBGP makes a logical connection over TCP, but it still needs 15 configurations. This is a small example, and it doesn’t scale: increase to 10 routers and that means 45 iBGP connections and configurations.
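
The full-mesh formula is easy to check with a few lines of Python:

```python
def ibgp_full_mesh_sessions(n: int) -> int:
    """Number of iBGP sessions required for a full mesh of n routers: N(N-1)/2."""
    return n * (n - 1) // 2

print(ibgp_full_mesh_sessions(6))   # 15 sessions for the 6-router example
print(ibgp_full_mesh_sessions(10))  # 45 sessions for 10 routers
```

The quadratic growth is exactly why the configuration burden gets out of hand as routers are added.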

What does a route reflector do?

A route reflector readvertises routes learned from internal peers to other internal peers. Only the route reflector needs a full mesh with its internal routers. The elegance of this solution is that it makes iBGP hierarchical.

With the previous example of 6 routers, there are many ways to organize the network with route reflectors: one cluster with two route reflectors, two clusters with two route reflectors each, etc.

The astonishing part is that something so fundamental to leveraging BGP is not covered on the CCNP ROUTE exam, according to the exam blueprint.

Exhaustion of IPv4 and IPv6

IPv4 exhaustion is technology’s version of Chicken Little and “the sky is falling.” The sky has been falling on this for 20+ years; we have been warned that IPv4 is running out since the late 1990s. Then came IoT and the Smart Home, which were supposed to strain the IPv4 space. I don’t know about you, but I don’t want my refrigerator and smart thermostat on the internet.

However, every time I go into AWS, I can generate an IPv4 address. Home ISPs are still handing out static IPv4 addresses if you are willing to pay a monthly fee. Enterprise ISPs will hand you a /28 or /29 block without too much effort. Sure, lots of companies (AWS, Google, Microsoft) have properties on IPv6, but it’s not widely adopted. The original RFC on IPv6 was published in December of 1995.

I believe the lack of adoption is due to the complexity of the address. If my refrigerator’s IPv4 address is 192.168.0.33, its IPv6 address might be 2001:AAB4:0000:0000:0000:0000:1010:FE01, which can be shortened to 2001:AAB4::1010:FE01. Imagine calling that into tech support, or being the tech support person taking that call. Why didn’t the inventors of IPv6 simply add octets to the existing IP address? For instance, the address 192.168.0.33.5.101.49 would have been so much more elegant and easier to understand. I think it will take another 15-20 years before IPv6 is widely adopted, and another 50 years before IPv4 is no longer routed within networks.
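
Python’s standard ipaddress module applies the IPv6 shortening rules automatically, which shows the long form and the shortened form are the same address:

```python
import ipaddress

addr = ipaddress.ip_address("2001:AAB4:0000:0000:0000:0000:1010:FE01")
print(addr.compressed)  # shortened form: 2001:aab4::1010:fe01
print(addr.exploded)    # full form: 2001:aab4:0000:0000:0000:0000:1010:fe01
```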

To The Cloud and Beyond...

I was having a conversation with an old colleague late Friday afternoon. (Friday was a day of former colleagues; I also had lunch with a great mentor.) He’s responsible for infrastructure and operations for a good-sized company, and his team is embarking on a project to migrate to the cloud, as their contract for data center space will be up in 2020. Three things in the discussion were interesting to me, and they are probably the same issues others face on their journey to the cloud.

The first was the concern about security. The cloud is no more or less secure than your data center. If your data center is private, your cloud assets can be private; if you need public-facing services, they would be secured like the public-facing services in your own data center. Data security is your responsibility in the cloud, but the cloud doesn’t make your data any less secure.

The second concern was the movement of VMware images to the cloud. Most of the environment was virtualized years ago; however, there are a lot of Windows 2003 and 2008 servers. Windows 2008 reaches end of support in 2020, and Windows 2003 has been out of support since July 2015. It’s odd to worry about cloud security given the age of the Windows environment. If it were my world, I’d figure out how to move those servers to Windows 2016, or retire the ones no longer needed, keeping in mind OS upgrades are always dependent on the applications. Right or wrong, my roadmap would leave Windows 2003 and 2008 in whatever datacenter facility is left behind.

Lastly, there was concern about serverless, and the application teams wanting to leverage it over his group’s infrastructure services. There was real concern about a loss of resources if the application teams turn toward serverless, as his organization would have fewer servers (physical/virtual instances) to support. Like many technology shops, infrastructure and operations headcount is formulated from the total number of servers. I find this hugely exciting: I would push resources from “keeping the lights on” to roles focused on growing the business and speed to market, which are the most significant benefits of serverless. Based on this discussion, people look at it from their own prism.

Power of Digital Note Taking

There are hundreds of note-taking apps. My favorites are Evernote, GoodNotes, and Quip. I’m not going to get into the pros and cons of each application; there are plenty of blogs and YouTube videos which do this in great detail. Here is how I use them:

  • Evernote is my document and note repository.

  • GoodNotes is for taking handwritten notes on my iPad, and the PDFs are loaded into Evernote.

  • Quip is for team collaboration and sharing notes and documents.

I’ve been digital for 4+ years. Today I read an ebook from Microsoft entitled “The Innovator’s Guide to Modern Note Taking,” as I was curious about Microsoft’s ideas on digital note-taking. The ebook is worth a read, and I found three big takeaways:

First - The ebook quotes, “average employee spends 76 hours a year looking for misplaced notes, items, and files.   In other words, we spend annual $177 billion across the U.S”.

Second - The ebook explains that the left side of the brain is used when typing on a keyboard, and the right side is used when writing notes by hand. The left side is more clinical, while the right side is more creative, particularly for asking the “What if” questions. Page 12 of the ebook also covers how handwriting notes improves retention. Lastly, page 13 has one of my favorites, as I am a doodler: “Doodlers recall on average 29% more information than non-doodlers”. There is a substantial difference between typing and writing notes, and there is a great article from NPR if you want to learn more.

Third - Leverage the cloud, whether it’s to share, process, or access anywhere.

Those are fundamentally the three reasons I went all digital for notes. As described above, I write notes in GoodNotes and put them in Evernote, using Evernote’s OCR on the PDFs to search them. My workflow covers the main points described above. Makes me think I might be ahead of a coming trend.

Multi-cloud environments are going to be the most important technology investment in 2018/2019

I believe that multi-cloud environments are going to be the most important technology investment in 2018/2019. This will drive education and new skill development among technology workers. Apparently it’s not just me: IDC predicts that “More than 85% of Enterprise IT Organizations Will Commit to Multicloud Architectures by 2018, Driving up the Rate and Pace of Change in IT Organizations”. There are some great resources online for multi-cloud strategy and benefits, all worth reading:

The list could run to hundreds of articles; I wanted to provide a few that I thought were interesting and relevant to this discussion of why multi-cloud. There are four drivers behind this trend:

First - Containers allow you to deploy your application anywhere, and all the major cloud players support Kubernetes and Docker. This means you could deploy to AWS, Azure, and Google without rewriting any code. Application support, development, and maintenance are what drive technology dollars; maintaining one set of code that runs anywhere doesn’t cost any more and gives you complete autonomy.

Second - Companies like Joyent, Netlify, HashiCorp (Terraform), and many more are building their solutions for multi-cloud, providing the control, manageability, ease of use, etc. Technology is like Field of Dreams: “if you build it, they will come.” Very few large companies jump into something without support; they wait for some level of maturity to develop and then wade in slowly.

Third - The biggest reason is a lack of trust in putting all your technology assets into one company. Most companies have had multi-data-center strategies for years, using a combination of self-built facilities, multiple providers like Wipro, IBM, HP, Digital Realty Trust, etc., and various co-location arrangements. For big companies, when the cloud became popular, the question was how to augment the existing environment with cloud; now many companies are applying a Cloud First strategy. So why wouldn’t principles that were applied for decades in technology be applied to the cloud? Everyone remembers the saying: don’t put all your eggs in one basket. I understand there are regions, multi-AZ, resiliency, and redundancy, but at the end of the day one cloud provider is one cloud provider, and all my technology eggs are in that one basket.

Fourth - The last reason is pricing. If you can move your entire workload from Amazon to Google within minutes, it forces cloud vendors to keep costs low, since cloud services charge for what you use. I understand that a workload with petabytes of data behind it is not going to move, but web services with small data behind them can move relatively quickly with the right deployment tools in place.

What do you think? Leave me a comment with your feedback or ideas.

AWS Certified Security – Specialty


I sat the AWS Certified Security - Specialty exam this morning. The exam is hard, as it is scenario-based. Most of the questions asked you to pick the best security scenario; it could be renamed Certified Architect - Security. Every one of those questions had two good answers, and it came down to which was more correct and more secure. It’s the hardest exam I’ve taken to date; I think it is harder than the Solutions Architect - Professional exam. The majority of the exam questions were on KMS, IAM, securing S3, CloudTrail, CloudWatch, multiple AWS account access, Config, VPC, security groups, NACLs, and WAF.

I did the course on acloud.guru, and I think the whitepapers and links really helped me in studying for this exam:

The exam took me about half the allocated time; I read fast and have a tendency to flag questions I don’t know the answer to, then come back later and work thru them. On this exam I flagged 20 questions, the highest of any AWS exam I’ve taken to date. Most of them I could figure out once I thought about them for a while. Throughout the exam, I was unsure of my success or failure.

Upon submission, I got the “Congratulations! You have successfully completed the AWS Certified Security - Specialty exam…”

Unfortunately, I didn’t get my score; I got the email, which says, “Thank you for taking the AWS Certified Security - Specialty exam. Within 5 business days of completing your exam,”

That now makes my 6th AWS Certification.

AWS Config, KMS and EBS encryption

If you have an AWS deployment, make sure you turn on AWS Config. It has a whole bunch of built-in rules, and you can add your own to validate the security of your AWS environment as it relates to AWS services. Amazon provides good documentation and a GitHub repo, and SumoLogic has a quick how-to for turning it on. It’s straightforward to turn on and use. AWS provides some pre-configured rules, and that’s what this AWS environment validates against; there is a screenshot below of the results. Aside from turning it on, you have to decide which rules are valid for you. For instance, not all S3 buckets have business requirements to replicate, so I’d expect that rule to always show a noncompliant resource.

However, one of my findings yesterday was unencrypted EBS volumes. Encrypting an existing EBS volume takes 9 easy steps:

  1. Make a snapshot of the EBS Volumes.

  2. Copy the snapshot of the EBS volume, selecting encryption. Use the AWS KMS key you prefer, or the Amazon default aws/ebs.

  3. Create an AMI image from the encrypted Snapshot.

  4. Launch the AMI image from the encrypted Snapshot to create a new instance.

  5. Check the new instance is functioning correctly, and there are no issues.

  6. Update EIPs, load balancers, DNS, etc. to point to the new instance.

  7. Stop the old un-encrypted instances.

  8. Delete the un-encrypted snapshots.

  9. Terminate the old un-encrypted instances.
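
Steps 1-3 above can be sketched with boto3. The volume ID, region, device name, and image name here are placeholders, and the waiters and error handling a real script needs are omitted:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Step 1: snapshot the unencrypted volume (vol-0abc... is a placeholder ID).
snap = ec2.create_snapshot(VolumeId="vol-0abc1234def567890",
                           Description="pre-encryption snapshot")

# Step 2: copy the snapshot with encryption enabled, using the default EBS key.
enc = ec2.copy_snapshot(SourceSnapshotId=snap["SnapshotId"],
                        SourceRegion="us-east-1",
                        Encrypted=True,
                        KmsKeyId="alias/aws/ebs")

# Step 3: register an AMI backed by the encrypted snapshot.
ami = ec2.register_image(
    Name="encrypted-image",  # placeholder name
    VirtualizationType="hvm",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": enc["SnapshotId"]},
    }],
)
print(ami["ImageId"])
```

From there, launching the new instance and cutting over (steps 4-9) can be done from the console or scripted the same way.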

Remember, KMS gives you 20,000 requests per month for free; after that, the service is billable.

Amazon Crashing on Prime Day

Amazon crashing on Prime Day made breaking news. It appears the company is having issues with the traffic load.

Given that Amazon has run on AWS since 2011, it’s not a great sign for either Amazon or the scalability model they deployed on AWS.

Minimum Security Standards and Data Breaches

Why do agencies publish minimum security standards? The UK government recently released a minimum security standards document which all departments must meet or exceed. The document is available here: Minimum Cyber Security Standard.

The document is concise, short, and clear, and it contains relevant items for decent security, covering most common practices of the last 10 years. I’m not a UK citizen, but if agencies are protecting my data, why do they only have to meet minimum standards? If an insurer used the term, a minimum standard would be the “lowest acceptable criteria that a risk must meet in order to be insured.” Do I really want my data and privacy protected by the lowest acceptable criteria?

Now that you know government agencies apply minimum standards, let’s look at breach data. Breaches are becoming more common and more expensive, as confirmed by a report from the Ponemon Institute commissioned by IBM. The report states that the average breach costs $3.86 million, and the kicker is that there is a recurrence 27.8% of the time.

There are two other figures in this report that astound me:

  • The mean time to identify (MTTI) was 197 days

  • The mean time to contain (MTTC) was 69 days

That means that after a company is breached, it takes on average 6 months to identify the breach and 2 months to contain it.   The report goes on to say that 27% of the time a breach is due to human error and 25% of the time because of a system glitch.

So put this together: someone or some system makes a mistake, and it takes 6 months to identify and 2 months to contain. Those numbers should scare every CISO, CIO, CTO, executive, and security architect, because the biggest security threats are the people and systems working for the company.

Maybe it’s time to move away from minimum standards and start forcing agencies and companies to adhere to a set of best practices for data security?

SaaS based CI/CD

Let’s start with some basics of software development. It seems that no matter which software development lifecycle methodology is followed, it includes some level of definition, development, QA, UAT, and production release. Somewhere in the process there is a merge of multiple items into a release, which means your release to production could still be monolithic.

The mighty big players like Google, Facebook, and Netflix (click any of them to see their development process) have revolutionized the concepts of Continuous Integration (CI) and Continuous Deployment (CD).

I want to question the future of CI/CD: instead of consolidating a release, why not release a single item into production, validate it over a defined period of time, and then push the next release? This entire process would happen automatically based on a queue (FIFO) system.

Taking it to the world of corporate IT and SaaS platforms, I’m really thinking about software like Salesforce Commerce Cloud or Oracle’s NetSuite. I would want the SaaS platform to provide this FIFO system for loading my code updates. The system would push and update the existing code while continuing to handle requests, so users wouldn’t see discrepancies. Some validation would happen, the code would activate, and a timer would start for the next release. If validation failed, the code could be rolled back automatically or manually.
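
A toy sketch of the idea, with deployment, validation, and rollback as injectable callbacks (all names here are illustrative, not any vendor’s API):

```python
from collections import deque

def run_release_queue(releases, deploy, validate, rollback):
    """Deploy queued releases one at a time (FIFO); roll back any that fail validation."""
    queue = deque(releases)
    shipped, rolled_back = [], []
    while queue:
        release = queue.popleft()
        deploy(release)
        if validate(release):          # e.g. health checks over a soak period
            shipped.append(release)    # release sticks; timer starts on the next one
        else:
            rollback(release)          # automatic rollback on failed validation
            rolled_back.append(release)
    return shipped, rolled_back

# Example: release "r2" fails its validation window and is rolled back.
shipped, bad = run_release_queue(
    ["r1", "r2", "r3"],
    deploy=lambda r: None,
    validate=lambda r: r != "r2",
    rollback=lambda r: None,
)
print(shipped, bad)  # ['r1', 'r3'] ['r2']
```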

Could this be a reality?

Anycast

IPv6 implements Anycast for many benefits. The premise behind Anycast is that multiple nodes can share the same address, and the network routes traffic to the nearest node advertising that Anycast address.

There is a lot of information on Anycast as it relates to IPv6, starting with a deep dive in RFC 4291 - IP Version 6 Addressing Architecture. There is also a Cisco document, the IPv6 Configuration Guide.

The more interesting item, which came up as a technical interview topic this week, is the extension into IPv4. The basic premise is that BGP can advertise the same IP subnet from different geographic regions, and because of how internet routing works, traffic to that address is routed to the closest instance based on BGP path.

However, this presents two issues. If a path in BGP disappears, traffic would end up at another node, which presents state issues. The other issue is that BGP routes based on path length, so depending on how the upstream ISPs are peered and routed, a node that is physically closer might not be in the preferred path, adding latency.
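
The latency caveat is easy to see in a toy model of BGP best-path selection, which (all else being equal) prefers the shortest AS path even when another site is physically closer. The sites and AS paths here are made up:

```python
# AS paths a client's ISP sees toward the same anycast prefix from each site.
as_paths = {
    "new_york":      ["AS100", "AS200", "AS300"],  # geographically closest site
    "san_francisco": ["AS400", "AS500"],           # shorter AS path
}

# BGP picks the advertisement with the shortest AS path, not the shortest distance.
best_site = min(as_paths, key=lambda site: len(as_paths[site]))
print(best_site)  # san_francisco, despite New York being physically closer
```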

One of the use cases for this is DDoS mitigation, which is deployed with the root name servers and also by CDN providers. Several RFCs discuss Anycast as a possible DDoS mitigation technique:

RFC 7094 - Architectural Considerations of IP Anycast

RFC 4786 - Operation of Anycast Services

CloudFlare(a CDN provider) discusses their Anycast Solution:  What is Anycast.

Finally, I’m a big advocate of conference papers, maybe because of my Master’s degree, or because 20 years ago if you wanted to learn something it was either from a book or from conference proceedings. In the research for this blog article, I came across a well-written 2015 research paper on DDoS mitigation with Anycast, Characterizing IPv4 Anycast Adoption and Deployment. It’s definitely worth a read, especially on how Anycast has been deployed to protect the root DNS servers and CDNs.

Server Virtualization

I see a lot of parallels between containers in 2018 and the server virtualization movement that started in 2001 with VMware, so I took a trip down memory lane. My history starts in 2003/2004, when I was leveraging virtualization for datacenter and server consolidation. At IBM we were pushing it with IT leadership to consolidate unused server capacity, especially in test and development environments. The delivery primarily focused on VMware GSX and local storage initially. I recall that the release of vMotion and additional storage virtualization tools led to a desire to move from local storage to SAN-based storage. That allowed us to discuss reducing downtime and the potential for production deployments. I also remember there was much buzz when EMC acquired VMware in 2004; it made sense given the push into storage virtualization.

Back then, the promise was reduced cost, a smaller data center footprint, improved development environments, and better resource utilization. Sounds like the promises of cloud and containers today.

Serverless 2018

Serverless is becoming the technology hype of 2018. I remember when containers were gaining traction in 2012, and Docker in 2013; at technology conventions, all the cool developers were using containers. Containers solved a lot of challenges, but they were not a silver bullet. (But that’s a blog article for another day.)

Today, after an interview, I asked myself: have Containers lived up to the hype?   They are great for CI/CD, getting rid of system-administrator bottlenecks, and helping with rapid deployment, and some would argue they are fundamental to DevOps.  So I started researching the hype.   The people over at Cloud Foundry published container reports in 2016 and 2017.

Per the 2016 report, “our survey, a majority of companies (53%) had either deployed (22%) or were in the process of evaluating (31%) containers.”

Per the 2017 report, “increase of 14 points among users and evaluators for a total of 67 percent using  (25%) or evaluating (42%).”

As a former technology VP/director/manager, I was always evaluating technology which had some potential to save costs, improve processes, speed development, and improve production deployments.   But a 25% adoption rate, a 3-point uptick over the prior year, is not moving the technology needle.

However, I am starting to see the same trend: Serverless is the new exciting technology which is going to solve development challenges, save costs, and improve the development process, and you are cool if you’re using it.       But is it really serverless, or just a simpler way to use a container?

AWS Lambda is basically a container.  (Another blog article will dig into the underpinnings of Lambda.)   Where does the container run? **A server.**

It just means I don’t have to understand the underlying container, server, etc.     So is it truly serverless?   Or is it just the 2018 technology hype to get all of us development geeks excited that we don’t need to learn Docker or Kubernetes, or ask our sysadmin friends to provision us another server?

Let me know your thoughts.

CCNA Certificate

Got my CCNA certificate today via email.   Far from the days of getting a beautiful package in the mail.       The best part is how Cisco lets you recertify after a long hiatus.

Certification Logos

I think certification logos are interesting. I would not include them in emails, but some do.   Also, I would probably not include certifications in an email signature anymore.

Here are the ones I’ve collected in the last few weeks.     Moving forward, I think I’ll update the site header to include logos.

Passed Cisco 200-310 Designing for Cisco Internetwork Solutions

This morning I sat and passed Cisco 200-310 Designing for Cisco Internetwork Solutions.    The exam is not easy; it required an 860 to pass.   17 years ago, when I took it, it only required a 755, and I scored an 844.    This time I scored an 884.    It’s a tough exam as it requires deep and broad networking knowledge across all domains, routing, switching, unified communications, and WLANs, and how to use them in network designs.

That exam officially gives me a CCDA, which officially makes 7 certifications (5 AWS and 2 Cisco) in 5 weeks.

Next up is the Cisco 300-101 ROUTE exam.

Strong Technical Interview

Had a strong technical interview today. The interviewer asked questions about the topics in this outline.

  • Virtualization and Hypervisors

  • Security

  • Docker and Kubernetes

    • Storage for Docker
  • OpenStack/CloudStack - in which I lack experience

  • Chef

    • Security of data - see Databags
  • CloudFormation - immutable infrastructure

  • DNS

    • Bind

    • Route53

    • Anycast

      • Resolvers route to different servers
    • Cloud

      • Azure

      • Moving Monolithic application to AWS

        • Which raises a bunch of questions on the Application

          • Virtual or Physical Hardware

          • Backend / Technology

          • Requirements

          • Where is this application on the Roadmap in 3 - 5 years

          • How much of the application is being used

        • Cost Optimization

        • Security

      • Explain the underpinnings of IaaS, PaaS, SaaS.

These were all great questions for a technical interview. The interviewer was easy to converse with, and it was much more a great discussion than an interview. The breadth and depth of the questions were impressive, as were the interviewer's answers to my own questions. I left the interview hoping that the interviewer would become a colleague.

CCNA Exam

I passed the 200-125 CCNA exam today.   I actually scored higher than I did 17 years ago.     However, the old CCNA covered much more material.   Technically, per Cisco guidelines, it’s 3 - 5 days before I become officially certified.

Primarily I used VIRL to get the necessary hands-on experience, along with the Cisco Press CCNA study guide.   Wendell Odom always does a good job, and his blog is beneficial for studying.   The practice tests from MeasureUp are OK, but I wouldn’t get them again.

Next up the 200-310 DESGN.

CISCO Certifications

The last time I started studying for Cisco certifications, I built a six-router, one-switch lab on my desk.   One router had console connections to all the other routers, and its management port was connected to my home network so I could telnet into each of the routers via their console ports.     It was exciting and a great way to learn and simulate complex configurations.     The routers had just enough power to run BGP and IPsec tunnels.

This time, I found VIRL, which is interesting as you build a network inside an Eclipse-based environment.   On the backend, the simulator creates a network of multiple VMs.

So far, I’ve built a simple switched network.   I’m running it on the cloud service Packet, as the memory and CPU requirements exceed my laptop.    Packet provides a bare-metal server, which is required for how VIRL does a PXE boot.   I wish there were a bare-metal option on AWS.

I’m still trying to figure out how to upload complex configurations and troubleshoot them.

The product is very interesting as it provides a learning environment for a few hundred dollars versus the couple thousand I spent last time to build my lab.

AWS Certifications done, What's next?

I set a goal 4 weeks ago to pass 5 AWS certifications in 4 weeks.      I completed this goal:

  • AWS Certified Solutions Architect – Associate

  • AWS Certified Developer – Associate

  • AWS Certified SysOps Administrator – Associate

  • AWS Certified Advanced Networking – Specialty

  • AWS Certified Solutions Architect – Professional

For the time being, I’m going to be done with AWS certifications unless I get a position which leverages AWS.      This week I made a list of the certifications I will look at over the coming year, with a goal to complete all of them by August of 2019.    I still have a fall semester to finish, so I’ll pause certifications from the end of August until December to focus on finishing my master’s.

The list of certifications I made:

  • Azure

  • GCP

  • CISSP

  • CISM

  • Cisco

    • CCNA

    • CCDA

    • CCNP

    • CCDP

  • TOGAF

  • ITIL

  • Linux Certification

Does anyone know of any other ones to pursue?    I think it’s a good list for a Solution Architect as it covers a broad range of cloud technologies, networking, and security.

I have decided that my next challenge will be 2 Cisco certifications in the next 2 weeks.     After that, we’ll see what is next on the list.

AWS Certified Solutions Architect – Professional

I sat the AWS Certified Solutions Architect - Professional exam this morning.   This exam is hard, probably the hardest of the AWS exams I have taken to date.    I finished in about half the allowed time.   Generally, the test is challenging as it covers a lot of topics, and nearly every question had two plausible choices.   The entire exam is a challenge to pick the more correct answer based on the scenario and question, with one or more of the following as the driving factor: scalability, cost, recovery time, performance, or security.

I felt like I passed the exam while taking it, but it’s always a relief to see:

Congratulations! You have successfully completed the AWS Certified Solutions Architect - Professional exam and you are now AWS Certified.

Here is my score breakdown from the exam.

Topic Level Scoring:
1.0 High Availability and Business Continuity: 81%
2.0 Costing: 75%
3.0 Deployment Management: 85%
4.0 Network Design: 85%
5.0 Data Storage: 81%
6.0 Security: 85%
7.0 Scalability & Elasticity: 63%
8.0 Cloud Migration & Hybrid Architecture: 57%

Interviewing

The best part of interviewing is when you spend a day with people who are skilled and interested and have great discussions.    I spent Friday in 5 one-hour interviews, which were great.   The people genuinely liked the company, valued their contributions to it, and are looking to add talented people to their team.   I felt like I fit in and that it would be a great place to work.

You never know what happens, but I’m looking forward to the next steps.

Wrote about Ghosting before

I wrote about employers ghosting before; it seems like now employees and candidates are doing it too.

https://www.inc.com/justin-bariso/what-is-employee-ghosting-how-companies-created-their-own-worst-nightmare.html

https://www.linkedin.com/pulse/people-ghosting-work-its-driving-companies-crazy-chip-cutter/

What happened to decorum?

AWS Certified SysOps Administrator - Associate

I sat the AWS Certified SysOps Administrator - Associate exam this morning.   That makes two exams in 3 days this week.

The exam was a little harder than the two other Associate exams as it went a level deeper.   It focused on CloudFormation, CloudWatch, and deployment strategies.      There were nine questions where I struggled with the right answer, as all nine had two good answers.     There were about 35 questions I knew cold, and three questions duplicated from the other associate exams.   I was over-thinking all of the networking questions, probably because of the networking exam this week.    Given this, I wasn’t worried when I ended the test.   However, it’s always a relief when you get the Congratulations! You have successfully completed the AWS Certified SysOps Administrator - Associate.

Within 10 minutes I got my score email:

Congratulations again on your achievement!

Overall Score: 84%
Topic Level Scoring:
1.0 Monitoring and Metrics: 80%
2.0 High Availability: 83%
3.0 Analysis: 100%
4.0 Deployment and Provisioning: 100%
5.0 Data Management: 83%
6.0 Security: 100%
7.0 Networking: 42%

The score reflected over-thinking the networking questions.    I wouldn’t recommend sitting two different exams in the same few days.

That makes 4 AWS certifications in 3 weeks:

  • AWS Certified SysOps Administrator - Associate
  • AWS Certified Advanced Networking - Specialty
  • AWS Certified Developer - Associate
  • AWS Certified Solutions Architect - Associate (Released February 2018)

I guess now it’s time to focus on the last of the Amazon certifications I’ll work on for now: the AWS Certified Solutions Architect – Professional.

DevOps

Is DevOps the most overused word in technology right now?

The full definition is on Wikipedia.  Here is what DevOps is really about:   taking monolithic code with complex infrastructure supported by developers, operations personnel, testers, and system administrators, then simplifying it, monitoring it, and taking automated corrective actions or sending notifications.

It’s really about reducing resources that aren’t helping the business grow and redirecting that headcount toward positions which can help revenue growth.

It’s done in 3 pieces.

Piece 1. The Infrastructure

It starts by simplifying the infrastructure build-out, whether in the cloud, where environments can be spun up and down instantly from a known configuration like an AWS CloudFormation template, or by using Docker or Kubernetes.   More recently there is Function as a Service (FaaS): AWS Lambda, Google Cloud Functions, or Azure Functions. This reduces reliance on DBAs, Unix or Windows system administrators, and network engineers.   Now the developer gets the space they need instantly and can deploy their code quicker, which speeds time to market.
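As a sketch of the “known configuration” idea, here is a minimal, illustrative CloudFormation template; the resource names and the single `t2.micro` instance are placeholders, not anything from a real deployment. A developer could stack this up and tear it down on demand.

```yaml
# Hypothetical throwaway development environment (illustrative only).
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal dev environment, created and deleted on demand
Parameters:
  AmiId:
    Type: AWS::EC2::Image::Id   # supply a region-specific AMI at stack creation
Resources:
  DevInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: !Ref AmiId
Outputs:
  InstanceId:
    Value: !Ref DevInstance
```

Because the environment is just a template, deleting the stack deletes everything it created, which is what makes the spin-up/spin-down model practical.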

Piece 2.  Re-use and Buy vs. Build

Piece 2 is Re-use and Buy vs. Build:   if someone already offers a service, re-use it; don’t go building your own.    Examples are Auth0 for authentication and Google Maps for mapping locations or directions.

Piece 3.  When building or creating software, do it as microservices.

To simplify, you implement microservices.   Basically, you create code that does one thing well.  It’s small, efficient, and manageable.    It outputs JSON which can be parsed by upstream services, and the JSON can be extended without causing issues for those upstream services.   This reduces the size of the code base a developer is touching, since it’s one service, and it reduces the regression-testing footprint.      So now the number of testers, unit tests, regression tests, and integration tests has shrunk.   This means faster releases to production, and also a reduction in resources.
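The JSON-extension point above can be sketched in a few lines of Python (the service and field names here are made up for illustration): an upstream consumer that reads only the keys it knows about keeps working when the microservice adds a new field.

```python
import json

def price_service(sku: str) -> str:
    """Tiny microservice: does one thing (price lookup) and returns JSON."""
    prices = {"ABC-123": 19.99}  # hypothetical data; a real service has its own store
    return json.dumps({"sku": sku, "price": prices.get(sku)})

def price_service_v2(sku: str) -> str:
    """Same service after extending the JSON; existing consumers are unaffected."""
    prices = {"ABC-123": 19.99}
    return json.dumps({"sku": sku, "price": prices.get(sku), "currency": "USD"})

def upstream_total(payload: str) -> float:
    """Upstream consumer: parses the JSON and reads only the keys it knows."""
    doc = json.loads(payload)
    return doc["price"]
```

`upstream_total` works against both versions of the service, which is the sense in which the JSON can be extended without breaking upstream services.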

You’re not doing DevOps if any of these conditions apply:

  1. You have monolithic software you’ve put some web services in front of.

  2. Developers are still asking to provision environments to work.

  3. People are still doing capacity planning and analysis.

  4. NewRelic (or any other system)  is monitoring the environment, but no one is aware of what is happening.

  5. Production pushes happen at most once a month because of the effort and amount of things which break.

Doing DevOps

  1. Take the monolithic software and break it into web services.

  2. Developers can provision environments per a Service Catalog as required.

  3. Automate capacity analysis.

  4. Automatic SLAs which trigger notifications and tickets.

  5. NewRelic is monitoring the environment and providing data to systems which self-correct issues, and there are feedback loops on releases.

  6. Consistently (multiple times a week)  pushing to production to enhance the customer experience.
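As an illustration of point 4 in the list above, here is a hedged sketch: building the parameters for a CloudWatch alarm that notifies an SNS topic when an SLA metric is breached. The metric name, namespace, and topic ARN are all hypothetical; applying the alarm would be a single boto3 call.

```python
def sla_alarm_params(service: str, topic_arn: str, max_latency_ms: float) -> dict:
    """Parameters for a CloudWatch alarm that fires when latency breaches
    the SLA for three consecutive minutes.  Names are illustrative."""
    return {
        "AlarmName": f"{service}-latency-sla",
        "Namespace": "MyApp",                 # hypothetical custom namespace
        "MetricName": "LatencyP95",           # hypothetical metric
        "Statistic": "Average",
        "Period": 60,                         # one-minute evaluation window
        "EvaluationPeriods": 3,
        "Threshold": max_latency_ms,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],          # SNS topic -> notification/ticket
    }

# To apply (requires AWS credentials and boto3):
#   boto3.client("cloudwatch").put_metric_alarm(**sla_alarm_params(
#       "checkout", "arn:aws:sns:us-east-1:123456789012:oncall", 250.0))
```

Wiring the SNS topic to a ticketing system closes the loop: breaches create notifications and tickets without a human watching a dashboard.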

Studying for the AWS Certified SysOps Administrator – Associate

The material for the AWS Certified SysOps Administrator – Associate seems to be a lot of the material covered under the Associate Architect and Associate Developer.   I would have thought the material would focus more on setting up and troubleshooting issues with EC2, RDS, ELB, VPC, etc.    It also spends a lot of time looking at CloudWatch, but doesn’t really provide strategies for leveraging the logs.  Studying was a combination of the acloud.guru course, the official study guide, and the Amazon whitepapers.

I took the AWS-supplied practice test using a free test voucher and scored the following:

Congratulations! You have successfully completed the AWS Certified SysOps Administrator Associate - Practice Exam
Overall Score: 90%
Topic Level Scoring:
1.0 Monitoring and Metrics: 100%
2.0 High Availability: 100%
3.0 Analysis: 66%
4.0 Deployment and Provisioning: 100%
5.0 Data Management: 100%
6.0 Security: 100%
7.0 Networking: 66%

It’s interesting the networking score was so low, as I just passed the Advanced Networking Specialty.

This is the last Associate exam for me to pass.    If I successfully pass it, I will begin studying for the Certified Solutions Architect - Professional.    That will probably be my last AWS certification, as I’ll look at starting on something like TOGAF, Red Hat or Linux Professional Institute, Cisco, GCP, or Azure, depending on where my interest lies in a few weeks.

Passed the AWS Certified Advanced Networking – Specialty Exam

I passed the AWS Certified Advanced Networking – Specialty exam this morning.    The exam is hard.   My career started in networking; I had multiple Nortel and Cisco certifications and was studying for the CCIE lab back then.  But over the last 12 years, I got away from networking.    Doing this exam was going back to something I loved for a long time, as BGP, networking, load balancers, and WAFs make me excited.

My exam results

Topic Level Scoring:
1.0  Design and implement hybrid IT network architectures at scale: 75%
2.0  Design and implement AWS networks: 57%
3.0  Automate AWS tasks: 100%
4.0  Configure network integration with application services: 85%
5.0  Design and implement for security and compliance: 83%
6.0  Manage, optimize, and troubleshoot the network: 57%

I had limited experience with AWS networking prior to this exam.   I had used the standard things like load balancers, VPCs, Elastic IPs, and Route 53.   This exam tests your knowledge of these areas and more.      To prepare I used the acloud.guru course, the book AWS Certified Advanced Networking Official Study Guide: Specialty Exam, and the Udemy practice tests.    With the course and book, I set up VPC peering, endpoints, NAT instances, gateways, and CloudFront distributions.    I put about 50 hours into doing the course, reading the book, doing various exercises, and studying.

Based on my experience, the acloud.guru course lacks details on the ELBs, the WAF, private DNS, and implementation within CloudFormation.     The book comes closer to the exam, but also doesn’t cover CloudFormation, WAF, or ELBs as deeply as the exam.   The Udemy practice tests were close to the exam, but lack some of the more complex scenario questions.

I plan to sit the AWS Certified SysOps Administrator - Associate exam later this week.

Kubernetes

What’s up with interviewers asking about Kubernetes experience lately?   Two different interviewers raised the question today.

Kubernetes is only 4 years old.   GCP has supported it for a while. AWS announced its managed Kubernetes service in beta at re:Invent 2017, and it went to general release on June 5, 2018.   Azure’s went GA on June 13, 2018.

So how widely deployed is it?     Also, if it is supposed to speed deployments, how complex can it be?   How many hours does it take to learn?

Next week I will be learning it.  Looking forward to answering these questions.

Finally got my AWS Certified Solution Architect - Associate Results

The pdf provided this:

The AWS Certified Solutions Architect - Associate (Released February 2018) (SAA-C01) has a scaled score between 100 and 1,000. The scaled score needed to pass the exam is 720.

I got a 932….

awsarch went SSL

awsarch went SSL.

Amazon offers free SSL certificates through ACM when they are deployed on an ELB, CloudFront, Elastic Beanstalk, or API Gateway (and they can be provisioned via AWS CloudFormation).

For more information, see the Amazon ACM service page.

So basically it required setting up an Application Load Balancer, updating DNS, making updates to .htaccess and a fix to the wp-config file.
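For reference, the .htaccess piece behind an ALB typically redirects based on the X-Forwarded-Proto header, since the load balancer terminates TLS and the instance only ever sees plain HTTP. A common sketch of the rules (not necessarily the exact ones used here) looks like:

```apache
# Redirect to HTTPS when the ALB reports the original request was plain HTTP.
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

Checking the header instead of `%{HTTPS}` matters: behind a load balancer, `%{HTTPS}` is always off, so a naive redirect rule would loop.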

Now the site is HTTPS, and the weird non-HTTPS browser messages went away.     Come July, Chrome will start carrying a warning sign per this Verge article.

Free SSL certificates can also be acquired at https://letsencrypt.org/.

Exam For AWS Certified Developer – Associate

I sat the exam for the AWS Certified Developer - Associate this morning.     I felt lucky, as the system kept asking questions I knew in depth.   There were only 4 questions I didn’t know the answer to and took an educated guess on.

I did the exam in 20 minutes for 55 questions.    I only review questions I flag, and I flagged only about 8 questions.    I felt really lucky, as the exam was playing to my knowledge of DynamoDB, S3, EC2, and IAM.   There were other questions about Lambda, CloudFormation, CloudFront, and API calls, but the majority of the questions focused on 4 areas of AWS I knew really well.

At the end of the exam, I got the “Congratulations! You have successfully completed the AWS Certified Developer – Associate exam” message.

Also within 15 minutes, I got the email confirming my score:

Congratulations again on your achievement!

Overall Score: 90%
1.0 AWS Fundamentals: 100%
2.0 Designing and Developing: 85%
3.0 Deployment and Security: 87%
4.0 Debugging: 100%

I’m still waiting on my score from my Solution Architect - Associate Exam.    In the meantime, I’ll get back to studying my AWS Networking Speciality.

AWS Practice Test for Certified Developer – Associate

AWS offers practice exams through PSI.   It costs $20 and gives you 20 questions for practice.    I did the exam today.   Here is the results email:

Congratulations! You have successfully completed the AWS Certified Developer Associate - Practice Exam
Overall Score: 95%
Topic Level Scoring:
1.0 AWS Fundamentals: 100%
2.0 Designing and Developing: 87%
3.0 Deployment and Security: 100%
4.0 Debugging: 100%

That’s a confidence builder going into the exam tomorrow morning.

Back to Studying for my Developer Exam

I had scheduled the AWS Certified Developer – Associate test for June 14.   I needed to stop studying the networking material and finish studying for the developer exam.    I had completed the https://acloud.guru/ course on Sunday, and I decided to purchase AWS Certified Developer - Associate Guide: Your one-stop solution to passing the AWS developer’s certification.

The book was good; it covers all the major topics for the associate developer certification, but it lacks hands-on labs and there are several errors in the mock exams.

Interview Observations

The number one problem with interviewers, recruiters, etc. is the lack of follow-up.   I refer to it as ghosting.   At least have the courtesy to reach out, even via email, and say thank you, but you’re not a fit.

Why do people ask if you are OK with a manager title when you were a VP or director and are now applying for a manager position?    Do they really think you can’t read the job title?     I took the time to apply for this position, research the company, and prep for the phone call, and this is the first question you’re going to ask me: do I understand this is a manager position?   Argh!   Assume that if I applied for the position there was something which interested me, and ask me: why this position?

This has happened a few times recently:

  1. Have you ever been in an interview and halfway through you are thinking: does this person want to report to me, or be the manager instead? Decide this before you start interviewing people.   It’s a waste of time.

  2. Have you ever been in an interview and halfway through you are thinking there is no way I want to manage this person? Why would a company have this person interview you? Are they trying to scare you away?

Best Technical Interview questions so far…

  1. What is callback hell in JavaScript?     First, I don’t know why, if you’re a good programmer, you’d even need to know what this is.

  2. What is the difference between inheritance in Python and Java? Python natively supports multiple inheritance, whereas in Java Class B would extend Class A, and Class C could extend Class B.

  3. How does a bash shell work? 

  4. When to use swap and when not to use swap? Ugh, the answer is: it depends on the application.
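The inheritance question above can be sketched in Python, which supports multiple inheritance natively; the class names here are made up for illustration. In Java, `Car` would have to pick one superclass and use interfaces for the rest.

```python
class Engine:
    def start(self) -> str:
        return "engine started"

class Radio:
    def play(self) -> str:
        return "playing music"

# Python allows inheriting from both base classes at once; Java would need
# a single chain (class B extends A, class C extends B) plus interfaces.
class Car(Engine, Radio):
    pass

car = Car()
# The method resolution order (MRO) decides lookup across the parents:
# Car -> Engine -> Radio -> object.
```

Because `Car` inherits from both, a single instance gets `start()` and `play()` without any delegation boilerplate.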

Waiting for Score Email

Still waiting on the score from the AWS Certified Solution Architect – Associate exam.

However, I also started studying for the AWS Certified Advanced Networking - Specialty.

I love networks and networking, especially VPNs and BGP.     So I felt it was a good challenge as well as something I enjoy doing.    12 years ago, I had multiple Cisco routers on my desk and would run BGP, OSPF, and EIGRP configurations.       Maybe I need an AWS Direct Connect…..

AWS Certified Solutions Architect – Associate

I sat the AWS Certified Solutions Architect - Associate exam.    It was challenging, as it covers a broad set of AWS services.   I sat the February 2018 version, which is the new one.

At the end of the exam, I got the “Congratulations! You have successfully completed the AWS Certified Solutions Architect - Associate exam” message.

I decided that I would complete the AWS Certified Developer - Associate next.

AWS offers practice exams through PSI.   It costs $20 and gives you 20 questions for practice.    I did the practice exam today.   Here is the results email:

Thank you for taking the AWS Certified Solutions Architect - Associate - Practice (Released February 2018) exam. Please examine the following information to determine which topics may require additional preparation.
Overall Score: 80%
Topic Level Scoring:
1.0 Design Resilient Architectures: 100%
2.0 Define Performant Architectures: 71%
3.0 Specify Secure Applications and Architectures: 66%
4.0 Design Cost-Optimized Architectures: 50%
5.0 Define Operationally-Excellent Architectures: 100%

I was a little concerned after the practice exam, so I spent the rest of the evening studying.    There are various blogs which talk about the exam, but it seems that depending on the day, the exam, and the location, you could need anywhere from a 65% to a 72% to pass.    Based on the practice, I didn’t have a lot of room for error.

Feeling Sorry

I spent 24 hours feeling sorry for myself.   I had already started studying for the AWS Certified Solutions Architect - Associate, as I wanted to be prepared if I got an offer from Amazon.       Now was there no reason to continue preparing for the exam?

What was I going to do?     I decided to continue to study for the exam and credential myself, possibly with the end goal of either going back to work for Amazon or working somewhere else.    Certification couldn’t hurt me.

It had been over a decade since I was last certified. At one point, I had multiple networking certifications from Nortel, plus my CCNA, CCDA, and CCNP, and I was studying for a CCIE. The process is always hard, but I love learning. So I decided to push forward with my AWS certification.


Amazon Interview

Completed the Amazon interview on site. It's a series of meetings with 5 Amazonians. The interviews are between 30 and 45 minutes each. They start promptly on time and end promptly on time. Each question is about telling a story and relating it back to the Amazon Leadership Principles.

There were a lot of great questions during the session.   It requires you to be detailed, communicate clearly and explain your answers.

Three questions stood out:

  1. Which Leadership Principle do you associate with the most?

  2. Which Leadership Principle do you associate with the least, or disagree with?

  3. What was a decision you got wrong, and why?

The other notable thing I noticed is that all the interviewers loved their jobs and the culture of Amazon. Also, with the right amount of lean-in, any idea could take shape and become part of AWS.

The other thing I noticed is that Amazon has a long lead time for developing Solution Architects. They want a person to know AWS before speaking with customers, which could take 6 to 9 months and require multiple AWS certifications. Also, if you want to speak on behalf of Amazon, you have to get certified in public speaking within Amazon.

It’s clear they want the smartest people with the most AWS knowledge.


Writing Sample for Amazon

Amazon requires a writing sample of 2 pages about a topic they provide. To do this, I wrote 3 paragraphs on each of 6 different key events from my career. Then I took 3 of those topics and expanded each to a page long. Finally, I decided which topic covered the most Amazon Leadership Principles and used that one for the final essay, elaborating and extending it to two full pages.

The writing sample has been submitted. Let's see how it goes.


AWSARCH

This is my blog about my experience finding a new position.

This started in May with the beginning of an interview process with Amazon.

I called it AWSARCH just because it’s a truly obscure name.
