AWS Certified Security – Specialty


Sat the AWS Certified Security - Specialty exam this morning. The exam is hard, as it is scenario-based. Most of the exam questions ask you to pick the best security scenario; it could be renamed the Certified Architect - Security exam. Every one of those questions had two good answers, and it came down to which was more correct and more secure. It’s the hardest exam I’ve taken to date; I think it is harder than the Solution Architect - Professional exam. The majority of the exam questions were on KMS, IAM, securing S3, CloudTrail, CloudWatch, multiple AWS account access, Config, VPC, security groups, NACLs, and WAF.

I did the course on acloud.guru, and I think the whitepapers and links it references really helped me in studying for this exam.

The exam took me about half the allocated time; I read fast and have a tendency to flag questions I don’t know the answer to, then come back and work through them later. On this exam, I flagged 20 questions, the highest of any AWS exam I’ve taken to date. Most of them I could figure out once I thought about them for a while. Throughout the exam, I was unsure of my success or failure.

Upon submission, I got the “Congratulations! You have successfully completed the AWS Certified Security - Specialty exam…”

Unfortunately, I didn’t get my score. I got the email, which says, “Thank you for taking the AWS Certified Security - Specialty exam. Within 5 business days of completing your exam, …”

That now makes my 6th AWS Certification.


AWS Config, KMS and EBS encryption

If you have an AWS deployment, make sure you turn on AWS Config. It has a whole bunch of built-in rules, and you can add your own to validate the security of your AWS environment as it relates to AWS services. Amazon provides good documentation and a GitHub repo, and SumoLogic has a quick how-to on turning it on. It’s straightforward to turn on and use. AWS provides some pre-configured rules, and that’s what this AWS environment will validate against; there is a screenshot below of the results. Aside from turning it on, you have to decide which rules are valid for you. For instance, not all S3 buckets have business requirements to replicate, so I’d expect that to always show as a noncompliant resource.

However, one of my findings yesterday was unencrypted EBS volumes. Encrypting an existing EBS volume takes 9 easy steps (a boto3 sketch of the first few steps follows the list):

  1. Make a snapshot of the EBS volume.

  2. Copy the snapshot, selecting encryption. Use the AWS KMS key you prefer or Amazon’s default aws/ebs key.

  3. Create an AMI from the encrypted snapshot.

  4. Launch that AMI to create a new instance.

  5. Check the new instance is functioning correctly, and there are no issues.

  6. Update EIPs, load balancers, DNS, etc. to point to the new instance.

  7. Stop the old unencrypted instances.

  8. Delete the unencrypted snapshots.

  9. Terminate the old unencrypted instances.
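
Steps 1 through 4 map cleanly onto the EC2 API. Here’s a minimal boto3 sketch of those steps; the region, volume ID, AMI name, device name, and instance type are all hypothetical placeholders, and a real run would need error handling:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Step 1: snapshot the unencrypted volume (hypothetical volume ID).
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="pre-encryption snapshot",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Step 2: copy the snapshot with encryption enabled, using the
# default aws/ebs key (or pass your preferred KMS key instead).
enc = ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Encrypted=True,
    KmsKeyId="alias/aws/ebs",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[enc["SnapshotId"]])

# Step 3: register an AMI backed by the encrypted snapshot.
ami = ec2.register_image(
    Name="encrypted-rebuild",
    RootDeviceName="/dev/xvda",
    VirtualizationType="hvm",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": enc["SnapshotId"]}},
    ],
)

# Step 4: launch the replacement instance from the encrypted AMI.
ec2.run_instances(
    ImageId=ami["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```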

Remember, KMS gives you 20,000 requests per month for free; after that, the service is billable.
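
Circling back to AWS Config: if you want Config to flag unencrypted volumes like the ones I found, there’s an AWS-managed rule for exactly that. A minimal boto3 sketch, assuming the Config recorder is already running (the rule name here is my own choice):

```python
import boto3

config = boto3.client("config")

# Enable the AWS-managed rule that flags unencrypted EBS volumes.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",  # hypothetical name
        "Description": "Checks whether attached EBS volumes are encrypted.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
    }
)
```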

Amazon Crashing on Prime Day

Amazon crashing on Prime Day made breaking news. It appears the company is having issues with the traffic load.

Given Amazon has run on AWS since 2011, it’s not a great sign for either Amazon or the scalability model they deployed on AWS.

Minimum Security Standards and Data Breaches

Why do agencies post minimum security standards? The UK government recently released a minimum security standards document that all departments must meet or exceed. The document is available here: Minimum Cyber Security Standard.

The document is concise, short, and clear. It contains some relevant items for decent security, covering the most common practices of the last 10 years. I’m not a UK citizen, but if agencies are protecting my data, why do they only have to meet minimum standards? If an insurer were using minimum standards, they would be the “lowest acceptable criteria that a risk must meet in order to be insured.” Do I really want the risk to my data and privacy sitting in that lowest-acceptable-criteria class?

Now that you know government agencies apply minimum standards, let’s look at breach data. Breaches are becoming more common and more expensive, and this is confirmed by a report from the Ponemon Institute commissioned by IBM. The report states that the average breach costs $3.86 million, and the kicker is that there is a recurrence 27.8% of the time.

There are two other figures in this report that astound me:

  • The mean time to identify (MTTI) was 197 days

  • The mean time to contain (MTTC) was 69 days

That means that after a company is breached, it takes on average six months to identify the breach and two more months to contain it. The report goes on to say that 27% of breaches are due to human error and 25% are due to a system glitch.

So put this together: someone or some system makes a mistake, and it takes six months to identify and another two months to contain. Those numbers should scare every CISO, CIO, CTO, other executive, and security architect, as the biggest security threats are the people and systems working for the company.

Maybe it’s time to move away from minimum standards and start forcing agencies and companies to adhere to a set of best practices for data security?

SaaS-based CI/CD

Let’s start with some basics of software development. It still seems that no matter which software development lifecycle methodology is followed, it includes some level of definition, development, QA, UAT, and production release. Somewhere in the process, there is a merge of multiple items into a release. This means your release to production could still be monolithic.

The mighty big players like Google, Facebook, and Netflix (click any of them to see their development process) have revolutionized the concepts of Continuous Integration (CI) and Continuous Deployment (CD).

I want to question the future of CI/CD: instead of consolidating a release, why not release a single item into production, validate it over a defined period of time, and then push the next release? This entire process would happen automatically based on a queue (FIFO) system.

Taking it to the world of corporate IT and SaaS platforms, I’m really thinking about software like Salesforce Commerce Cloud or Oracle’s NetSuite. I would want the SaaS platform to provide me this FIFO system to load my user code updates. The system would push and update the existing code while continuing to handle requests, so users wouldn’t see discrepancies. Some validation would happen, the code would activate, and a timer would start on the next release. If validation failed, the code could be rolled back automatically or manually.
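
To make the idea concrete, here’s a toy sketch of the queue-driven release loop I have in mind. Everything here is hypothetical: deploy, healthy, and rollback stand in for whatever hooks the SaaS platform would expose.

```python
import time
from collections import deque

# Hypothetical FIFO of single-item releases, oldest first.
release_queue = deque(["change-101", "change-102", "change-103"])
VALIDATION_WINDOW_SECS = 15 * 60  # soak time before the next release

def deploy(change_id: str) -> None:
    """Push one change into production (platform hook)."""
    print(f"deploying {change_id}")

def healthy(change_id: str) -> bool:
    """Poll the platform's health checks / metrics for this change."""
    return True

def rollback(change_id: str) -> None:
    """Back the change out (platform hook)."""
    print(f"rolling back {change_id}")

while release_queue:
    change = release_queue.popleft()
    deploy(change)
    time.sleep(VALIDATION_WINDOW_SECS)  # validate over a defined period
    if not healthy(change):
        rollback(change)
        break  # hold the queue until someone investigates
```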

Could this be a reality?

Anycast

IPv6 implemented Anycast for many benefits. The premise behind Anycast is that multiple nodes can share the same address, and the network routes traffic to the topologically nearest node advertising that address.

There is a lot of information on Anycast as it relates to IPv6, starting with a deep dive in RFC 4291 - IP Version 6 Addressing Architecture. There is also a Cisco document, the IPv6 Configuration Guide.

The more interesting item, which came up as a technical interview topic this week, is the extension of Anycast into IPv4. The basic premise is that BGP can advertise the same prefix from multiple geographic regions, and because of how internet routing works, traffic to that address is routed to the closest node based on BGP path.

However, this presents two issues. If a path in BGP disappears, traffic ends up at another node, which presents state issues. The other issue is that BGP routes based on path length, so depending on how an upstream ISP is peered and routed, a node that is physically closer may not be in the preferred path, adding latency.
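
A toy illustration of that second issue, using made-up AS paths: BGP picks the route with the shortest AS path, which is not necessarily the one with the lowest latency.

```python
# Hypothetical AS paths to the same anycast prefix from one vantage point.
routes = {
    "sydney":    ["AS64500", "AS64501", "AS64502", "AS64510"],  # ~300 km away
    "singapore": ["AS64500", "AS64520"],                        # ~6,000 km away
}

# BGP best-path selection (simplified): shortest AS path wins.
best = min(routes, key=lambda site: len(routes[site]))
print(best)  # 'singapore' — the physically closer node loses on path length
```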

One of the use cases for this is DDoS mitigation, which is deployed with the Root Name Servers and also by CDN providers. Several RFCs discuss Anycast as a possible DDoS mitigation technique:

RFC 7094 - Architectural Considerations of IP Anycast

RFC 4786 - Operation of Anycast Services

CloudFlare (a CDN provider) discusses their Anycast solution: What is Anycast.

Finally, I’m a big advocate of conference papers, maybe because of my Master’s degree, or because 20 years ago, if you wanted to learn something, it came from either a book or conference proceedings. In the research for this blog article, I came across a well-written research paper from 2015 on DDoS mitigation with Anycast: Characterizing IPv4 Anycast Adoption and Deployment. It’s definitely worth a read, especially on how Anycast has been deployed to protect the Root DNS servers and CDNs.
