Archive of posts from August 2018

Data-safe Cloud

Amazon recently released a presentation on Data-safe Cloud.  It appears to be based on a Gartner question and other data AWS collected.  The presentation discusses six core benefits of a secure cloud.

  1. Inherit Strong Security and Compliance Controls
  2. Scale with Enhanced Visibility and Control
  3. Protect Your Privacy and Data
  4. Find Trusted Security Partners and Solutions
  5. Use Automation to Improve Security and Save Time
  6. Continually Improve with Security Features

I find this marketing material confusing at best, so let’s analyze what it is saying.

For point 1, Inherit Strong Security and Compliance Controls, the presentation references all the compliance certifications AWS achieves.  However, it loses track of the shared responsibility model, which isn’t even mentioned until page 16.  Amazon has exceptional compliance in place, which most data center operators or SaaS providers struggle to achieve.  That does not mean my data or services running within the Amazon environment meet those compliance requirements.

Points 2, 4, and 6 are not benefits of a secure cloud.  They might be high-level objectives one uses to form a strategy for getting to a secure cloud.

Point 3 I don’t even understand: the protection of privacy and data has to be the number one concern when building out workloads in the cloud or in private data centers.  It’s not a benefit of a secure cloud but a requirement.

For point 5, I am a big fan of automation and automating everything.  Again, this is not a benefit of a secure cloud; a repeatable, secure process wrapped in automation is how you get to a secure cloud.

Given all the negative press around cloud and security, including the recent GoDaddy AWS S3 bucket exposure, Amazon should be publishing better content to help move the security discussion forward.

Security as Code

One of the things I’ve been fascinated by of late is the concept of Security as Code.  I’ve just started to read the book DevOpsSec by Jim Bird.  One of the things the book talks about is injecting security into the CI/CD pipeline for applications: essentially merging developers and security, the way DevOps merged developers and operations.  I’ve argued for years that DevOps is a lot of things, but fundamentally it was a way for operations to become part of the development process, which led to the automation of routine operational tasks and recovery.  By the same logic, DevOpsSec would make security part of the development process, and I mean more than just the standard code analysis using Veracode.  What would it mean if security processes and recovery could be automated?

Security Operations Centers (SOCs) are where people interpret security events and react.  Over the last few years, much of the improvement in SOCs has come from AI and machine learning reducing the head count required to operate a SOC.  What if security operations were automated?  Could code be generated based on the security triggers and provided to the developer for review and incorporation into the next release?

We talk about infrastructure as code, where definitions can be used to create rules and infrastructure through automation.  Obviously, on AWS you can install security-tool AMIs, Security Groups, and NACLs with CloudFormation.  My thoughts go to firewall AMIs, the appliances used for external access.  The access lists those appliances require are complex and demand enormous review and processing within an organization.  Could access lists be constructed from a mapping of the code and automatically generated for review?  Could a generated access list be compared against the existing access lists to detect duplicates?  A crude sketch of that comparison follows.
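
As a crude sketch of that comparison, assuming each access list can be flattened to one rule per line (the file names and format here are hypothetical, not tied to any specific appliance):

    # Normalize both rule sets: drop comments and blank lines, collapse
    # repeated whitespace, and sort uniquely so the files are comparable.
    grep -v -e '^#' -e '^$' generated-acl.txt | tr -s ' ' | sort -u > gen.norm
    grep -v -e '^#' -e '^$' existing-acl.txt | tr -s ' ' | sort -u > cur.norm

    # Rules present in both files are duplicates a reviewer can drop.
    comm -12 gen.norm cur.norm

    # Rules only in the generated list are the net-new entries to review.
    comm -23 gen.norm cur.norm

Even this crude diff shrinks the review from the whole access list down to just the net-new rules.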

It’s definitely an interesting topic, and hopefully it evolves over the next few years.

AWS Releases T3 Instances

Earlier today AWS released T3 instances.  There are a bunch of press releases about the topic.  The performance is supposed to be 30% better than T2.  Hopefully, in the next few days, independently published benchmarks will confirm whether the instances really are 30% faster.  In the interim, go to the Amazon pages for all the details on T3 instances.  The cost is a few cents less as well: for example, a no-upfront reserved instance went from .17 cents to .15 cents when moving from t2.small to t3.small in the us-west-2 region.

Before today, awsarch.io ran on T2 instances; to build this blog article, it was updated to T3.  AWS makes it easy to change the instance type: just shut down the instance, then from the AWS console go to Instance Settings -> Change Instance Type and select the appropriate T3 instance.  It can be done via the AWS CLI as well, as sketched below.

(Screenshot: Change Instance Type dialog)
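
For the CLI route, a minimal sketch (the instance ID is a made-up example):

    # The instance type can only be changed while the instance is stopped.
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

    # Switch to t3.small, then start the instance back up.
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
        --instance-type "{\"Value\": \"t3.small\"}"
    aws ec2 start-instances --instance-ids i-0123456789abcdef0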

T3 forces you to use EBS-optimized volumes; EBS optimization on T3 provides additional IOPS.  Here is the link for the complete EBS-optimized information.

T3 EBS Optimized
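
To confirm the setting on a running instance, a quick check (again with a made-up instance ID):

    # A T3 instance should report EBS optimization as enabled.
    aws ec2 describe-instance-attribute \
        --instance-id i-0123456789abcdef0 --attribute ebsOptimized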

The T3 instance uses an ENA adapter, so before starting your instance, enable ENA support through the AWS command line:

aws ec2 modify-instance-attribute --instance-id <instance-id> --ena-support

Lastly, I noticed the device names changed.  Previously, the EBS volume devices in the Linux /dev directory were /dev/xvdf1, /dev/xvdf2, etc.  After the change to T3, the devices are /dev/nvme1n1p1, /dev/nvme1n1p2, etc.  Something to keep in mind if you have additional volumes with mount points on the EC2 instance.
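
One way to keep those mounts working across the rename, sketched with a made-up volume and UUID, is to identify volumes by filesystem UUID instead of device path:

    # On T3 (Nitro-based) instances, EBS volumes show up as nvme* devices.
    lsblk

    # Find the filesystem UUID of the extra volume.
    sudo blkid /dev/nvme1n1p1

    # Mounting by UUID in /etc/fstab survives the xvdf -> nvme rename;
    # the UUID is an example value, and nofail keeps a missing volume
    # from blocking boot.
    echo 'UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab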

AWS Logging Solution

Amazon generates a lot of logs via VPC Flow Logs, CloudTrail, S3 access logs, CloudWatch, and more (see the end of this blog article for a full list).  Additionally, there are OS, application, and web server logs.  That is a lot of data, and it provides valuable insight into your running AWS environment.  What are you doing to manage these log files?  What are you doing with them?  How are you analyzing them?

There are a lot of logging solutions available that integrate with AWS.  Honestly, I’m a big fan of Splunk and have set it up multiple times.  However, I wanted to look at something else for this blog article, something open source and relatively low cost.  This blog is going to explain what I did to set up Graylog.  Graylog has no charges for the software, but you’re going to get charged for the instance, Kinesis, SQS, and data storage.  It’s actually a good exercise to familiarize yourself with AWS services, especially for the SysOps exams.

Graylog provides great instructions.  I followed the steps; remember to use their image, which comes prebuilt on Ubuntu.  One difference with this setup: I didn’t use a 4GB memory system.  I picked a t2.small, which provides 1 vCPU and 2GB of memory, and I didn’t notice performance issues.  Remember to allow ports 443 and 9000 in the security groups and the network ACLs.  I prefer to run this over HTTPS, and it bugs me when you see NOT SECURE next to HTTP, so I installed an SSL certificate.  This is how I did it:

  1. Create a DNS name
  2. Get a free certificate
  3. Install the certificate (a rough sketch of steps 2 and 3 follows)
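
The steps above originally linked to detailed guides; as a rough sketch of steps 2 and 3, assuming a free Let's Encrypt certificate via certbot and a made-up hostname (Graylog's TLS option names vary by version, so check their documentation for the exact keys):

    # Step 2: request a certificate for the DNS name created in step 1.
    sudo certbot certonly --standalone -d graylog.example.com

    # Step 3: point Graylog's server.conf at the issued files (recent
    # versions use options like http_enable_tls, http_tls_cert_file,
    # and http_tls_key_file), then restart the service.
    sudo systemctl restart graylog-server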

Now my instance is up, and I can log into the console.  I want to get my AWS logs into Graylog.  Doing this requires the logs to be sent to Kinesis or SQS.  I am not going to explain the SQS setup, as there are plenty of resources for each specific AWS service; also, the Graylog plugin describes how to do this.  The Graylog plugin for CloudTrail, CloudWatch, and VPC Flow Logs is available on GitHub as the Graylog Plugin for AWS.  A sketch of the Kinesis wiring follows.
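
As a minimal sketch of that wiring for VPC Flow Logs already landing in CloudWatch Logs (the stream name, log group, account ID, and IAM role are all hypothetical):

    # Create the stream the Graylog AWS plugin will consume.
    aws kinesis create-stream --stream-name graylog-flowlogs --shard-count 1

    # Subscribe the flow log group to the stream; the role must allow
    # CloudWatch Logs to put records into Kinesis.
    aws logs put-subscription-filter \
        --log-group-name my-vpc-flow-logs \
        --filter-name graylog \
        --filter-pattern "" \
        --destination-arn arn:aws:kinesis:us-west-2:123456789012:stream/graylog-flowlogs \
        --role-arn arn:aws:iam::123456789012:role/CWLtoKinesisRole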

What about access_logs?  Graylog has the Graylog Collector Sidecar for that.  I’m not going to rehash the installation instructions here, as Graylog’s documentation is great.  If you are looking for something not covered here, it will be in the documentation or in their GitHub project.

What are you using as your log collection processing service on Amazon?  

List of AWS services generating logs:

  1. Amazon S3 access logs
  2. Amazon CloudFront access logs
  3. Elastic Load Balancer (ELB) logs
  4. Amazon Relational Database Service (RDS) logs
  5. Amazon Elastic MapReduce (EMR) logs
  6. Amazon Redshift logs
  7. AWS Elastic Beanstalk logs
  8. AWS OpsWorks logs
  9. AWS Import/Export logs
  10. AWS Data Pipeline logs
  11. AWS CloudTrail logs

My Favorite Cloud Update for July

All three cloud platforms, AWS, Google Cloud, and Azure, release features all the time.  However, Google Cloud took a major leap by giving developers a great tool through its IntelliJ integration.  Google did a great job covering how it works in its platform blog, which is worth reading.

I have used Eclipse since it was released; prior to that, I used Emacs.  However, for my master’s program over the last three years, I have been using IntelliJ.  It’s become my go-to platform for coding work because IntelliJ is easy to use, and my various class groups typically use it.  IntelliJ is free for students, which is a great way to build a user base given its price tag.

Providing an easy-to-use tool with an existing user base was smart of Google Cloud, especially as it continues to close the gap with AWS.

Finally, I’m not a big fan of Cloud9 on AWS.   What do you think?   Are you an IntelliJ or Cloud9 user?

Providing 10Gbps and 40Gbps Ports But Less Throughput

A longtime issue with networking vendors is providing ports at one speed and throughput at another.  I remember dealing with it back in 2005 with the first generation of Cisco ASAs, which primarily replaced the PIX firewall.  Those firewalls provided 1Gbps ports, but the throughput the ASA could handle was about half that bandwidth.

Some marketing genius created the separate terms wire speed and throughput.

If you’re curious about this, go look at the Cisco Firepower NGFW firewalls.  The 4100 series has 40Gbps interfaces, but depending on the model, throughput is between 10Gbps and 24Gbps with FW+AVC+IPS turned on.

I have referenced several Cisco devices, but this is not an issue specific to Cisco.  Take a look at Palo Alto Networks firewalls: the PA-52XX series has four 40Gbps ports but can support between 9Gbps and 30Gbps of throughput with full threat protection on.

The technology exists, so why aren’t networking vendors able to provide wire-speed throughput between ports, even with full inspection of traffic turned on?  I would very much like to know your thoughts on this topic; please leave a comment.

HTTP GET or POST DDoS Attacks

DDoS attacks are too frequent on the internet.  A DDoS attack sends more requests than can be processed.  Many times, the requestor’s machine has been compromised to become part of a larger DDoS network.  This article is not going to explain all the various types, as there is a whole list of them here.  Let’s discuss a particular type of DDoS attack designed to overwhelm your web server.  This traffic appears as legitimate GET or POST requests: a GET might fetch /index.html or any other page at 50 requests per minute, while a POST might hit your myApi.php and attempt to post data at 50-plus requests per minute.

This article focuses on some recommendations using AWS and other technologies to stop a recent HTTP DDoS attack.  The first question is how to identify a DDoS attack versus regular traffic.  The second question is how to mitigate an HTTP DDoS attack.

Identifying a DDoS attack: the first step is to understand your existing traffic.  If you normally see 2,000 requests per day and suddenly see 2,000,000 requests in a morning, that’s a good indication you’re under a DDoS attack.  The easiest way to identify this is to look at the access_log and pull it into a monitoring service like Splunk, AlienVault, Graylog, etc.  From there, real-time trend analysis will show the issues.  If the web servers are behind an ALB, make sure the ALB is logging requests and that those logs, rather than the web server access logs, are being analyzed.  The ALB still supports X-Forwarded-For, so the client IP can be passed through.
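
Before any tooling is in place, even a crude pass over the access_log helps; a sketch, assuming the Apache combined log format with the client IP in the first field:

    # Top 20 client IPs by request count; a few IPs dominating the log is
    # a common signature of an HTTP GET/POST flood.
    awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -20

    # Requests per minute, to spot the spike window (with -F'[ :]', fields
    # 5 and 6 of the combined-format timestamp are the hour and minute).
    awk -F'[ :]' '{print $5":"$6}' /var/log/httpd/access_log | sort | uniq -c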

Preventing a DDoS attack: there is no way to truly prevent an HTTP DDoS attack.  To deal with this specific event, the following mitigation techniques were explored:

  1. AWS Shield: this provides advanced WAF functions, and there are rules to limit request rates.

  2. The freeware route: Apache and NGINX both have rate limiting for specific IP addresses.  In Apache, this is implemented by a number of modules; ModSecurity is usually at the top of the list, and a great configuration example which includes X-Forwarded-For handling is available on GitHub (see the sketch after this list).

  3. An EC2 instance can be run as a proxy in front of the web server.  The proxy can be configured to suppress the traffic using ModSecurity or other Marketplace offerings, including other WAF options.

  4. An AWS WAF can be deployed on the ALB or on CloudFront.

  5. Lastly, the most expensive option is to deploy Auto Scaling groups to absorb all the traffic.
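
As a rough sketch of the ModSecurity rate limiting mentioned in option 2 (ModSecurity 2.x syntax; the 50-requests-per-60-seconds threshold is an assumption, and this is not the GitHub example referred to above), dropped into Apache's config, for example /etc/httpd/conf.d/modsec-ratelimit.conf:

    # Track each client IP and count its requests over a 60-second window.
    SecAction "id:1000,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},setvar:ip.requests=+1,expirevar:ip.requests=60"

    # Return 429 once an IP exceeds 50 requests in the window.
    SecRule IP:REQUESTS "@gt 50" "id:1001,phase:1,deny,status:429,log,msg:'Per-IP rate limit exceeded'"

Behind an ALB, key the collection off the X-Forwarded-For address rather than REMOTE_ADDR, since REMOTE_ADDR will be the load balancer.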

Please leave a comment if there are other options that should have been investigated.

To solve this specific issue, an AWS WAF was deployed on the ALB.  One thing to consider is preventing attackers from hitting the web servers directly; this is easily accomplished by allowing HTTP/HTTPS from anywhere only to the ALB, with the ALB and EC2 instances sharing a security group that permits HTTP/HTTPS only from within that security group.
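
As a sketch of the rate-limiting piece using the waf-regional API that covers ALBs (the rule name and the 2,000-request threshold are made up; a rate-based rule counts requests per source IP over a five-minute window):

    # Every waf-regional change call needs a fresh change token.
    TOKEN=$(aws waf-regional get-change-token --query ChangeToken --output text)

    # Block any source IP exceeding 2000 requests per five minutes.
    aws waf-regional create-rate-based-rule \
        --name http-flood-rule --metric-name httpFloodRule \
        --rate-key IP --rate-limit 2000 --change-token "$TOKEN"

The rule then goes into a web ACL, and associate-web-acl attaches the web ACL to the ALB by ARN.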

Cisco Press CCNP Route Books not aligned with CCNP Route Exam Blueprint

To my disappointment, having completely read the CCNP Routing and Switching ROUTE 300-101 Official Cert Guide and the Implementing Cisco IP Routing (ROUTE) Foundation Learning Guide (CCNP ROUTE 300-101) for the CCNP ROUTE exam, I found that these books are not aligned with the exam blueprint.

Looking at the exam blueprint, topics like CHAPv2 and Frame Relay are still covered even though they are not used as much anymore, yet CHAPv2 is not mentioned in either book.  Secondly, technologies like IPsec VPN and MPLS get little coverage in the books but are prevalent in deployments today.  Additionally, there are no real configuration examples for DMVPN.

Cisco Press claims these are the official certification guides for the exams, so it gives me great concern that the exam blueprint and the official certification guide are not in sync.  Wendell Odom [https://www.certskills.com/], who wrote a number of the original certification guides, always did a great job of matching the book to the exam blueprint and providing exercises to reinforce learning.  He is no longer the author of the CCNP certification guides, as Wendell focuses on CCNA Routing and Switching.

The last time I went through CCNP certification, I used the Cisco Press Exam Certification Guides and the Sybex CCNP books, which included exercises.  Sybex no longer publishes CCNP books.

Before taking the test, I think I’ll find a lab workbook and execute the exercises on VIRL.
