DevOps needs SecDevOps

DevOps is well defined and has a great definition on Wikipedia. We could argue all day about who is really doing DevOps (see this post for context). Let's assume there is an efficient and effective DevOps organization. If that is the case, DevOps requires a partner in security. Security needs to manage compliance, security governance, requirements, and risks, which requires functions in development, operations, and analysis. How can security keep up with effective DevOps? By building a DevOps organization for security, which we call SecDevOps. SecDevOps is about automating and monitoring compliance, security governance, requirements, and risks, and especially about updating analysis as DevOps changes the environment.

Organizations support DevOps but don't seem as ready to support SecDevOps, so how can a security organization evolve to support DevOps?

Future of Software

Open source has been around for decades, but the real initiatives started in 1998. Due to some recent experiences, I started pondering open source and the future of software.

I believe the future of software is open source, with a company wrapping enterprise support around it.

Take any open source software: if you need a feature, typically someone has already built it. If they haven't, your team ends up creating it and contributing it back to the project. Open source has the power of community versus a company with a product manager and deadlines to ship against a roadmap built by committee. That is an oversimplification; open source has a product manager too, though in most communities they are really a gatekeeper. They own accepting features and setting direction. In some cases it's the original developer, as with Linux; in other cases it's a committee. However, at the end of the day both commercial and open source software have a product owner.

It's an interesting paradox which raises two opposing questions. First, why isn't all software open sourced? Second, why would a company that has spent millions in development give the software away and charge only for services?

The answer to the first question is the second question. The answer to the second question is that giving away software is not financially viable once millions have been invested, unless a robust support model funds the ongoing development of the software.

I have worked for many organizations whose IT budget was lean and agile; open source meant minimal budget dollars. I have worked for other organizations whose budget is exceptionally robust and whose governance requires supported software.

Why not replace the license model with a support model, and allow me, or even more importantly the community, to access the source code, contribute, and drive innovation? Charge me for support based on users, revenue, or some other metric, or allow me to opt out. Seems like a reasonable future to me.

Data-safe Cloud

Amazon recently released a presentation on Data-safe Cloud. It appears to be based on some Gartner questions and other data AWS collected. The presentation discusses six core benefits of a secure cloud.

  1. Inherit Strong Security and Compliance Controls
  2. Scale with Enhanced Visibility and Control
  3. Protect Your Privacy and Data
  4. Find Trusted Security Partners and Solutions
  5. Use Automation to Improve Security and Save Time
  6. Continually Improve with Security Features.  

I find this marketing material confusing at best; let's analyze what it is saying.

Point 1, Inherit Strong Security and Compliance Controls, references all the compliance certifications AWS achieves. However, it loses track of the shared responsibility model and doesn't even mention it until page 16. Amazon has exceptional compliance in place, which most data center operators or SaaS providers struggle to achieve. That does not mean my data or services running within the Amazon environment meet those compliance requirements.

Points 2, 4, and 6 are not benefits of a secure cloud. They might be high-level objectives one uses to form a strategy for getting to a secure cloud.

Point 3 I don't even understand; the protection of privacy and data has to be the number one concern when building out workloads in the cloud or in private data centers. It's not a benefit of a secure cloud, but a requirement.

For point 5, I am a big fan of automation and automating everything. Again, this is not a benefit of a secure cloud; it is how you get a repeatable, secure process wrapped in automation, which leads to a secure cloud.

Given the discussions around cloud security and all the negative press, including the recent GoDaddy AWS S3 bucket exposure, Amazon should be publishing better content to help move the security discussion forward.

Security as Code

One of the things I've been fascinated by of late is the concept of Security as Code. I've just started to read the book DevOpsSec by Jim Bird. One of the things the book talks about is injecting security into the CI/CD pipeline for applications: basically merging developers and security, as DevOps merged developers and operations. I've argued for years that DevOps is a lot of things, but fundamentally it was a way for operations to become part of the development process, which led to the automation of routine operational tasks and recovery. So if we look at DevOpsSec, this would assume security is part of the development process, and I mean more than just the standard code analysis using Veracode. What would it mean if security processes and recovery could be automated?

Security Operations Centers (SOCs) are where people interpret security events and react. Over the last few years, much of the improvement in SOCs has come from AI and machine learning reducing the headcount required to operate a SOC. What if security operations were automated? Could code be generated based on security triggers and provided to the developer for review and incorporation into the next release?

We talk about infrastructure as code, where data can be used to generate rules and infrastructure through automation. Obviously, on AWS you can deploy security-tool AMIs, Security Groups, and NACLs with CloudFormation. My thoughts go to firewall AMIs and appliances for external access. The access lists those appliances require are complex and demand enormous review and processing within an organization. Could access lists be constructed from a mapping of the code and generated automatically for review? Could a generated access list be compared against existing access lists to detect duplicates?
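As a rough sketch of the duplicate-detection idea, here is one way it could look using the AWS CLI and standard shell tools. The security group ID and the proposed-rules.txt file are placeholders for illustration; in practice the proposed rules would be generated from a mapping of the application code.

# Placeholder security group ID for illustration
GROUP_ID=sg-0123456789abcdef0

# Dump the group's current ingress rules as "protocol port cidr" lines
aws ec2 describe-security-groups --group-ids "$GROUP_ID" \
  --query 'SecurityGroups[0].IpPermissions[].[IpProtocol,FromPort,IpRanges[0].CidrIp]' \
  --output text | sort > existing-rules.txt

# proposed-rules.txt holds the rules generated from the application mapping
sort proposed-rules.txt > proposed-sorted.txt

# Rules in both files already exist; rules only in the proposed file need review
comm -12 existing-rules.txt proposed-sorted.txt
comm -13 existing-rules.txt proposed-sorted.txt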

It's definitely an interesting topic, and hopefully it will evolve over the next few years.

AWS Releases T3 Instances

Earlier today AWS released T3 instances. There are a bunch of press releases about the topic. The performance is supposed to be 30% better than T2. Hopefully, in the next few days, independently published benchmarks will be released to confirm whether the instances are 30% faster. In the interim, go to the Amazon pages for all the details on T3 instances. The cost is also slightly lower. For example, a no-upfront reserved instance went from .17 cents to .15 cents when moving from a t2.small to a t3.small in the us-west-2 region.

Before today awsarch.io ran on T2 instances; to build this blog article it was updated to T3 instances. AWS makes it easy to change instance type: just shut down the instance, and from the AWS console go to Instance Settings -> Change Instance Type, then select the appropriate T3 instance. It can be done via the AWS CLI as well.

Change Instance
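For reference, here is a minimal sketch of the CLI path mentioned above (the instance ID is a placeholder):

# Stop the instance and wait for it to reach the stopped state
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type while the instance is stopped
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t3.small\"}"

# Start it back up on the new instance type
aws ec2 start-instances --instance-ids i-0123456789abcdef0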

T3 instances are EBS optimized by default, and EBS optimization on T3 provides additional dedicated IOPS and throughput to your EBS volumes. Here is the link for the complete EBS optimization information.

T3 EBS Optimized

T3 instances use the Elastic Network Adapter (ENA), so before starting your instance enable ENA support through the AWS command line (substituting your instance ID):

aws ec2 modify-instance-attribute --instance-id <instance-id> --ena-support
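If you want to double-check before starting the instance, a quick query like the following should report whether ENA support is enabled (again, the instance ID is a placeholder):

# Returns true when ENA support is enabled on the instance
aws ec2 describe-instances --instance-ids <instance-id> --query 'Reservations[].Instances[].EnaSupport'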

Lastly, I noticed the device names changed. The EBS volume devices in the Linux /dev directory are different: before the change to T3 they were /dev/xvdf1, /dev/xvdf2, etc. Now the devices are /dev/nvme1n1p1, /dev/nvme1n1p2, etc. Something to keep in mind if you have additional volumes with mount points on the EC2 instance.
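One way to avoid chasing device renames is to mount additional volumes by filesystem UUID instead of device name. A rough sketch (the UUID and mount point below are made up for illustration):

# Print the filesystem UUID of the attached volume
sudo blkid /dev/nvme1n1p1

# Example /etc/fstab entry using the UUID instead of /dev/xvdf1 or /dev/nvme1n1p1:
# UUID=0a1b2c3d-0000-1111-2222-333344445555  /data  ext4  defaults,nofail  0  2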

AWS Logging Solution

Amazon generates a lot of logs via VPC Flow Logs, CloudTrail, S3 access logs, and CloudWatch (see the end of the blog article for a full list). Additionally, there are OS, application, and web server logs. That is a lot of data which provides valuable insight into your running AWS environment. What are you doing to manage these log files? What are you doing with them? How are you analyzing them?

There are a lot of logging solutions available that integrate with AWS. Honestly, I'm a big fan of Splunk and have set it up multiple times. However, I wanted to look at something else for this blog article, something open source and relatively low cost. This blog article is going to explain what I did to set up Graylog. Graylog itself is free, but you will get charged for the instance, Kinesis, SQS, and data storage. It's actually a good exercise to familiarize yourself with AWS services, especially for the SysOps exam.

Graylog provides great instructions. I followed the steps; remember to use their prebuilt image, which is based on Ubuntu. One difference with my setup: I didn't use a 4GB memory system. I picked a t2.small, which provides 1 vCPU and 2GB of memory, and I didn't notice performance issues. Remember to allow ports 443 and 9000 in your security groups and network ACLs. I prefer to run this over HTTPS; it bugs me when the browser shows NOT SECURE over HTTP. So I installed an SSL certificate, and this is how I did it:

  1. Create a DNS name 
  2. Get a free certificate 
  3. Install the Certificate as such 
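As a rough illustration of steps 2 and 3 using Let's Encrypt's certbot (the domain name is a placeholder, and this assumes certbot is installed on the instance and port 80 is reachable):

# Request a free Let's Encrypt certificate in standalone mode
sudo certbot certonly --standalone -d graylog.example.com

# Point the web server or Graylog HTTPS configuration at the resulting files:
#   /etc/letsencrypt/live/graylog.example.com/fullchain.pem
#   /etc/letsencrypt/live/graylog.example.com/privkey.pem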

Now my instance is up, and I can log into the console. I want to get my AWS logs into Graylog. Doing this requires the logs to be sent to Kinesis or SQS. I am not going to explain the SQS setup, as there are plenty of resources for that specific AWS service. Also, the Graylog plugin describes how to do this. The Graylog plugin for CloudTrail, CloudWatch, and VPC Flow Logs is available on GitHub as the Graylog Plugin for AWS.
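As an example of the first hop, here is a sketch of turning on VPC Flow Logs delivery to a CloudWatch Logs group, which can then be subscribed to a Kinesis stream for Graylog to consume. The VPC ID, role ARN, and log group name are placeholders:

# Create a log group to receive the flow logs
aws logs create-log-group --log-group-name vpc-flow-logs

# Enable flow logs for the VPC, delivered to that log group
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role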

What about access logs? Graylog has the Graylog Collector Sidecar. I'm not going to rehash the installation here, as Graylog provides great installation instructions and documentation. If you are looking for something not covered here, it will be in the documentation or in their GitHub project.

What are you using as your log collection and processing service on Amazon?

List of AWS services generating logs:

  1. Amazon S3 access logs
  2. Amazon CloudFront access logs
  3. Elastic Load Balancer (ELB) logs
  4. Amazon Relational Database Service (RDS) logs
  5. Amazon Elastic MapReduce (EMR) logs
  6. Amazon Redshift logs
  7. AWS Elastic Beanstalk logs
  8. AWS OpsWorks logs
  9. AWS Import/Export logs
  10. AWS Data Pipeline logs
  11. AWS CloudTrail logs
