Category Archives: Cloud

What Have You Containerized Today?

I was listening to the Architech podcast. A question was asked, “Does everything today tie back to Kubernetes?” The more general version of the question is, “Does everything today tie back to containers?” The answer is quickly becoming yes. Google figured this out years ago by containerizing everything in its environment, and that approach is now becoming mainstream.

To support this, Amazon now has three different container technologies, with one more in the works.

ECS is Amazon’s first container offering: a container orchestration service that supports Docker containers.

Fargate is the managed offering of ECS, where all you do is deploy Docker images and AWS handles the full management. More exciting, Fargate for EKS has been announced and is pending release; this will be fully managed Kubernetes.

EKS is the latest offering, which went GA in June. It provides a fully managed control plane for Kubernetes. The worker nodes are EC2 instances you manage, which can run the Amazon Linux AMI or one you create.
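
To give a feel for the API, here is a minimal sketch of creating the managed control plane with boto3; the role ARN, subnet IDs, and security group below are placeholder values, not a working configuration.

```python
# A minimal sketch of standing up an EKS control plane with boto3.
# The role ARN, subnets, and security group are placeholders you
# would replace with values from your own account.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create the managed control plane; AWS runs the Kubernetes masters.
response = eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
        "securityGroupIds": ["sg-cccc3333"],                  # placeholder
    },
)
print(response["cluster"]["status"])  # e.g. CREATING

# Once the cluster reports ACTIVE, you join worker nodes: EC2
# instances running an EKS-optimized Amazon Linux AMI (or your own).
```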

Lately, I’ve been exploring EKS, so the next blog article will be how to get started on EKS.

In the meantime, what have you containerized today?

Data-safe Cloud...

Amazon recently released a presentation on Data-safe Cloud. It appears to be based on a Gartner survey and other data AWS collected. The presentation discusses six core benefits of a secure cloud.

  1. Inherit Strong Security and Compliance Controls
  2. Scale with Enhanced Visibility and Control
  3. Protect Your Privacy and Data
  4. Find Trusted Security Partners and Solutions
  5. Use Automation to Improve Security and Save Time
  6. Continually Improve with Security Features

I find this marketing material to be confusing at best; let’s analyze what it is saying.

For point 1, Inherit Strong Security and Compliance Controls references all the compliance certifications AWS achieves. However, it loses track of the shared responsibility model, which isn’t even mentioned until page 16. Amazon has exceptional compliance in place, which most data center operators or SaaS providers struggle to achieve. That does not mean my data or services running within the Amazon environment meet those compliance standards.

Points 2, 4, and 6 are not benefits of a secure cloud. They might be high-level objectives one uses to form a strategy for getting to a secure cloud.

Point 3 I don’t even understand; protecting privacy and data has to be the number one concern when building out workloads in the cloud or in private data centers. It’s not a benefit of a secure cloud but a requirement.

For point 5, I am a big fan of automation and automating everything. Again, this is not a benefit of a secure cloud; rather, a repeatable, secure process wrapped in automation is how you get to a secure cloud. A small example of what I mean follows.
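
As one illustration (my own sketch, not anything from the presentation): a scheduled script that flags S3 buckets whose ACLs grant access to all users turns a one-time audit into a repeatable, automated check. It assumes credentials that can list buckets and read their ACLs.

```python
# A sketch of "security wrapped in automation": scan every S3 bucket
# in the account and flag any whose ACL grants access to AllUsers.
# Assumes credentials with s3:ListAllMyBuckets and s3:GetBucketAcl.
import boto3

PUBLIC_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    public_grants = [
        g for g in acl["Grants"]
        if g["Grantee"].get("URI") == PUBLIC_URI
    ]
    if public_grants:
        # Candidate for remediation; a real pipeline would alert or fix.
        print(f"PUBLIC: {bucket['Name']}")
```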

Given the discussions around cloud and security, and all the negative press, including the recent GoDaddy AWS S3 bucket exposure, Amazon should be publishing better content to move the security discussion forward.

Starting a new position today

Starting a new position today as Consultant - Cloud Architect with Taos. Super excited for this opportunity.

I wanted a position as a solution architect working with the Cloud, so I couldn’t be more thrilled with the role.   I am looking forward to helping Taos customers adopt the cloud and a Cloud First Strategy.

It’s an amazing journey for me, as Taos was the first to offer me a Unix system administrator position when I graduated from Penn State some 18 years ago; I passed on the offer and went to work for IBM.

I am really looking forward to working with the great people at Taos.

My Favorite Things About the AWS Well-Architected Framework

Amazon released the AWS Well-Architected Framework to help customers architect solutions within AWS. The Amazon certifications require detailed knowledge of the five white papers which make up the Well-Architected Framework. Given I have recently completed six Amazon certifications, I decided to write a blog post pulling my favorite lines from each paper.

Operational excellence pillar: The whitepaper says on page 15, “When things fail you will want to ensure that your team, as well as your larger engineering community, learns from those failures.” It doesn’t say “if things fail”; it says “when things fail,” implying straight away that things are going to fail.

Security pillar: On page 18, “Data classification provides a way to categorize organizational data based on levels of sensitivity. This includes understanding what data types are available, where is the data located and access levels and protection of the data”. This, to me, sums up how security needs to be defined. Modern data security is not about firewalls, a hard outer shell, or malware detectors. It’s about protecting the data, based on its classification, from both internal actors (employees, contractors, vendors) and hostile actors.

Reliability pillar: The document is 45 pages long; the word “failure” appears 100 times and the word “fail” 33 times. The document is really about how to architect an AWS environment to respond to failure, and which portions of your environment, based on business requirements, should be over-engineered to withstand multiple failures.

Performance efficiency pillar: Page 24 has the line, “When architectures perform badly this is normally because of a performance review process has not been put into place or is broken”. When I first read this line, I was perplexed. I immediately thought it implies a bad architecture can perform well if there is a performance review in place. Then I thought: when has a bad architecture ever performed well under load? Now I get the point it is trying to make.

Cost optimization pillar: On page 2 is my favorite line from this white paper: “A cost-optimized system will fully utilize all resources, achieve an outcome at the lowest possible price point, and meet your functional requirements.” It made me immediately think back to before the cloud, when every solution had to factor in growth over the life of the hardware as part of the requirements. In the cloud, you only need to support today’s capacity; if you need more capacity tomorrow, you just scale. This is one of the biggest benefits of cloud computing: no more guessing about capacity.
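
To make the contrast concrete, here is a minimal sketch of scaling an existing EC2 Auto Scaling group via boto3; the group name is a placeholder, and the numbers are illustrative.

```python
# A minimal sketch of "capacity today, scale tomorrow": adjust an
# existing Auto Scaling group instead of buying hardware ahead for
# growth. "web-asg" is a placeholder group name.
import boto3

autoscaling = boto3.client("autoscaling")

# Raise the ceiling and desired capacity when demand grows...
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # placeholder
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=6,
)

# ...and shrink it back when demand falls, paying only for what runs.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=2,
)
```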

The Promises of Enterprise Data Warehouses Fulfilled with Big Data

Remember back in the 1990s/2000s when Data Warehouses were all the rage? The idea was to take data from all the transactional databases behind the multiple e-Commerce, CRM, financial, lead generation, and ERP systems deployed in the company and merge them into one data platform. It was the dream; CIOs were ponying up big dollars because they thought it would solve finance, sales, and marketing’s most significant problems. It was even termed the Enterprise Data Warehouse, or EDW. The new EDW would take 18 months to deploy, as ETLs were written from the various systems and data had to be normalized to work within the EDW. In some cases, the team made bad decisions about how to normalize the data, causing all types of future issues. When the project finished, there would be this beautiful new data warehouse, and no one would be using it. The EDW needed a report writer to make fancy reports in a specialized tool like Cognos, Crystal Reports, Hyperion, SAS, etc. A meeting would be called to discuss the data, and all 12 people in the room would have different reports and numbers, depending on the formulas in each report. Eventually, someone from Finance, part of the analysis, budgeting, and forecasting group, would learn the tool, become the go-to person, and work with the team from technology assigned to create reports.

Then Big Data came along. “Big Data” even sounds better than “Enterprise Data Warehouse,” and frankly, given the issues back in the 1990s/2000s, the Big Data branding doesn’t carry the same negative connotations.

Big Data isn’t a silver bullet, but it does a lot of things right. First and foremost, the data doesn’t require normalization; normalization is actually discouraged. Big Data absorbs transactional database data, social feeds, eCommerce analytics, IoT sensor data, and a whole host of other data, and puts it all in one repository. The person from Finance has been replaced with a team of highly trained data scientists who develop analysis models and extract data with statistical tools (the R programming language) and Natural Language Processing (NLP). The data scientists spend days poring over the data, extracting information, building models, rebuilding models, and looking for patterns. The data could be text, voice, video, images, social feeds, or transaction data, and the data scientist is looking for something interesting.

Big Data has huge impacts, and the benefits are immense. My favorite, however, is predictive analytics. Predictive analytics tells you something’s future behavior based on its history and current data; it predicts the future. Predictive analytics is all over retail, where you see it on sites as “Other Customers Bought” or in purchase recommendations based on your history. Airlines use it to predict component failures in planes. Investors use it to predict changes in stocks, and the list of industries using it goes on and on.

The cloud is a huge player in the Big Data space: Amazon, Google, and Azure all offer Hadoop and Spark as services. The best part is that when data arrives in gigabytes or terabytes, the cloud provides the storage space for all of it. Lastly, since it’s in the cloud, it’s relatively easy to deploy a Big Data cluster, and hopefully, AI in the cloud will soon replace the data scientists as well.
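
As a rough sketch of how little effort deployment takes, here is a Spark cluster launched on Amazon EMR with boto3; the release label, instance types, and counts are illustrative rather than a tuned configuration.

```python
# A rough sketch of "Big Data as a service": launching a Spark cluster
# on Amazon EMR with boto3. Release label, instance types, and counts
# are illustrative, not a tuned configuration.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="spark-analytics",            # placeholder name
    ReleaseLabel="emr-5.16.0",         # an EMR release that bundles Spark
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        # Keep the cluster up for interactive Spark work.
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # default EMR roles, assumed to exist
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])  # cluster id, e.g. j-XXXXXXXXXXXXX
```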

To The Cloud and Beyond...

I was having a conversation with an old colleague late Friday afternoon. (Friday was a day of former colleagues; I also had lunch with a great mentor.) He’s responsible for infrastructure and operations at a good-sized company. His team is embarking on a project to migrate to the cloud, as their contract for data center space will be up in 2020. Three things in the discussion struck me as interesting, and they are probably the same issues others face on their journey to the cloud.

The first was the concern about security. The cloud is no more or less secure than your data center. If your data center is private, your cloud assets can be private; if you need public-facing services, they would be secured like the public-facing services in your own data center. Data security is your responsibility in the cloud, but the cloud doesn’t make your data any less secure.

The other concern was the movement of VMware images to the cloud. Most of the environment was virtualized years ago. However, there are a lot of Windows 2003 and 2008 servers. Windows 2008 end of support is in 2020, and Windows 2003 has been out of support since July 2015. The concern about security is odd, given the age of the Windows environment. If it were my world, I’d probably figure out how to move those servers to Windows 2016 or retire the ones no longer needed, keeping in mind OS upgrades are always dependent on the applications. Right or wrong, my roadmap would leave Windows 2003 and 2008 in whatever data center facility is left behind.

Lastly, there was concern about serverless and the application teams wanting to leverage it over his group’s infrastructure services. There was real concern about a loss of resources if the application teams turn toward serverless, as his organization would have fewer servers (physical or virtual instances) to support. Like many technology shops, infrastructure and operations headcount is formulated from the total number of servers. I find this hugely exciting. I would push resources from “keeping the lights on” to roles focused on growing the business and speed to market, which are the most significant benefits of serverless. Based on this discussion, people look at it through their own prism.

Power of Digital Note Taking

There are hundreds of note-taking apps. My favorites are Evernote, GoodNotes, and Quip. I’m not going to get into the benefits or the pros and cons of each application; there are plenty of blogs and YouTube videos which do this in great detail. Here is how I use them:

  • Evernote is my document and note repository.

  • GoodNotes is for taking handwritten notes on my iPad, and the PDFs are loaded into Evernote.

  • Quip is for team collaboration and sharing notes and documents.

I’ve been digital for 4+ years. Today, I read an ebook from Microsoft entitled “The Innovator’s Guide to Modern Note Taking.” I was curious about Microsoft’s ideas on digital note-taking. The ebook is worth a read. I found three big takeaways:

First - The ebook quotes that the “average employee spends 76 hours a year looking for misplaced notes, items, and files. In other words, we spend an annual $177 billion across the U.S”.

Second - The ebook explains that the left side of the brain is used when typing on a keyboard, and the right side when writing notes by hand. The left side of the brain is more clinical, and the right side is more creative, particularly for asking the “what if” questions. Page 12 of the ebook also covers how handwriting notes improves retention. Lastly, page 13 has one of my favorites, as I am a doodler: “Doodlers recall on average 29% more information than non-doodlers”. There is a substantial difference between typing and handwriting notes, and there is a great blog article from NPR if you want to learn more.

Third - Leverage the cloud, whether it’s to share, to process, or to access your notes anywhere.

Those are fundamentally the three reasons I went all digital for notes. As described above, I write notes in GoodNotes and put them in Evernote, and I use Evernote’s OCR on the PDFs to search them. My workflow covers the main points described above. Makes me think I might be ahead of a coming trend.

Multi-cloud environments are going to be the most important technology investment in 2018/2019

I believe that multi-cloud environments are going to be the most important technology investment in 2018/2019. This will drive education and new skill development among various technology workers. Apparently, it’s not just me; IDC predicts that “More than 85% of Enterprise IT Organizations Will Commit to Multicloud Architectures by 2018, Driving up the Rate and Pace of Change in IT Organizations”. There are some great resources online covering multi-cloud strategy and benefits, all worth reading.

The list could run to hundreds of articles; I wanted to provide a few that I thought were interesting and relevant to this discussion of why multi-cloud. There are four drivers behind this trend:

First - Containers allow you to deploy your application anywhere, and all the major cloud players have Kubernetes and Docker support. This means you could deploy to AWS, Azure, and Google without rewriting any code. Application support, development, and maintenance are what drive technology dollars. Maintaining one set of code that runs anywhere doesn’t cost any more and gives you complete autonomy.

Second - Companies like Joyent, Netlify, HashiCorp (with Terraform), and many more are building their solutions for multi-cloud, giving you the control, manageability, ease of use, etc. Technology is like Field of Dreams: “if you build it they will come.” Very few large companies jump into something without support; they wait for some level of maturity to develop and then wade in slowly.

Third - The biggest reason is a lack of trust in putting all your technology assets into one company. Most companies had multi-data-center strategies for years, using a combination of self-built facilities, multiple providers like Wipro, IBM, HP, Digital Realty Trust, etc., and various co-location. For big companies, when the cloud became popular, the question was how to augment the existing environment with the cloud. Now many companies are applying a Cloud First strategy. So why wouldn’t principles that were applied for decades in technology be applied to the cloud? Everyone remembers the saying: don’t put all your eggs in one basket. I understand there are regions, multi-AZ, resiliency, and redundancy, but at the end of the day one cloud provider is one cloud provider, and all my technology eggs are in that one basket.

Fourth - The last reason is pricing. If you can move your entire workload from Amazon to Google within minutes, it forces cloud vendors to keep costs low, as cloud services charge for what you use. I understand that a workload with petabytes of data behind it is not going to move. But web services with small data behind them can move, and relatively quickly, with the right deployment tools in place.

What do you think? Leave me a comment with your feedback or ideas.

SaaS-based CI/CD

Let’s start with some basics of software development. It still seems that no matter which software development lifecycle methodology is followed, it includes some level of definition, development, QA, UAT, and production release. Somewhere in the process, there is a merge of multiple items into a release. This still means your release to production could be monolithic.

The mighty big players like Google, Facebook, and Netflix (click any of them to see their development process) have revolutionized the concepts of Continuous Integration (CI) and Continuous Deployment (CD).

I want to question the future of CI/CD: instead of consolidating a release, why not release a single item into production, validate it over a defined period of time, and then push the next release? This entire process would happen automatically based on a queue (FIFO) system.

Taking it to the world of corporate IT and SaaS platforms, I’m really thinking about software like Salesforce Commerce Cloud or Oracle’s NetSuite. I would want the SaaS platform to provide me this FIFO system to load my user code updates. The system would push and update the existing code while continuing to handle requests, and users wouldn’t see discrepancies. Some validation would happen, the code would activate, and a timer would start on the next release. If validation failed, the code could be rolled back automatically or manually.
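
As a toy sketch of the queue-driven loop I’m describing (the deploy, validate, and rollback hooks are hypothetical stand-ins for what the SaaS platform would provide):

```python
# A toy sketch of the FIFO release idea described above: pop one
# release at a time, deploy it, validate over a soak period, then
# either promote it or roll back. deploy/validate/rollback are
# hypothetical hooks a SaaS platform would supply.
import time
from collections import deque

SOAK_SECONDS = 600  # validation window before the next release


def process_releases(queue: deque, deploy, validate, rollback):
    while queue:
        release = queue.popleft()      # strictly FIFO: one item at a time
        deploy(release)                # push a single item to production
        time.sleep(SOAK_SECONDS)       # let it soak under real traffic
        if validate(release):          # automated health checks
            print(f"{release} promoted")
        else:
            rollback(release)          # automatic rollback on failure
            print(f"{release} rolled back")
```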

Could this be a reality?
