Archive of posts from July 2018

Starting a new position today

Starting a new position today as Consultant - Cloud Architect with Taos. Super excited for this opportunity.

I wanted a position as a solution architect working with the Cloud, so I couldn’t be more thrilled with the role.   I am looking forward to helping Taos customers adopt the cloud and a Cloud First Strategy.

It's an amazing twist in my journey: Taos was the first company to offer me a position, a Unix system administrator role, when I graduated from Penn State some 18 years ago, and I passed on the offer to go work for IBM.

I am really looking forward to working with the great people at Taos.

My Favorite Things About the AWS Well-Architected Framework

Amazon released the AWS Well-Architected Framework to help customers architect solutions within AWS. The Amazon certifications require detailed knowledge of the five whitepapers which make up the Well-Architected Framework. Given that I have recently completed six Amazon certifications, I decided to write a blog post pulling my favorite line from each paper.

Operational Excellence Pillar - The whitepaper says on page 15, “When things fail you will want to ensure that your team, as well as your larger engineering community, learns from those failures.” It doesn't say “if things fail”; it says “when things fail,” implying straight away that things are going to fail.

Security Pillar - On page 18: “Data classification provides a way to categorize organizational data based on levels of sensitivity. This includes understanding what data types are available, where is the data located and access levels and protection of the data”. This, to me, sums up how security needs to be defined. Modern data security is not about firewalls, a hard outer shell, or malware detectors. It is about protecting the data, based on its classification, from both internal actors (employees, contractors, vendors) and hostile actors.

Reliability Pillar - The document is 45 pages long; the word “failure” appears 100 times and the word “fail” appears 33 times. The document is really about how to architect an AWS environment to respond to failure, and which portions of your environment, based on business requirements, should be over-engineered to withstand multiple failures.

Performance Efficiency Pillar - Page 24 has the line, “When architectures perform badly this is normally because of a performance review process has not been put into place or is broken”. When I first read this line, I was perplexed. I immediately thought it implies a bad architecture can perform well if a performance review is in place, and then I thought, when has a bad architecture ever performed well under load? Now I get the point it is trying to make: without a review process, performance problems go undetected until they show up under load.

Cost Optimization Pillar - On page 2 is my favorite line from this whitepaper: “A cost-optimized system will fully utilize all resources, achieve an outcome at the lowest possible price point, and meet your functional requirements.” It made me immediately think back to before the cloud, when every solution had to factor in growth over the life of the hardware; it was part of the requirements. In the cloud you only need to support today's capacity, and if you need more capacity tomorrow, you just scale. This is one of the biggest benefits of cloud computing: no more guessing about capacity.
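To make the “just scale” idea concrete, here is a minimal, hedged sketch using boto3; the group name, launch template, and subnet IDs are placeholders I made up, not anything from the whitepaper. It sizes an Auto Scaling group for today's load and lets a target-tracking policy adjust capacity as demand changes.

```python
# Hypothetical sketch: capacity follows demand instead of a multi-year growth guess.
# The names, launch template, and subnet IDs are made-up placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

# Create a small Auto Scaling group sized for today's load.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# Scale out and in automatically to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```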

The Promises of Enterprise Data Warehouses Fulfilled with Big Data

Remember back in the 1990s/2000s, when data warehouses were all the rage? The idea was to take the data from all the transactional databases behind the multiple e-Commerce, CRM, financial, lead generation, and ERP systems deployed in a company and merge it into one data platform. It was the dream: CIOs were ponying up big dollars because they thought it would solve the most significant problems in finance, sales, and marketing. It was even termed the Enterprise Data Warehouse, or EDW. The new EDW would take 18 months to deploy, as ETLs had to be written from the various systems and the data had to be normalized to work within the EDW. In some cases, the team made bad decisions about how to normalize the data, causing all kinds of issues later. When the project finished, there would be this beautiful new data warehouse, and no one would be using it. The EDW needed a report writer to build fancy reports in a specialized tool like Cognos, Crystal Reports, Hyperion, SAS, etc. A meeting would be called to discuss the data, and all 12 people in the room would have different reports and numbers depending on the formulas in their reports. Eventually, someone from Finance who was part of the analysis, budgeting, and forecasting group would learn the tool, become the go-to person, and work with the technology team assigned to create reports.

Then Big Data came along. “Big Data” even sounds better than “Enterprise Data Warehouse,” and frankly, given the issues back in the 1990s/2000s, the Big Data branding doesn't carry the same negative connotations.

Big Data isn't a silver bullet, but it does a lot of things right. First and foremost, the data doesn't require normalization; in fact, normalization is discouraged. Big Data absorbs transactional database data, social feeds, eCommerce analytics, IoT sensor data, and a whole host of other data and puts it all in one repository. The person from Finance has been replaced with a team of highly trained data scientists who develop analysis models and extract information using statistics (the R programming language) and Natural Language Processing (NLP). The data scientists spend days poring over the data, extracting information, building models, rebuilding models, and looking for patterns. The data could be text, voice, video, images, social feeds, or transaction data, and the data scientist is looking for something interesting.

Big Data has a huge impact, and the benefits are immense. My favorite, however, is predictive analytics. Predictive analytics tells you how something will behave based on its history and current data; it predicts the future. Predictive analytics is all over retail, where you see it on sites as “Other Customers Bought” or in purchase recommendations based on your history. Airlines use it to predict component failures on planes, investors use it to predict changes in stocks, and the list of industries using it goes on and on.

The cloud is a huge player in the Big Data space: Amazon, Google, and Azure all offer Hadoop and Spark as services. The best thing about the cloud is that when data is ingested by the gigabyte or terabyte, the cloud provides the storage for all of it. Lastly, because it's in the cloud, it's relatively easy to deploy a Big Data cluster, and hopefully, soon, AI in the cloud will replace the data scientists as well.
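As an illustration of how little it takes to stand up a cluster, here is a hedged boto3 sketch that asks Amazon EMR for a small Spark and Hadoop cluster; the region, instance types, release label, and log bucket are assumptions of mine, not values from the post.

```python
# Hypothetical sketch: launch a small Spark/Hadoop cluster on Amazon EMR.
# Region, sizes, release label, and bucket name are made-up placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="big-data-sandbox",
    ReleaseLabel="emr-5.16.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://my-emr-logs-bucket/",
    JobFlowRole="EMR_EC2_DefaultRole",  # default EC2 instance profile
    ServiceRole="EMR_DefaultRole",      # default EMR service role
)

print("Cluster ID:", response["JobFlowId"])
```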

BGP Route Reflectors

Studying for the CCNP 300-101 ROUTE exam, I noticed there is no discussion of Border Gateway Protocol (BGP) route reflectors; they don't even make the exam blueprint. BGP route reflectors are one of the most important elements of multi-homed, multi-location BGP. This blog post is not going to be a lesson in BGP, as there are plenty of resources that do a great job explaining the topic. Within an autonomous system (AS), if there are multiple BGP routers, an iBGP full mesh is required. That's a fancy way of saying all the BGP routers within an AS need to peer with each other. Let's take the example of a large company with Internet peering in New York, Atlanta, and San Francisco. If the company uses a single AS number, that means it has at least 3 BGP routers, and for business reasons the routers are doubled up and dual-homed. That makes 6 BGP routers. Remember, the formula for a full mesh is N(N-1)/2, so 6 routers require 15 iBGP peering sessions. iBGP makes a logical connection over TCP, but it still means 15 configurations. This is a small example, and it doesn't scale: increase to 10 routers and that means 45 iBGP sessions and configurations.

What does a route reflector do?

A route reflector readvertises routes learned from internal peers to other internal peers. Only the route reflector needs a full mesh with its internal routers. The elegance of this solution is that it makes iBGP hierarchical.

In the previous example of 6 routers, there are many ways to organize the network with route reflectors: one cluster with two route reflectors, two clusters with two route reflectors each, etc.
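To make the scaling math concrete, here is a small Python sketch of the session counts; the route reflector figure assumes a single cluster where every client peers with every reflector and the reflectors also peer with each other.

```python
# Worked example of the iBGP session math described above.
def full_mesh_sessions(n):
    """Full-mesh formula: N(N-1)/2."""
    return n * (n - 1) // 2

def route_reflector_sessions(clients, reflectors):
    """Single-cluster assumption: each client peers with every reflector,
    and the reflectors keep a full mesh among themselves."""
    return clients * reflectors + full_mesh_sessions(reflectors)

print(full_mesh_sessions(6))            # 15 sessions for 6 routers
print(full_mesh_sessions(10))           # 45 sessions for 10 routers
print(route_reflector_sessions(4, 2))   # 9 sessions: 4 clients x 2 RRs + 1 RR-RR
```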

The astonishing part is that something so fundamental to leveraging BGP is not covered on the CCNP ROUTE exam, according to the exam blueprint.

Exhaustion of IPv4 and IPv6

IPv4 exhaustion is technology's version of Chicken Little and “the sky is falling.” The sky has been falling on this for 20+ years; we have been warned that IPv4 is running out since the late 1990s. Then came IoT, including the smart home, which was supposed to strain the IPv4 space. I don't know about you, but I don't want my refrigerator and smart thermostat on the internet.

However, every time I go into AWS, I can generate an IPv4 address. Home ISPs are still handing out static IPv4 addresses if you are willing to pay a monthly fee. Enterprise ISPs will hand you a /28 or /29 block without too much effort. Sure, lots of companies, AWS, Google, Microsoft, have properties on IPv6, but it's not widely adopted. The original RFC on IPv6 was published in December of 1995.

I believe the lack of adoption is due to the complexity of the addresses. My refrigerator's IPv4 address might be 192.168.0.33, while its IPv6 address is 2001:AAB4:0000:0000:0000:0000:1010:FE01, which can be shortened to 2001:AAB4::1010:FE01. Imagine calling that into tech support, or being the tech support person taking that call. Why didn't the inventors of IPv6 simply add octets to the existing IP address? For instance, an address like 192.168.0.33.5.101.49 would have been so much more elegant and easier to understand. I think it will take another 15-20 years before IPv6 is widely adopted and another 50 years before IPv4 is no longer routed within networks.
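The shortening rules are easy to demonstrate with Python's standard ipaddress module; this is just an illustration of the zero compression described above.

```python
# Show the long and short forms of the IPv6 address from the example above.
import ipaddress

addr = ipaddress.ip_address("2001:AAB4:0000:0000:0000:0000:1010:FE01")

print(addr.compressed)  # 2001:aab4::1010:fe01  (runs of zeros collapsed to ::)
print(addr.exploded)    # 2001:aab4:0000:0000:0000:0000:1010:fe01
```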

To The Cloud and Beyond...

I was having a conversation with an old colleague late Friday afternoon. (Friday was a day of former colleagues; I also had lunch with a great mentor.) He's responsible for infrastructure and operations at a good-sized company, and his team is embarking on a project to migrate to the cloud, as their contract for data center space will be up in 2020. Three things in the discussion stood out to me, and they are probably the same issues others face on their journey to the cloud.

The first was the concern about security. The cloud is no more or less secure than your data center. If your data center is private, your cloud assets can be private; if you need public-facing services, they are secured just like the public-facing services in your own data center. Data security is your responsibility in the cloud, but the cloud doesn't make your data any less secure.

The second concern was the movement of VMware images to the cloud. Most of the environment was virtualized years ago; however, there are a lot of Windows 2003 and 2008 servers. Windows 2008 reaches end of support in 2020, and Windows 2003 has been out of support since July 2015. The concern about security is odd given the age of the Windows environment. If it were my world, I'd probably figure out how to move those servers to Windows 2016 or retire the ones no longer needed, keeping in mind that OS upgrades are always dependent on the applications. Right or wrong, my roadmap would leave Windows 2003 and 2008 in whatever data center facility is left behind.

Lastly, there was concern about serverless and the application teams wanting to leverage it over his group's infrastructure services. There was real worry about a loss of resources if the application teams turn toward serverless, as his organization would have fewer servers (physical and virtual instances) to support. Like many technology shops, infrastructure and operations headcount is formulated from the total number of servers. I find this hugely exciting: I would push resources from “keeping the lights on” to roles focused on growing the business and speed to market, which are the most significant benefits of serverless. Based on this discussion, though, people look at it through their own prism.

Power of Digital Note Taking

There are hundreds of note-taking apps. My favorites are Evernote, GoodNotes, and Quip. I'm not going to get into the pros and cons of each application; there are plenty of blogs and YouTube videos which do that in great detail. Here is how I use them:

  • Evernote is my document and note repository.

  • GoodNotes is for taking handwritten notes on my iPad, and the PDFs are loaded into Evernote.

  • Quip is for team collaboration and sharing notes and documents.

I've been digital for 4+ years. Today I read an ebook from Microsoft entitled “The Innovator's Guide to Modern Note Taking.” I was curious about Microsoft's ideas on digital note-taking, and the ebook is worth a read. I found three big takeaways:

First - The ebook quotes a statistic that the “average employee spends 76 hours a year looking for misplaced notes, items, and files. In other words, we spend annual $177 billion across the U.S”.

Second - The ebook explains that the left side of the brain is used when typing on a keyboard, and the right side when writing notes by hand. The left side of the brain is more clinical, and the right side more creative, particularly for asking the “what if” questions. Page 12 of the ebook also covers how handwriting notes improves retention. Lastly, page 13 has one of my favorites, as I am a doodler: “Doodlers recall on average 29% more information than non-doodlers”. There is a substantial difference between typing and writing notes, and there is a great article from NPR if you want to learn more.

Third - Leverage the cloud, whether it's to share, process, or access your notes anywhere.

Those are fundamentally the three reasons I went all digital for notes. As described above, I write notes in GoodNotes and put them into Evernote, and I use Evernote's OCR for PDFs to search them. My workflow covers the main points described above, which makes me think I might be ahead of a coming trend.

Multi-cloud environments are going to be the most important technology investment in 2018/2019

I believe that multi-cloud environments are going to be the most important technology investment in 2018/2019, and this will drive education and new skill development among technology workers. Apparently it's not just me; IDC predicts that “More than 85% of Enterprise IT Organizations Will Commit to Multicloud Architectures by 2018, Driving up the Rate and Pace of Change in IT Organizations”. There are some great resources online for multi-cloud strategy and benefits, all worth reading:

The list could run to hundreds of articles; I wanted to provide a few that I thought were interesting and relevant to this discussion of why multi-cloud. There are four drivers behind this trend:

First - Containers allow you to deploy your application anywhere, and all the major cloud players have Kubernetes and Docker support. This means you could deploy to AWS, Azure, and Google without rewriting any code. Application support, development, and maintenance are what drive technology dollars; maintaining one set of code that runs anywhere doesn't cost any more and gives you complete autonomy.

Second - Companies like Joyent, Netlify, HashiCorp (Terraform), and many more are building their solutions for multi-cloud, providing control, manageability, ease of use, etc. Technology is like Field of Dreams: “if you build it they will come.” Very few large companies jump into something without support; they wait for some level of maturity to develop and then wade in slowly.

Third - The biggest reason is a lack of trust in putting all your technology assets into one company. For years most companies had multi-data-center strategies, using a combination of self-built facilities, multiple providers like Wipro, IBM, HP, Digital Realty Trust, etc., and various co-location sites. For big companies, when the cloud became popular the question was how to augment the existing environment with cloud; now many companies are applying a Cloud First strategy. So why wouldn't principles that were applied for decades in technology be applied to the cloud? Everyone remembers the saying: don't put all your eggs in one basket. I understand there are regions, multi-AZ, resiliency, and redundancy, but at the end of the day one cloud provider is one cloud provider, and all my technology eggs are in that one basket.

Fourth - The last reason is pricing. If you can move your entire workload from Amazon to Google within minutes, it forces cloud vendors to keep costs low, since cloud services charge for what you use. I understand that a workload with petabytes of data behind it is not going to move, but web services with small data sets behind them can move, and relatively quickly, with the right deployment tools in place.

What do you think? Leave me a comment with your feedback or ideas.

AWS Certified Security – Specialty


I sat the AWS Certified Security - Specialty exam this morning. The exam is hard, as it is scenario based. Most of the questions asked you to pick the best security scenario; it could be renamed Certified Architect - Security. Every one of those questions had two good answers, and it came down to which was more correct and more secure. It's the hardest exam I've taken to date; I think it is harder than the Solutions Architect - Professional exam. The majority of the questions were on KMS, IAM, securing S3, CloudTrail, CloudWatch, multiple AWS account access, Config, VPC, security groups, NACLs, and WAF.

I did the course on acloud.guru, and I think the whitepapers and links really helped me in studying for this exam:

The exam took me about half the allocated time; I read fast and tend to flag questions I don't know the answer to, then come back later and work through them. On this exam I flagged 20 questions, the highest of any AWS exam I've taken to date. Most of them I could figure out once I thought about them for a while. Throughout the exam, I was unsure whether I would pass or fail.

Upon submission, I got the “Congratulations! You have successfully completed the AWS Certified Security - Specialty exam…”

Unfortunately, I didn't get my score; I just got the email which says, “Thank you for taking the AWS Certified Security - Specialty exam. Within 5 business days of completing your exam,”

That now makes my 6th AWS Certification.


AWS Config, KMS and EBS encryption

If you have an AWS deployment, make sure you turn on AWS Config. It has a whole bunch of built-in rules, and you can add your own to validate the security of your AWS environment as it relates to AWS services. Amazon provides good documentation and a GitHub repo, and SumoLogic has a quick how-to for turning it on. It's straightforward to turn on and use. AWS provides some pre-configured rules, and that's what this AWS environment validates against; there is a screenshot of the results below. Aside from turning it on, you have to decide which rules are valid for you. For instance, not all S3 buckets have business requirements to replicate, so I'd expect that rule to always show a noncompliant resource. However, one of my findings yesterday was unencrypted EBS volumes. Encrypting EBS volumes takes 9 easy steps (a hedged boto3 sketch of the first few steps follows the list):

  1. Make a snapshot of the EBS Volumes.

  2. Copy the snapshot to a new snapshot, but select encryption. Use the AWS KMS key you prefer or the Amazon default aws/ebs.

  3. Create an AMI image from the encrypted Snapshot.

  4. Launch a new instance from the AMI created from the encrypted snapshot.

  5. Check the new instance is functioning correctly, and there are no issues.

  6. Update EIPs, load balancers, DNS, etc. to point to the new instance.

  7. Stop the old un-encrypted instances.

  8. Delete the un-encrypted snapshots.

  9. Terminate the old un-encrypted instances.

Remember, KMS gives you 20,000 requests per month for free; after that, the service is billable.
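Here is a hedged boto3 sketch of steps 1 through 3; the volume ID, region, AMI name, and root device name are placeholder assumptions, and the right block device mapping depends on your instance.

```python
# Hypothetical sketch of steps 1-3: snapshot, encrypted copy, AMI.
# Volume ID, region, key alias, and device names are made-up placeholders.
import boto3

region = "us-east-1"
ec2 = boto3.client("ec2", region_name=region)

# Step 1: snapshot the unencrypted volume.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="Pre-encryption snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Step 2: copy the snapshot with encryption, using the default aws/ebs key or your own.
copy = ec2.copy_snapshot(SourceRegion=region,
                         SourceSnapshotId=snap["SnapshotId"],
                         Encrypted=True,
                         KmsKeyId="alias/aws/ebs")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# Step 3: register an AMI backed by the encrypted snapshot.
image = ec2.register_image(
    Name="encrypted-root-ami",
    RootDeviceName="/dev/xvda",
    VirtualizationType="hvm",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": copy["SnapshotId"], "VolumeType": "gp2"},
    }],
)
print("Encrypted AMI:", image["ImageId"])
```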

Amazon Crashing on Prime Day

Amazon crashing on Prime Day made breaking news. It appears the company is having issues with the traffic load.

Given that Amazon has run on AWS since 2011, this is not a great sign for either Amazon or the scalability model they deployed on AWS.

Minimum Security Standards and Data Breaches

Why do agencies post minimum security standards?     The UK government recently released a minimum security standards document which all departments must meet or exceed.    The document is available here:  Minimum Cyber Security Standard.

The document is concise, short, and clear. It contains relevant items for decent security, covering the most common practices of the last 10 years. I'm not a UK citizen, but if agencies are protecting my data, why do they only have to meet minimum standards? If an insurer used the term, minimum standards would be the “lowest acceptable criteria that a risk must meet in order to be insured”. Do I really want the lowest acceptable criteria applied to the risk to my data and privacy?

Now that you know government agencies apply minimum standards, let's look at breach data. Breaches are becoming more common and more expensive, and this is confirmed by a report from the Ponemon Institute commissioned by IBM. The report states that the average breach costs $3.86 million, and the kicker is that there is a recurrence 27.8% of the time.

There are two other figures in this report that astound me:

  • The mean time to identify (MTTI) was 197 days

  • The mean time to contain (MTTC) was 69 days

That means that after a company is breached, it takes on average more than 6 months to identify the breach and another 2 months to contain it. The report goes on to say that 27% of breaches are due to human error and 25% are caused by a system glitch.

So put it together: someone or some system makes a mistake, and it takes 6 months to identify and 2 months to contain. Those numbers should scare every CISO, CIO, CTO, executive, and security architect, because the biggest security threats are the people and systems working for the company.

Maybe it’s time to move away from minimum standards and start forcing agencies and companies to adhere to a set of best practices for data security?

SaaS-Based CI/CD

Let's start with some basics of software development. It seems that no matter which software development lifecycle methodology is followed, it includes some level of definition, development, QA, UAT, and production release. Somewhere in the process, multiple items are merged into a release, which means your release to production can still be monolithic.

The mighty big players like Google, Facebook, and Netflix (click any of them to see their development process) have revolutionized the concepts of Continuous Integration (CI) and Continuous Deployment (CD).

I want to question the future of CI/CD: instead of consolidating a release, why not release a single item into production, validate it over a defined period of time, and then push the next release? The entire process would happen automatically, driven by a queue (FIFO) system.

Taking this to the world of corporate IT and SaaS platforms, I'm really thinking about software like Salesforce Commerce Cloud or Oracle's NetSuite. I would want the SaaS platform to provide this FIFO system for loading my code updates. The system would push an update over the existing code while continuing to handle requests, and users wouldn't see discrepancies. Some validation would happen, the code would activate, and a timer would start on the next release. If validation failed, the code could be rolled back automatically or manually.
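This is purely conceptual, but a hedged sketch of the idea might look like the following; deploy_item, validate, and rollback are imaginary hooks that a SaaS platform would have to provide, not a real API.

```python
# Conceptual sketch of the FIFO release idea; deploy_item, validate, and
# rollback are imaginary platform hooks, not a real SaaS API.
import time
from collections import deque

def run_release_queue(releases, deploy_item, validate, rollback, soak_seconds=600):
    """Push one release at a time, soak for a while, validate, then move on."""
    queue = deque(releases)                # strict FIFO ordering
    while queue:
        item = queue.popleft()
        deploy_item(item)                  # push this single change to production
        time.sleep(soak_seconds)           # defined validation window
        if validate(item):
            print(f"{item} validated, starting the timer on the next release")
        else:
            rollback(item)                 # automatic (or manual) rollback
            print(f"{item} failed validation and was rolled back; halting the queue")
            break
```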

Could this be a reality?

Anycast

IPv6 implemented Anycast for its many benefits. The premise behind Anycast is that multiple nodes can share the same address, and the network routes traffic to the nearest node advertising that Anycast address.

There is a lot of information on Anycast on the Internet as it relates to IPv6, starting with a deep dive in RFC 4291 - IP Version 6 Addressing Architecture. There is also a document in the Cisco IPv6 Configuration Guide.

The more interesting item, which came up as a technical interview topic this week, is the extension of Anycast into IPv4. The basic premise is that BGP can advertise the same IP address from multiple subnets in different geographic regions, and because of how internet routing works, traffic to that address is routed to the closest instance based on BGP path.

However, this presents two issues. If a BGP path disappears, traffic ends up at another node, which presents state issues. The other issue is that BGP routes based on path length, so depending on how an upstream ISP is peered and routed, a node that is physically closer may not be in the preferred path, which adds latency.

One of the use cases behind this is DDoS mitigation, which is deployed for the root name servers and also by CDN providers. Several RFCs discuss Anycast as a possible DDoS mitigation technique:

RFC 7094 - Architectural Considerations of IP Anycast

RFC 4786 - Operation of Anycast Services

CloudFlare(a CDN provider) discusses their Anycast Solution:  What is Anycast.

Finally, I'm a big advocate of conference papers, maybe because of my Master's degree, or because 20 years ago if you wanted to learn something it came from either a book or conference proceedings. In the research for this blog article, I came across a well-written 2015 research paper on DDoS mitigation with Anycast, Characterizing IPv4 Anycast Adoption and Deployment. It's definitely worth a read, especially for how Anycast has been deployed to protect the root DNS servers and CDNs.

Server Virtualization

I see a lot of parallels between containers in 2018 and the server virtualization movement that started in 2001 with VMware, so I took a trip down memory lane. My history starts in 2003/2004, when I was leveraging virtualization for data center and server consolidation. At IBM we were pitching it to IT leadership to consolidate unused server capacity, especially in test and development environments. Delivery initially focused on VMware GSX and local storage. I recall the release of vMotion and additional storage virtualization tools led to a desire to move from local storage to SAN-based storage, which allowed us to discuss reduced downtime and the potential for production deployments. I also remember there was much buzz when EMC acquired VMware in 2004, and it made sense given the push into storage virtualization.

Back then the promise was reduced cost, a smaller data center footprint, improved development environments, and better resource utilization. Sounds like the promises of cloud and containers today.

Serverless 2018

Serverless is becoming the 2018 technology hype. I remember when containers were gaining traction in 2012, and Docker in 2013; at technology conventions, all the cool developers were using containers. They solved a lot of challenges, but they were not a silver bullet. (That's a blog article for another day.)

Today, after an interview, I was asking myself: have containers lived up to the hype? They are great for CI/CD, getting rid of system administrator bottlenecks, and rapid deployment, and some would argue they are fundamental to DevOps. So I started researching the hype. The people over at Cloud Foundry published container reports in 2016 and 2017.

Per the 2016 report, “our survey, a majority of companies (53%) had either deployed (22%) or were in the process of evaluating (31%) containers.”

Per the 2017 report, “increase of 14 points among users and evaluators for a total of 67 percent using  (25%) or evaluating (42%).”

As a former technology VP/director/manager, I was always evaluating technology with the potential to save costs, improve processes, speed development, and improve production deployments. But a 25% adoption rate and a 3% uptick over the previous year is not moving the technology needle.

However, I am starting to see the same trend: serverless is the new exciting technology which is going to solve development challenges, save costs, improve the development process, and make you cool if you're using it. But is it really serverless, or just a simpler way to use a container?

AWS Lambda is basically a container. (Another blog article will dig into the underpinnings of Lambda.) Where does the container run? A server.

Serverless just means I don't have to understand the underlying container, server, and so on. So is it truly serverless? Or is it just the 2018 technology hype to get all of us development geeks excited that we don't need to learn Docker or Kubernetes, or ask our sysadmin friends to provision us another server?
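For context, this is roughly all the code a developer writes for a Python Lambda function; the container, runtime, and server underneath are provisioned and reused by AWS. The event fields here are purely illustrative.

```python
# Minimal Python Lambda handler; the event shape below is illustrative only.
def handler(event, context):
    name = event.get("name", "world")
    # AWS spins up (and reuses) the underlying execution environment for us.
    return {"statusCode": 200, "body": f"Hello, {name}"}
```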

Let me know your thoughts.

CCNA Certificate

Got my CCNA certificate today via email, a far cry from the days of getting a beautiful package in the mail. The best part is how Cisco lets you recertify after a long hiatus.

Certification Logos

I think certification logos are interesting. I would not include them in emails, but some people do. Also, I would probably not include certifications in an email signature anymore.

Here are the ones I've collected in the last few weeks. Moving forward, I think I'll update the site header to include the logos.

Passed Cisco 200-310 Designing for Cisco Internetwork Solutions

This morning I sat and passed the Cisco 200-310 Designing for Cisco Internetwork Solutions (DESGN) exam. The exam is not easy: it required an 860 to pass, whereas 17 years ago it only required a 755. I scored 844 17 years ago; this time I got an 884. It's a tough exam, as it requires deep and broad networking knowledge across all domains, routing, switching, unified communications, and WLANs, and how to use them in network designs.

That exam officially gives me the CCDA, which makes 7 certifications (5 AWS and 2 Cisco) in 5 weeks.

Next up is the Cisco Exam for 300-101 ROUTE.

Strong Technical Interview

Had a strong technical interview today. The interviewer asked questions about the topics in this outline.

  • Virtualization and Hypervisors

  • Security

  • Docker and Kubernetes

    • Storage for Docker
  • OpenStack/CloudStack - which I lack experience with

  • Chef

    • Security of data - see Databags
  • CloudFormation - immutable infrastructure

  • DNS

    • Bind

    • Route53

    • Anycast

      • Resolvers route to different servers
    • Cloud

      • Azure

      • Moving Monolithic application to AWS

        • Which raises a bunch of questions about the application

          • Virtual or Physical Hardware

          • Backend / Technology

          • Requirements

          • Where is this application on the Roadmap in 3 - 5 years

          • How much of the application is being used

        • Cost Optimization

        • Security

      • Explain the underpinnings of IaaS, PaaS, SaaS.

These were all great questions for a technical interview. The interviewer was easy to converse with, and it felt much more like a great discussion than an interview. The breadth and depth of the questions were impressive, and I was equally impressed with the interviewer's answers to my questions. I left hoping that the interviewer would become a colleague.

CCNA Exam

I passed the 200-125 CCNA exam today and actually scored higher than I did 17 years ago; however, the old CCNA covered much more material. Technically, per Cisco guidelines, it's 3-5 days before I become officially certified.

Primarily I used VIRL to get the necessary hands-on experience, along with the Cisco Press CCNA study guide. Wendell Odom always does a good job, and his blog is helpful for studying. The practice tests from MeasureUp are OK, but I wouldn't get them again.

Next up the 200-310 DESGN.

Cisco Certifications

The last time I started studying for Cisco certifications, I built a six-router, one-switch lab on my desk. One router had connections to the console ports of all the other routers, and its management port was connected to my home network so I could telnet in and reach each of the routers via their console ports. It was exciting and a great way to learn and simulate complex configurations. The routers had just enough power to run BGP and IPsec tunnels.

This time I found VIRL, which is interesting: you build a network inside an Eclipse-based environment, and on the back end the simulator creates a network of multiple VMs.

So far I've built a simple switched network. I'm running it on the cloud service Packet, as the memory and CPU requirements exceed my laptop. Packet provides a bare-metal server, which is required for how VIRL does a PXE boot. I wish there were a bare-metal option on AWS.

I’m still trying to figure out how to upload complex configurations and troubleshoot them.

The product is very interesting, as it provides a learning environment for a few hundred dollars versus the couple of thousand I spent last time to build my lab.
