Useful Resources: Computer Abstraction and AWS

This course is designed to help AWS Partners:

🔖 Explore basic concepts of cloud
🔖 Communicate the AWS value proposition to their customers
🔖 Navigate customer objections
🔖 Get started co-selling with AWS

Introduction and Course Overview

✅ Module 1: Cloud Concepts and AWS Services
✅ Module 2: Business Value
✅ Module 3: Cloud Objection Handling
✅ Module 4: Co-selling with AWS
✅ Accreditation Test

Course Description

✅ Explore basic concepts of cloud services
✅ Communicate the AWS value proposition to their customers
✅ Navigate customer objections
✅ Get started co-selling with AWS

Module 1: Cloud Concepts and Services

What is cloud computing?

Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing.

Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services on an as-needed basis.
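To make the pay-as-you-go idea concrete, here is a toy cost sketch. Every price below is a made-up round number for illustration, not real AWS or hardware pricing; only the shape of the two cost models matters.

```python
# Toy comparison of up-front ownership vs. pay-as-you-go.
# All prices are illustrative assumptions, not real quotes.

def on_prem_cost(months: int, server_price: float = 10_000.0,
                 monthly_upkeep: float = 300.0) -> float:
    """Buy the server up front, then pay upkeep (power, staff, parts)."""
    return server_price + monthly_upkeep * months

def cloud_cost(months: int, hourly_rate: float = 0.10,
               hours_used_per_month: float = 200.0) -> float:
    """Pay-as-you-go: billed only for the hours actually used."""
    return hourly_rate * hours_used_per_month * months

for months in (1, 12, 36):
    print(months, on_prem_cost(months), cloud_cost(months))
```

The on-premises line starts with a large fixed cost regardless of usage, while the cloud line scales from zero with consumption; which one wins depends entirely on how steady and how heavy your usage is.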

Computing has evolved through increasing levels of abstraction. The higher you go in the abstraction levels, the more the cloud provider can add value and off-load non-strategic activities from the consumer.

Bare Metal vs. Virtual Machines vs. Public Cloud

Bare Metal

Historically, if you wanted to run a web server, you either set up your own or you rented a literal server somewhere. We often call this “bare metal” because, well, your code is literally executing on the processor with no abstraction. This is great if you’re extremely performance sensitive and you have ample and competent staffing to take care of these servers.

The problem with running your servers on the bare metal is that you become extremely inflexible. Need to spin up another server? Call up Dell or IBM and ask them to ship you another one, then get your tech to install the physical server, set it up, and bring it into the server farm. That only takes a month or two, right? Pretty much instant. 😄

Okay, so now at least you have a pool of servers responding to web traffic. 🔥 Now you just have to worry about keeping the operating system up to date. Oh, and all the drivers connecting to the hardware. 🔥 And all the software running on the server. And replacing the components of your server as new ones come out. 🔥 Or maybe the whole server. And fixing failed components. And network issues. 🔥 And running cables. And your power bill. And who has physical access to your server room. And the actual temperature of the data center. And paying a ridiculous Internet bill. You get the point. Managing your own servers is hard and requires a whole team to do it.

Virtual Machines

Virtual machines are the next step. This adds a layer of abstraction between you and the metal. Now instead of having one instance of Linux running on your computer, you'll have multiple guest instances of Linux running inside of a host instance of Linux (it doesn't have to be Linux, but I'm using it to be illustrative). Why is this helpful? For one, I can have one beefy server and have it spin up and down servers at will. So now if I'm adding a new service, I can just spin up a new VM on one of my servers (provided I have the capacity to do so). This allows a lot more flexibility.

Another thing is that I can totally separate two VMs running on the same machine from each other. This affords a few nice things.

  1. Imagine both Coca-Cola and Pepsi lease a server from Microsoft Azure to power their soda making machines and hence have the recipe on the server. If Microsoft puts both of these servers on the same physical server with no separation, one soda-maker could just SSH into the server and browse the competitor’s files and find the secret recipe. So this is a massive security problem.
  2. Imagine one of the soda-makers discovers that they’re on the same server as their competitor. They could drop a fork bomb and devour all the resources their competitors’ website was using.
  3. Much less nefariously, any person on a shared-tenant server could unintentionally crash the server and thus ruin everyone’s day.

So enter VMs. These are individual operating systems that, as far as they know, are running on bare metal themselves. The host operating system offers each VM a certain amount of resources, and if that VM runs out, it runs out without affecting the other guest operating systems running on the server. If your neighbors crash their server, they crash their guest OS, and yours hums along unaffected. And since they're in a guest OS, they can't peek into your files because their VM has no concept of any sibling VMs on the machine, so it's much more secure.

All these above features come at the cost of a bit of performance. Running an operating system within an operating system isn’t free. But in general we have enough computing power and memory that this isn’t the primary concern. And of course, with abstraction comes ease at the cost of additional complexity. In this case, the advantages very much outweigh the cost most of the time.
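The resource-capping idea can be sketched in miniature with ordinary process limits. This is not how a hypervisor actually works; it's a loose, Linux/POSIX-only analogy using `setrlimit` to show a capped "guest" process failing on its own while the "host" process keeps running:

```python
import resource
import subprocess
import sys

def run_capped(snippet: str, mem_bytes: int) -> int:
    """Run a Python snippet in a child process with a hard address-space cap,
    loosely analogous to a host capping the resources a guest VM may use.
    POSIX-only: sets the limit in the child via preexec_fn."""
    def cap() -> None:
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(
        [sys.executable, "-c", snippet],
        preexec_fn=cap,
        capture_output=True,  # swallow the child's MemoryError traceback
    ).returncode

# A "greedy guest" tries to grab ~2 GiB but is capped at 512 MiB:
# it dies with a MemoryError while this parent process hums along.
rc = run_capped("x = bytearray(2 * 1024**3)", 512 * 1024**2)
print("greedy child exit code:", rc)
```

The key property mirrors the VM story above: the greedy child exits with a nonzero code, and nothing else on the machine notices.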

Public Cloud

So, as alluded to above, you can nab a VM from a public cloud provider like Microsoft Azure or Amazon Web Services. It will come with a pre-allocated amount of memory and computing power (often called virtual cores or vCores because they're cores dedicated to your virtual machine). Now you no longer have to manage the expensive and difficult business of maintaining a data center, but you do still have to manage all of its software yourself: Microsoft won't update Ubuntu for you, but they will make sure the hardware is up to date.

But now you have the great ability to spin up and spin down virtual machines in the cloud, giving you access to resources with the only upper bound being how much you're willing to pay. And we've been doing this for a while. The hard part is that they're still just giving you machines: you have to manage all the software, networking, provisioning, updating, etc. for all these servers. And lots of companies still do! Tools like Terraform, Chef, Puppet, Salt, etc. help a lot here, because they make spinning up new VMs easy by handling the software needed to get each one going.

We’re still paying the cost of running a whole operating system in the cloud inside of a host operating system. It’d be nice if we could just run the code inside the host OS without the additional expenditure of guest OSs.

Containers

And here we are, containers. As you may have divined, containers give us many of the security and resource-management features of VMs but without the cost of having to run a whole other operating system. Instead, they use chroot, namespaces, and cgroups to separate groups of processes from each other. If this sounds a little flimsy to you and you're still worried about security and resource management, you're not alone. But I assure you a lot of very smart people have worked out the kinks, and containers are the future of deploying code.

So now that we’ve been through why we need containers, let’s go through the three things that make containers a reality.
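As a taste of what those Linux primitives look like, here is a sketch that assembles the namespace flags a container runtime typically "unshares". The `CLONE_*` values are the real constants from Linux's `<sched.h>`; actually calling `unshare(2)` (exposed in Python 3.12+ as `os.unshare`) requires Linux and, for most of these flags, root privileges, so this sketch stops at building and explaining the flag mask. cgroups, the resource-limiting piece, are configured separately through the cgroup filesystem.

```python
# Linux namespace flags, as defined in <sched.h>. Each one gives the
# contained process a private copy of one slice of the system.
CLONE_NEWNS   = 0x00020000  # mount namespace: private filesystem view (pairs with chroot)
CLONE_NEWUTS  = 0x04000000  # UTS namespace: private hostname
CLONE_NEWIPC  = 0x08000000  # IPC namespace: private message queues/semaphores
CLONE_NEWUSER = 0x10000000  # user namespace: "root" inside maps to an unprivileged user outside
CLONE_NEWPID  = 0x20000000  # PID namespace: can't see or signal sibling processes
CLONE_NEWNET  = 0x40000000  # network namespace: private network stack

def container_flags() -> int:
    """Combine the namespaces a typical container runtime unshares.
    On Linux 3.12+ Python, root could pass this to os.unshare()."""
    return (CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC
            | CLONE_NEWUSER | CLONE_NEWPID | CLONE_NEWNET)

print(hex(container_flags()))  # -> 0x7c020000
```

This is the flimsy-sounding trio in action: chroot plus a mount namespace hides the filesystem, the PID and network namespaces hide the neighbors, and cgroups cap the resources, which together address all three soda-maker problems above without booting a guest OS.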

Cloud Computing Deployment Models

Cloud computing consists of three different deployment models: cloud, on premises, and hybrid (cloud + on premises). When someone refers to "the cloud" in the context of a shared or public cloud, they are referring to on-demand infrastructure provided by a vendor, such as AWS. Organizations that use the public cloud might take advantage of further solutions provided by a cloud services provider, such as one or any combination of the following:

🔥 Software as a service (SaaS)
🔥 Platform as a service (PaaS)
🔥 Infrastructure as a service (IaaS)

Software as a Service (SaaS):

Software as a Service (SaaS) is a cloud computing model where a third-party provider hosts applications and makes them available to customers over the internet. In this model, users access the software through a web browser or API without needing to install, manage, or maintain any underlying infrastructure.

Example with AWS: Amazon WorkMail is an example of SaaS provided by AWS. It’s a managed email and calendar service that users can access through their web browser or email clients without having to worry about server setup, maintenance, or software updates.

Platform as a Service (PaaS):

Platform as a Service (PaaS) provides a platform allowing customers to develop, run, and manage applications without dealing with the complexity of building and maintaining the underlying infrastructure. PaaS typically includes development tools, middleware, databases, and other resources needed for application development and deployment.

Example with AWS: AWS Elastic Beanstalk is a PaaS offering from AWS. It allows developers to deploy and manage applications without having to worry about provisioning and configuring underlying infrastructure. Developers can simply upload their application code, and Elastic Beanstalk handles the deployment, scaling, and monitoring of the application.

Infrastructure as a Service (IaaS):

Infrastructure as a Service (IaaS) provides virtualized computing resources over the internet. It gives customers access to virtualized hardware resources such as virtual machines, storage, and networking infrastructure, allowing them to build, manage, and scale their own virtualized environments.

Example with AWS: Amazon EC2 (Elastic Compute Cloud) is a prime example of IaaS offered by AWS. It provides resizable compute capacity in the cloud, allowing users to launch virtual servers (instances) with various configurations to run their applications. Users have full control over the operating system, applications, and security settings of these instances.
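As a sketch of what driving IaaS from code looks like, here is a minimal outline using boto3 (the AWS SDK for Python) and its real EC2 `run_instances` API. The AMI ID and instance type are illustrative placeholders, and nothing actually launches until `launch()` is called with real AWS credentials configured.

```python
def run_instances_params(ami_id: str, instance_type: str = "t3.micro",
                         count: int = 1) -> dict:
    """Build the keyword arguments for EC2 RunInstances. With IaaS you
    pick the machine image and size; AWS supplies the virtual hardware."""
    return {
        "ImageId": ami_id,              # which OS image to boot: your choice, and yours to patch
        "InstanceType": instance_type,  # how much vCPU/RAM the instance gets
        "MinCount": count,
        "MaxCount": count,
    }

def launch(ami_id: str):
    """Actually launch the instance. Requires AWS credentials and the
    third-party boto3 SDK, so the import is deferred until it's needed."""
    import boto3
    ec2 = boto3.client("ec2")  # region and credentials come from your AWS config
    return ec2.run_instances(**run_instances_params(ami_id))
```

Note the division of labor this illustrates: the parameters you must supply (image, size, count) are exactly the things IaaS leaves in your control, while the physical hardware behind them is AWS's problem.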

SaaS vs PaaS vs IaaS
  • Control and Flexibility:
    • SaaS offers the least control as the provider manages everything, while IaaS provides the most control as users manage the entire infrastructure.
  • Development Effort:
    • SaaS requires the least development effort since the software is already built and managed by the provider. PaaS reduces development effort by providing pre-built components and tools. IaaS requires the most development effort as users are responsible for building and managing their entire infrastructure.
  • Scalability:
    • SaaS and PaaS typically offer built-in scalability features, whereas IaaS requires users to manage scaling manually.
  • Cost Structure:
    • SaaS usually operates on a subscription-based model, PaaS may charge based on usage or subscription, and IaaS typically charges based on resource usage (e.g., per hour or per GB).
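The billing differences in the list above can be sketched numerically. The rates below are toy numbers, not real AWS prices; the point is the shape of each cost model.

```python
# Illustrative billing shapes; all rates are invented for the example.

def saas_monthly(users: int, per_seat: float = 12.0) -> float:
    """SaaS: a flat subscription, commonly priced per seat per month."""
    return users * per_seat

def iaas_monthly(instance_hours: float, per_hour: float = 0.05,
                 storage_gb: float = 0.0, per_gb: float = 0.02) -> float:
    """IaaS: metered by resource usage, e.g., per instance-hour and per GB stored."""
    return instance_hours * per_hour + storage_gb * per_gb

print(saas_monthly(50))                   # -> 600.0 (50 seats x $12)
print(iaas_monthly(720, storage_gb=100))  # roughly $38 for one instance running all month
```

The SaaS bill depends only on head count, while the IaaS bill tracks consumption hour by hour, which is why idle IaaS resources still cost money if you forget to shut them down.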

Patterns Among AWS Customers

How do customers adopt the cloud? Every journey is different, as is every customer, but we do recognize some common adoption phases or milestones.

✅ Dev and Test: The first phase is when a customer starts to develop new applications or moves low-risk workloads to the cloud. This is an initial, very low-risk adoption of the cloud.
✅ Production: The next phase is production, when customers start shifting their weight into the cloud while still keeping a strong presence on premises.

✅ Mission Critical: Eventually there is a tipping point where customers go mission critical. Mission-critical workloads are workflows that are necessary to the success of any business, so when a company moves these workloads to the cloud, it is deeply invested in using the cloud.
✅ All In: The last phase of adoption for some customers is when they have fully embraced the cloud. "All in" doesn't necessarily mean 100% in the cloud; few customers are 100% in the cloud, and even fewer among those who moved from on premises. All in is more of a concept: these customers are fully invested, aim to pay down their technical debt, and move away from on premises. They want to accelerate their digital transformation, and they tend to adopt a cloud-first strategy, meaning they will consider the cloud first and foremost for any new project and will need to justify any decision to diverge from that strategy.

Why Customers Choose AWS

✅ Customer Obsession: Being customer obsessed means putting customers front and center and giving them a compelling user experience. When we understand their problems and provide a great experience, it generates value for their businesses.

Why customers choose AWS

AWS delivers many benefits to customers through a vast array of services, our cost-saving methodology, and a focus on customer growth and innovation. All these benefits are made available to customers on a global scale, so companies with an international footprint can benefit from AWS services.

What Sets AWS Apart?

✅ Experience and Enterprise Leadership: Customers have selected AWS for many years because the company has proven to be committed to customer success. AWS is acknowledged by third-party analysts, such as Gartner, as the leader in enterprise infrastructure as a service (IaaS).
✅ Amazon Culture: AWS shares a culture that is driven by customer obsession, including proactive price reductions and a long-term approach to customer success. AWS has made 100+ proactive price reductions.
✅ Service Breadth and Depth: With 200+ services to support any cloud workload and rapid customer-driven releases, AWS provides unmatched opportunities for customers to innovate in their businesses.
✅ Pace of Innovation: AWS delivered 3,084 features in 2021 and is committed to maintaining this pace of innovation.
✅ Global Footprint: AWS has 26+ Regions, 84+ Availability Zones, and 310+ points of presence worldwide.
✅ Security and Privacy: AWS has leadership developed from hands-on experience meeting the requirements of the most rigorous government agencies. It also gives customers customized controls over the security of their services and data.
✅ Largest Partner Community: AWS has thousands of partners and more than 7,000 Marketplace products.
✅ Hybrid Cloud Capabilities: AWS has the broadest set of hybrid capabilities of any cloud provider, including virtual private networks (VPNs), AWS Direct Connect, VMware Cloud on AWS, AWS Outposts, and AWS Wavelength.

Highly Available Global Infrastructure

AWS Regions are composed of multiple Availability Zones for high availability, high scalability, and high fault tolerance. Applications and data are replicated in real time and kept consistent across the different zones.