From the course: Developing Infrastructure as Code with Terraform
Demo: Working with AWS - Terraform Tutorial
- Okay, so now we're going to look at some specifics around deploying a solution into the AWS cloud. We're going to deploy a Kubernetes cluster, and we're also going to make some configurations on top of that cluster using some of the other providers we talked about, Helm and kubectl. Let's study the code before we kick that off. The first thing you'll notice is that we've got some configuration around our providers. We're actually using two copies of the AWS provider here. The first one is configured to use the region your environment is using, so if you're deploying to us-west-2, it's just going to use whatever your default is. But then we've got this other provider, because we've got some resources that are only available in us-east-1; this is some certificate authority stuff that's required for this solution. So keep in mind that you can use multiple providers with different configurations within the same solution. You can see we've used the alias keyword here, which means we can refer to that specific instance of the provider. And then we've got the Helm provider configured here. Something to note about this provider is that its configuration references outputs from the EKS cluster that we're going to create. That means the provider itself has to be deferred: providers can rely on other resources, and they don't have to be initialized at the beginning of the run. This is a technique called provider stacking. It can be powerful, and it can be a little tricky too, but it is something that's used in this solution, so that's something to keep in mind. The kubectl provider takes the same approach to initialization. You can also see we've got some data sources. This is pretty common, to get the availability zones; we've talked about that before.
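The provider setup described above can be sketched roughly like this. This is an illustrative fragment, not the course's exact code: the module name `eks`, its outputs, and the variable names are assumptions, and the exact Helm provider block syntax varies by provider version.

```hcl
# Default AWS provider: uses whatever region your environment is set to.
provider "aws" {
  region = var.region
}

# Second copy of the AWS provider, pinned to us-east-1 for the
# certificate authority resources that are only available there.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

# Helm provider configured from outputs of the EKS module, so its
# initialization is deferred until the cluster exists ("provider stacking").
# The "eks" module name and its outputs here are illustrative.
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  }
}
```

The kubectl provider in the solution is initialized the same deferred way, from the same cluster outputs.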
And here's where we're using that second provider; this is how you specify it within a resource or a data block. You can specify which provider is used for which resource, and if you don't specify one, the resource uses the unnamed, no-alias provider that we configured before. Then we've got some standard configuration in here, and we've got our resources defined. We're calling a module from the public registry, so we're just using somebody else's EKS module to create our cluster. This is a great way to take advantage of somebody else's code: it's not code we've developed, and we're not pulling it out of a private registry or repository or anything like that. You can see we've specified a version there, and then it's just all the parameters. This is a fairly large solution. The thing to note here is that we're deploying a similar solution across multiple clouds, and you can watch the other sections on how to do this on Google Cloud and Azure. The end result is more or less the same, an auto-scaling Kubernetes cluster, but the approach and the resources at play are much different. In this case, we're going to use Karpenter for the auto-scaling and node management, and you can see we're pulling in these other modules, so we're using that module composition we talked about. Further down, we're going to use Helm to deploy some services on top of our Kubernetes cluster once it's created. This is the Karpenter controller that gets installed here; if you're familiar with Helm, this will look familiar to you. Then we're going to use the kubectl provider to deploy some other manifests there, some configuration on top of that Helm chart. We've got a couple of manifests defined in here. You can see these are inline, and we're using that heredoc syntax to put some inline YAML here. We could also use a template file or something like that.
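The three patterns in that walkthrough (pinning a resource to an aliased provider, consuming a versioned registry module, and an inline heredoc manifest) can be sketched like this. The specific module version, cluster name, and manifest contents are placeholders, not the course's values.

```hcl
# A data block pinned to the aliased us-east-1 provider. Without the
# "provider" argument, it would use the default (no-alias) AWS provider.
data "aws_availability_zones" "available" {
  provider = aws.us_east_1
  state    = "available"
}

# Somebody else's EKS module, pulled from the public registry and pinned
# to a version. The version and parameters shown are illustrative.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "demo-cluster"
  cluster_version = "1.29"
}

# An inline manifest deployed through the kubectl provider, using
# heredoc syntax for the YAML body (a template file would also work).
resource "kubectl_manifest" "example" {
  yaml_body = <<-YAML
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-config
      namespace: default
    data:
      key: value
  YAML
}
```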
And then again, we're using this composition, so we're pulling in a VPC module. This is going to create all the underlying network stuff required to support our cluster. Let's go ahead and run through our Terraform workflow to see how this works. First we have to run init, and you can see now we have a bunch of additional providers pulled in. And you can see our resources and our outputs; we're going to be creating 94 resources in this solution. Okay, so that's done deploying. Let's take a quick look at what we've got. We've got all this output now. These are all the interesting bits this has created that we might need to reference, and they're all printed out to the console because this is our root module. So now we've got a Kubernetes cluster up and running in EKS on AWS, and we can explore it a little bit using the standard Kubernetes tools. Let's make sure it's running and that kubectl works. We're using the k9s tool to inspect our cluster, and we can see that we've got some pods up and running. So we know that our cluster works, and now it's ready for us to deploy services on top of it. We're going to take a look at that at the end of this section, but for now, that's it. One thing to note if you're going to run this in your own environment (and you should be able to just go ahead and do this if you want to see how it works on your own): this solution does not fit into the free tier of AWS. So after you're done experimenting with it, be sure to run terraform destroy. And then that's it: you've cleaned up after yourself and you're ready to move on.
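The VPC composition and the workflow commands from this demo can be sketched as follows. The module version, name, and CIDR ranges are placeholders, not the course's actual values.

```hcl
# Network composition: a public-registry VPC module creates the subnets
# and routing that the EKS cluster runs in. Values are illustrative.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "eks-demo-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}

# The workflow run in the demo, shown here as comments:
#   terraform init      # pull in the additional providers and modules
#   terraform apply     # create the 94 resources in this solution
#   terraform destroy   # clean up; this solution exceeds the AWS free tier
```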