I’ve done a fair amount of ARM template authoring. It’s not as bad as XML, but the JSON can get laborious. A number of my colleagues use Terraform, and I was recently on a project that was using it. I quickly did a couple of Pluralsight Terraform courses to get up to speed and then started hacking away. In this post I’ll jot down a couple of thoughts about how we structure Terraform projects and how to deploy them using VSTS. The source code for this post is on GitHub.
Releases almost always require some kind of credentials - from service credentials to database usernames and passwords. There are a number of ways to manage credentials in VSTS Release Management. In this post I’ll look at a couple of common techniques. For brevity, I’ll use “secrets” to cover both secrets and credentials.
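To make the simplest of those techniques concrete: a secret variable in the release definition can be mapped into the environment of a script step, and the script reads it from there. The sketch below assumes exactly that mapping - the variable name `SQL_PASSWORD` and the connection details are hypothetical, and remember that secrets are never exposed to scripts unless you map them explicitly.

```python
import os
import sys

# Hypothetical: the release step maps the secret variable into SQL_PASSWORD.
# Secret variables are not handed to scripts automatically - the mapping is explicit.
sql_password = os.environ.get("SQL_PASSWORD")

if not sql_password:
    sys.exit("SQL_PASSWORD was not supplied - check the variable mapping in the release.")

# Use the secret, but never print it. The agent masks known secret values in the logs,
# but it's still good practice not to echo them at all.
connection_string = (
    "Server=tcp:myserver.database.windows.net;"
    f"User ID=deploy;Password={sql_password};"
)
print("Connection string built (secret not logged).")
```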
Visual Studio Team Services (VSTS) is a cloud platform, which means it’s publicly accessible from anywhere - at least, by default. However, enterprises that are moving from TFS to VSTS may want to ensure that VSTS can only be accessed from the corporate network or some whitelist of IPs.
There are a lot of ALM MVPs who advocate the “One Team Project to Rule Them All” approach when it comes to Visual Studio Team Services (VSTS) and Team Foundation Server (TFS). I’ve been recommending it for a long time to any customer I work with. My recommendation was based mostly on experience - I’ve experienced far too much pain when organizations have multiple Team Projects or, even worse, multiple Team Project Collections.
I’ve been working on some pretty complicated infrastructure deployment pipelines using my release management tool of choice (of course): VSTS Release Management. In this particular scenario, we’re deploying a set of VMs to a region. We then want to deploy exactly the same setup but in a different region. Conceptually, this is like duplicating infrastructure between different datacenters.
This week at MVP Summit, I showed some of my colleagues a trick that I use to manage identity hell. I have several accounts that I use to access VSTS and the Azure Portal: my own Microsoft Account (MSA), several org accounts and customer org accounts. Sometimes I want to open a release from my 10th Magnitude (10M) VSTS account so that I can grab some tasks to put into a CustomerX VSTS release. The problem is that if I open the 10M account in a browser and then open a new browser window, I have to sign out of the 10M account and sign in with the CustomerX account, and then the windows break… identity hell.
I recently worked with a customer that had some containers running Python code. The code was written by data scientists, and a dev had recently been hired to help the team get some real process in place. As I was helping them with their CI/CD pipeline (which used a Dockerfile to build their image, published it to Azure Container Registry and then spun up the containers in Azure Container Instances), I noted that there were no unit tests. Fortunately the team was receptive to adding tests, and I didn’t have to have a long discussion to prove that they absolutely needed to be unit testing.
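To give a flavour of the kind of test we started with, here’s a minimal pytest sketch. The `summarize` function and its module are hypothetical stand-ins for the data scientists’ code; the point is just that plain pytest tests slot straight into the existing Docker-based pipeline.

```python
# test_summarize.py - run with `pytest` (hypothetical module and function names)
import pytest

from stats.summarize import summarize  # hypothetical data-science module


def test_summarize_returns_count_and_mean():
    result = summarize([1, 2, 3, 4])
    assert result["count"] == 4
    assert result["mean"] == pytest.approx(2.5)


def test_summarize_rejects_empty_input():
    with pytest.raises(ValueError):
        summarize([])
```

Running `pytest` as a step in the image build (or in a container spun up from the freshly built image) means the tests exercise exactly the same environment that ships.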
If you’ve ever had to create a complex ARM template, you’ll know it can be a royal pain. You’ve probably been tempted to split your giant template into smaller templates that you can link to, only to discover that you can only link to a sub-template if it’s accessible via some public URI. Almost all of the examples in the Azure Quickstart Templates repo that have links simply refer to the public GitHub URI of the linked template. But what if you want to refer to a private repo of templates?
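One common way to tackle this - and the snippet below is just a sketch of that general pattern, not necessarily the only option - is to put the sub-templates in a private Azure Storage container and hand the deployment a short-lived SAS token so it can read them. The account, container and blob names are made up, and it assumes the `azure-storage-blob` (v12) Python package.

```python
from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Hypothetical storage account/container holding the linked (nested) templates.
ACCOUNT = "mytemplatestore"
CONTAINER = "arm-templates"
BLOB = "nested/network.json"
ACCOUNT_KEY = "<storage-account-key>"  # supply via a secret variable, never hard-code

# Generate a read-only SAS that expires in an hour.
sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

# This URI (with the SAS appended) is what the parent template's templateLink points at.
template_uri = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas}"
print(template_uri)
```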
Recently I was working with a customer that was struggling with test environments. Their environments are complex and take many weeks to provision and configure, so they are generally kept around even though some of them are not frequently used. On top of a laborious, error-prone manual install and configuration process that usually takes over 10 business days, the team has to maintain all the clones of this environment. This means that at least two senior team members are required just to maintain the existing dev and test environments, as well as create new ones.
I love containers. I’ve said before that I think they’re the future. Just as hardly anyone installs on tin any more now that we’re so comfortable with virtualization, I think that in a few years’ time hardly anyone will deploy VMs - we’ll all be on containers. However, container orchestration is still a challenge. Do you choose Kubernetes, Swarm or DC/OS? (For my money, I think Kubernetes is the way to go.) But that means managing a cluster of nodes (VMs). What if you just want to deploy a single container in a useful manner?