Jekyll - 2024-01-12T20:06:26+00:00 - https://colinsalmcorner.com/feed.xml
Colin’s ALM Corner - All things DevOps and GitHub. Musings about DevOps tooling, culture and philosophy.
Colin Dembovsky
Ingredients for scaling GitHub Copilot - 2024-01-11T01:22:01+00:00 - https://colinsalmcorner.com/ingredients-for-scaling-github-copilot
<ol id="markdown-toc">
<li><a href="#speed-for-the-individual" id="markdown-toc-speed-for-the-individual">Speed for the Individual</a></li>
<li><a href="#ingredients-for-scaling" id="markdown-toc-ingredients-for-scaling">Ingredients for Scaling</a> <ol>
<li><a href="#executive-mandate" id="markdown-toc-executive-mandate">Executive Mandate</a></li>
<li><a href="#systematic-approach" id="markdown-toc-systematic-approach">Systematic Approach</a></li>
<li><a href="#allowing-time" id="markdown-toc-allowing-time">Allowing Time</a></li>
<li><a href="#super-simple-onboarding" id="markdown-toc-super-simple-onboarding">Super Simple Onboarding</a></li>
<li><a href="#establishment-of-communities-of-practice-and-identification-of-champions" id="markdown-toc-establishment-of-communities-of-practice-and-identification-of-champions">Establishment of Communities of Practice and identification of Champions</a></li>
<li><a href="#tying-github-copilot-to-initiatives" id="markdown-toc-tying-github-copilot-to-initiatives">Tying GitHub Copilot to initiatives</a></li>
<li><a href="#pragmatic-measurement-and-measuring-the-right-things" id="markdown-toc-pragmatic-measurement-and-measuring-the-right-things">Pragmatic measurement and measuring the right things</a></li>
</ol>
</li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Photo by <a href="https://unsplash.com/@timmykp?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Tim van der Kuip</a> on <a href="https://unsplash.com/photos/man-sitting-on-chair-wearing-gray-crew-neck-long-sleeved-shirt-using-apple-magic-keyboard-CPs2X8JYmS8?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a></p>
</blockquote>
<p>I work with a number of enterprises with large development communities: 5,000 - 25,000 developers. Managing DevSecOps at this scale is challenging, and keeping up with the pace of innovation in today’s AI-eaten world only adds complexity. While most organizations have dipped their toes into the generative AI waters, many are struggling to realize broad organizational benefits.</p>
<h1 id="speed-for-the-individual">Speed for the Individual</h1>
<p>Most customers I work with would agree (even if only intuitively) that GitHub Copilot is a productivity booster for developers. However, executives are often skeptical when they see numbers such as <a href="https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">developers who use GitHub Copilot complete tasks ~55% faster than developers without GitHub Copilot</a>. These are not just marketing numbers from GitHub - customers are also reporting large productivity gains:</p>
<ol>
<li>Duolingo is seeing a <a href="https://github.com/customer-stories/duolingo">25% increase in developer speed with GitHub Copilot</a></li>
<li>Coyote Logistics is reporting <a href="https://github.com/customer-stories/coyote-logistics">50% decrease in time to write Terraform config files</a></li>
<li>Mercado Libre reports a <a href="https://github.com/customer-stories/mercado-libre">50% reduction in time spent writing code with GitHub Copilot</a></li>
</ol>
<p>These are just a few examples - there are more stories reporting similar numbers.</p>
<p>We may argue over <em>exactly</em> how much improvement developers get (and for what use-cases) - but there is enough evidence to assert that GitHub Copilot <em>makes individuals faster</em>. But how do we scale this individual productivity gain to the organization?</p>
<h1 id="ingredients-for-scaling">Ingredients for Scaling</h1>
<p>There are some patterns that we see when analyzing customers that are successful:</p>
<ul>
<li>Executive mandate</li>
<li>Systematic approach</li>
<li>Allowing time for developers to learn how to code with GitHub Copilot</li>
<li>Super simple onboarding</li>
<li>Establishment of Communities of Practice and identification of Champions</li>
<li>Tying GitHub Copilot to initiatives</li>
<li>Pragmatic measurement and measuring the right things</li>
</ul>
<h2 id="executive-mandate">Executive Mandate</h2>
<p>It is imperative that there is an executive mandate to use GitHub Copilot. Given the evidence of how effective GitHub Copilot is, executives should be tasking their teams with using GitHub Copilot and learning how to benefit from it - if for no other reason than to stay ahead of competitors!</p>
<h2 id="systematic-approach">Systematic Approach</h2>
<p>“Just turn it on” is not a good rollout strategy. This applies to <em>any</em> tool - not just GitHub Copilot. Organizations must consider how they are going to scale out. Team-by-team is a common strategy. Other strategies include “lighthouse teams first” or “language by language” or some other means of starting small and expanding out. Starting with developers that are <em>hungry</em> for GitHub Copilot is crucial - these folks are more likely to spend the time it takes to become good at GitHub Copilot, iron out networking challenges and other onboarding road bumps. Once these teams have gained some experience, they become key to scaling out GitHub Copilot skills to other developers and teams.</p>
<h2 id="allowing-time">Allowing Time</h2>
<p>GitHub Copilot can feel magical - but it is certainly not infallible. It takes time for developers to learn how to craft useful prompts, where GitHub Copilot’s limits lie, and how to adapt the way they code to fit GitHub Copilot in. The same is true for developers adopting Test Driven Development (TDD) or eXtreme Programming (XP) - new ways of coding take time to learn and adapt to.</p>
<p>Many developers try a couple of (not so good) prompts and conclude that GitHub Copilot “isn’t that useful.” However, if given time and examples, most developers learn how to craft better prompts and discover the boundaries of GitHub Copilot’s capabilities. Giving up too soon (or running too short a pilot) will prevent successful scale out.</p>
<h2 id="super-simple-onboarding">Super Simple Onboarding</h2>
<p>GitHub Copilot seats are “pay as you use”. This is different from GitHub Enterprise or GitHub Advanced Security licenses, which are purchased up-front, and it gives customers much more flexibility in how and when seats are assigned. While giving every developer access to GitHub Copilot from day 1 may be easy, it is not optimal. If customers are not going to give everyone access, they have to decide how and when developers get seats - and making this process super simple is key. A few of my customers require developers to fill out a form in their internal ticketing system, which in turn calls an API to allocate a GitHub Copilot seat without the need for an approval. They have effectively made seat allocation self-serve.</p>
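<p>As an illustration of self-serve onboarding, a ticketing workflow could call GitHub’s Copilot billing REST API to assign a seat. The sketch below is Python using only the standard library; the org name and usernames are placeholders, and the endpoint is the one documented for adding users to an organization’s Copilot subscription at the time of writing - verify it against the current GitHub REST API docs before relying on it.</p>

```python
import json
import urllib.request

API_ROOT = "https://api.github.com"

def build_seat_request(org: str, usernames: list[str]) -> tuple[str, bytes]:
    """Build the URL and JSON body for assigning Copilot seats to users."""
    url = f"{API_ROOT}/orgs/{org}/copilot/billing/selected_users"
    body = json.dumps({"selected_usernames": usernames}).encode()
    return url, body

def assign_seats(org: str, usernames: list[str], token: str) -> dict:
    """POST the seat assignment; the token needs org admin rights."""
    url, body = build_seat_request(org, usernames)
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A ticketing-system webhook handler would call something like:
# assign_seats("my-org", ["new-developer"], token)
```

<p>The key design point is that no human approval sits in the path - the ticket itself is the audit trail, and the API call completes the allocation.</p>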
<p>Along with self-serve onboarding, customers must create a centralized knowledge base with onboarding docs, starter docs and demos. Many enterprises have proxies or other networking and firewall rules that prevent GitHub Copilot from working out of the box. Having documentation about how to configure proxies and how to authenticate GitHub Copilot is very important. Along with that, some docs that show how to get started (sample prompts, sample use-cases etc.) and demo videos are critical for success.</p>
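<p>For the proxy documentation, it helps to include concrete editor settings. For example, in VS Code the following user settings route GitHub Copilot (and other extension traffic) through a corporate proxy - the proxy URL below is a placeholder for your own:</p>

```json
{
  "http.proxy": "http://proxy.example.com:8080",
  "http.proxyStrictSSL": true
}
```

<p>Teams behind certificate-inspecting proxies may also need to document how developers trust the corporate root certificate, since Copilot’s TLS connections will otherwise fail.</p>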
<h2 id="establishment-of-communities-of-practice-and-identification-of-champions">Establishment of Communities of Practice and identification of Champions</h2>
<p>GitHub Copilot is a tool that requires continuous investment - since it is an art as well as a science, developers need to continue to develop their prompt crafting skills. Additionally, GitHub Copilot is continuously improving and new features are being added frequently. The best way to support a skill that needs continuous investment is a Community of Practice (CoP) (or Center of Excellence or Guild or whatever you call this cross-cutting construct within your organization). This CoP needs to meet frequently and continuously evangelize tips, tricks and wins to keep momentum high.</p>
<p>Along with the CoPs, scaling requires identifying Champions - these are super-users, tech leaders and influencers within your organization’s development community. These folks need to be recognized and empowered to become GitHub Copilot Advocates internally. The more of these you build, the faster you will scale. The Champions are going to be those that are excited about GitHub Copilot, but also those that make the most GitHub Copilot requests (and have the highest acceptance rate). Identifying Champions by language is also helpful.</p>
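<p>However your usage telemetry is gathered, ranking Champion candidates is straightforward once you have per-user suggestion and acceptance counts. A minimal sketch - the data shape and all numbers are invented for illustration:</p>

```python
from dataclasses import dataclass

@dataclass
class CopilotUsage:
    user: str
    suggestions: int   # completions shown to the user
    acceptances: int   # completions the user accepted

def find_champions(usage: list[CopilotUsage], min_suggestions: int = 500) -> list[str]:
    """Rank heavy Copilot users by acceptance rate, highest first.

    min_suggestions filters out low-volume users whose rates are noisy.
    """
    eligible = [u for u in usage if u.suggestions >= min_suggestions]
    eligible.sort(key=lambda u: u.acceptances / u.suggestions, reverse=True)
    return [u.user for u in eligible]

usage = [
    CopilotUsage("alice", 1200, 480),  # 40% acceptance
    CopilotUsage("bob", 900, 450),     # 50% acceptance
    CopilotUsage("carol", 50, 45),     # high rate, but too little volume
]
print(find_champions(usage))  # → ['bob', 'alice']
```

<p>Filtering on a minimum suggestion volume avoids anointing someone with a high acceptance rate over a handful of suggestions.</p>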
<p>While GitHub does provide expert services and there are many GitHub Partners that can assist organizations to scale out GitHub Copilot, organizations must develop their own internal competency and programs in order to create sustainability.</p>
<h2 id="tying-github-copilot-to-initiatives">Tying GitHub Copilot to initiatives</h2>
<p>Most developers don’t use tools for the sake of tools - they tend to look for the best tool for the job. Most organizations have existing initiatives for their development teams - improving velocity, app modernization, reducing technical or security debt and increasing test coverage are examples. When developers <em>have something to tie learning GitHub Copilot to</em> they are more willing to invest time and effort. This is going to accelerate and widen adoption.</p>
<h2 id="pragmatic-measurement-and-measuring-the-right-things">Pragmatic measurement and measuring the right things</h2>
<p>Along with tying GitHub Copilot to initiatives comes pragmatic measurement, as well as measuring the right things. If you tie GitHub Copilot to an initiative to improve test coverage, then you probably won’t (initially) see an improvement in velocity. Being pragmatic (and targeted) with your metrics will lead to faster realization of value - not simply because of gamification, but because you will be measuring the right things. Improving maturity in what is measured (and how those measurements are interpreted and applied) is a requirement for success at scale.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Enterprises must be systematic about their approach to GitHub Copilot. Enterprises that don’t invest in mastering AI-assisted pair programming and generative AI in DevSecOps are going to fall behind. While these tools are novel today, they are rapidly becoming table stakes. Enterprises must be intentional about this technology - just as they should be intentional about adopting any technology. By applying the ingredients I’ve outlined above, enterprises can confidently scale GitHub Copilot - and realize organizational improvement faster and more sustainably.</p>
Colin Dembovsky
Measuring the impact of Developer Experience and GitHub Copilot - 2023-09-11T01:22:01+00:00 - https://colinsalmcorner.com/measuring-impact-devx-copilot
<ol id="markdown-toc">
<li><a href="#leading-and-lagging-indicators" id="markdown-toc-leading-and-lagging-indicators">Leading and lagging indicators</a></li>
<li><a href="#applying-indicators-to-developer-productivity" id="markdown-toc-applying-indicators-to-developer-productivity">Applying indicators to developer productivity</a></li>
<li><a href="#measuring-the-right-things" id="markdown-toc-measuring-the-right-things">Measuring the right things</a></li>
<li><a href="#assessing-the-value-of-github-copilot" id="markdown-toc-assessing-the-value-of-github-copilot">Assessing the value of GitHub Copilot</a></li>
<li><a href="#the-metrics-challenge" id="markdown-toc-the-metrics-challenge">The metrics challenge</a></li>
<li><a href="#perceptual-vs-workflow-metrics" id="markdown-toc-perceptual-vs-workflow-metrics">Perceptual vs workflow metrics</a></li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Photo by <a href="https://unsplash.com/@schmaendels?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Andreas Klassen</a> on <a href="https://unsplash.com/photos/gZB-i-dA6ns?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>GitHub Copilot is radically transforming the software industry and highlighting the importance of Developer Experience (DevEx) as a key enabler to business success.</p>
<p>GitHub has published studies showing that developers are <a href="https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">55% faster with GitHub Copilot than without</a>. Customers using GitHub Copilot are reporting numbers inline with those studies: Mercado Libre reports a <a href="https://github.com/customer-stories/mercado-libre">50% reduction in time spent writing code</a>, and Duolingo is seeing a <a href="https://github.com/customer-stories/duolingo">25% increase in developer speed</a>.</p>
<p>Accurately measuring the return on investment (ROI) for DevEx in dollar terms is nuanced and complex - and for GitHub Copilot it is more difficult still. There have been many attempts to measure productivity - from counting lines of code to logging hours that developers spend in their IDEs to measuring velocity - but many of these methods are insufficient or subject to gamification.</p>
<p>GitHub Copilot is really a <em>productivity tool</em>. Productivity is so inextricably intertwined with DevEx that the two can almost be spoken of synonymously. Any attempt to measure the value of GitHub Copilot must therefore be tied to measuring DevEx in general.</p>
<blockquote>
<p><a href="https://dorametrics.org/">DORA Metrics</a> have long been used to measure DevOps: lead time, deployment frequency, mean time to recovery and change failure rate. When coupled with flow metrics as defined by Daniel S. Vacanti in <a href="https://actionableagile.com/resources/publications/aamfp/">Actionable Agile Metrics for Predictability</a> - cycle time, work in progress, throughput and work item age - organizations have a powerful set of metrics that can track how well they produce software. The <a href="https://queue.acm.org/detail.cfm?id=3454124">SPACE framework</a> is an excellent framework for understanding developer productivity.</p>
</blockquote>
<p>Why is it so hard to measure developer productivity? Firstly, it’s hard to define DevEx. There are many different opinions about what developer productivity is. Furthermore, both <em>perceptual</em> (qualitative) as well as <em>workflow</em> (quantitative) metrics should be considered. Measuring developer satisfaction is just as important as measuring how fast they work: happy developers are productive developers, since they spend more time coding and shipping great products, and are more likely to stay with your company. DevEx is multi-dimensional, so no single metric is going to tell the whole story.</p>
<blockquote>
<p>You can read a much more detailed analysis of perceptual and workflow metrics and the dimensions of DevEx in <a href="https://queue.acm.org/detail.cfm?id=3595878">this paper</a>.</p>
</blockquote>
<p>To fully understand how to measure developer productivity, we have to understand how <em>leading</em> and <em>lagging</em> indicators work. Let’s unpack these concepts.</p>
<h2 id="leading-and-lagging-indicators">Leading and lagging indicators</h2>
<p>Leading indicators are measures of <em>inputs</em> into a system. They help us to predict how the system will perform <em>in the future</em>. Typically, these are fairly easy to measure and can be influenced in a short period.</p>
<p>A good example of a leading indicator for a development team is the count of work items on the backlog, or committed to a sprint. This is easy to measure (just check the backlog) and easy to influence - we can immediately remove (or add) items committed to a sprint.</p>
<p>Lagging indicators are measures of the <em>outputs</em> of a system. They help us understand what happened in the system <em>in the past</em>: they are retrospective in nature. Typically, these require longer time periods to measure. Lagging indicators are also the result of aggregated leading indicators, so you can’t directly affect them.</p>
<p>A good example of a lagging indicator for a development team is how many items are delivered in a sprint. Measuring this requires us to wait until the end of the sprint, so it takes a while to measure. This count can’t be <em>directly</em> changed - you can try to add more committed items in the next sprint, but that may result in more bottlenecks or contention for testing environments or any number of other issues that don’t actually increase the number of items completed.</p>
<h2 id="applying-indicators-to-developer-productivity">Applying indicators to developer productivity</h2>
<p>Let’s apply these concepts to the problem of measuring developer productivity. Remember <em>developer productivity isn’t an end in itself</em> - it’s a means to an end. To what end? Ultimately, it’s to make our business successful! A business may have productive teams and not do well in the market. So what are we trying to achieve? And how would we know that we’ve been successful?</p>
<p>We may want to ask questions like:</p>
<ul>
<li>How can we develop faster?</li>
<li>How can we reduce risk?</li>
<li>How can we improve quality?</li>
<li>How can we innovate more?</li>
</ul>
<p>But how would we measure those? How would we know we’ve been successful? We could measure some of these:</p>
<ul>
<li>Cycle times - how fast can we complete work?</li>
<li>Frequency of deployments - how frequently can we deploy?</li>
<li>Bugs - how many do we have in a release?</li>
<li>Vulnerabilities - how many do we have in a release?</li>
<li>How many code reviews do we do (and how fast do we do them)?</li>
<li>How much burnout do we see?</li>
<li>How easy is it to attract (and retain) talented people?</li>
<li>How much are we innovating vs maintaining?</li>
</ul>
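<p>Several of these measures fall out of timestamps most teams already record. A toy sketch (all dates invented) of computing average cycle time and deployment frequency:</p>

```python
from datetime import date
from statistics import mean

# Hypothetical work items: (started, finished)
items = [
    (date(2023, 9, 1), date(2023, 9, 5)),
    (date(2023, 9, 2), date(2023, 9, 4)),
    (date(2023, 9, 3), date(2023, 9, 10)),
]
deploy_dates = [date(2023, 9, 5), date(2023, 9, 8), date(2023, 9, 12)]

# Cycle time: elapsed days from start to finish, averaged across items.
cycle_times = [(done - start).days for start, done in items]
avg_cycle_time = mean(cycle_times)  # (4 + 2 + 7) / 3 ≈ 4.3 days

# Deployment frequency: deployments per week over the observed window.
window_days = (max(deploy_dates) - min(deploy_dates)).days or 1
deploys_per_week = len(deploy_dates) / window_days * 7

print(f"avg cycle time: {avg_cycle_time:.1f} days, "
      f"deploy frequency: {deploys_per_week:.1f}/week")
```

<p>The point is not the arithmetic but the habit: once these numbers are computed continuously, you have the baseline against which any intervention - including GitHub Copilot - can be judged.</p>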
<p>These metrics give you insight into how well your team is performing - but even these must be analyzed in the context of the business. Are you attracting and retaining more customers? How delighted are your customers with your products and services? How competitive are you in your market? Delivering faster won’t help the business if you’re delivering the wrong things.</p>
<p>Let’s imagine that we measure the number of bugs in a release. Release A had 3 bugs, and Release B had 5 bugs. This tells us that there is a problem somewhere, since the number of bugs increased. But what? This is where we see the challenge of metrics - how do we interpret what happened? Perhaps we added a lot of code and didn’t add enough tests. Perhaps our senior developers were too busy to do proper code reviews, so they missed some bad code. Perhaps a developer was burning out and just pushed code without taking care to test it properly. Multiple inputs may have affected an output that we’re not happy with.</p>
<h2 id="measuring-the-right-things">Measuring the right things</h2>
<p>What does this mean for measuring developer productivity and the value of GitHub Copilot? Measuring lines of code that Copilot produced or how many prompts were accepted are <em>leading indicators</em> that should have an impact on lagging indicators down the line. In other words, <em>the immediate improvement</em> (which is easier to measure) will result in affecting the <em>future impact</em> (which is harder to measure). However, the dollar value impact (ROI), is typically tied to the <em>lagging indicators</em>.</p>
<p>What does that mean? Here’s the critical concept: <em>measuring flow and other life cycle metrics is the best way to measure the dollar value of GitHub Copilot</em>. This is the challenge to organizations: to mature in tracking these metrics so that they can really see the impact of developer productivity on business outcomes.</p>
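<p>To make this concrete, a back-of-the-envelope dollar estimate from a lagging indicator might look like the following - every number below is hypothetical and should be replaced with your own baseline measurements:</p>

```python
# All inputs are hypothetical, for illustration only.
developers = 100
loaded_cost_per_dev_per_year = 150_000   # USD per developer
coding_share_of_time = 0.4               # fraction of time spent on task work
cycle_time_improvement = 0.10            # 10% faster delivery (a lagging indicator)

annual_task_spend = developers * loaded_cost_per_dev_per_year * coding_share_of_time
estimated_annual_value = annual_task_spend * cycle_time_improvement
print(f"${estimated_annual_value:,.0f}")  # → $600,000
```

<p>The estimate is only as credible as the measured improvement feeding it - which is exactly why maturing in flow and life cycle metrics matters.</p>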
<p>There is a caveat here: GitHub Copilot is a tool meant primarily to increase individual productivity at the task level. While making developers faster at task completion will certainly impact team performance metrics like cycle times, task completion is not the only factor affecting team performance. For example, team performance involves synchronization (code review must be scheduled into the reviewer’s calendar), meetings, design sessions and many other processes and ceremonies.</p>
<h2 id="assessing-the-value-of-github-copilot">Assessing the value of GitHub Copilot</h2>
<p>The hypothesis is that by utilizing GitHub Copilot we can affect leading indicators like speed of coding, quality of code, test coverage and speed of code review. Improving these indicators will affect the lagging indicators like velocity and deployment frequency, quality, mean time to resolution (MTTR) and risk.</p>
<p>Unsurprisingly, the lagging indicators are typical DevSecOps metrics! These typically require longer periods of time to measure. Furthermore, when they change, it’s not always easy to analyze <em>why</em> they changed.</p>
<p>If you look at the above list, you’ll see that the leading indicators are fairly easy to affect, and don’t require long time periods to measure. For a sprint (typically 2 - 4 weeks) we can easily measure how many items we delivered, or how many bugs we found or how long code reviews took. If we found few bugs and completed code reviews quickly, that should allow us to deploy more frequently. We can also improve these measures directly. For example, if we want to improve code review times, we can add automated quality gates that need to pass before code review. This can help ensure that code has higher quality by the time a reviewer opens it, leading to faster review times.</p>
<p>To tie this back to GitHub Copilot - if you really want to measure its impact on the team, you have to look beyond how many suggestions were accepted (a leading indicator) and measure lagging indicators. If you use GitHub Copilot, you should see improvements in the following areas:</p>
<ul>
<li><strong>More frequent deployments/reduced cycle times</strong> Developers are spending less time hand-coding boilerplate code and searching for answers outside the IDE and so can complete tasks faster. GitHub Copilot is generating unit tests and documentation - all tedious, labor-intensive tasks that GitHub Copilot can do in milliseconds. This will lead to improved cycle times - and improved DevEx.</li>
<li><strong>Fewer build failures</strong> Developers can use Copilot Chat to explain code, meaning they can understand code more deeply. They can understand the impact of changes more clearly, which should lead to better code. As GitHub Copilot generates unit tests, buggy code is fixed before it’s even pushed to the repository. Copilot Chat can help developers debug and fix problems as the code is being written. When coupled with branch protection rules, status checks, and custom deployment rules, this should all translate into fewer build failures.</li>
<li><strong>Improved code quality and higher test coverage</strong> GitHub Copilot can be used to generate test cases and test data faster, which should lead to more code coverage, which in turn will improve quality.</li>
<li><strong>Faster code review times</strong> Since GitHub Copilot is like having a second developer with you all the time, developers can generate good code, understand existing code, debug code and generate tests for code all before the code review. This means that by the time the code reaches review, it’s higher quality, which should reduce the time needed to review it. Reviewers can use Copilot Chat to understand the impact of a proposed change by asking it to “explain this code”.</li>
<li><strong>Fewer security vulnerabilities and improved MTTR</strong> Copilot Chat is an excellent way to scale AppSec since it can guide developers in fixing security vulnerabilities without the need to involve a security professional. Furthermore, with AI-based filters on code suggestions, GitHub Copilot is less likely to generate suggestions containing security vulnerabilities. This means that MTTR should improve and risk should be lowered. Recent research suggests developers intend to spend their newfound time on code review and vulnerability remediation.</li>
<li><strong>Better flow metrics</strong> Cycle times should be improved, and Work in Progress (WIP) should be lowered. When developers are faster at their tasks, they work on fewer things at the same time, reducing the overhead of context switching, allowing them to spend more time “in the zone” as well as reducing cognitive load. Furthermore, work item age should decrease (since work items will be completed faster). All of this works to improve throughput.</li>
<li><strong>Accelerated developer growth</strong> The Collaborative Software Process study shows that pair programming speeds development, improves quality and improves developer experience. GitHub Copilot allows every developer to have a pair programmer, even when remote. Furthermore, Copilot Chat acts like a “just in time” coach that can help developers grow their expertise.</li>
<li><strong>Better talent acquisition and retention</strong> Happy developers are typically productive developers, but the converse holds too: productive developers are typically happy developers. This has the dual benefit of attracting talent (developers love to work for high-performing teams) and being good for the business, since developer churn costs time and lost “tribal knowledge”. Furthermore, because of the improvements in quality and speed, developers are less likely to burn out, which is good for both talent acquisition and retention.</li>
</ul>
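<p>The flow metrics in the last bullet are linked by Little’s Law, which underpins Vacanti’s flow metrics: under reasonably stable flow, average cycle time equals average WIP divided by throughput. A quick sketch with invented numbers:</p>

```python
# Little's Law (stable flow): avg_cycle_time = avg_wip / throughput
throughput_per_week = 10   # items completed per week (hypothetical)
avg_wip = 30               # items in progress on average (hypothetical)

avg_cycle_time_weeks = avg_wip / throughput_per_week
print(avg_cycle_time_weeks)  # → 3.0

# If task-level speedups let the team finish 12 items/week at the same WIP:
improved_cycle_time_weeks = avg_wip / 12
print(improved_cycle_time_weeks)  # → 2.5
```

<p>Note the converse reading: if throughput rises but WIP rises with it, cycle time does not improve - which is why lowering WIP matters as much as working faster.</p>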
<h2 id="the-metrics-challenge">The metrics challenge</h2>
<p>The challenge with these metrics is that <em>they take time to measure</em>. And many organizations don’t even have a baseline for some of these metrics. If organizations are going to be able to show the value of GitHub Copilot and improved DevEx, they are going to have to get to grips with these DevSecOps metrics, such as those from DORA, ActionableAgile and SPACE.</p>
<p>To further complicate things, many of these metrics have interdependencies. Optimizing one part of the development life cycle may highlight bottlenecks and inefficiencies in other parts of the development life cycle that could prevent the lagging indicators from improving. For example, let’s say that you give your developers GitHub Copilot and they start coding faster and completing tasks faster. Now you have more code reviews than before - and you could end up overwhelming senior developers that perform the code reviews, and they become a bottleneck that prevents you from deploying more frequently. So we see that the lagging indicators are related to an aggregation of the leading indicators, and we must take this into account when doing any analysis.</p>
<p>You cannot get Copilot Chat to help you fix a vulnerability if you can’t find the vulnerability, so you need good Application Security (AppSec) tools. You cannot attain more frequent deployments by improving developer speed alone - you have to invest in automation to build, test, scan, package and deploy your code. Improving cycle times won’t help if you’re not truly transforming the software delivery life cycle to be agile. And team performance improvements require streamlining processes and removing red tape, not just making individuals faster.</p>
<h2 id="perceptual-vs-workflow-metrics">Perceptual vs workflow metrics</h2>
<p>Most of the above discussion has focused on <em>workflow</em> (system) metrics. Even if the effect of these is understood, organizations must not forget the value of <em>perceptual</em> metrics. These are informed by how developers <em>feel</em> about GitHub Copilot and DevEx in general. Just as leading indicators interact with each other in complex ways to affect lagging indicators, perceptual metrics play an important role in DevEx. Any program to measure DevEx and the value of GitHub Copilot must include perceptual metrics such as how developers feel about the development process and their tools. More perceptual metrics are defined in the DevEx paper linked above.</p>
<p>Perceptual metrics are best measured by surveys and self-assessments. They must be carefully designed to take into account bias and avoid survey fatigue. Organizations without expertise in these areas should consider outsourcing this kind of study to experienced partners.</p>
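<p>Once survey responses are in, the aggregation itself is simple - the hard part is the survey design. A minimal sketch (questions, scores and the invite count are invented) that reports a mean score and response rate per question on a 1-5 Likert scale:</p>

```python
from statistics import mean

# Hypothetical survey: responses per question on a 1-5 Likert scale.
responses = {
    "I can stay in flow while coding": [4, 5, 3, 4, 4],
    "Copilot suggestions are useful for my work": [3, 4, 2, 5],
}
invited = 10  # developers invited to the survey

for question, scores in responses.items():
    rate = len(scores) / invited
    print(f"{question}: mean={mean(scores):.2f}, response rate={rate:.0%}")
```

<p>Tracking the response rate alongside the mean matters: a glowing average from a 20% response rate says more about survey fatigue than about DevEx.</p>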
<p>Once the perceptual metrics have been obtained, organizations should analyze the perceptual and workflow metrics together with business key performance indicators (KPIs) in order to attain a clear, accurate picture of DevEx and value.</p>
<h1 id="conclusion">Conclusion</h1>
<p>When organizations look at the return on investment (ROI) for investing in DevEx (including deploying GitHub Copilot), multiple dimensions must be considered. Analyzing which metrics will be impacted by improvements is a complex activity with many nuances. Organizations should start to analyze both input (leading) and output (lagging) metrics so that they can develop a fuller understanding of how productive their developers are individually, as well as how productive teams are. Ultimately the goal of such measurement is to help improve productivity and DevEx to accelerate achieving business outcomes.</p>
<p>What can organizations do today to improve developer productivity? First, start by asking developers what their view of DevEx and productivity is. Then start measuring both input and output metrics as defined above with a view to discovering where to most effectively invest to improve.</p>
Colin Dembovsky
Mission Control - and what it means for DevSecOps - 2023-06-12T01:22:01+00:00 - https://colinsalmcorner.com/mission-control
<ol id="markdown-toc">
<li><a href="#roots-of-process-debt" id="markdown-toc-roots-of-process-debt">Roots of Process Debt</a></li>
<li><a href="#army-mission-control" id="markdown-toc-army-mission-control">Army Mission Control</a></li>
<li><a href="#applying-mission-control-to-devsecops" id="markdown-toc-applying-mission-control-to-devsecops">Applying Mission Control to DevSecOps</a> <ol>
<li><a href="#competence" id="markdown-toc-competence">Competence</a></li>
<li><a href="#mutual-trust" id="markdown-toc-mutual-trust">Mutual trust</a></li>
<li><a href="#shared-understanding" id="markdown-toc-shared-understanding">Shared understanding</a></li>
<li><a href="#commanders-intent" id="markdown-toc-commanders-intent">Commander’s intent</a></li>
<li><a href="#mission-orders" id="markdown-toc-mission-orders">Mission orders</a></li>
<li><a href="#disciplined-initiative" id="markdown-toc-disciplined-initiative">Disciplined initiative</a></li>
<li><a href="#risk-acceptance" id="markdown-toc-risk-acceptance">Risk acceptance</a></li>
</ol>
</li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Photo by <a href="https://unsplash.com/@fandrejevic?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Filip Andrejevic</a> on <a href="https://unsplash.com/s/photos/army?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>Today’s markets move fast. Organizations that don’t keep pace are being left behind. DevSecOps is fairly easy to grasp conceptually, but is not easily implemented. Most organizations that struggle to implement DevSecOps effectively are hampered not by tooling, but by old ways of thinking.</p>
<p>DevSecOps requires a cultural shift - as well as a platform to support this shift. A reminder of Donovan Brown’s definition of DevOps is warranted:</p>
<blockquote>
<p>DevOps is the union of people, process and products to enable continuous delivery of value to our end users.</p>
</blockquote>
<p>We used to say it this way when I was a DevOps consultant:</p>
<blockquote>
<p>You can’t buy DevOps in a box.</p>
</blockquote>
<p>There is no “silver bullet” or product that will “make you DevOps”. Finding the right tools and platforms is important, but culture is more so. Many teams talk about “technical debt” but I don’t hear a lot of teams talk about <em>process debt</em>.</p>
<h2 id="roots-of-process-debt">Roots of Process Debt</h2>
<p>There are probably many roots of process debt, but I think that many of them come from the Waterfall mindset. In Waterfall, the idea was to work out all the possible scenarios and outcomes up-front so that we could minimize risk. Ironically, this extreme “analysis paralysis” almost always led to <em>building the wrong things</em> which was the exact thing it was trying to prevent!</p>
<p>A second factor was the desire to find economies of scale. For example, it was common to have database administrators (DBAs) and security professionals that took care of all the database and security work respectively. “Developers don’t know how to optimize database work, so we’ll centralize that work to let the developers code faster.” Again, the irony is that DBAs became a bottleneck. The same is true of security teams - the desire to “offload” security from App Teams ends up slowing teams down!</p>
<p>As I was thinking about process debt, I came across a philosophy from the US Department of the Army called Mission Control that seemed to offer some insights into how to build a good DevSecOps culture.</p>
<h2 id="army-mission-control">Army Mission Control</h2>
<blockquote>
<p>Mission Control is the Army’s approach to command and control that empowers subordinate decision-making and decentralized execution appropriate to the situation.</p>
</blockquote>
<p>In war, events are too chaotic and communication too fragmentary to rely on centralized control. Commanders need to rely on the <em>innovation and decisive action of subordinates to meet intent in a complex operating environment</em>. Sounds like this applies to DevSecOps, doesn’t it?</p>
<p>The seven principles of Mission Command are:</p>
<ul>
<li><strong>Competence</strong> - developed continually through training and self-development of soldiers</li>
<li><strong>Mutual trust</strong> - shared confidence between soldiers and commanders that they can be relied upon and are competent to perform assigned tasks</li>
<li><strong>Shared understanding</strong> - creating common language and culture and clear visions and values</li>
<li><strong>Commander’s intent</strong> - commanders must clearly communicate intent to everyone, articulating purpose and desired end state</li>
<li><strong>Mission orders</strong> - describing the situation, commander’s intent, desired results and required tasks, <em>without specifying how tasks are to be accomplished</em></li>
<li><strong>Disciplined initiative</strong> - weighing whether the benefits of a localized decision outweigh the risk of desynchronizing the overall operation, and whether the action furthers the commander’s intent</li>
<li><strong>Risk acceptance</strong> - commanders must assess risk to mission while mitigating risks with control measures, trusting that their intent has been relayed and subordinate decisions will be made based on that intent</li>
</ul>
<h2 id="applying-mission-control-to-devsecops">Applying Mission Control to DevSecOps</h2>
<p>We can apply these principles to our thinking about culture for DevSecOps.</p>
<h3 id="competence">Competence</h3>
<p>Investing in people and their skills is a critical part of a successful DevSecOps culture. Developers need to be empowered to learn about new technologies, stacks and trends. Similarly, cross-functional teams need to have training available for the breadth of their responsibilities. These responsibilities go beyond just coding and include testing, automating, monitoring, security, hyper-scale, infrastructure as code, cloud operations, live-site culture and more. By investing in training and opportunities for learning, companies build competence.</p>
<h3 id="mutual-trust">Mutual trust</h3>
<p>Trust is critical - but it must be <em>mutual</em>. App Dev teams must trust that their commanders (executives) are investing in them, and executives must trust their teams to do the right thing. This trust is earned and built over time, and can only be built on a culture that values innovation and won’t punish people for initiative.</p>
<h3 id="shared-understanding">Shared understanding</h3>
<p>Executives must clearly communicate the vision and values of the organization so that it is well understood by everyone. Organizations should also spend time thinking about a common language as well as communication lines and types (see the three key <a href="https://teamtopologies.com/key-concepts">Interaction Modes</a> from Team Topologies). Organizations that are clear about how they communicate can take advantage of the homomorphic force of Conway’s Law to ensure that their architectures and culture are aligned, rather than opposing.</p>
<h3 id="commanders-intent">Commander’s intent</h3>
<p>Beyond the values and vision, executives must clearly communicate <em>purpose</em> and <em>desired end state</em>. Clearly articulating what success looks like and what the key objectives are at the executive level keeps everyone aligned.</p>
<p>In my <a href="/scaling-dev-sec-ops/">previous post</a> I spoke about the balance between <em>Team autonomy</em> and <em>Enterprise alignment</em>. When executives are crystal clear on the purpose of an organization as well as desired end state, this gives teams strong enterprise alignment. Strong enterprise alignment at the <em>strategic level</em> promotes a culture where Teams feel empowered to innovate within the boundaries that the organization really cares about.</p>
<h3 id="mission-orders">Mission orders</h3>
<p>This is where most organizations get it wrong - mission orders are about distilling the commander’s intent, in language built from Shared understanding, and specifying <em>what</em> needs to be done, not <em>how</em> it should be done.</p>
<p>This requires the vision, purpose and values of the organization to be clearly understood. It requires good shared understanding, but it is also built on mutual trust. Will leaders trust that their teams have the competency to do what needs to be done? Do developers trust that the leaders are investing in them?</p>
<p>This also ties back well to Enterprise alignment - which emphasizes a core set of non-negotiables (the values) and lets teams innovate within these parameters to meet the Commander’s intent (Team autonomy).</p>
<h3 id="disciplined-initiative">Disciplined initiative</h3>
<p>If organizations have clear Mission orders, know the Commander’s intent, and what they are tasked to achieve, <em>then they can innovate to fulfil the goals</em>. Rather than specifying <em>how</em> they should do things, which signals a lack of trust, executives show trust by letting teams apply initiative. This benefits the teams (they gain mastery and autonomy) and the company, since the company is now building a culture of innovation. This then feeds to better trust, which leads to more autonomy - and the virtuous cycle continues.</p>
<p>Team autonomy is what is being expressed here - within the boundaries of clear, concise Enterprise alignment.</p>
<h3 id="risk-acceptance">Risk acceptance</h3>
<p>This is a tough one for most organizations. However, if the other principles are in place, then this becomes the natural progression. Organizations that have low-trust environments tend to be highly risk-averse.</p>
<p>This is not to say that risks should not be evaluated, weighed and mitigated when appropriate. However, teams that default to zero risk also stifle innovation and experimentation. When organizations build competent teams in a high-trust environment, and are clear about their purpose and vision, then they can accept the risk of <em>letting teams fail</em>. If teams are never allowed to fail, they will never innovate. Once again, settling on a small core of non-negotiables (Enterprise alignment) and then giving teams room (Team autonomy) to innovate, experiment and (at times) fail, shows trust.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The principles of the Army’s Mission Control philosophy apply well to the culture of DevSecOps. Organizations that want to succeed need to develop a culture that builds mutual trust and empowers innovation, rather than stifling it.</p>
<p>Happy missioning!</p>Colin DembovskyWho needs GitHub Copilot?2023-06-12T01:22:01+00:002023-06-12T01:22:01+00:00https://colinsalmcorner.com/who-needs-github-copilot<ol id="markdown-toc">
<li><a href="#1-you-prefer-writing-code-in-notepad" id="markdown-toc-1-you-prefer-writing-code-in-notepad">1. You prefer writing code in Notepad</a></li>
<li><a href="#2-you-like-writing-boilerplate-code" id="markdown-toc-2-you-like-writing-boilerplate-code">2. You like writing boilerplate code</a></li>
<li><a href="#3-you-know-every-regex-expression" id="markdown-toc-3-you-know-every-regex-expression">3. You know every regex expression.</a></li>
<li><a href="#4-you-know-every-api" id="markdown-toc-4-you-know-every-api">4. You know every API</a></li>
<li><a href="#5-you-like-copying-and-pasting-from-stackoverflow" id="markdown-toc-5-you-like-copying-and-pasting-from-stackoverflow">5. You like copying and pasting from StackOverflow.</a></li>
<li><a href="#6-you-dont-need-unit-tests" id="markdown-toc-6-you-dont-need-unit-tests">6. You don’t need unit tests.</a></li>
<li><a href="#7-comments-what-for" id="markdown-toc-7-comments-what-for">7. Comments? What for?</a></li>
<li><a href="#8-youd-rather-leak-ip-by-pasting-code-into-chatgpt" id="markdown-toc-8-youd-rather-leak-ip-by-pasting-code-into-chatgpt">8. You’d rather leak IP by pasting code into ChatGPT.</a></li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Photo by <a href="https://unsplash.com/@aideal?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Aideal Hwa</a> on <a href="https://unsplash.com/s/photos/robot?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>Generative AI, Copilot, blah blah blah - who needs it? You’re the Ultimate Programmer, so why would you want some pretentious “large” language model helping you? You’re a lone wolf that don’t need nobody (or no <em>thing</em>) to “help” you - it will only get in the way of your staggering intellect.</p>
<p>Well this post is just for you - the top reasons why you DON’T need GitHub Copilot.</p>
<h2 id="1-you-prefer-writing-code-in-notepad">1. You prefer writing code in Notepad</h2>
<p>GitHub Copilot only supports four IDE families: Visual Studio Code, Visual Studio, the JetBrains IDEs and Neovim. But you prefer to code in Notepad. Or vim. Or emacs. All those plugins and breakpoints and live debugging - it’s overrated. You can debug in your head just by looking at the perfect code you wrote. And you can quit vim whenever you want to.</p>
<p>So what if Copilot integrates seamlessly and fades into the background as you code?</p>
<h2 id="2-you-like-writing-boilerplate-code">2. You like writing boilerplate code</h2>
<p>Constructors. Getters. Setters. Who needs to think about business problems when you can write <em>real</em> code? I mean, you learned how to do it when you did your Intro to Programming course, so you want to make sure you get your money’s worth.</p>
<p>So what if Copilot is <em>really</em> good at writing repetitive, boilerplate code, thereby keeping you focused on solving business problems?</p>
<h2 id="3-you-know-every-regex-expression">3. You know every regex expression.</h2>
<p>Only losers need to test their regex expressions using <a href="https://regex101.com/">regex101</a>. You don’t need Copilot’s help to validate obscure string formats - you just do it in your head.</p>
<p>So what if Copilot can generate regex and easily dump out obscure formats and formulas so that you don’t have to remember them or search for them?</p>
<h2 id="4-you-know-every-api">4. You know every API</h2>
<p>Who needs to look up how to invoke common APIs? Once you’ve seen a Swagger doc you can call any and every method in that API forever.</p>
<p>So what if Copilot knows how to call APIs that millions of developers use daily?</p>
<h2 id="5-you-like-copying-and-pasting-from-stackoverflow">5. You like copying and pasting from StackOverflow.</h2>
<p>Speaking of searching for stuff - you love StackOverflow! What’s better than googling a question and then inevitably landing on StackOverflow where there are a bunch of random answers that may or may not be correct that you can copy from? And who doesn’t love renaming all the variables and fixing all the formatting errors (tabs vs spaces anyone)? Not that you need to search for stuff anyway - your infallible memory is a giant library of endless code examples to draw from.</p>
<p>So what if Copilot can get answers for you without you having to leave the IDE… er, file… and follows your naming conventions and styles?</p>
<h2 id="6-you-dont-need-unit-tests">6. You don’t need unit tests.</h2>
<p>Unit tests - that assumes your code could be wrong. And why spend time programming code that tries to break the code you just coded so perfectly? If you <em>did</em> write unit tests, they would be the ultimate tests.</p>
<p>So what if Copilot can quickly generate tests, mocks and find multiple test cases just by analyzing the code you’re testing?</p>
<h2 id="7-comments-what-for">7. Comments? What for?</h2>
<p>You don’t have to document your code. Your code is so perfect that people can tell what it’s doing just by seeing your code. Besides, no-one else will ever need to look at your code unless it’s to learn how to program perfectly. Da Vinci didn’t have to “comment” the Mona Lisa, did he?</p>
<p>So what if GitHub Copilot can generate code based on your comments, and that the comments stay to help document your code?</p>
<h2 id="8-youd-rather-leak-ip-by-pasting-code-into-chatgpt">8. You’d rather leak IP by pasting code into ChatGPT.</h2>
<p>You’re not like those Samsung developers that <a href="https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt">leaked sensitive information</a> while copying code into ChatGPT, right? I mean, you’d never be asking an AI for help anyway.</p>
<p>So what if Copilot encrypts data in transit over HTTPS and (at least for Copilot for Business) never retains any of your code or suggestions?</p>
<h2 id="conclusion">Conclusion</h2>
<p>This GitHub Copilot thing is totally overrated. It’s not going to change the way you work or the perfection of the code that you crank out as you consume coffee and cold pizza. No way.</p>
<p>Happy (not) Copiloting!</p>Colin DembovskyTeam Autonomy vs Enterprise Alignment2023-06-07T01:22:01+00:002023-06-07T01:22:01+00:00https://colinsalmcorner.com/scaling-dev-sec-ops<ol id="markdown-toc">
<li><a href="#team-autonomy-vs-enterprise-alignment" id="markdown-toc-team-autonomy-vs-enterprise-alignment">Team Autonomy vs Enterprise Alignment</a> <ol>
<li><a href="#team-autonomy" id="markdown-toc-team-autonomy">Team Autonomy</a></li>
<li><a href="#enterprise-alignment" id="markdown-toc-enterprise-alignment">Enterprise Alignment</a></li>
</ol>
</li>
<li><a href="#tying-in-to-devsecops" id="markdown-toc-tying-in-to-devsecops">Tying in to DevSecOps</a></li>
<li><a href="#considering-builds" id="markdown-toc-considering-builds">Considering Builds</a> <ol>
<li><a href="#extreme-enterprise-alignment" id="markdown-toc-extreme-enterprise-alignment">Extreme Enterprise Alignment</a></li>
<li><a href="#extreme-team-autonomy" id="markdown-toc-extreme-team-autonomy">Extreme Team Autonomy</a></li>
<li><a href="#well-balanced" id="markdown-toc-well-balanced">Well Balanced</a></li>
</ol>
</li>
<li><a href="#considering-appsec" id="markdown-toc-considering-appsec">Considering AppSec</a> <ol>
<li><a href="#extreme-enterprise-alignment-1" id="markdown-toc-extreme-enterprise-alignment-1">Extreme Enterprise Alignment</a></li>
<li><a href="#extreme-team-autonomy-1" id="markdown-toc-extreme-team-autonomy-1">Extreme Team Autonomy</a></li>
<li><a href="#well-balanced-1" id="markdown-toc-well-balanced-1">Well Balanced</a></li>
</ol>
</li>
<li><a href="#devsecops-at-scale" id="markdown-toc-devsecops-at-scale">DevSecOps At Scale</a> <ol>
<li><a href="#treat-the-pr-as-the-center-of-quality-and-security" id="markdown-toc-treat-the-pr-as-the-center-of-quality-and-security">Treat the PR as the center of quality and security</a></li>
<li><a href="#enable-secret-scanning-and-push-protection" id="markdown-toc-enable-secret-scanning-and-push-protection">Enable secret scanning and push protection</a></li>
<li><a href="#treat-security-vulnerabilities-as-work" id="markdown-toc-treat-security-vulnerabilities-as-work">Treat security vulnerabilities as “work”</a></li>
<li><a href="#let-teams-buildtestpackagescandeploy-their-apps" id="markdown-toc-let-teams-buildtestpackagescandeploy-their-apps">Let teams build/test/package/scan/deploy their apps</a></li>
<li><a href="#manage-by-exception" id="markdown-toc-manage-by-exception">Manage by exception</a></li>
</ol>
</li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Image by <a href="https://unsplash.com/@mrsunflower94?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Matteo Vistocco</a> on <a href="https://unsplash.com/s/photos/team?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>I work for GitHub - so naturally I have a lot of conversations about tooling and products. However, let’s take a step back and remember Donovan Brown’s seminal definition of DevOps:</p>
<blockquote>
<p>DevOps is the union of people, process and products to enable continuous delivery of value to our end users.</p>
</blockquote>
<p>You’ve also probably heard Peter Drucker’s quote:</p>
<blockquote>
<p>Culture eats strategy for breakfast.</p>
</blockquote>
<p><em>Culture</em> is the <em>people and process</em> part of the DevOps equation, and is arguably more important than the <em>product</em> or platform your teams are working with.</p>
<p>That’s all well and good in a theoretical, high-level way. But how do we apply these principles in practice?</p>
<h2 id="team-autonomy-vs-enterprise-alignment">Team Autonomy vs Enterprise Alignment</h2>
<p>Many years ago, I heard Aaron Bjork and Buck Hodges from the Azure DevOps team talk about how Microsoft transformed their teams from a 2-year delivery cycle to a 3-week delivery cycle. This <a href="https://www.youtube.com/watch?v=WhRRGUmwoq4&t=10s">excellent video</a> by my late friend and colleague Abel Wang talks about this transformation and I highly recommend it.</p>
<p>One concept has always stood out to me when Microsoft spoke about this transformation: <em>team autonomy</em> vs <em>enterprise alignment</em>. You can imagine these as two ends of a spectrum, with total team autonomy on one side and complete enterprise alignment on the other side.</p>
<p>To visualize these extreme ends of the spectrum, picture 300 rowboats vs the Titanic:</p>
<ul>
<li>the 300 rowboats can each turn very quickly</li>
<li>each rowboat can travel fast or slow, according to how well the rowers gel together</li>
<li>getting all 300 rowboats pointed in the same direction is a challenge</li>
<li>communicating to all 300 rowboats is a challenge</li>
<li>the Titanic only has a single direction</li>
<li>the Titanic takes a long time to change direction</li>
<li>communication on the Titanic is easier</li>
</ul>
<p>Most organizations fall somewhere on the spectrum between team autonomy and enterprise alignment, and various points along the spectrum have advantages and disadvantages.</p>
<h3 id="team-autonomy">Team Autonomy</h3>
<p>Team autonomy means that teams are able to make decisions without filling in forms and logging tickets. To make this practical for software development, it means allowing teams to decide which programming languages and stacks they want to work with, what IDEs they want to use, and how they will build, test, scan, deploy and monitor their apps.</p>
<h3 id="enterprise-alignment">Enterprise Alignment</h3>
<p>Enterprise alignment is the vision and goal of the company and how that is worked out day-to-day. It defines how individuals and teams communicate, what their standards are, and what future direction is. It also defines the “non-negotiables”.</p>
<p>In practice, successful organizations have a small, well-defined “core” of Enterprise Alignment, and then allow teams a large degree of autonomy. Enterprise alignment defines the <em>what</em> and lets teams define the minutiae of the <em>how</em>.</p>
<h2 id="tying-in-to-devsecops">Tying in to DevSecOps</h2>
<p>How does this tie into DevSecOps? Many organizations I work with have a centralized, command and control model. In other words, they lie much closer to the Enterprise Alignment side of the spectrum. Let’s look at two examples: builds and security. We’ll analyze each on both extremes: enterprise alignment and team autonomy.</p>
<h2 id="considering-builds">Considering Builds</h2>
<h3 id="extreme-enterprise-alignment">Extreme Enterprise Alignment</h3>
<p>Many organizations have a “DevOps team”. I really despise this language, since it makes DevOps the responsibility of some other team - after all, if I’m not on the DevOps team, then why should I care about DevOps? I think what most organizations mean is that they have a team that is responsible for build and deployment automation.</p>
<p>The idea behind this team is to enable developers to code, and not have to worry about how to package, test, scan and deploy their apps. This leads to app developers not caring about operational issues, not building sufficient telemetry into their apps, not caring about security or scale or performance. After all, that all falls onto the “DevOps” team.</p>
<p>The supposed value-add is that there is a standardized build, test and deploy process, controlled by the DevOps team.</p>
<h3 id="extreme-team-autonomy">Extreme Team Autonomy</h3>
<p>When there is no enterprise alignment, it can look and feel like the wild west. Teams are deploying whenever and however they want, there is little or no code sharing and there are a plethora of tools since each team is using its own preferred tools and stacks.</p>
<p>While this allows agility at the “local” level, it ends up being a blocker at the “global” level. Teams optimizing for themselves end up being blocked by other teams (or blocking other teams) since there is no set contract for sharing code or apps and no set way to communicate.</p>
<h3 id="well-balanced">Well Balanced</h3>
<p>A more balanced approach would be to have a small, well defined set of goals at the enterprise level that can guide teams and set a few non-negotiables. Thereafter, teams should be free to innovate within those boundaries.</p>
<p>How would we do this with builds? One way would be to standardize on a single build platform (say, GitHub) and then require teams to test, secure and monitor <em>their own apps</em>. This can be achieved by setting up branch policies to ensure that teams place these gates into their processes and making developers responsible for run-time operations of their apps. How teams test can be left up to them, as long as they test. If teams don’t want to add telemetry, they are going to have a hard time running apps in production - so they will likely end up adding telemetry to make operations easier.</p>
<h2 id="considering-appsec">Considering AppSec</h2>
<h3 id="extreme-enterprise-alignment-1">Extreme Enterprise Alignment</h3>
<p>Most organizations I work with have a Cyber security team. These teams are typically involved late in the development lifecycle and are the official gate-keepers to “going to prod”. The idea is that this centralized team is the enterprise alignment for securing applications.</p>
<p>There are many problems with this extreme - poor developer experience, slowed release cycles and friction. When you add the fact that security engineering skills are rare (1 security pro for every 800 developers is a commonly cited industry ratio), you get the additional problem that this model does not scale.</p>
<p>The value-add for this would be a central place where security and risk are surfaced and managed. Unfortunately, the bottleneck and friction this model creates negates any benefits.</p>
<h3 id="extreme-team-autonomy-1">Extreme Team Autonomy</h3>
<p>On the other extreme, teams are not bound to any security standards at all, leading to risk for the company. If teams are scanning their code, dependencies and secrets, they’re using disparate tools and processes and it is nearly impossible to manage risk at scale.</p>
<h3 id="well-balanced-1">Well Balanced</h3>
<p>How can we balance these requirements - centralized risk management and good developer experience? We standardize on a single platform/tool and mandate that teams scan their code and dependencies and scan for secrets. We can enforce branch protection rules to ensure that these scans complete before deployment. These are the non-negotiables.</p>
<p>We then let teams figure out how to treat remediation in their backlogs. We may have to set some sort of SLA on remediation. As long as we have visibility into which teams are in compliance, we can let the teams decide when/how to remediate. This gives the teams autonomy within some good boundaries.</p>
<h2 id="devsecops-at-scale">DevSecOps At Scale</h2>
<p>There is no <em>effective</em> way to scale DevSecOps if your culture is either too centralized (extreme enterprise alignment) or too decentralized (extreme team autonomy). Organizations must find a good set of non-negotiables and then extend trust to teams for everything else.</p>
<p>For this to work, however, you must have a platform that can support this culture. I believe that GitHub is the platform for this. Here are a few recommendations that will allow you to scale DevSecOps:</p>
<h3 id="treat-the-pr-as-the-center-of-quality-and-security">Treat the PR as the center of quality and security</h3>
<ul>
<li>Enabling branch protection for your <code class="language-plaintext highlighter-rouge">main</code> branch forces teams to use Pull Requests (PRs) to flow code changes to your stable code.</li>
<li>Require peer code review for your PRs. This ensures that you get more eyes onto code changes, and encourages teams to work in smaller batches (there’s nothing worse than doing a code review for a large number of changes).</li>
<li>Require passing builds that include unit tests. This ensures that code at least compiles and that it passes some level of unit testing. Code that can’t pass these basic gates should not be deployed to production!</li>
<li>Require code scanning (SAST). This ensures that security issues for your code are picked up early and fixed immediately. This also removes the burden on the (scarce) security professionals in your organization.</li>
<li>Require dependency scanning and <a href="https://docs.github.com/en/enterprise-cloud@latest/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review">Dependency Review</a>. This ensures that you are not introducing vulnerable dependencies with your code changes.</li>
</ul>
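<p>As a sketch, the gates above can be implemented as pull request status checks in a single GitHub Actions workflow, and then marked as required checks in the branch protection rule for <code class="language-plaintext highlighter-rouge">main</code>. The file path, job names and build commands below are placeholders to adapt to your own stack:</p>

```yaml
# .github/workflows/pr-checks.yml - illustrative only; build/test commands are placeholders
name: PR checks
on:
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh       # replace with your build command
      - run: ./run-tests.sh   # replace with your unit test command

  codeql:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript   # adjust to your language(s)
      - uses: github/codeql-action/analyze@v3

  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
```

<p>Once these jobs report on PRs, each can be selected as a required status check - the gates become non-negotiable, while the internals of each job stay up to the team.</p>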
<h3 id="enable-secret-scanning-and-push-protection">Enable secret scanning and push protection</h3>
<ul>
<li>There are too many breaches caused by secrets checked into source control. Turning on secret scanning to remediate existing secrets (get clean) and enabling push protection (stay clean) dramatically reduces this risk.</li>
<li>The ease of switching this on at the org level should not be underestimated. There are no IDE plugins or build steps to configure - it’s just flipping a switch. <em>There is no other secret scanning tool that can be scaled as easily</em>.</li>
</ul>
<h3 id="treat-security-vulnerabilities-as-work">Treat security vulnerabilities as “work”</h3>
<ul>
<li>This removes the “scare” factor from security issues.</li>
<li>This lets teams prioritize remediation along with other feature requests. Teams look at bugs and determine if they need to be fixed immediately or not - they should treat vulnerabilities in the same manner.</li>
</ul>
<h3 id="let-teams-buildtestpackagescandeploy-their-apps">Let teams build/test/package/scan/deploy their apps</h3>
<ul>
<li>A centralized build team may work at a small scale, but at larger scales (> 50 devs) this can become a bottleneck.</li>
<li>Reuse small jobs rather than large pipelines. Large, generic pipelines that try to deploy every app become unwieldy and fragile. Rather create small reusable jobs that are like Lego bricks to encapsulate common parts of a workflow, and let teams compose these in their own pipelines. This gives a good balance of reusability without bloating.</li>
</ul>
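<p>As a sketch of the “Lego brick” approach, a platform team can publish a reusable workflow (via <code class="language-plaintext highlighter-rouge">workflow_call</code>) that app teams compose into their own pipelines. The repository, file and input names here are hypothetical:</p>

```yaml
# .github/workflows/scan.yml in a shared repo (hypothetical name: platform-team/workflows)
name: Reusable scan brick
on:
  workflow_call:
    inputs:
      language:
        required: true
        type: string

jobs:
  codeql:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: ${{ inputs.language }}
      - uses: github/codeql-action/analyze@v3
---
# An app team's own pipeline composes the brick with a `uses:` reference
name: Team pipeline
on: [pull_request]
jobs:
  scan:
    uses: platform-team/workflows/.github/workflows/scan.yml@main
    with:
      language: java
```

<p>Because the brick owns the <em>how</em> of scanning, teams only supply inputs - and the platform team can update the brick centrally without touching every team’s pipeline.</p>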
<h3 id="manage-by-exception">Manage by exception</h3>
<ul>
<li>“Trust, but verify.” Assume that teams will do the right thing, and then check for cases where they do not. For example, monitor bypasses of push protection. If a team does this repeatedly, it could be an indication that they are doing something wrong. This is better than “hard gating” and blocking developers.</li>
<li>Teams must own their apps - and that includes <em>failing</em>. If you can fail fast, then you can recover fast too. Once teams see that good quality makes their lives better, they will be more motivated to produce quality code without the need for heavy handed processes! This means that you should be prepared for them to fail from time to time - and to trust them to recover quickly.</li>
</ul>
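<p>One way to monitor push protection bypasses is through the secret scanning alerts that GitHub records when a bypass happens. Here is a minimal sketch, assuming the <code class="language-plaintext highlighter-rouge">push_protection_bypassed</code> fields returned by the secret scanning alerts REST API; the sample data is made up rather than fetched from a live org:</p>

```python
# Sketch: flag repeated push-protection bypasses from secret scanning alerts.
# Alert dicts mirror fields returned by GitHub's REST API
# (GET /orgs/{org}/secret-scanning/alerts); in practice you would fetch
# them with an authenticated API client - the sample data below is fabricated.
from collections import Counter

def bypass_counts(alerts):
    """Count push-protection bypasses per user from a list of alert dicts."""
    counts = Counter()
    for alert in alerts:
        if alert.get("push_protection_bypassed"):
            user = (alert.get("push_protection_bypassed_by") or {}).get("login", "unknown")
            counts[user] += 1
    return counts

sample = [
    {"push_protection_bypassed": True,
     "push_protection_bypassed_by": {"login": "dev-a"}},
    {"push_protection_bypassed": True,
     "push_protection_bypassed_by": {"login": "dev-a"}},
    {"push_protection_bypassed": False, "push_protection_bypassed_by": None},
]

# Repeated bypasses are the signal worth a conversation - not a hard gate.
repeat_offenders = {u: n for u, n in bypass_counts(sample).items() if n > 1}
print(repeat_offenders)  # {'dev-a': 2}
```

<p>The point is managing by exception: rather than blocking pushes outright, surface the teams that bypass repeatedly and start a conversation there.</p>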
<h2 id="conclusion">Conclusion</h2>
<p>Scaling DevSecOps effectively requires organizations to think about their culture. Finding a good spot on the spectrum of Team Autonomy and Enterprise Alignment is critical to success. Organizations must find a small set of core non-negotiables and give teams choice for everything else. The GitHub platform enables organizations to configure these “non-negotiables” in a transparent way, allowing teams to move quickly without compromising quality and security.</p>
<p>Happy scaling!</p>Colin DembovskySpicy Takes 🌶️🌶️🌶️ on RSA 20232023-05-01T01:22:01+00:002023-05-01T01:22:01+00:00https://colinsalmcorner.com/spicy-takes-on-rsa<ol id="markdown-toc">
<li><a href="#spicy-takes" id="markdown-toc-spicy-takes">Spicy Takes</a> <ol>
<li><a href="#️-culture-eats-application-security-for-breakfast" id="markdown-toc-️-culture-eats-application-security-for-breakfast">🌶️ Culture eats application security for breakfast</a></li>
<li><a href="#️️-developers-developers-developers" id="markdown-toc-️️-developers-developers-developers">🌶️🌶️ Developers, developers, developers!</a> <ol>
<li><a href="#security-professionals-as-enabling-teams" id="markdown-toc-security-professionals-as-enabling-teams">Security professionals as Enabling Teams</a></li>
</ol>
</li>
<li><a href="#️️️-security-tools-are-a-dime-a-dozen" id="markdown-toc-️️️-security-tools-are-a-dime-a-dozen">🌶️🌶️🌶️ Security tools are a dime a dozen</a> <ol>
<li><a href="#developer-productivity" id="markdown-toc-developer-productivity">Developer productivity</a></li>
<li><a href="#reduced-friction" id="markdown-toc-reduced-friction">Reduced friction</a></li>
<li><a href="#visibility" id="markdown-toc-visibility">Visibility</a></li>
<li><a href="#scalability" id="markdown-toc-scalability">Scalability</a></li>
</ol>
</li>
<li><a href="#github-advanced-security" id="markdown-toc-github-advanced-security">GitHub Advanced Security</a></li>
</ol>
</li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Image by <a href="https://unsplash.com/@pickledstardust?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Pickled Stardust</a> on <a href="https://unsplash.com/photos/4xc6i5BKPWs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>I was at RSA last week in San Francisco. The highlight of the week was a talk by <a href="https://www.linkedin.com/in/shannonlietz/">Shannon Lietz</a>, whom I met briefly at GitHub HQ during the week. More on this later.</p>
<p>I visited the expo area and had great conversations with GitHub customers as well as GitHub technology and services partners. I was looking for overall trends and trying to get a pulse on the industry - and coming from a developer background, the security world is both fascinating and foreign to me!</p>
<h1 id="spicy-takes">Spicy Takes</h1>
<p>There are a couple of key themes that I took away from the week, and I present them here in order of spiciness:</p>
<ol>
<li>🌶️ Culture eats application security for breakfast</li>
<li>🌶️🌶️ Organizations that don’t invest in developers are not serious about security</li>
<li>🌶️🌶️🌶️ Security tools are a dime a dozen</li>
</ol>
<h2 id="️-culture-eats-application-security-for-breakfast">🌶️ Culture eats application security for breakfast</h2>
<p>I am used to the phrase “culture eats tooling for breakfast” in the context of DevOps. You can have the most amazing tools, but if you have a dysfunctional culture, <em>tools will not help you succeed</em>. Many of the conversations I had this week presented echoes of this sentiment, but in the context of security. So it is easy to turn the phrase into <em>culture eats application security for breakfast</em>.</p>
<p>But what does this really mean? I was struck by how little emphasis was placed on culture as a foundation and pillar for application security. A culture that isolates and separates developers and security professionals will struggle to be effective at AppSec.</p>
<p><a href="/vsts-one-team-project-and-inverse-conway-maneuver/">Conway’s Law</a> teaches us that the communication structures of organizations are invariably reflected in the application architectures of those organizations. It’s no surprise when we look at the popularity of n-tier applications in the late 90’s and early 2000’s - these mirror the top-down, hierarchical management structures that were prevalent in those times. As Agile gained popularity and management changed to smaller, more autonomous teams, we saw the proliferation of microservices.</p>
<p>This is why we must consider the impact of how our developers and security teams communicate and collaborate if we want to succeed at AppSec. We cannot get away from Conway’s Law. If we continue to bolt security teams onto developer teams late in the development life cycle as a mess of bureaucratic red tape, then AppSec will continue to fail.</p>
<p>You’ve heard the mantra “shift left”, and today any security pro worth their salt will talk about this concept. But simply deploying another tool in an automated build has limited efficacy - we must “shift the culture left”.</p>
<p>Teams with good tools and bad culture are less effective than teams with good culture and bad tools. Ultimately, we need to progress to teams that have both good culture <em>and</em> good tools.</p>
<h2 id="️️-developers-developers-developers">🌶️🌶️ Developers, developers, developers!</h2>
<p>Following on from the culture discussion above, we have to pivot to the key to effective AppSec: <em>the developer</em>. Changing culture is going to require renewed investment in developers as well as a shift in the roles and responsibilities of security professionals.</p>
<p>One highlight of the week was the DevOps Connect talk by Shannon Lietz. I particularly remember her saying, “To be effective in security, we must <em>translate security into developer</em>.”</p>
<p>I realize that I was at a <em>security</em> conference, but even so, I was struck by how few companies today are looking to solve security by investing in developers. And I will go even further: companies that do not look to solve AppSec by investing in developers are doomed to fail at AppSec.</p>
<p>AppSec is a fascinating intersection between developers and security professionals. These two groups typically speak different languages and have different lenses through which they view the world. This is why I resonated with Shannon’s statement - companies that fail to translate security into language, processes and tooling that developers understand are not serious about security. And as part of that, they must transform how security professionals work too!</p>
<h3 id="security-professionals-as-enabling-teams">Security professionals as Enabling Teams</h3>
<p><a href="https://teamtopologies.com/">Team Topologies</a> does a great job in creating language around how to design teams within an organization. This is another area that suffers a severe lack of investment - companies don’t typically think about how they design their teams or how their teams communicate. Without going into the four types of Teams, at a high level, developers should be Stream Aligned Teams and security professionals should become Enabling Teams.</p>
<p>In short, the security teams should work on <em>enabling developers</em> to write secure code, fix vulnerabilities and become the first line for security. If your security professionals are doing all the security work, they will always be a bottleneck. Organizations can scale AppSec and scarce security skills by taking this approach. This is what I think true “shift culture left” means in the context of AppSec.</p>
<h2 id="️️️-security-tools-are-a-dime-a-dozen">🌶️🌶️🌶️ Security tools are a dime a dozen</h2>
<p>Most of the vendors at the expo seemed cookie-cutter, using oft-repeated catch phrases (like the ubiquitous “shift left” and “go faster”) but didn’t seem to bring anything new or fresh to AppSec.</p>
<p>There are some critical dimensions that companies must consider when evaluating and rolling out security tools:</p>
<ol>
<li>Developer productivity</li>
<li>Reduced friction</li>
<li>Visibility</li>
<li>Scalability</li>
</ol>
<p>I was disappointed to see that very few tools in the AppSec space addressed these dimensions. Slapping another tool into the mix isn’t going to be effective - you must address these dimensions.</p>
<h3 id="developer-productivity">Developer productivity</h3>
<p>Moving fast isn’t just about new features: your <em>security response</em> velocity will never be faster than your <em>developer velocity</em>. It’s simple to illustrate this point: let’s say that your commit-to-production lead time is 3 days; in that case it stands to reason that your time to remediate cannot be <em>faster</em> than 3 days. Speed is a critical component of staying secure.</p>
<h3 id="reduced-friction">Reduced friction</h3>
<p>Another great quote from Shannon Lietz is: “Developers don’t talk about security tools unless they <em>make security folks go away</em>.” When I was a developer, security were the people who blocked your deployments. Security tools only slowed me down. It wasn’t until I saw a <em>developer-focused</em> security tool that I realized that security doesn’t have to be a blocker! Shannon’s sentiment is spot-on.</p>
<p>Developers are smart - and hate process when it adds no value to what they do. They tend to find workarounds for any process that introduces more friction. Therefore, any tool that adds friction is doomed to fail. Tools must reduce friction for developers to be successful.</p>
<h3 id="visibility">Visibility</h3>
<p>One of my customers has a security tool that performs formal method analysis. They use this tool heavily - but they struggle to collate results and see status across multiple projects. They can switch to the tool’s UI, but this adds friction. The lack of visibility in the developer workflow is limiting the effectiveness of this tool.</p>
<p>Another part of visibility is <em>metrics</em>. Most teams will talk about Mean Time to Detect (MTTD) and Mean Time to Repair (MTTR), but do not define these or track them effectively. There doesn’t seem to be a consensus on what AppSec metrics are the most important or how to track them.</p>
<h3 id="scalability">Scalability</h3>
<p>The industry standard ratio for security professionals to developers is 1:800. This is why shifting your security professionals to enabling teams (see above) is so critical - it is the only way to scale AppSec effectively. But you will struggle to do this if your security tools cannot support this shift.</p>
<h2 id="github-advanced-security">GitHub Advanced Security</h2>
<p>I often ask the question, “Why do you think GitHub got into AppSec at all?” The answer is fairly simple: even though security tools and practices have been around for two decades, AppSec is still failing. And the major reason is that it <em>is not developer centric</em>. GitHub is uniquely positioned to bring security to developers in a way that reduces friction, empowers developers and scales AppSec teams. Since GitHub sits at the heart of the developer workflow, this is a powerful way to really shift both tooling and culture left.</p>
<h1 id="conclusion">Conclusion</h1>
<p>We still have a lot of work to do. AppSec in the industry isn’t as successful as it should be, and organizations must consider both tools and culture in combination in order to improve. Organizations must invest in developers, shift security pros to enabling teams and ensure that they deploy tools that support these shifts instead of hinder them. I was again reminded of how fortunate I am to be at GitHub, where we are moving AppSec forward.</p>
<p>Happy securing!</p>Colin DembovskyUsing GitHub Copilot Effectively2023-04-17T01:22:01+00:002023-04-17T01:22:01+00:00https://colinsalmcorner.com/using-copilot-effectively<ol id="markdown-toc">
<li><a href="#how-autopilot-works-on-commercial-flights" id="markdown-toc-how-autopilot-works-on-commercial-flights">How Autopilot works on Commercial Flights</a></li>
<li><a href="#github-copilot" id="markdown-toc-github-copilot">GitHub Copilot</a> <ol>
<li><a href="#taking-off---providing-context" id="markdown-toc-taking-off---providing-context">Taking off - providing context</a></li>
<li><a href="#cruising-altitude---working-in-small-chunks" id="markdown-toc-cruising-altitude---working-in-small-chunks">Cruising Altitude - working in small chunks</a></li>
<li><a href="#man-the-radio---fast-feedback" id="markdown-toc-man-the-radio---fast-feedback">Man the Radio - fast feedback</a></li>
<li><a href="#land-the-plane---good-devsecops" id="markdown-toc-land-the-plane---good-devsecops">Land the Plane - Good DevSecOps</a></li>
<li><a href="#autopilot-is-only-for-flying" id="markdown-toc-autopilot-is-only-for-flying">Autopilot is only for flying</a></li>
</ol>
</li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Image by <a href="https://unsplash.com/@rayyu?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Rayyu Maldives</a> on <a href="https://unsplash.com/photos/vZ5Tk3cc52o?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>GitHub Copilot is aptly named. While some have feared that generative models will replace developers, I do not believe we are there yet: Copilot is an assistant, not a replacement. However, developers will need to adjust their skills, both to stay effective as well as to stay marketable through the disruption that the AI age is bringing.</p>
<p>I have a friend who is a commercial airline pilot, and asked him about how autopilot works on commercial airplanes. I think the analogy of how autopilot works is useful in framing how developers should approach learning how to use GitHub Copilot.</p>
<h2 id="how-autopilot-works-on-commercial-flights">How Autopilot works on Commercial Flights</h2>
<p>Most of us have flown in a commercial airplane. We all know that there are two human pilots in the cockpit, and we even know that they engage autopilot to fly the aircraft. However, even though we are all comfortable with the idea of planes flying themselves, we would be a little nervous if there were no humans in the cockpit before we take off!</p>
<p>Here is my understanding of how autopilot works during a commercial flight:</p>
<ol>
<li>The pilot taxis the plane and takes off - the autopilot cannot take off automatically</li>
<li>Once the plane reaches around 15,000 ft altitude, the pilot engages the autopilot system. Some pilots will fly manually until they are at cruising altitude.</li>
<li>Once engaged, the autopilot is programmed to fly the plane along the current flight plan. The autopilot can navigate through bad weather and turbulence.</li>
<li>The pilots man the radios and watch for weather and wind conditions. At times, pilots will tell the autopilot to fly around weather, or change altitude to get better wind conditions.</li>
<li>The autopilot lands the plane.</li>
<li>The pilot takes over to taxi the plane.</li>
</ol>
<blockquote>
<p><strong>Note</strong>: Even if the above is not 100% correct, it’s good enough to make an analogy for GitHub Copilot! Errors and omissions are my own.</p>
</blockquote>
<h2 id="github-copilot">GitHub Copilot</h2>
<p>Understanding how autopilot works, we can make a useful analogy when we consider GitHub Copilot:</p>
<ol>
<li>Developers must “take off” since Copilot can’t take off by itself (context)</li>
<li>Developers can use Copilot “mid-stream” but will need to make adjustments for “turbulence” (work in small chunks)</li>
<li>Developers must “man the radio” to monitor the code that they are writing with Copilot (good DevSecOps)</li>
<li>Copilot can “land the plane” but getting to the final destination is up to the developer (remember to solve the right problems)</li>
<li>Quality control is beyond the purview of Copilot</li>
</ol>
<p>Let’s dig into these a little deeper.</p>
<h3 id="taking-off---providing-context">Taking off - providing context</h3>
<p>Just as autopilot can’t take off automatically, a blank project or file isn’t a good way to get going with Copilot. Even before that, developers need a “flight plan” - some idea of what they are going to be coding. Spending a little time to analyze requirements and think about how code is going to be written, tested, scanned, packaged and deployed will go a long way to better productivity and efficiency.</p>
<p>When using Copilot, you get the best results when supplying good context - think of this as the flight plan. <em>Context</em> is the file that you’re currently editing as well as other tabs open in the solution. If you have other files already, open a couple of them to assist Copilot. Open test files to help Copilot with tests and examples of how your methods are being called.</p>
<p>Where none of this exists, take time to think about what you want the code to do and write the intent in comments at the top of the file. The more context you supply, the better your results will be.</p>
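<p>As a hypothetical sketch (the task, function name and CSV format here are invented for illustration), an intent comment and a typed signature at the top of an otherwise-empty file act as that “flight plan” - the body is the kind of completion a well-primed Copilot is likely to produce:</p>

```python
# Parse a CSV of order records (id, date, amount) and compute total revenue per month.
# Dates are ISO 8601 (YYYY-MM-DD); amounts are decimal strings.
import csv
import io
from collections import defaultdict

def revenue_per_month(csv_text: str) -> dict:
    """Return a {'YYYY-MM': total} mapping computed from the CSV text."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        month = row["date"][:7]  # 'YYYY-MM' prefix of the ISO date
        totals[month] += float(row["amount"])
    return dict(totals)
```

<p>The comments cost a few seconds to write, but they collapse the space of plausible completions dramatically - which is exactly the point of supplying context.</p>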
<p>I love doing the Advent of Code in December - and using Copilot while solving the puzzles has been great. However, I think one of the main advantages of using Copilot was that it subtly changed <em>how</em> I develop: rather than simply diving into code, I take a few moments to think about how I can best prompt Copilot to give me what I want. This makes a big difference and I found myself spending more time thinking and less time thrashing code - which is a more fulfilling experience as well as a more productive way to code!</p>
<blockquote>
<p>Prompt engineering is a phrase that is being bandied about - I think there is something to this. Successful engineers will be those that can successfully guide AI to do the right thing.</p>
</blockquote>
<h3 id="cruising-altitude---working-in-small-chunks">Cruising Altitude - working in small chunks</h3>
<p>Once you have a little bit of code, you’re at “cruising altitude”. This is where Copilot feels like magic - there is enough context for it to generate the code that you were thinking of. Keep working in small chunks (like inside a method body or inside a loop). The narrowed context produces far better results.</p>
<p>Remember, Copilot is a <em>probability engine</em> and there is some level of randomness inherent in how it works (this is true of all large language models). Broad, vague requests (low context) tend to produce results that show much more randomness (and less meaning and utility). Narrowing the context reduces the compounding effect of the randomness and is more likely to produce meaningful code.</p>
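<p>A minimal sketch of what “small chunks” looks like in practice (the function and its comment are invented for illustration): prompting inside a method body, where the signature and the surrounding lines pin the context down tightly:</p>

```python
def normalize_scores(scores: list[float]) -> list[float]:
    hi, lo = max(scores), min(scores)
    # scale every score into the 0..1 range, guarding against a constant list
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]
```

<p>The single-line comment above the guard is a narrow prompt: there is really only one sensible completion, so the model’s randomness has nowhere to compound.</p>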
<h3 id="man-the-radio---fast-feedback">Man the Radio - fast feedback</h3>
<p>While you’re having fun coding with Copilot at your side, don’t forget to “man the radio”. Remember, code in and of itself isn’t the goal - <em>solving business problems is</em>! Moving faster isn’t an end - it’s a means to an end.</p>
<p>Why do we want developers to be more productive and efficient? The value of going faster <em>is that we get feedback faster</em>. The faster we get feedback from our end-users, the faster we’re able to adjust course. Scrum and Agile didn’t succeed because of daily stand-ups and retros - Agile succeeded because it focused on flow and shortening the feedback loop. Copilot, by making developers more productive, is wasted unless you’re shortening the feedback loop. Listen to the feedback from end-users, and adjust accordingly. This will give Copilot purpose and value beyond <em>just</em> developer happiness.</p>
<h3 id="land-the-plane---good-devsecops">Land the Plane - Good DevSecOps</h3>
<p>Landing the plane is crucial - after all, if your plane doesn’t land, you can’t get to your destination! But even the landing is a means to an end: you still have to get off the plane to reach your destination.</p>
<p>Copilot will help you land, but you’ll have to taxi in yourself. Copilot is designed to speed the “inner loop” of development - but you’ll have to make sure you have an efficient “outer loop” too - peer code review, build automation, linting, unit and integration testing, scanning and automated deployment are critical if you’re going to get the most out of Copilot.</p>
<blockquote>
<p>Having said that, some Copilot X features are bringing AI to the “outer loop” such as Copilot for PRs, which can suggest missing test cases for code changes in a PR.</p>
</blockquote>
<h3 id="autopilot-is-only-for-flying">Autopilot is only for flying</h3>
<p>Copilot allows developers to move faster - which means you need to match that speed when it comes to quality gates and deployment - otherwise you’ll get an impedance mismatch, which, if you know your electronics, is a Bad Thing. Copilot, by making developers faster, requires your quality gates and processes to be faster.</p>
<p>The autopilot on a plane does not check the fuel levels or the ailerons or perform any of the preflight checks itself - quality control is still up to the pilots and ground crews. Copilot is not meant to do everything for you - it’s meant to augment your developers and make them faster. You must have good DevSecOps practices in place to maximize your usage of Copilot.</p>
<h1 id="conclusion">Conclusion</h1>
<p>GitHub Copilot is a powerful tool, but to get the most out of it developers should understand how to feed it context, work in small chunks, and ensure the rest of the DevSecOps pipeline is running smoothly.</p>
<p>Happy co-piloting!</p>Colin DembovskyAllowing Bypass of Secret Scanning Push Detections is a Good Thing2023-03-06T01:22:01+00:002023-03-06T01:22:01+00:00https://colinsalmcorner.com/allow-push-protection-bypass-is-a-good-thing<ol id="markdown-toc">
<li><a href="#secret-scanning-locations" id="markdown-toc-secret-scanning-locations">Secret Scanning Locations</a> <ol>
<li><a href="#local-environment" id="markdown-toc-local-environment">Local Environment</a></li>
<li><a href="#post-push-in-a-build" id="markdown-toc-post-push-in-a-build">Post-push in a build</a></li>
<li><a href="#at-push-time" id="markdown-toc-at-push-time">At push time</a></li>
</ol>
</li>
<li><a href="#allowing-bypassing-is-a-good-idea" id="markdown-toc-allowing-bypassing-is-a-good-idea">Allowing bypassing is a good idea</a> <ol>
<li><a href="#false-positives" id="markdown-toc-false-positives">False positives</a></li>
<li><a href="#maintaining-trust-with-developers" id="markdown-toc-maintaining-trust-with-developers">Maintaining trust with developers</a></li>
<li><a href="#workarounds" id="markdown-toc-workarounds">Workarounds</a></li>
</ol>
</li>
<li><a href="#effective-management-of-bypasses" id="markdown-toc-effective-management-of-bypasses">Effective management of bypasses</a></li>
<li><a href="#alerts-are-still-created-after-bypassing-push-protection" id="markdown-toc-alerts-are-still-created-after-bypassing-push-protection">Alerts are still created after bypassing push protection</a></li>
<li><a href="#management-by-exception" id="markdown-toc-management-by-exception">Management by exception</a></li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Image by <a href="https://unsplash.com/@huefnerdesign?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Tim Hüfner</a> on <a href="https://unsplash.com/s/photos/target?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p><a href="https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security">GitHub Advanced Security</a> includes <a href="https://docs.github.com/en/code-security/secret-scanning/about-secret-scanning">secret scanning</a>. While there are other secret scanning solutions in the market such as <a href="https://trufflesecurity.com/trufflehog/">TruffleHog</a>, no other SaaS solution offers <em>push protection</em>.</p>
<h2 id="secret-scanning-locations">Secret Scanning Locations</h2>
<p>Secret Scanning could be implemented in 3 locations:</p>
<ol>
<li>The local developer environment - either in the IDE or in the CLI</li>
<li>In a build after commits are pushed</li>
<li>At the time of the push</li>
</ol>
<p>Let’s examine the pros and cons of each of these approaches.</p>
<h3 id="local-environment">Local Environment</h3>
<p>Performing secret detection in the local environment only works as long as developers remember to run the tool. And if their favorite IDE doesn’t support the tool, it’s unlikely that they’ll run it. Furthermore, even if developers remembered to run these detections every time before they pushed, how would organizations manage custom secret patterns or other configurations? Centralized configuration is essential for managing security at scale - so organizations can’t just think of the <em>scanning</em>, they have to think about how they would manage custom configurations too.</p>
<h3 id="post-push-in-a-build">Post-push in a build</h3>
<p>If the local environment is too heterogenous and relies too much on the developer, then surely adding a scanning tool in the build makes sense. That way, teams can guarantee that the scan is being performed and could manage configuration using <a href="https://docs.github.com/en/enterprise-cloud@latest/actions/using-workflows/reusing-workflows">reusable workflows</a>.</p>
<p>However, this is <em>too late</em> in the life cycle - the secret has to be in the repo for the build to perform the scan. While this option adds more consistency, it cannot prevent the secret from getting to the repo in the first place.</p>
<h3 id="at-push-time">At push time</h3>
<p>The best place to scan for secrets is at the moment of the push. Teams could do this using <a href="https://docs.github.com/en/enterprise-server@3.8/admin/policies/enforcing-policy-with-pre-receive-hooks/managing-pre-receive-hooks-on-the-github-enterprise-server-appliance">pre-receive hooks</a> on GitHub Enterprise Server. This would allow teams to run some validation on the push and allow or block it - say, if it contained a secret. Unfortunately, GitHub Enterprise Cloud does not support pre-receive hooks (yet).</p>
<p>However, GitHub Advanced Security does include the option to enable <a href="https://docs.github.com/en/enterprise-cloud@latest/code-security/secret-scanning/protecting-pushes-with-secret-scanning">push protection</a>. This prevents pushes if secrets are detected.</p>
<p>This push protection feature is unique in the market for several reasons. Some tools have <em>some</em> of the features listed below, but only secret scanning push protection in GitHub Advanced Security has all of the following:</p>
<ol>
<li>It is embedded into the repos and can be enabled instantly at enterprise, org or repo level</li>
<li>It does not require build customization or IDE plugins or anything else - it simply works</li>
<li>It allows admins to create custom patterns that are managed centrally</li>
<li>It allows admins to perform dry-runs of their custom patterns so that they can refine them before they roll them out, preventing noise and loss of developer trust</li>
<li>Alerts trigger webhooks for additional automation and alerts are also visible in the audit log</li>
</ol>
<p>However, it is important to note that push protections <em>can be bypassed</em>. But why? Wouldn’t you want to hard-block any detected secrets?</p>
<h2 id="allowing-bypassing-is-a-good-idea">Allowing bypassing is a good idea</h2>
<p>This seems counter-intuitive. However, let’s think about why this actually makes more sense than preventing bypasses.</p>
<h3 id="false-positives">False positives</h3>
<p>There are rare cases when secret scanning will detect what it thinks is a secret - but it’s not in fact a secret. In these cases, a bypass is crucial since you need to get the code into the repo. This becomes even more critical as admins roll out custom patterns, especially for “generic” secrets (like database connection strings) which have no governing pattern (unlike tokens which tend to have much more predictable patterns). The less predictable a pattern is, the more noise (false positives) it is going to generate.</p>
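<p>A quick sketch of why loose “generic” patterns are noisy (the regex and the sample strings below are invented for illustration - real custom patterns use GitHub’s pattern syntax, and dry-runs exist precisely to catch this problem before rollout):</p>

```python
import re

# An admin's first attempt at a "generic" connection-string pattern.
PATTERN = re.compile(r"password\s*=\s*\S+", re.IGNORECASE)

samples = [
    "Server=db1;User=sa;Password=S3cr3t!;",   # real credential: true positive
    "# set password = <your-password-here>",  # docs placeholder: false positive
]
hits = [s for s in samples if PATTERN.search(s)]  # both strings match
```

<p>Both strings trip the pattern, but only one is a leak - without a bypass (or a dry-run refinement cycle), the harmless placeholder would block a perfectly innocent push.</p>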
<h3 id="maintaining-trust-with-developers">Maintaining trust with developers</h3>
<p>Whenever there is a gate, control or roadblock in the development life cycle, there must be some real value in the gate. Too many controls are vestiges of old processes - created by people who are no longer at the company - yet they go unchallenged. This leads to friction and causes developers to lose trust in the security teams (or IT teams) and vice versa. Developers will also start to lose trust in the platform.</p>
<p>Totally preventing bypasses of push detections is effectively a statement that <em>you do not trust your developers</em>. Most developers are not malicious and secrets in pushes will most commonly be mistakes: a dev is testing and puts a credential to a test platform or database in their configuration file, only to forget to remove it before pushing. In this case, the push protection helps remind the dev that they have a secret that should not be committed to the repo. So allowing bypasses for false positives while preventing <em>accidental</em> leaks is a good combination.</p>
<h3 id="workarounds">Workarounds</h3>
<p>Let’s imagine a scenario where push protections can never be bypassed. Developers who experience false positives will be frustrated since they have no way around the incorrect detection. This may lead them to become creative and find workarounds.</p>
<p>For example, developers could simply <code class="language-plaintext highlighter-rouge">base64 encode</code> the secret. This results in a high-entropy string. High-entropy string detection could be added to push protection, but by nature it will produce a lot of noise (lots of false positives). So in all likelihood, these base64 encoded strings would end up being pushed to the repo. This is a leak, since you can simply <code class="language-plaintext highlighter-rouge">base64 decode</code> the string to get to the secret.</p>
<p>Or a developer may take a credential and split it in half, and simply concatenate the halves at run time. Again, an extremely difficult scenario to detect, but easy for a human to exfiltrate.</p>
<p>In short, workarounds make detection harder, and so increase risk.</p>
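<p>To make the base64 workaround concrete (the token below is fabricated for illustration), note that encoding defeats a pattern match while offering zero protection:</p>

```python
import base64

secret = "ghp_example0123456789"  # fabricated token for illustration
encoded = base64.b64encode(secret.encode()).decode()

# The encoded form no longer matches a token-shaped pattern
# ('_' isn't even in the standard base64 alphabet)...
assert "ghp_" not in encoded
# ...but anyone who finds it recovers the secret in one call.
assert base64.b64decode(encoded).decode() == secret
```

<p>The “protection” is one function call deep - which is why pushing encoded secrets past the scanner is a leak, not a mitigation.</p>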
<blockquote>
<p>Note: I have heard stories from customers who have created their own secret scanning tools that cannot be bypassed. The results were disastrous, and the tool is either turned off or bypasses have been allowed.</p>
</blockquote>
<h2 id="effective-management-of-bypasses">Effective management of bypasses</h2>
<p>This doesn’t mean that allowing bypasses is insecure! With some simple steps, organizations can implement effective controls for bypasses, allowing them to retain developer trust as well as prevent secrets from leaking.</p>
<p>There are two primary methods to track bypasses of push protections:</p>
<ol>
<li>The <code class="language-plaintext highlighter-rouge">secret_scanning_alert</code> <a href="https://docs.github.com/webhooks-and-events/webhooks/webhook-events-and-payloads#secret_scanning_alert">webhook</a> which is fired every time a protection is bypassed (the <code class="language-plaintext highlighter-rouge">push_protection_bypassed</code> property is set to <code class="language-plaintext highlighter-rouge">true</code>)</li>
<li>The <code class="language-plaintext highlighter-rouge">secret_scanning_push_protection</code> category of <a href="https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/audit-log-events-for-your-enterprise#secret_scanning_push_protection-category-actions">audit logs</a></li>
</ol>
<p>You can use either of these to send automated emails or notify admins when bypasses occur. This allows you to maintain visibility without losing developer trust, since the bypass can be inspected and, if valid for cases like false positives, ignored. For cases where the bypass was not valid, admins can have conversations with the developer who bypassed the protection.</p>
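<p>A minimal sketch of such automation, assuming the documented <code class="language-plaintext highlighter-rouge">secret_scanning_alert</code> payload shape with its <code class="language-plaintext highlighter-rouge">push_protection_bypassed</code> property (the surrounding routing - email, chat, ticket - is up to you and omitted here):</p>

```python
import json

def is_bypassed_push(payload_json: str) -> bool:
    """True when a secret_scanning_alert webhook records a bypassed push protection."""
    alert = json.loads(payload_json).get("alert", {})
    return alert.get("push_protection_bypassed") is True

# A trimmed example payload of the shape the webhook delivers.
payload = json.dumps({
    "action": "created",
    "alert": {
        "secret_type": "github_personal_access_token",
        "push_protection_bypassed": True,
    },
})
```

<p>Calling <code class="language-plaintext highlighter-rouge">is_bypassed_push(payload)</code> on the sample returns <code class="language-plaintext highlighter-rouge">True</code> - the signal to notify an admin for review rather than to block anyone.</p>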
<h2 id="alerts-are-still-created-after-bypassing-push-protection">Alerts are still created after bypassing push protection</h2>
<p>Furthermore, even when push protection is bypassed, GitHub still creates a secret scanning alert, enabling admins to manage the bypassed secret appropriately. For example, automated token revocation can be enabled so that when secrets are detected in the repo post-push, automation can revoke the secret immediately for known token formats, or admins can be notified to check the bypass.</p>
<h2 id="management-by-exception">Management by exception</h2>
<p>This allows organizations to “manage by exception” rather than “throttle by prevention”. Ultimately this is a <em>cultural</em> problem and not really a <em>technical</em> problem. Organizations that demonstrate a “trust but verify” culture using the management techniques above will generally foster a better developer experience and arguably end up being more secure than companies that promote a low-trust, hard gate.</p>
<p>Let’s all remember to be good humans. Developers should sympathize with the IT and security teams - leaked credentials are a serious matter that could have large and far reaching negative consequences to companies. Developers need to be careful and thoughtful about preventing leaks. IT and security teams should in turn sympathize with developers, who are constantly under pressure to deliver more, faster - so anything that adds friction is going to be counterproductive. They should be careful and thoughtful of how they can partner with, rather than fight against, developers.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Using GitHub Advanced Security secret scanning push protection is the best way for teams to effectively reduce the risk of credential leaks. While users can bypass push protections, there are valid reasons for this, and bypasses can be managed to ensure they are valid, while invalid bypasses can be mitigated quickly.</p>
<p>Happy push protecting!</p>Colin DembovskyFine Tuning CodeQL Scans using Query Filters2022-08-30T01:22:01+00:002022-08-30T01:22:01+00:00https://colinsalmcorner.com/fine-tuning-codeql-scans<ol id="markdown-toc">
<li><a href="#query-organization" id="markdown-toc-query-organization">Query Organization</a></li>
<li><a href="#why-filter" id="markdown-toc-why-filter">Why filter?</a></li>
<li><a href="#standard-selectors" id="markdown-toc-standard-selectors">Standard Selectors</a></li>
<li><a href="#filtering-by-security-severity" id="markdown-toc-filtering-by-security-severity">Filtering by Security Severity</a> <ol>
<li><a href="#security-severity-levels" id="markdown-toc-security-severity-levels">Security Severity Levels</a></li>
</ol>
</li>
<li><a href="#query-filters" id="markdown-toc-query-filters">Query Filters</a></li>
<li><a href="#precision" id="markdown-toc-precision">Precision</a></li>
<li><a href="#widening-the-filter" id="markdown-toc-widening-the-filter">Widening the Filter</a></li>
<li><a href="#testing-the-configurations" id="markdown-toc-testing-the-configurations">Testing the Configurations</a> <ol>
<li><a href="#adding-debug-to-the-init-action" id="markdown-toc-adding-debug-to-the-init-action">Adding Debug to the <code class="language-plaintext highlighter-rouge">init</code> Action</a></li>
<li><a href="#executing-the-scans" id="markdown-toc-executing-the-scans">Executing the Scans</a></li>
</ol>
</li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Image by <a href="https://unsplash.com/@maurogigliphoto?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Mauro Gigli</a> on <a href="https://unsplash.com/s/photos/target?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>CodeQL scanning involves four phases:</p>
<ol>
<li><strong>Initialize</strong> - where an empty database is created and hooks are configured into the compiler for compiled languages</li>
<li><strong>Build</strong> - where the database is populated from the code-base</li>
<li><strong>Query</strong> - where queries are executed against the database - results are output to a SARIF file</li>
<li><strong>Upload</strong> - where the SARIF file is uploaded to the GitHub repo</li>
</ol>
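A minimal workflow wires these phases together along the following lines (a sketch - the workflow name, trigger and language are illustrative):

```yaml
# .github/workflows/codeql.yml (illustrative)
name: "CodeQL"
on: [push]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # required to upload the SARIF results
    steps:
      - uses: actions/checkout@v3

      # Initialize: create the empty database and hook the compiler
      - uses: github/codeql-action/init@v2
        with:
          languages: csharp

      # Build: populate the database from the code-base
      - uses: github/codeql-action/autobuild@v2

      # Query + Upload: run the queries and upload the SARIF file
      - uses: github/codeql-action/analyze@v2
```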
<blockquote>
<p><strong>Note:</strong> The default <a href="https://github.com/github/codeql-action/blob/main/analyze/action.yml"><code class="language-plaintext highlighter-rouge">analyze</code> Action</a> will query and upload in a single step.</p>
</blockquote>
<p>In the initialize phase, you specify which of the <a href="https://codeql.github.com/docs/codeql-overview/supported-languages-and-frameworks/">supported languages</a> you want to analyze. You can also (optionally) specify the set of queries you want to run.</p>
<h2 id="query-organization">Query Organization</h2>
<p>Queries are the lowest-level artifact in CodeQL scans. Their syntax is SQL-like (with <code class="language-plaintext highlighter-rouge">from</code>, <code class="language-plaintext highlighter-rouge">where</code> and <code class="language-plaintext highlighter-rouge">select</code> clauses), but the language also has very powerful abstractions like <code class="language-plaintext highlighter-rouge">predicate</code>, <code class="language-plaintext highlighter-rouge">class</code> and <code class="language-plaintext highlighter-rouge">override</code>.</p>
<p>Queries are typically grouped into <em>suites</em>. CodeQL <em>packs</em> can contain queries and suites. Additionally, you can <a href="https://codeql.github.com/docs/codeql-cli/creating-codeql-query-suites/#reusing-existing-query-suite-definitions"><em>filter</em> queries</a> - which we’ll get to shortly!</p>
<p>Before we move on, one more concept we need to understand is that queries have <em><a href="https://codeql.github.com/docs/writing-codeql-queries/metadata-for-codeql-queries/">metadata</a></em> associated with them. The metadata is more than just a way to describe a query - it is also critical for filtering.</p>
<p>Let’s look at a <a href="https://github.com/github/codeql/blob/main/csharp/ql/src/Security%20Features/CWE-359/ExposureOfPrivateInformation.ql">query</a> in the <a href="https://github.com/github/codeql">CodeQL</a> repo to examine some of this metadata:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
/**
* @name Exposure of private information
* @description If private information is written to an external location, it may be accessible by
* unauthorized persons.
* @kind path-problem
* @problem.severity error
* @security-severity 6.5
* @precision high
* @id cs/exposure-of-sensitive-information
* @tags security
* external/cwe/cwe-359
*/
</code></pre></div></div>
<p class="figcaption">A typical CodeQL metadata example.</p>
<p>We’ll use some of these metadata properties to filter - notably the <code class="language-plaintext highlighter-rouge">kind</code>, <code class="language-plaintext highlighter-rouge">security-severity</code>, <code class="language-plaintext highlighter-rouge">precision</code> and <code class="language-plaintext highlighter-rouge">tags</code>.</p>
<h2 id="why-filter">Why filter?</h2>
<p>If you do not specify a suite in the <a href="https://github.com/github/codeql-action/blob/main/init/action.yml">CodeQL Action</a>, then you’ll get a default set of queries for the language you’re scanning. However, the default set is a subset of all the queries. There are some queries that have higher or lower severity or different levels of “precision” (we’ll discuss what that is later). Rather than give you <em>all</em> the queries, the default setting <em>filters out</em> some queries. <a href="https://github.com/github/codeql/blob/main/misc/suite-helpers/code-scanning-selectors.yml">This file</a> contains the default set of filters.</p>
<blockquote>
<p>The default set of queries is called the <code class="language-plaintext highlighter-rouge">code-scanning</code> suite. Each language has a <code class="language-plaintext highlighter-rouge">.qls</code> (query suite) file that specifies the list of queries and applies the <code class="language-plaintext highlighter-rouge">code-scanning-selectors.yml</code> selector. For example, <a href="https://github.com/github/codeql/blob/main/csharp/ql/src/codeql-suites/csharp-code-scanning.qls">this file</a> is the default code scanning suite for <code class="language-plaintext highlighter-rouge">csharp</code>.</p>
</blockquote>
<p>You can also customize the <a href="https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs">query suite</a> by specifying other “standard” selectors: either <code class="language-plaintext highlighter-rouge">security-extended</code> or <code class="language-plaintext highlighter-rouge">security-and-quality</code>, which change the filter criteria to add queries that are excluded from the default selection.</p>
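Selecting one of these suites doesn't require a custom config file - you can pass the suite name straight to the `init` action via its `queries` input (a sketch):

```yaml
- name: Initialize CodeQL
  uses: github/codeql-action/init@v2
  with:
    languages: csharp
    # Swap the default code-scanning suite for the wider security-extended suite
    queries: security-extended
```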
<p>Let’s examine a couple of selectors and how they are specified, and then a couple of use-cases where we use selectors to specify a different set of queries to execute during the query phase.</p>
<h2 id="standard-selectors">Standard Selectors</h2>
<p>If you look at the <code class="language-plaintext highlighter-rouge">includes</code> from the <a href="https://github.com/github/codeql/tree/main/misc/suite-helpers">standard selectors</a> you’ll see that <a href="https://github.com/github/codeql/blob/main/misc/suite-helpers/security-extended-selectors.yml">security-extended-selectors.yml</a> selects queries that contain the <code class="language-plaintext highlighter-rouge">security</code> <code class="language-plaintext highlighter-rouge">tag</code>:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="pi">-</span> <span class="na">description</span><span class="pi">:</span> <span class="s">Selectors for selecting the security-extended queries for a language</span>
<span class="pi">-</span> <span class="na">include</span><span class="pi">:</span>
<span class="na">kind</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">problem</span>
<span class="pi">-</span> <span class="s">path-problem</span>
<span class="na">precision</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">high</span>
<span class="pi">-</span> <span class="s">very-high</span>
<span class="na">tags contain</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">security</span>
<span class="s">...</span>
</code></pre></div></div>
<p class="figcaption">Selectors in the <code class="language-plaintext highlighter-rouge">security-extended-selectors.yml</code> file.</p>
<p>By contrast, the <a href="https://github.com/github/codeql/blob/main/misc/suite-helpers/security-and-quality-selectors.yml">security-and-quality-selectors.yml</a> file does <strong>not</strong> filter by that <code class="language-plaintext highlighter-rouge">tag</code>:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="pi">-</span> <span class="na">description</span><span class="pi">:</span> <span class="s">Selectors for selecting the security-and-quality queries for a language</span>
<span class="pi">-</span> <span class="na">include</span><span class="pi">:</span>
<span class="na">kind</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">problem</span>
<span class="pi">-</span> <span class="s">path-problem</span>
<span class="na">precision</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">high</span>
<span class="pi">-</span> <span class="s">very-high</span>
<span class="s">...</span>
</code></pre></div></div>
<p class="figcaption">Selectors in the <code class="language-plaintext highlighter-rouge">security-and-quality-selectors.yml</code> file.</p>
<p>This means that the <code class="language-plaintext highlighter-rouge">security-extended</code> suite will only include queries that have <code class="language-plaintext highlighter-rouge">security</code> in their <code class="language-plaintext highlighter-rouge">tags</code> metadata, while the <code class="language-plaintext highlighter-rouge">security-and-quality</code> suite will include additional queries that do not contain this <code class="language-plaintext highlighter-rouge">tag</code>.</p>
<p>However, we can also filter on other properties - such as <code class="language-plaintext highlighter-rouge">kind</code>, <code class="language-plaintext highlighter-rouge">security-severity</code> or <code class="language-plaintext highlighter-rouge">precision</code>.</p>
<h2 id="filtering-by-security-severity">Filtering by Security Severity</h2>
<p>Last week I heard of a company using CodeQL that was hitting the upper limit on the upload size of the SARIF file. They are scanning a large mono-repo and getting a large number of results in the scan. Arguably, there are other issues at play here, but the team did not want to refactor their build or their codebase.</p>
<p>In this case, neither of the default suites works. Perhaps we need to focus just on the most critical alerts first - so we are going to want to filter by <code class="language-plaintext highlighter-rouge">security-severity</code>.</p>
<h3 id="security-severity-levels">Security Severity Levels</h3>
<p>When you see a CodeQL alert, it is marked with <code class="language-plaintext highlighter-rouge">low</code>, <code class="language-plaintext highlighter-rouge">medium</code>, <code class="language-plaintext highlighter-rouge">high</code> or <code class="language-plaintext highlighter-rouge">critical</code> severity:</p>
<p><img src="/assets/images/2022/08/fine-tune/codeql-alerts.png" alt="CodeQL Alerts showing security severity" class="center-image" /></p>
<p class="figcaption">CodeQL Alerts showing security severity.</p>
<p>However, if you look at the query metadata, these levels don’t appear. That’s because there is a <a href="https://github.blog/changelog/2021-07-19-codeql-code-scanning-new-severity-levels-for-security-alerts/#about-security-severity-levels">table</a> that shows how GitHub calculates the level based on the <code class="language-plaintext highlighter-rouge">security-severity</code> number:</p>
<table class="stretch-table">
<thead>
<tr>
<th style="text-align: center">Severity</th>
<th style="text-align: center">Score Range</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">None</td>
<td style="text-align: center">0.0</td>
</tr>
<tr>
<td style="text-align: center">Low</td>
<td style="text-align: center">0.1 - 3.9</td>
</tr>
<tr>
<td style="text-align: center">Medium</td>
<td style="text-align: center">4.0 - 6.9</td>
</tr>
<tr>
<td style="text-align: center">High</td>
<td style="text-align: center">7.0 - 8.9</td>
</tr>
<tr>
<td style="text-align: center">Critical</td>
<td style="text-align: center">9.0 - 10.0</td>
</tr>
</tbody>
</table>
<p class="figcaption">The mapping of severity to <code class="language-plaintext highlighter-rouge">security-severity</code> score.</p>
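The mapping can be expressed as a small helper - a sketch (the function name is mine) that turns a `security-severity` score into the level shown on alerts:

```shell
# Map a security-severity score to the severity level shown on CodeQL alerts,
# following the table above.
severity_level() {
  awk -v s="$1" 'BEGIN {
    if      (s == 0.0) print "None"
    else if (s <= 3.9) print "Low"
    else if (s <= 6.9) print "Medium"
    else if (s <= 8.9) print "High"
    else               print "Critical"
  }'
}

severity_level 6.5   # the example query above: prints "Medium"
```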
<p>So how do we filter on security level?</p>
<h2 id="query-filters">Query Filters</h2>
<p>You can filter queries using <em>query filters</em> in a configuration file. Then you just point the <code class="language-plaintext highlighter-rouge">init</code> action to the config file, and you’re done! I’ll use code from <a href="https://github.com/colindembovsky/dotnet-webapi-boilerplate/">this repo</a> for the examples.</p>
<p>Here’s an example of an <code class="language-plaintext highlighter-rouge">init</code> action that specifies a custom config:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="c1"># file: '.github/workflows/codeql-high-severity.yml'</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Initialize CodeQL</span>
<span class="na">uses</span><span class="pi">:</span> <span class="s">github/codeql-action/init@v2</span>
<span class="na">with</span><span class="pi">:</span>
<span class="na">languages</span><span class="pi">:</span> <span class="s">csharp</span>
<span class="na">config-file</span><span class="pi">:</span> <span class="s">./.github/codeql/high-severity.yml</span>
</code></pre></div></div>
<p class="figcaption">Specifying a custom config file for CodeQL.</p>
<p>Let’s then look at the custom config file:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="c1"># file: '.github/codeql/high-severity.yml'</span>
<span class="na">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Custom</span><span class="nv"> </span><span class="s">CodeQL</span><span class="nv"> </span><span class="s">Config</span><span class="nv"> </span><span class="s">for</span><span class="nv"> </span><span class="s">high/very</span><span class="nv"> </span><span class="s">high</span><span class="nv"> </span><span class="s">severity</span><span class="nv"> </span><span class="s">only"</span>
<span class="na">disable-default-queries</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">queries</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">security-extended</span>
<span class="na">query-filters</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">include</span><span class="pi">:</span>
<span class="na">precision</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">high</span>
<span class="pi">-</span> <span class="s">very-high</span>
<span class="na">tags contain</span><span class="pi">:</span> <span class="s">security</span>
<span class="na">security-severity</span><span class="pi">:</span> <span class="s">/([7-9]|10)\.(\d)+/</span>
</code></pre></div></div>
<p class="figcaption">A custom configuration to only include queries with <code class="language-plaintext highlighter-rouge">security-severity</code> >= 7.</p>
<p>Notes:</p>
<ol>
<li>First we specify a <code class="language-plaintext highlighter-rouge">name</code>.</li>
<li>We then disable the default queries.</li>
<li>We bring in the default <code class="language-plaintext highlighter-rouge">security-extended</code> queries.</li>
<li>We then apply a <code class="language-plaintext highlighter-rouge">query-filter</code>.</li>
<li>The filter selects only queries that have <code class="language-plaintext highlighter-rouge">high</code> or <code class="language-plaintext highlighter-rouge">very-high</code> precision and a <code class="language-plaintext highlighter-rouge">security</code> tag.</li>
<li>Finally, we use a regex to include only queries whose <code class="language-plaintext highlighter-rouge">security-severity</code> is 7.0 or higher.</li>
</ol>
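To sanity-check that regex: `\d` is PCRE shorthand, so with a plain POSIX ERE `grep` the equivalent character class is `[0-9]`. A quick sketch (the function name is mine):

```shell
# The config's regex ([7-9]|10)\.(\d)+ selects security-severity scores of
# 7.0 and above; \d is rewritten as [0-9] for POSIX ERE grep.
matches_high() {
  echo "$1" | grep -Eq '([7-9]|10)\.[0-9]+'
}

for score in 6.9 7.0 8.5 10.0; do
  matches_high "$score" && echo "$score selected"
done
# prints: 7.0 selected, 8.5 selected, 10.0 selected
```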
<h2 id="precision">Precision</h2>
<p>Before we go on, what exactly is <code class="language-plaintext highlighter-rouge">precision</code>? This is a measure of how many false positives are likely to be returned by the query. Queries with higher precision will return fewer false positives, while queries with lower precision tend to yield more false positives.</p>
<p>When security professionals are analyzing code-bases or writing queries, they may want to dial down precision. However, teams that want to make security remediation <em>actionable</em> should default to higher precision queries. The default setting for the out-the-box suites is <code class="language-plaintext highlighter-rouge">high</code> and <code class="language-plaintext highlighter-rouge">very-high</code> precision to ensure very few false positives.</p>
<blockquote>
<p><strong>Note:</strong> Who decides on the precision? While the <a href="https://github.com/github/codeql">CodeQL repo</a> is open-source and accepts community contributions, it is maintained by GitHub. Queries are rigorously tested and vetted, so the precision metadata is accurate.</p>
</blockquote>
<h2 id="widening-the-filter">Widening the Filter</h2>
<p>The filter above narrowed the number of queries that will be executed in the analysis phase. But we can go the other way too! Here’s a snippet from the configuration for a set of lower precision queries that teams can use if they understand that they are going to get more false positives with this setting:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="c1"># file: '.github/codeql/low-precision.yml'</span>
<span class="na">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Custom</span><span class="nv"> </span><span class="s">CodeQL</span><span class="nv"> </span><span class="s">Config</span><span class="nv"> </span><span class="s">for</span><span class="nv"> </span><span class="s">lower</span><span class="nv"> </span><span class="s">precision"</span>
<span class="na">disable-default-queries</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">queries</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">security-extended</span>
<span class="pi">-</span> <span class="na">uses</span><span class="pi">:</span> <span class="s">security-and-quality</span>
<span class="na">query-filters</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">include</span><span class="pi">:</span>
<span class="na">kind</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">problem</span>
<span class="pi">-</span> <span class="s">path-problem</span>
<span class="pi">-</span> <span class="s">alert</span>
<span class="pi">-</span> <span class="s">path-alert</span>
<span class="na">precision</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">low</span>
<span class="pi">-</span> <span class="s">medium</span>
<span class="pi">-</span> <span class="s">high</span>
<span class="pi">-</span> <span class="s">very-high</span>
<span class="na">tags contain</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">security</span>
<span class="pi">-</span> <span class="s">correctness</span>
<span class="pi">-</span> <span class="s">maintainability</span>
<span class="pi">-</span> <span class="s">readability</span>
<span class="pi">-</span> <span class="na">include</span><span class="pi">:</span>
<span class="na">kind</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">problem</span>
<span class="pi">-</span> <span class="s">path-problem</span>
<span class="na">precision</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">medium</span>
<span class="s">problem.severity</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">error</span>
<span class="pi">-</span> <span class="s">warning</span>
<span class="pi">-</span> <span class="s">recommendation</span>
<span class="na">tags contain</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">security</span>
<span class="nn">...</span>
</code></pre></div></div>
<p class="figcaption">A custom configuration to include more queries.</p>
<p>Notes:</p>
<ol>
<li>First we specify a <code class="language-plaintext highlighter-rouge">name</code>.</li>
<li>We then disable the default queries.</li>
<li>We bring in both the default <code class="language-plaintext highlighter-rouge">security-extended</code> and <code class="language-plaintext highlighter-rouge">security-and-quality</code> queries.</li>
<li>We then apply a couple of <code class="language-plaintext highlighter-rouge">query-filter</code>s.</li>
<li>The first filter <code class="language-plaintext highlighter-rouge">includes</code> every type of <code class="language-plaintext highlighter-rouge">kind</code>, <code class="language-plaintext highlighter-rouge">precision</code> and <code class="language-plaintext highlighter-rouge">tag</code>.</li>
<li>The next filter <code class="language-plaintext highlighter-rouge">includes</code> queries with a security tag and all types of <code class="language-plaintext highlighter-rouge">problem.severity</code> (different from <code class="language-plaintext highlighter-rouge">security-severity</code>).</li>
<li>The remainder of the file is the same as the default selectors from the CodeQL repo.</li>
</ol>
<h2 id="testing-the-configurations">Testing the Configurations</h2>
<p>We can compare and contrast three scenarios:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
<th>Branch</th>
<th>Actions File</th>
<th>Config file</th>
</tr>
</thead>
<tbody>
<tr>
<td>Default</td>
<td>A default scan (no custom config)</td>
<td><code class="language-plaintext highlighter-rouge">main</code></td>
<td><code class="language-plaintext highlighter-rouge">.github/workflows/codeql-analysis.yml</code></td>
<td>None</td>
</tr>
<tr>
<td>High Severity</td>
<td>A high-severity config to only include high and critical security queries</td>
<td><code class="language-plaintext highlighter-rouge">high-severity</code></td>
<td><code class="language-plaintext highlighter-rouge">.github/workflows/codeql-high-severity.yml</code></td>
<td><code class="language-plaintext highlighter-rouge">.github/codeql/high-severity.yml</code></td>
</tr>
<tr>
<td>Low Precision</td>
<td>A “low-precision” config to include more queries with lower precision and severity</td>
<td><code class="language-plaintext highlighter-rouge">low-precision</code></td>
<td><code class="language-plaintext highlighter-rouge">.github/workflows/codeql-low-precision.yml</code></td>
<td><code class="language-plaintext highlighter-rouge">.github/codeql/low-precision.yml</code></td>
</tr>
</tbody>
</table>
<p class="figcaption">Three scenarios for CodeQL configuration.</p>
<p>The code on all 3 branches is identical - the only reason I created separate branches was so that I could filter the results by branch in the Security tab.</p>
<h3 id="adding-debug-to-the-init-action">Adding Debug to the <code class="language-plaintext highlighter-rouge">init</code> Action</h3>
<p>For the purposes of our exploration, I wanted to analyze the SARIF results file after each scan run. To do this, I added <code class="language-plaintext highlighter-rouge">debug: true</code> to the <code class="language-plaintext highlighter-rouge">init</code> action just below the <code class="language-plaintext highlighter-rouge">config-file</code> setting. This uploads the scanning database and the results file as artifacts that can be downloaded. I am really only interested in the results file: not only does it let us compare results, it also includes the list of queries that were executed during the scan!</p>
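The resulting `init` step looks like this (building on the snippet shown earlier):

```yaml
- name: Initialize CodeQL
  uses: github/codeql-action/init@v2
  with:
    languages: csharp
    config-file: ./.github/codeql/high-severity.yml
    # Upload the CodeQL database and SARIF results as workflow artifacts
    debug: true
```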
<h3 id="executing-the-scans">Executing the Scans</h3>
<p>I’ve added a <code class="language-plaintext highlighter-rouge">workflow_dispatch</code> trigger to the workflow files - so you have to navigate to the Actions tab of the repo and queue a run. After queueing a run for each scenario (and selecting the corresponding branch) I downloaded the SARIF results files for comparison.</p>
<p>To count the number of results in the SARIF, I crafted a quick <code class="language-plaintext highlighter-rouge">jq</code> query:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="nb">cat </span>default-results.sarif | jq <span class="s1">'.runs[0].results | length'</span>
</code></pre></div></div>
<p>We can also figure out the count of queries. The language for this repo is <code class="language-plaintext highlighter-rouge">csharp</code> so we look for the <code class="language-plaintext highlighter-rouge">codeql/csharp-queries</code> tools extension in the file for the list of all the queries (<code class="language-plaintext highlighter-rouge">rules</code>) that were included in the analysis:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="nb">cat </span>default-results.sarif | jq <span class="s1">'.runs[0].tool.extensions[] | select(.name == "codeql/csharp-queries") | .rules | length'</span>
</code></pre></div></div>
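The same SARIF file can be sliced further - for example, counting results per rule. A sketch over a trimmed, illustrative results shape (only `cs/exposure-of-sensitive-information` comes from the metadata example above; the other rule id and the counts are made up for the demo):

```shell
# Count results per ruleId in a SARIF file. The sample below mimics the
# relevant shape of a real results file (runs[].results[].ruleId).
sarif='{"runs": [{"results": [
  {"ruleId": "cs/log-forging"},
  {"ruleId": "cs/log-forging"},
  {"ruleId": "cs/exposure-of-sensitive-information"}
]}]}'

# group_by sorts the results by ruleId, then we emit "<count>\t<ruleId>".
echo "$sarif" | jq -r '.runs[0].results
  | group_by(.ruleId)[]
  | "\(length)\t\(.[0].ruleId)"'
```

Pointing this at `default-results.sarif` instead of the sample shows which rules produce the most findings.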
<p>When we do the comparison, we get the following results:</p>
<table class="stretch-table">
<thead>
<tr>
<th style="text-align: center">Scenario</th>
<th style="text-align: center">Rule Count</th>
<th style="text-align: center">Result Count</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">Default</td>
<td style="text-align: center">47</td>
<td style="text-align: center">6</td>
</tr>
<tr>
<td style="text-align: center">High Severity</td>
<td style="text-align: center">35</td>
<td style="text-align: center">5</td>
</tr>
<tr>
<td style="text-align: center">Low Precision</td>
<td style="text-align: center">159</td>
<td style="text-align: center">74</td>
</tr>
</tbody>
</table>
<p class="figcaption">The result and rule count for each scan.</p>
<p>We can also see the counts in the Code Scanning tab in the repo. Just change the branch filter to see the different result counts:</p>
<p><img src="/assets/images/2022/08/fine-tune/results-default.png" alt="Default count" class="center-image" /></p>
<p><img src="/assets/images/2022/08/fine-tune/results-high.png" alt="High severity count" class="center-image" /></p>
<p><img src="/assets/images/2022/08/fine-tune/results-low.png" alt="Low precision count" class="center-image" /></p>
<p class="figcaption">CodeQL Alert counts for each scenario.</p>
<h1 id="conclusion">Conclusion</h1>
<p>CodeQL is incredibly powerful - but there are times when you want to fine-tune the set of queries for analysis. Using Query Filters we can easily tweak exactly what we want to scan.</p>
<p>Happy scanning!</p>Colin DembovskyShift Left - How far is too far?2022-08-04T01:22:01+00:002022-08-04T01:22:01+00:00https://colinsalmcorner.com/shift-left-how-far-is-too-far<ol id="markdown-toc">
<li><a href="#how-far-left-is-too-far" id="markdown-toc-how-far-left-is-too-far">How Far Left is Too Far?</a> <ol>
<li><a href="#ides" id="markdown-toc-ides">IDEs</a></li>
<li><a href="#baked-in-or-optional" id="markdown-toc-baked-in-or-optional">Baked in or optional?</a></li>
<li><a href="#background-analysis" id="markdown-toc-background-analysis">Background analysis</a></li>
<li><a href="#cli-tools-before-pushing-code" id="markdown-toc-cli-tools-before-pushing-code">CLI Tools before pushing code</a></li>
<li><a href="#pre-commit-hooks" id="markdown-toc-pre-commit-hooks">Pre-commit hooks</a></li>
<li><a href="#data-for-dependency-scans" id="markdown-toc-data-for-dependency-scans">Data for dependency scans</a></li>
</ol>
</li>
<li><a href="#the-sweet-spot" id="markdown-toc-the-sweet-spot">The sweet spot</a></li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ol>
<blockquote>
<p>Image by <a href="https://unsplash.com/@jannerboy62?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Nick Fewings</a> on <a href="https://unsplash.com/s/photos/left?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
</blockquote>
<p>I have a developer background, so App Security (AppSec) was always anathema to me. However, I had an epiphany about GitHub Advanced Security and how it is unique in its approach - it is <em>security for developers</em>. I wrote some thoughts about that in a <a href="/ghas-will-win-the-appsec-wars/">previous post</a>.</p>
<p>GitHub Advanced Security (GHAS) allows you to reduce risk <em>without impeding velocity</em>. This is a big deal in today’s fast-paced world. The way that GHAS does this is by centering AppSec on the developer, while still meeting requirements of security professionals. Integrating AppSec into the developers’ daily workflow with very low friction is the secret to securing your software effectively.</p>
<p>GHAS centers itself around the <em>repo</em> and the <em>Pull Request</em>. I have had a number of customers ask why GHAS does not have an IDE plugin. If shifting left is the Holy Grail of AppSec, and GHAS is built to be developer-centric, then why isn’t GHAS in the IDE? Isn’t that the furthest left we can shift?</p>
<p>Or would that be too far left?</p>
<h2 id="how-far-left-is-too-far">How Far Left is Too Far?</h2>
<p>Let’s take a moment to consider where in the life cycle various GHAS features work:</p>
<table>
<thead>
<tr>
<th>Feature</th>
<th>Phase</th>
</tr>
</thead>
<tbody>
<tr>
<td>Secret Scanning</td>
<td>After pushes to the repo. If you have <a href="https://docs.github.com/en/enterprise-cloud@latest/code-security/secret-scanning/protecting-pushes-with-secret-scanning">Push Protection</a> enabled, secrets are scanned before the push.</td>
</tr>
<tr>
<td>Dependency Scanning (SCA)</td>
<td>After pushes to the repo and in PRs via <a href="https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review">Dependency Review</a>.</td>
</tr>
<tr>
<td>Code Scanning (CodeQL)</td>
<td>During builds and surfaced in PRs.</td>
</tr>
</tbody>
</table>
<p>It seems that Push Protection is the only feature that occurs before a <code class="language-plaintext highlighter-rouge">push</code> to the repo. Dependency scanning and code scanning are centered around the repo or PR. Why is the PR the center of GHAS, rather than the IDE? Wouldn’t it be even faster if the IDE could surface vulnerable dependencies and vulnerable code before developers push changes to the repo?</p>
<h3 id="ides">IDEs</h3>
<p>Developers can be picky about their IDEs. While many modern IDEs are extensible, there is no standard for IDE extensibility. This means that any policy enforcement at the IDE is near impossible, since you’d have to implement that policy for all IDEs. You could mandate a single IDE, but that doesn’t always work.</p>
<p>Additionally, there’s no simple way to force developers to turn certain tools and plugins on in the IDE. Any process that relies on the IDE is relying on the developer to remember to turn on the tool. And what about shared configuration? Relying on configuration files may work - but many IDEs store preferences on the workstation in personal folders rather than in repos, so sharing common config can also be a challenge.</p>
<p>IDEs are great for “simple” analysis - linters that enforce coding standards work really well in IDEs, assuming you can effectively share the linting rules. Most linters are built this way, storing configuration dotfiles alongside the code. Most linters are <em>fast</em> because they typically require very little compute, so running them in the IDE doesn’t distract the developer.</p>
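<p>For example, a team sharing ESLint rules would commit a dotfile like this alongside the code (the rules below are purely illustrative):</p>

```json
{
  "root": true,
  "extends": "eslint:recommended",
  "rules": {
    "no-unused-vars": "error",
    "semi": ["error", "always"]
  }
}
```

<p>Because the config travels with the repo, every IDE that understands ESLint picks up the same rules - exactly the kind of sharing that heavier security tooling lacks.</p>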
<p>However, most security analysis tools worth their salt tend to require heavier compute and take longer to scan because of the more complex problem domain. Putting code scanning into an IDE turns it into a resource hog for developers (have you ever seen a developer waiting for an IDE to compile their code - it’s not pretty!). Furthermore, inundating developers with tons of results is distracting and ultimately reduces remediation effort, since developers become fatigued by noisy alerts.</p>
<h3 id="baked-in-or-optional">Baked in or optional?</h3>
<p>Security testing that isn’t built into the inner sanctum of your code is <em>effectively optional</em>. External tools require someone to build, install, configure, maintain, integrate, and automate them. Even if you buy a 3rd party tool rather than build it yourself, you still have to operate, configure, integrate, and automate it yourself. This friction and extra overhead tends to cause developers to avoid these tools - and you lose any value they offer if just one person “forgets” to run the scan.</p>
<h3 id="background-analysis">Background analysis</h3>
<p>What about running code scanning <em>in the background</em> on the developer laptop? This gets problematic because of compute constraints, and you may end up in a situation where the code changes before a scan completes, producing alerts for code that has already changed or been removed - way too much friction and frustration.</p>
<h3 id="cli-tools-before-pushing-code">CLI Tools before pushing code</h3>
<p>You could require developers to run CLI tools before pushing code - but this is now outside of the IDE anyway. Developers will invariably forget to run the tool, or just avoid running it since it is disruptive to their coding workflow.</p>
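<p>For a sense of what that workflow asks of developers, scanning locally with the CodeQL CLI looks something like this (a sketch - it assumes the CLI is installed, and the database name and language are placeholders):</p>

```shell
# Build a CodeQL database from the source tree, then analyze it.
# Each of these steps can take minutes on a non-trivial codebase -
# which is exactly why developers skip it before pushing.
codeql database create my-db --language=javascript
codeql database analyze my-db --format=sarif-latest --output=results.sarif
```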
<h3 id="pre-commit-hooks">Pre-commit hooks</h3>
<p>What about pre-commit hooks - could code scanning run there? Once again, typical code scanning takes on the order of minutes - far too long for a pre-commit hook. Developers would have a fit if it took 10 minutes to scan the code before every commit! Heck, even 1 minute is too long to wait for a commit to succeed.</p>
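<p>To make the time budget concrete, here is a sketch of the only kind of check that <em>is</em> fast enough for a hook - a grep for obvious secret patterns in the staged diff (the regexes are illustrative, not a real secret-detection ruleset):</p>

```shell
#!/bin/sh
# Hypothetical pre-commit hook: a sub-second, grep-based check of the
# staged diff for obvious secret patterns. Anything heavier (CodeQL,
# SCA) takes minutes and does not belong here.
check_for_secrets() {
  # Reads stdin; prints a warning and returns 1 if a likely secret is found.
  if grep -Eq '(AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36})'; then
    echo "Potential secret detected - commit blocked."
    return 1
  fi
  return 0
}

# In a real hook, feed it the staged changes:
# git diff --cached | check_for_secrets || exit 1
```

<p>Even this trivial check illustrates the constraint: a hook has seconds, not minutes - which is why full code scanning lives in the PR instead.</p>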
<h3 id="data-for-dependency-scans">Data for dependency scans</h3>
<p>Dependency scanning (SCA) is performed on the repo with GHAS. While the dependency graph could be built in the IDE, how would the IDE compare that graph to CVE/CWE databases to determine whether any package contains a vulnerability? The IDE would either have to download the databases or make API calls for every package, either of which could be too slow and disruptive to the daily developer workflow.</p>
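<p>To give a sense of scale: resolving vulnerabilities for the graph means a lookup like the one below for <em>every</em> package (this sketch uses the public OSV.dev query API; the package name and version are just examples):</p>

```shell
# Ask the OSV database whether lodash 4.17.15 has known vulnerabilities.
# Multiply this round-trip by every node in the dependency graph and
# the cost of doing it live inside the IDE becomes clear.
curl -s -X POST https://api.osv.dev/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"package": {"name": "lodash", "ecosystem": "npm"}, "version": "4.17.15"}'
```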
<h2 id="the-sweet-spot">The sweet spot</h2>
<p>Taking the above considerations into account, it becomes clear that placing security scanning at the repo/PR is as far left as you should go. Not only does this make security remediation a <em>team sport</em>, since team members can collaborate around alerts and the remediation process, but it also adds very little disruption to a developer’s daily workflow. For complex codebases where scanning takes longer than 10 minutes and could potentially slow CI/CD, scheduled jobs or parallel workflows (a CI workflow and a scanning workflow) are perfectly acceptable workarounds.</p>
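<p>As a sketch of the parallel-workflow option, a dedicated CodeQL workflow can run on PRs and on a schedule, independently of the main CI build (the language and branch names are placeholders):</p>

```yaml
name: "CodeQL scan"

on:
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 2 * * 1'   # also scan weekly, off the PR hot path

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload results to code scanning
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v3
```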
<p>Developers are already used to collaborating around the PR. The PR is already the rallying point for code review, automated unit testing, linting and other quality gates. GHAS allows teams to add security testing into this pivot point smoothly. This means developers can keep using whatever IDEs they want - but still gain all the benefits of security scanning early and often in the software life cycle.</p>
<p>Dependabot runs post-push (and on a schedule) on the repo and is able to compare the dependency graph to the vulnerability databases. Automated PRs that bump dependencies to patched versions further help developers remediate vulnerable packages quickly and easily, with very low friction and interruption.</p>
<p>Secret scanning is the one exception - the one you want to shift as far left as possible, to prevent secrets from ever making their way into the shared repo. Secret Scanning in GHAS scans a repo’s entire history when you enable it for the first time, but you can also turn on Push Protection to ensure that secrets are kept out of the repo in the first place! Under the hood this works conceptually like a pre-receive hook on the <code class="language-plaintext highlighter-rouge">push</code> - but the computation required for secret scanning is far smaller than that required for code analysis. Secret scanning tends to complete well within seconds, allowing it to be shifted “more left” to the <code class="language-plaintext highlighter-rouge">push</code>.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Shifting left is critical for AppSec in today’s world - but you can actually shift too far left. GitHub Advanced Security shifts as far left as possible, but not into the IDE. This decision is deliberate and considered, since IDEs are not ideal for code and dependency scanning. Push Protection ensures that secrets don’t enter the repo, while dependency scanning and code scanning are centered on the repo and PR, where friction for the development inner loop is low and collaboration around remediating security alerts is encouraged.</p>
<p>Happy securing!</p>Colin Dembovsky