<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>GoCD</title>
<subtitle>Continuous Delivery</subtitle>
<id>https://go.cd/blog</id>
<link href="https://go.cd/blog"/>
<link href="https://go.cd/feed.xml" rel="self"/>
<updated>2016-03-24T05:30:00+05:30</updated>
<author>
<name>GoCD Team</name>
</author>
<entry>
<title>How to Avoid Brittle Code</title>
<link rel="alternate" href="https://go.cd/2016/03/24/how-to-avoid-brittle-code/"/>
<id>https://go.cd/2016/03/24/how-to-avoid-brittle-code/</id>
<published>2016-03-24T05:30:00+05:30</published>
<updated>2016-03-30T09:52:10+05:30</updated>
<author>
<name>David Rice</name>
</author>
<content type="html">
<p>The most common problem with legacy code is brittleness. A brittle codebase is one that a team cannot change without great pain. In ThoughtWorks’ 10 years of building products we’ve learned some hard lessons while trying to keep fairly large codebases malleable, year after year. In this post I'll share what we learned from our biggest challenges. One caveat: my writing down these thoughts isn’t my saying we’ve got it down cold. We still have our share of pain from legacy code. Like any team, we struggle to get better each and every day.</p>
<h2 id="upgrade-everything-all-the-time">Upgrade everything, all the time</h2>
<p>You should aspire to upgrade your dependencies and frameworks all the time. OK, so maybe this is almost in the realm of the obvious now. But very few people thought so 10 years ago. And I wonder whether even teams who <em>know</em> this is the right thing to do, <em>actually</em> prioritize it. It just needs to be something you do all the time and not left to be handled via technical debt. Here's why:</p>
<ol>
<li>
<p><strong><a href="http://martinfowler.com/bliki/FrequencyReducesDifficulty.html">If it hurts, do it more often</a></strong>. One of the most obvious reasons to upgrade all the time is that upgrading can be hard. There’s very often an unpredictable cascade of broken dependencies. The amount of work is mostly unknown. Do it more often and it becomes a non-issue. But there’s more to this than simple pain avoidance.</p>
</li>
<li>
<p>Another motivator for upgrading dependencies is <strong>fixing security vulnerabilities</strong>. One of the biggest differences in building software now versus 10 years ago is the seemingly non-stop flow of vulnerability reports against our libraries, frameworks and applications. Fixing vulnerabilities will almost always involve upgrading some of your dependencies. Upgrades must be easy so that vulnerability fixes can ship quickly.</p>
</li>
<li>
<p>Teams that don’t upgrade regularly typically label the activity as technical debt. Despite the industry being much more willing to talk about technical debt than 10 years ago, it’s still <strong>a very painful conversation</strong> to convince a product manager to pay down technical debt. If your team works in "upgrade everything all the time" mode, you can avoid the conversation about upgrade-related technical debt altogether.</p>
</li>
</ol>
<h2 id="its-about-the-unit-tests">It’s about the unit tests</h2>
<p>The primary pain point for working with legacy code is how long it takes to make changes. So if you intend for your code to be long-lived, you need to ensure that it will be entirely pleasurable for future developers to make changes to it. And there’s one element that dominates all others for this: an extremely fast and thorough unit test suite.</p>
<p>The cycle for adding new features, including any refactoring, is roughly this: write failing test; code; get to green; make it right. If you’re doing it right, you’re executing a lot of unit tests along the way, sometimes a focused set and sometimes the entire suite. If these tests aren’t fast, the development cycle will not be enjoyable. The coding experience should not be: make a couple of changes and sit around for 10 or 20 minutes while tests run. That’s a bad place to be.</p>
<p>Keeping a unit suite fast isn’t just about how you design and code. Yes, you can do a lot of things to keep tests fast, such as avoiding files, databases, sockets, creation of huge graphs of objects, etc. But the other key piece is picking frameworks and languages that lend themselves to fast tests. If you find yourself subverting your framework to make your tests fast, you need to consider a different framework. And—yes—you can read this as my being unlikely to use Rails the next time I'm building a traditional multi-page application.</p>
<p>There’s also a consideration about the size of the application. Once a codebase is a certain size, you need to figure out how to split it up. This is the only way to keep a fairly complete understanding of a piece of software in your head. Finding the seams along which to split a monolith is not an academic modeling exercise. You will spend a lot of time playing with your code, moving things around, redesigning, refactoring. Having a fast test suite to quickly validate your work along the way will make this work several orders of magnitude easier.</p>
<p>Actually, "several orders of magnitude" is likely underselling it. If you need to split up a monolith and have a painfully slow unit test suite, well… you just might be stuck. That’s learning a lesson the hard way. So do everything in your power to keep your unit tests extremely fast and able to run in a single thread on a dev machine.</p>
<h2 id="branch-by-abstraction-should-not-be-a-permanent-state">“Branch by Abstraction” should not be a permanent state</h2>
<p>Long-lived products are going to have a number of tech leads over the years. A certain type of tech lead will come in and start making noise about what stinks in the stack and immediately want to start swapping in new stuff. And that's OK. New shiny toys aren't always bad. For a long-lived codebase, it requires some fresh energy to generate enough momentum to swap out the parts that are no longer holding their weight. That said, I want to make two important points.</p>
<p>A new tech lead should not swap out any tech until they’ve been working on the team for two to three months. There’s too much context to understand. The new tech lead needs to learn empathy for the team and the codebase. The team and tech lead need to build trust and a rhythm. Better decisions will be made with an initial pause.</p>
<p>The typical means of swapping out new tech (outside the absurdity of long-lived branches) is to utilize <a href="http://martinfowler.com/bliki/BranchByAbstraction.html">Branch by Abstraction</a>:</p>
<ul>
<li>An abstraction is placed in front of component X.</li>
<li>Component Y is introduced as a replacement for X.</li>
<li>The abstraction routes intelligently between X &amp; Y while…</li>
<li>X is gradually made obsolete.</li>
<li>X is removed; the abstraction may also be removed.</li>
</ul>
<p>I have many times seen this process fail to complete due to discovering how difficult it is to remove that final 20% of the old component. I cannot stress enough how painful it is to drag around multiple ways of doing things year after year. It slows everything down and is demoralizing. Branch by Abstraction is a great pattern. It’s the only way I’d do this sort of component swapping. But it needs to be accompanied by the team's complete commitment to eliminate the old component within a specified timebox.</p>
<h2 id="technical-debt-can-kill-you">Technical debt can kill you</h2>
<p>Just because we talk about technical debt more than we used to does not provide any guarantee that it will be paid down. Perversely, maintaining a backlog of technical debt makes it easy to never pay it down. It’s too easy for a manager to say “It’s OK to hold off on that. We’ve got this other pressing need over here. It’s logged. We can come back to it.” And in that moment, it’s probably a sound decision. But those pressing needs never go away. Urgent lists only grow longer.</p>
<p>And the situation gets worse. My experience is that there is a point when the technical debt backlog grows so onerous that the team will give up on <em>wanting</em> to pay it off. The team will feel hopeless. The developers cannot achieve flow. The business isn’t getting new value. I have a few thoughts on how to avoid insurmountable technical debt.</p>
<p>A good development team won’t play the same technical debt card over and over again. When a team realizes it’s playing the same type of technical debt card repeatedly, it must bring the pain forward and quickly assume that work into its normal everyday way of working.</p>
<p>My colleague Badri suggests that a team must agree to take on debt collectively. No one individual has the right to make the codebase worse while signing up the entire team to fix it later.</p>
<p>Most importantly, technical leaders and product leaders need to trust each other. Neither side should be able to play the “because I said so” card. Good technical leaders understand the priorities of the business. Good product managers value being able to deliver. Both sides need to talk about risks, costs, and benefits. If you can’t ship, your technical debt has converted into a business problem and that’s bad for everyone.</p>
<p>There’s obviously much more a team can do to write long-lived code: code for the reader, don’t be clever, and always think of your future colleagues to name a few. I’d love to hear what you think should be added to this list.</p>
</content>
</entry>
<entry>
<title>Are you ready for Continuous Delivery? Part 2: Feedback loops</title>
<link rel="alternate" href="https://go.cd/2016/03/15/are-you-ready-for-continuous-delivery-part-2-feedback-loops/"/>
<id>https://go.cd/2016/03/15/are-you-ready-for-continuous-delivery-part-2-feedback-loops/</id>
<published>2016-03-15T05:30:00+05:30</published>
<updated>2016-03-30T09:52:10+05:30</updated>
<author>
<name>David Rice</name>
</author>
<content type="html">
<p>During the 10 plus years ThoughtWorks has been in the Continuous Delivery (CD) ecosystem, we've regularly come across
people wanting to try our tools (GoCD and Snap CI) as they start their journey toward CD. Very often, in attempting to
support teams new to CD, we suggest that they pause any tool evaluation and consider whether their organization is
actually ready to embark on this journey. If you do not frankly assess your team's readiness, the result can be a
massive failure. The path to CD should not start with the immediate adoption of a CD tool.</p>
<p>In <a href="../../../01/25/are-you-ready-for-continuous-delivery/">part one</a> of this series, we explored some core development
practices that are prerequisites for CD. In this part, we'll look at a variety of feedback loops—both manual and
automated—your organization should have in place before rolling out CD.</p>
<h2 id="feedback-loops-and-continuous-delivery">Feedback loops and Continuous Delivery</h2>
<p>The aim of Continuous Delivery is to release software faster, more reliably, and more frequently. Given that, diagrams
of CD typically depict a linear flow. On the surface, this is quite different from Continuous Integration, which is
usually shown as a loop.</p>
<figure>
<img src="/assets/images/blog/are-you-ready-for-continuous-delivery/gocd_thoughtworks_continuous_integration_feedback_loops.png" alt="Continuous Integration feedback loops GoCD ThoughtWorks" class="size_medium" />
</figure>
<p>But CD as a linear flow is an incomplete picture. A good deployment pipeline has numerous feedback loops along the
way. At each stage of the pipeline, verifications are run. If they pass, the pipeline continues. If they fail, the
pipeline halts and the team responds appropriately to the feedback. The feedback along the way prevents CD from being
chaos. Poor quality will almost never reach production in a well-designed pipeline.</p>
<figure>
<img src="/assets/images/blog/are-you-ready-for-continuous-delivery/gocd_thoughtworks_continuous_delivery_feedback_loops.png" alt="Continuous Delivery feedback loops GoCD ThoughtWorks" class="size_medium_large" />
</figure>
<p>Most of the feedback loops you find in a deployment pipeline are good practices in and of themselves. You might already
be doing some or most of them. We think you should have many of these in place before moving forward with CD.</p>
<h2 id="test-automation">Test automation</h2>
<p>The most common feedback loop in any deployment pipeline is the execution of automated tests. You must have a solid test
automation strategy before attempting CD. Some people like the approach of the
<a href="http://martinfowler.com/bliki/TestPyramid.html">test pyramid</a>. We’re actually fine with any sensible approach, as long
as it's fast and reliable. There are myriad types of automated tests, and which ones you use will depend upon your
circumstances. Here, we will take a look at three of the most important types: unit, regression, and performance.</p>
<h4 id="unit-tests">Unit tests</h4>
<p>Unit tests verify your application at the most granular level, typically methods or classes. They are fast, easy to
maintain, and support rapid change of your application. Unit tests should be the foundation of your automation
strategy. If your teams don't value a thorough and fast unit test suite, they won’t be able to move fast or with
confidence.</p>
<p>Some points to consider when assessing your team:</p>
<ul>
<li>The suite must be fast. What’s fast? A few minutes on a large code base is OK. But faster is better. Slow unit tests
result in a slow, horribly frustrating development flow.</li>
<li>On a mature team, the testers will be comfortable with pushing as much of your test automation as possible into your
unit test layer.</li>
<li>Code coverage is important, but tracking metrics is generally only beneficial for a team learning the basics.</li>
<li>Some frameworks and platforms are known to be slow when it comes to unit tests. Do not fight or subvert a framework to
make tests fast. Instead, consider switching your framework or platform.</li>
</ul>
<h4 id="regression-test-suite">Regression test suite</h4>
<p>A regression test suite verifies that your entire application actually works. This suite adds a ton of value to a
deployment pipeline. For many, the regression stage of a pipeline gives the confidence needed to deploy. We want to make
a couple of points about this.</p>
<p>Firstly, regression tests should be 100 percent automated. They are change-detectors and do not require brain power to
execute. A manual regression stage in your deployment process will prove painful. Work to get rid of it. Your testers
can add more value elsewhere.</p>
<p>Secondly, we reject the notion that a regression suite must mean slow, flaky Selenium tests. Our take is, yes, it’s a
fair reputation, but it was earned by many teams doing it wrong. How to author and maintain an automated regression
suite is a book-worthy topic, but quickly:</p>
<ul>
<li>Don’t couple them to small stories or tasks. Only consider them in the context of the entire application.</li>
<li>Relentlessly prune the suite. Keep it tight. Err on the side of leaving something uncovered rather than accepting
duplication.</li>
<li>Treat them as production code. Keep things very clean.</li>
<li>Have programmers write them. Train your testers to code if they are interested in automation. Avoid drag-and-drop
programming.</li>
<li>Do not accept flaky tests. Fix them or get rid of them.</li>
<li>Even the best suites we've seen tend to be slow. Embrace using some combination of hardware, virtualization, and cloud
to parallelize execution.</li>
</ul>
<p>One caveat to note: the best testers will want to do a manual regression every so often, just to help structure how they
think about the application. That’s a good thing so long as it’s about their being thoughtful and not how you actually
integrate regression checks into your process.</p>
<h4 id="performance-testing">Performance testing</h4>
<p>Performance testing—verifying that your application meets specific performance criteria—is a massive topic. There’s no
one way to do it: your approach will vary according to request volume and data size. There are many varieties: load,
stress, soak, and spike, to name a few. It’s too big a topic for this post. That said, we do have some thinking that can
help you assess your maturity:</p>
<ul>
<li>Do not leave this phase for last. We cannot stress enough just how difficult this practice is and how much time we’ve
seen sunk into failed efforts. Everything about it is difficult: modeling, standing up an environment, building the
harness, assessing results, building it into your deployment pipeline.</li>
<li>It’s critical to test against specific criteria. Don't worry about getting your criteria wrong at first. You can change
the criteria once you have real production data.</li>
<li>Don’t assume you'll reach web-scale in month two. You’ll waste huge cycles prematurely optimizing both your application
and your tests. (Don’t read this as us saying, "Don’t consider what your actual scale might be." Just a suggestion that
you be realistic and pragmatic.)</li>
<li>Utilize production monitoring to the greatest extent possible. A canary release can go a long way toward verifying the
performance of a new version of your application.</li>
</ul>
<h2 id="production-monitoring">Production monitoring</h2>
<p>Do you have a production monitoring strategy? Feedback loops aren’t only for pre-production phases. As much as we try to
achieve dev/prod parity, production is truly a unique environment for most organizations and things can—and do—go wrong.</p>
<p>Here are some questions to help you assess your readiness:</p>
<ul>
<li>How quickly does your team know something is broken? Do they learn about it via monitoring? And then how fast can they
respond?</li>
<li>Does your team ignore alerts?</li>
<li>Do your teams tend to invent an approach to monitoring as they go along? Believe it or not, we actually see this a lot
and it’s not a good thing. Be as thoughtful about monitoring and alerts as you are with other parts of your application.</li>
<li>Do you keep a database of events so that you can later query for patterns?</li>
</ul>
<h2 id="user-testing">User testing</h2>
<p>Sitting users down in front of your application and having them try it out is a critical feedback loop. In an enterprise setting, we
like to see two types of user testing: usability testing and user acceptance testing. Usability testing verifies that
users find the application easy to use. User acceptance testing verifies that users can complete transactions with the
application in a real-world setting. There can be a fair bit of overlap between the two types.</p>
<p>If you do not do user testing, you will struggle getting users to accept frequent releases of new versions. Users will
only like rapid changes if the experience remains usable, consistent, and effective.</p>
<p>We also want to call out that these feedback loops are manual processes that often require weeks or months of elapsed
time. They are typically not modeled into the deployment pipeline, and that’s fine. But do not leave them both batched
up until the end. That’s a long wait period and likely an unknown amount of rework before deploying to prod. If you do
this, your process will feel more like waterfall than CD. Run these user tests early and often, while you are writing
the code.</p>
<h2 id="exploratory-testing">Exploratory testing</h2>
<p>All this talk of automation does not mean your testers should retire their analytical thinking and learn to program. In
fact, test automation should free up your testers to do what they're best at: use their
brains. <a href="http://testobsessed.com/2006/04/rigorous-exploratory-testing/">Exploratory testing</a> is when a tester is
simultaneously learning about the system, designing tests, and executing tests. It is when a tester gets into a deep
flow, not even knowing what the next test is. This is where a good tester can really shine, doing some of their most
valuable work.</p>
<p>For most types of applications, a test strategy should include skilled testers performing exploratory testing. This
testing will find problems, teach you about your system, and inform your automated regression suite. As with user
testing, this testing should be done throughout the development process and not as a gate at the end.</p>
<h2 id="summary">Summary</h2>
<p>This list of feedback loops organizations should have in place before doing CD is not exhaustive (e.g., we didn’t
discuss A/B testing). We have presented the more common feedback loops we see where CD has been successful. Obviously
every situation presents different problems and has different needs.</p>
<p>You don’t need to have high marks on everything we have presented to begin your journey to CD. But if you are feeling
only so-so against a majority of them, we’d suggest working on the individual pieces before approaching CD. Once you get
enough of them in place, you will find that you’ve actually completed a large swath of your journey to Continuous
Delivery.</p>
<p>In future parts of this series, we plan to explore culture, the last mile, and more.</p>
</content>
</entry>
<entry>
<title>Add Security Testing to Your Deployment Pipelines</title>
<link rel="alternate" href="https://go.cd/2016/02/08/not-done-unless-its-done-security/"/>
<id>https://go.cd/2016/02/08/not-done-unless-its-done-security/</id>
<published>2016-02-08T05:30:00+05:30</published>
<updated>2016-03-30T09:52:10+05:30</updated>
<author>
<name>Ken Mugrage</name>
</author>
<content type="html"><div>
<div class="float-image float-right">
<img src="/assets/images/blog/deploy-now/security-badge.png" alt="Continuous Delivery Security Testing" class="pad-left" />
</div>
<div class="float-article float-left">
<p>This is the second part of a series called <a href="https://www.go.cd/2016/01/17/not-done-unless-its-done.html">It’s not Continuous Delivery if you can’t deploy right now.</a> In this part, I’m going to cover some more common tools in security testing pipelines.</p>
<p>In my experience, automated security testing is pretty rare in CD pipelines. If the job of a pipeline is to make you confident in your release, confidence in your security is a must have. While it’s not practical to try to list them all, I’ll give a few examples of tools used for this automation. You can find more <a href="https://www.owasp.org/index.php/Appendix_A:_Testing_Tools">here</a>.</p>
<p>Tests created by your team and run by tools like the ones in this article should be a key part of any deployment pipeline.</p>
</div>
<div class="clear" />
</div>
<h2 id="automation-is-one-part-of-the-solution">Automation is one part of the solution</h2>
<p>Security has to be addressed in a holistic way. Automation is a way to get fast feedback on common security issues. A talented <a href="http://security.stackexchange.com/a/46028">penetration tester</a> will consider scenarios and methods that are not usually automated.</p>
<p>The goal of automation is to catch the “low-hanging fruit”. Are we pushing things to Git we shouldn’t be? Are we using an old, vulnerable package we shouldn’t? Are we violating our own company’s rules?</p>
<h3 id="before-committing-code">Before committing code</h3>
<p>There is a lot you can—and should—do before your code even gets to a pipeline. Generally speaking, CD servers watch your source code repositories for changes and then act on those changes. For many issues, this is too late!</p>
<p>One of the biggest recurring stories we hear is about SSH keys, auth tokens, private keys, etc., being checked into source control. There was <a href="http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code">a story</a> a few years ago where a basic search for private id_rsa keys returned over 600 matches on GitHub alone.</p>
<p>Consider incorporating tools that check for these things before they are actually added!</p>
<p>ThoughtWorks recently created <a href="https://github.com/thoughtworks/talisman">Talisman</a>, a tool that is installed as a pre-push hook to Git. The idea is to catch issues before they even get into your source code repository.</p>
<h3 id="static-application-security-testing-sast">Static Application Security Testing (SAST)</h3>
<p><a href="http://www.gartner.com/it-glossary/static-application-security-testing-sast">Gartner</a> defines SAST as “a set of technologies designed to analyze application source code, byte code and binaries for coding and design conditions that are indicative of security vulnerabilities. SAST solutions analyze an application from the 'inside out' in a nonrunning state.”</p>
<p>This starts with having good unit test coverage. Can you authenticate as you should be able to? Are bad authentication requests refused? Are retries being limited properly? Are password policies being properly enforced?</p>
<p>Very early in your build process, your CD server can run some security-specific, source code level tests. These could look for issues ranging from bad code to policy violations.</p>
<p>For Ruby applications, this category includes tools like <a href="http://brakemanscanner.org/docs/introduction/">Brakeman</a> and <a href="https://github.com/rubysec/bundler-audit">Bundler-audit</a>.</p>
<p>Brakeman scans the application’s source code and can give out lots of different <a href="http://brakemanscanner.org/docs/warning_types/">warning types</a>. I particularly like what I’ll call policy checking. Someone implements basic authorization when you don’t want to allow that? Pipeline stage fails.</p>
<p>Bundler-audit does pretty much what it sounds like. It checks to see if you're using Gems that have known vulnerabilities.</p>
<p>For Java applications, <a href="http://www.sonatype.com/">Sonatype</a> has some impressive tools in this area. According to one Sonatype <a href="http://www.sonatype.com/assessments/known-vulnerabilities">study</a> “of the 106 component ‘parts’ used in a typical application, on average 24 have known cyber vulnerabilities, which are rated either critical or severe."</p>
<h3 id="dynamic-application-security-testing-dast">Dynamic Application Security Testing (DAST)</h3>
<p>Again quoting <a href="http://www.gartner.com/it-glossary/dynamic-application-security-testing-dast/">Gartner’s</a> definition, these are tools which are “designed to detect conditions indicative of a security vulnerability in an application in its running state".</p>
<p>The tools that run against your code are a good start, but they aren’t accessing the application like a user. Tools such as <a href="https://portswigger.net/burp/">Burp</a>, <a href="https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project">OWASP ZAP</a>, <a href="http://www.arachni-scanner.com/">Arachni</a>, <a href="http://w3af.org/">w3af</a> and <a href="https://subgraph.com/vega/index.en.html">Vega</a> access the application itself, looking for exploit vectors like SQL Injection and cross-site scripting.</p>
<h2 id="who-creates-the-tests">Who creates the tests?</h2>
<p>Normally, I’m a big proponent of tests being written by the developer as (or preferably before) they write the code. With that said, I don’t think it’s controversial to state that the average software developer isn’t very good at security testing. We should also acknowledge that developers do sometimes leave some doors open on purpose.</p>
<p>I believe security is one of the few areas where having specialists writing and executing the tests is not only acceptable, but preferable. Development teams should seek out these experts and work with them in close collaboration.</p>
<h2 id="where-do-the-tests-go-in-the-cd-pipeline">Where do the tests go in the CD pipeline?</h2>
<p>When people ask this question, they are usually trying to decide if security pipelines or stages should be blocking, meaning that the pipeline can’t move forward on failures. I definitely think they should block, but that doesn’t mean you can’t do other types of testing on the same build.</p>
<p>If your continuous delivery server supports <a href="https://www.go.cd/documentation/user/current/introduction/concepts_in_go.html#fan_in_out">fan-in / fan-out</a>, you can set tests up as entirely separate pipelines that run while other pipelines (or people) are doing other things. In the example below, we’ve decided that we can go ahead with User Acceptance while the security scans are in progress. We still know that it won’t get deployed to our staging environment unless they both pass.</p>
<p><img src="/assets/images/blog/deploy-now/continuous_delivery_security_testing_pipeline.png" alt="continuous delivery security testing pipeline" /></p>
<h2 id="reminder-tools-dont-solve-problems">Reminder: tools don’t solve problems</h2>
<p>I’ve spent the last 15 years working for makers of software development tools. There are only a few things I’m completely sure of, and one of them is that tools do not solve problems by themselves.</p>
<p>Having the right continuous delivery server (like GoCD) will make your life a lot easier, and having the right security tools will make it easier to find issues fast. None of this is a substitute for expertise.</p>
<h2 id="how-do-you-start">How do you start?</h2>
<p>By starting. Pick an area, automate it. Pick another area, automate it. It will take time, but as that time progresses you’ll be more and more confident in the security of your application.</p>
<h2 id="what-are-some-others">What are some others?</h2>
<p>What are some other tools you like? Add them to the comments.</p>
<style type="text/css">
.float-image {
max-width: 40%;
}
.float-image img {
max-width: 100%;
}
.float-image img.pad-right {
padding-right: 10px;
}
.float-image img.pad-left {
padding-left: 10px;
}
.float-article {
max-width: 60%;
}
.float-left {
float: left;
}
.float-right {
float: right;
}
.clear {
clear: both;
}
@media (max-width: 699px) {
.float-left, .float-right {
float: none;
}
.float-image {
max-width: 100%;
}
.float-article {
max-width: 100%;
}
}
</style>
</content>
</entry>
<entry>
<title>Are you ready for Continuous Delivery?</title>
<link rel="alternate" href="https://go.cd/2016/01/25/are-you-ready-for-continuous-delivery/"/>
<id>https://go.cd/2016/01/25/are-you-ready-for-continuous-delivery/</id>
<published>2016-01-25T05:30:00+05:30</published>
<updated>2016-03-30T09:52:10+05:30</updated>
<author>
<name>David Rice and Aravind SV</name>
</author>
<content type="html">
<p>During the 10-plus years ThoughtWorks has been in the Continuous Delivery ecosystem, we've regularly come across people
wanting to try our tools GoCD and Snap CI as they start their journey toward Continuous Delivery (CD). Very often,
in attempting to support teams new to CD, we suggest that they pause any tool evaluation and consider whether their
organization is actually ready to embark on this journey. If you do not frankly assess your teams' readiness, the result
can be a massive failure. The path to CD should not start by immediately adopting a CD tool.</p>
<p>This is the first part of a series of posts about Continuous Delivery infrastructure, culture, and process. In this
first post, we’ll present questions you need to answer honestly about your own people, teams, and organization to
determine your readiness for Continuous Delivery.</p>
<h2 id="part-1-practices">Part 1: Practices</h2>
<h3 id="do-you-put-emeverythingem-into-version-control">Do you put <em>everything</em> into version control?</h3>
<p>A foundation of CD is the ability to put a specific version of your application into a given environment at any point in
time. CD is actually quite fussy about this. It’s a must. And this can only be done by:</p>
<ul>
<li>Putting everything needed to make your application into a version control system</li>
<li>Pushing every change, no matter how small, to version control</li>
<li>Writing an automated script that, given a version, checks out everything from version control and assembles your application</li>
</ul>
<p>CD is impossible when software teams (or the people on a single team) work in isolation from each other. When
development work happens in isolation, large periods of integration and testing are required at the end of the
development phase. This results in long periods during which your application cannot be released. Avoiding these lengthy
integration phases requires that your work be as visible to, and usable by, others as soon as possible. Our preferred
mechanism for doing this is called trunk-based development:</p>
<ul>
<li>Everyone regularly pulls others’ changes from version control.</li>
<li>Everyone regularly pushes their changes to version control.</li>
<li>Everyone works in the same place in version control, typically called "trunk" or "master".</li>
</ul>
<p>If you are new to trunk-based development, it might sound ridiculous. But the practices of CI, unit testing, and
trunk-based development, all playing together, make it not only reasonable but a truly pleasant way of working. We don't
have space here to go into the details of making trunk-based development work, but know that it cannot work without
version control.</p>
<p>Even if you aren’t planning to do CD, start putting everything into version control now. And we mean everything. Not
just your source code. Everything can include images, database scripts, tests, configuration, libraries, documentation,
and more. Source code won’t be enough if you need to get back to a specific version of your application, infrastructure,
etc. Also, this encourages your entire team—not just the developers—to collaborate.</p>
<h3 id="do-your-developers-practice-continuous-integration-ci">Do your developers practice Continuous Integration (CI)?</h3>
<p>For CD to be successful, the entire organization must trust that your software is high quality and always in a working
state. In terms of development team practices, CI is the fundamental building block to achieve this level of trust.</p>
<p>So what is CI? Well, <a href="http://www.martinfowler.com/articles/continuousIntegration.html">much has been written about CI</a>, but here’s the TL;DR version:</p>
<ul>
<li>Developers check code into trunk/master multiple times each day.</li>
<li>Developers maintain a suite of unit tests that verify the code works before merge, locally, and post merge, on an integration machine or CI server.</li>
</ul>
<p>The end result is a development team that has high trust that the code in trunk/master actually works. This will leave
the development team more willing to push code to testers, or even production, more regularly. With this in place, trust
between the groups will quickly grow.</p>
<p>Our experience has been that development teams can only move quickly by combining unit testing, refactoring, and
CI. That topic is too broad to cover here, but know that your teams will never deliver at a fast pace
without CI.</p>
<p>If your developers are not practicing CI, we would recommend putting your move to CD on hold and shifting your focus
entirely to supporting the adoption of CI.</p>
<h3 id="do-you-automate-relentlessly">Do you automate relentlessly?</h3>
<p>CD is very dependent on automation. Automation everywhere is crucial to achieving trusted, one-click, low drama
deployments. Manual processes are error-prone and do not lend themselves to repeatability. To practice CD, the entire
team needs to get into the mindset of relentless automation of nearly everything.</p>
<p>This mindset means asking "Why can't this be automated?" every time anyone on the team does anything manually more than
once. Some components and aspects of your process that need automation are:</p>
<ul>
<li>Tests at different levels, such as unit, integration, UI, regression, security and performance</li>
<li>Database schema creation, data migration and rollback</li>
<li>Installer creation and signing (if you have them)</li>
<li>Generation of documentation for every release</li>
<li>Last-mile deployment of your application to any environment</li>
<li>Provisioning of infrastructure all the way from test environments to production</li>
<li>Provisioning of developer workspaces</li>
</ul>
<p>We mentioned error reduction and repeatability. There are dozens of other compelling reasons to relentlessly automate. We have a few favorites:</p>
<ul>
<li>Scripts codify team knowledge. This ups your bus factor. That’s a good thing.</li>
<li>It encourages consistency across environments. Once you have a script working in one environment, you’ll want to use it everywhere.</li>
<li>The output of these scripts provides a detailed audit trail that is hard to match manually.</li>
</ul>
<p>Relentless automation might seem daunting, particularly if you’re focused on something that truly cannot be automated,
such as exploratory testing. In our experience, there will be many more parts of your process that can be automated than
those that cannot. The best approach is to figure out the manual processes you are already using, and then make a plan
to gradually automate them. As you begin to achieve small successes, you will want to automate more and more.</p>
<p>If your team is hesitant to automate and cannot be convinced about the need, you might have to consider if there's
enough maturity in your team to move toward CD. An automation mindset is a firm prerequisite for CD.</p>
<h3 id="conclusion">Conclusion</h3>
<p>There are numerous small steps you can take early in your CD journey that will have immediate, positive impacts. Don’t
waste time flailing around with CD tools if you’ve got a bunch of low-hanging fruit that can provide high value
quickly. Also, this approach will set you up for success once you do feel the need to adopt an end-to-end CD tool.</p>
<p>In coming posts we will present similar sets of questions for you to consider in the areas of infrastructure,
application design, process, and culture.</p>
<p><em>Update:</em> <a href="../../../03/15/are-you-ready-for-continuous-delivery-part-2-feedback-loops/">Part 2</a> of this series, discussing feedback loops, has been published.</p>
</content>
</entry>
<entry>
<title>It's not Continuous Delivery if you can't deploy right now.</title>
<link rel="alternate" href="https://go.cd/2016/01/17/not-done-unless-its-done/"/>
<id>https://go.cd/2016/01/17/not-done-unless-its-done/</id>
<published>2016-01-17T05:30:00+05:30</published>
<updated>2016-03-30T09:52:10+05:30</updated>
<author>
<name>Ken Mugrage</name>
</author>
<content type="html"><div>
<div class="float-image float-right">
<img src="/assets/images/blog/deploy-now/but_it_just_needs_oven.png" class="pad-left" />
</div>
<div class="float-article float-left">
<p>
This is the first in a series of articles that will cover the types of pipelines you should implement to ensure your software is truly ready for production at any time. The culture changes required in most organizations are incredibly important, but I’m going to focus on some technical practices in this series.
</p>
<p>
Years ago, when I was in management, I had a favorite rule. If asked “is something done?” the answer could not include the word “except” or any of its synonyms.
</p>
<p>
"It’s done except for…" = "no".
</p>
<p>
I hear people say all the time that they're practicing continuous delivery. This declaration is often followed by something like, “I can let the security team know anytime”, or “I just have to run the performance tests". If you can't push your software to production right now, you're not done with your continuous delivery journey.
</p>
</div>
<div class="clear" />
</div>
<h2 id="some-of-the-things-you-might-not-be-running-but-should-">Some of the things you might not be running but should …</h2>
<p>In this article I'm going to give an overview of some of the types of pipelines that you should be running if you want your software to be ready to ship at all times. Of course, this is not an exhaustive list; there are most likely things specific to what you're doing that you should add, just as there are probably things I list that don't make sense for you. The point is that everything possible should be automated as part of your deployment pipeline.</p>
<p>Over the next several weeks I'll be writing more about each of these types of pipelines. Follow me on Twitter at <a href="https://twitter.com/kmugrage">@kmugrage</a> if you would like to know when new articles come out.</p>
<h3 id="security-testing">Security testing</h3>
<div>
<div class="float-image float-left">
<img src="/assets/images/blog/deploy-now/but_it_just_needs_recorded.png" class="pad-right" />
</div>
<div class="float-article float-right">
<p>
All too often this is the primary category of tasks that don’t get run until everything else is “done”. This often results in issues that are very hard to track down, because so much time has passed between tests. By running these tests all the time, you’ll have a much easier time tracking down issues before they become too hard to fix.
</p>
<p>
Many people feel it's not the greatest idea to have the security tests written by the same person who writes the code. There’s also the question of skillset; great security people are not common. It’s important that you use a Continuous Delivery server that is capable of using more than one build material for a single pipeline. That way these tests will run whenever the code or the tests are updated.
</p>
</div>
<div class="clear" />
</div>
<h3 id="performance-testing">Performance testing</h3>
<p>This one is probably the hardest to run all the time if for no other reason than hardware costs. To properly performance test many applications takes a serious dedication of resources. Luckily public and private cloud infrastructures have made this somewhat easier. Consider having a pipeline where the first stage spins up the machines you need either as virtual machines or containers, runs the tests, and then shuts down those machines.</p>
<p>In this day of “search for a term, hit the link, wait no more than 2 seconds for the page to load” performance is critical. To make matters worse, performance issues are often very hard to track down. You want to know as soon as possible if you’ve introduced a problem.</p>
<h3 id="management-of-the-environments">Management of the environments</h3>
<p>It’s been said many times that it’s much easier to break an application by messing up the environment than it is by doing something wrong in the source code. If something like a security advisory comes out and you need to update systems as soon as possible, you should be able to commit the change to a configuration management tool, have that change picked up by your continuous delivery system, and run it through exactly the same process as a code change.</p>
<h3 id="testing-of-the-deployment-itself">Testing of the deployment itself</h3>
<div>
<div class="float-image float-right">
<img src="/assets/images/blog/deploy-now/but_it_just_needs_paint.png" class="pad-left" />
</div>
<div class="float-article float-left">
<p>
This isn’t really a type of pipeline all by itself. This is the concept that you should be deploying the software to every environment exactly the same way you plan to deploy it in production. Unfortunately, it’s still not uncommon for people to copy files to a QA server, run tests, and only then run the actual deployment tool that pushes the same software to a production server.
</p>
<p>
No matter how you’re doing your actual production deployment, whether that is shell scripts, dedicated tools, configuration management tools, or others, you should be deploying in exactly the same way everywhere else. Consider using tools that can read environment specific details from environment variables or other inputs.
</p>
</div>
<div class="clear" />
</div>
<h3 id="why-wouldnt-you-do-this">Why wouldn’t you do this?</h3>
<p>One of the biggest objections I hear to running all of these types of pipelines on every change is that the pipeline will take too long to run. This is why having a continuous delivery server that’s capable of running multiple pipelines in parallel while ensuring that software doesn’t go any further if any of those pipelines fail is so important.</p>
<p>The other objection I hear the most is that people simply lack the automation around these areas. This is certainly valid, and I don’t want to pretend that any of this is easy to do. Don’t be afraid to start with what you can, and then add other things to your pipeline as your capabilities grow. A continuous delivery pipeline is a bit of a living system; it should evolve along with your processes.</p>
<h3 id="what-are-the-other-big-ones">What are the other big ones?</h3>
<p>I'm very interested in hearing about other types of pipelines that you find useful.</p>
<style type="text/css">
.float-image {
max-width: 25%;
}
.float-image img {
max-width: 100%;
}
.float-image img.pad-right {
padding-right: 10px;
}
.float-image img.pad-left {
padding-left: 10px;
}
.float-article {
max-width: 75%;
}
.float-left {
float: left;
}
.float-right {
float: right;
}
.clear {
clear: both;
}
@media (max-width: 699px) {
.float-left, .float-right {
float: none;
}
.float-image {
max-width: 100%;
}
.float-article {
max-width: 100%;
}
}
</style>
</content>
</entry>
<entry>
<title>Guest post: GoCD - Continuous Delivery through pipelines</title>
<link rel="alternate" href="https://go.cd/2015/12/28/gocd-continuous-delivery-through-pipelines/"/>
<id>https://go.cd/2015/12/28/gocd-continuous-delivery-through-pipelines/</id>
<published>2015-12-28T05:30:00+05:30</published>
<updated>2016-03-30T09:52:10+05:30</updated>
<author>
<name>Nenad Bozic</name>
</author>
<content type="html"><p>In order to compete in today’s IT market, you must be truly agile: you must listen
to your customers and deliver features in a timely manner. To support business
development and marketing in their lean strategies we, as developers, must leverage
fast delivery, fast deployment and test automation. Continuous Delivery makes it possible
to continuously adapt software in line with user feedback, shifts in the market and changes
of business strategy. Testing, support, development and operations work together as one
delivery team to automate and streamline the build, test and release process.</p>
<p>There are a lot of quality tools out there. For a long time, we used Jenkins, the most
widespread CD tool, with a great community and a lot of plugins and integrations with
other tools. What we lacked was a natural pipeline flow and good visualization. We also
lacked some more advanced features like pipeline dependencies, conditional triggering of jobs
from many pipelines, templating, etc. We needed to look elsewhere, and we decided to go with
GoCD, a product by ThoughtWorks which became open source in 2014. It is, according to their website,
an advanced Java/Ruby on Rails continuous integration and release management system.
The major reason why we chose it was that pipelines are modeled as first-class citizens
and that, in our opinion, it uses the right abstraction for a delivery pipeline.
But let us start from the beginning.</p>
<h2 id="gocd-overview">GoCD overview</h2>
<p>At the highest level, Go consists of two main components, the Go Server and multiple Go Agents.
The system works on a pull model where agents periodically poll the server for work.</p>
<p><img src="/assets/images/blog/go-cd-continuous-delivery-through-pipelines/goCD-architecture.png" alt="GoCD architecture" /></p>
<p>The main flow of Go goes through the following stages:</p>
<ol>
<li>User adds a pipeline with material</li>
<li>MDU (material update sub-system) notices a new commit</li>
<li>Scheduler schedules a build</li>
<li>Go agents poll for work and get assignments</li>
<li>Agent does the work</li>
</ol>
<p>Let’s talk about the main building blocks of Go. As stated before, the main abstraction is the
<strong>pipeline</strong>, which is the highest unit of work, with its inputs and outputs. The input object
of a pipeline is called a <strong>material</strong> and it can be either a version control resource
(Git, Gerrit, Subversion, Mercurial) or an output from another pipeline. The output of a
pipeline is called an <strong>artifact</strong>. Since there is one server and multiple agents,
there is no guarantee that the whole pipeline will be performed by the same Go agent.
Artifacts are copied to the Go server and picked up by the agents that require them for their jobs.</p>
<p>Each pipeline consists of one or more <strong>stages</strong>, where each stage has one or more <strong>jobs</strong> and
each job has one or more <strong>tasks</strong>. This level of granularity exists to enable
parallelism. Within a pipeline, stages are sequential, and each can be triggered automatically on
success or manually. Within each stage, jobs run in parallel, and a stage is considered
failed if at least one of its jobs fails. Tasks within each job are, again, sequential.</p>
<p>After installing the Go server and agents, there is no need for extensive configuration.
However, it is recommended to create a separate partition on the server machine’s hard disk for
Go server artifacts (artifacts can grow over time and cause problems). In the server
configuration, there is also an admin tab for URL configuration. We needed to get feedback
on failing builds, so we integrated Go with LDAP; that way each user of Go had an email address and could
subscribe to build notifications based on preferred filters.
Here is a <a href="https://www.go.cd/documentation/user/current/configuration/dev_authentication.html">link</a>
which explains the authentication process.</p>
<p>It is worth mentioning that GoCD has a powerful API for power users, through which the entire
configuration can be performed via REST. It has great documentation with examples,
JSON requests and responses. Here is a link to the
<a href="https://api.go.cd/current/#introduction">GoCD API documentation</a>.</p>
<h2 id="pipeline-dependencies">Pipeline dependencies</h2>
<p>Go supports pipeline dependencies. Artifacts defined in upstream dependencies can be
accessed by downstream pipelines. A downstream pipeline can be configured either to
be triggered automatically (for example, for builds on a development environment) or
manually (for example, for builds on a production environment).</p>
<p>A dependency on multiple pipelines is called <strong>fan-in</strong>, and it ensures that a pipeline is triggered
only when all upstream dependencies finish. Upstream dependencies can be other
pipelines or version control, which makes them powerful. If you have a client-server
application with functional tests on the client’s side that depend on the server
being updated, you can make a client functional-test pipeline which will trigger on a commit
on the client and a successful build and deploy of the server side.</p>
<p><img src="/assets/images/blog/go-cd-continuous-delivery-through-pipelines/goCD-fanIn.png" alt="GoCD fan-in" /></p>
<p>An additional challenge here is a diamond-like dependency, where it is not enough for both
upstream dependencies to finish; they must also be at the right versions. The following diagram depicts
the problem. Here, configuration is really important: C1 must be set as a material for both
C2 and C3, and C2 and C3 are materials for the Package pipeline. Package will auto-trigger
when both C2 and C3 go green with the same version of the code.</p>
<p><img src="/assets/images/blog/go-cd-continuous-delivery-through-pipelines/dieamond-problem.png" alt="GoCD diamond problem" /></p>
<h2 id="pipeline-templates">Pipeline templates</h2>
<p>The template engine is a great effort and time saver. Each pipeline can be promoted to a template
and, based on that template, other pipelines can be built with a few clicks. We used this
extensively for deployment pipelines. Usually, there are multiple environments
(development, stage, UAT, production) and the deployment process is the same, with only a
few parameters that differ. You can create one deployment pipeline and test it.
When you are sure it works, extract the template out of it and clone the deployment
pipelines for the other environments. The differences can usually be covered with a couple of
parameters, which can be set upon pipeline creation.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In the introductory part, we mentioned that pipelines are modeled in GoCD as first-class
citizens. In Jenkins, you can order a row of boxes and let the flow go through
each one of them until it finishes. Each box in Jenkins is equivalent to a task
in GoCD. Moreover, in GoCD, each box is a pipeline itself, with its stages, jobs and tasks.</p>
<p>GoCD is a fairly new player in the automation world, with a refreshing UI and a couple of
nice concepts. The community is still growing, but it is responsive. We had a couple of
problems which we posted on StackOverflow and usually got answers pretty quickly.
We are using the <a href="https://github.com/ashwanthkumar/gocd-build-github-pull-requests">Gerrit/Github plugin</a>,
which is being actively developed, to notify GitHub PRs of failed or passed builds.
New releases are pushed frequently. The documentation is great, especially the API documentation.
It’s a pleasure to use such a great UI and a couple of nice advanced features, and
you can model your pipeline in a great variety of ways.
Some features are still missing, but we at <a href="https://www.smartcat.io/">SmartCat</a> are all
about open source, so, in the future, we will try to help this project and start contributing.</p>
<h2 id="about-the-author">About the author</h2>
<p><em>This is a guest post by Nenad Bozic, one of the co-founders of SmartCat. You can find out more about
Nenad and the SmartCat team on their <a href="https://www.smartcat.io/">website</a>.</em></p>
</content>
</entry>
</feed>