
DEV Community

Erin Bensinger for The DEV Team

Posted on

#DEVDiscuss: CI/CD Pipelines

image created by Margaux Peltat for the Chilled Cow YouTube channel

Time for #DEVDiscuss — right here on DEV 😎

Inspired by @pavanbelagatti's Top 7 post, tonight’s topic is... Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipelines 🔁


Questions:

  • In your opinion, what are the pros and cons of using CI/CD in development?
  • Which tools are you using for CI/CD?
  • How is your pipeline set up? How could it be improved or simplified?
  • Any triumphs, fails, or other stories you'd like to share on this topic?

Top comments (12)

Moray Macdonald

I find that while automated testing on push is essential, having manual deployments can be quite fun. Having a bit of ceremony around deployments is a great way to celebrate the team's achievements. I always script my deployments but they're not always automated. Particularly for new devs it's a nice confidence boost to let them "push the button" on the deployment that makes their code live.

When you're a startup building out fast though, spending a day to get your CI/CD up and running nicely can really pay dividends later on. And make sure that your rollback process is automated too - being able to get changes live quickly isn't so valuable if rolling back a bad commit takes hours of hair-pulling and Git wizardry!
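The automated-rollback idea above can be sketched as a manually triggered pipeline job that redeploys a known-good version. This is a hypothetical GitHub Actions example, not from the comment; the `deploy.sh` script and the tag input are illustrative assumptions:

```yaml
# Hypothetical rollback workflow: redeploy a previously released tag on demand.
name: Rollback
on:
  workflow_dispatch:
    inputs:
      version:
        description: "Previously released tag to redeploy"
        required: true
jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.version }}
      # Reuse the exact same deploy script as a normal release, so a
      # rollback is just "deploy an older known-good version" -- no
      # hair-pulling or Git wizardry required.
      - run: ./deploy.sh
```

Because it reuses the regular deploy path, the rollback gets exercised (and trusted) as often as the deploy itself.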

Ryan Brown

Pros:

  • Reliable build process: each version is built the same way, instead of on Jerry's desktop dev machine. Jerry doesn't keep his libraries updated, except that one that's on the beta channel. Sometimes Jerry forgets a step. (Sorry to any real Jerrys out there.)
  • Each dev doesn't need to know the full build process head-to-toe (they should though)
  • Build errors are apparent, allowing quicker fixes
  • Saved time in the long run

Cons:

  • Increased time up-front to construct the pipeline
  • Separates the devs from the final build (requires more education to keep people in the know)

My setups:

  • Work:
    • Azure GIT
    • TeamCity (via a push notice middleware trigger)
    • Octopus Deploy (server install)
  • Home:
    • BitBucket
    • TeamCity (via a push notice middleware trigger, different from the one above)
    • Octopus Deploy (free cloud instance)

How could these be improved? For the work setup, we don't have as much of a tie-in with Azure Pipelines as we need; that's a parallel pipeline that serves different needs.

I've not done much in the way of setting up config-as-code in either TeamCity or Octopus Deploy; both are areas of interest for future development. The existing UI-friendly interfaces are good for our less technical developers (those who specialize in BI or data but are less versed in the other technologies used to package deployable assets).

Triumphs include being able to support some 400-odd projects. Some are small, only collecting a few SQL script files; others are wide-ranging process-automation engines and client libraries used in other projects. The system is stable enough that most of our code pushers forget it exists and just expect the magic that happens between git push and a deployment to our test environment.

Fails include the lack of education, or the short shelf life of any teaching in this area. Plenty of write-only documentation has been created to solve this, but despite being only a quick search away, it is rarely found.

There's also the occasional break when someone checks in a whole database and trips the package-size checker, or someone else omits a file and can't make sense of the error log.

Using Octopus Deploy reduced the incidence of production deploy errors to nearly zero. It did highlight the differences between our test and production systems as the deployment process became scripted and repeatable: no more chance for the deployment person to skip steps or quickly "solve" an issue during install.

Maty

Isn't the free version of Octopus Deploy limited to only 5 projects?

Ryan Brown

The free license limits the number of deploy targets (deploy agents really) and concurrent tasks:

Octopus Free License

Maty

Oh, that’s nice. I guess I misunderstood their plans. Thanks for letting me know; it looks like something I would love to implement in my CI/CD flow.

Ryan Brown

They changed their plans a while ago. They seem to re-evaluate their offerings and licensing often (usually to the user's benefit).

  • As stated in the original, it really cut down on failed deployments and reduced the barrier to deploying to almost nil.
  • I use it almost exclusively for Docker deployments now: earlier steps set up env files and volumes, then create/run the containers. It's very slick.
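The deployment shape described here (env files and volumes prepared by earlier steps, then containers created/run from them) can be sketched with a Compose file like this; the service name, image, and paths are illustrative, not from the comment:

```yaml
# Illustrative compose file for a pipeline-driven Docker deployment.
services:
  app:
    image: registry.example.com/app:1.2.3   # hypothetical versioned image
    env_file: .env                          # written by an earlier deploy step
    volumes:
      - app-data:/var/lib/app               # volume prepared before first run
    restart: unless-stopped
volumes:
  app-data:
```

Pinning a specific version tag (rather than `latest`) keeps each deployment, and each rollback, repeatable.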
Maty

I only use Docker Compose deployments; currently my redeployments happen via Watchtower. I don't have much control over it, so I am searching for possible improvements. Octopus Deploy looks really promising.
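For context, a Watchtower-based redeployment setup typically looks like the sketch below: Watchtower watches the Docker socket and restarts containers when their image tag receives a new push. The `app` service and image name are illustrative:

```yaml
# Sketch of a Watchtower-driven redeployment: "control" is limited to
# what Watchtower offers, e.g. the polling interval.
services:
  app:
    image: ghcr.io/example/app:latest       # hypothetical image Watchtower re-pulls
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Watchtower manage containers
    command: --interval 300                 # poll for new images every 5 minutes
```

The trade-off is exactly the one raised above: redeploys are automatic, but there is no per-release gate or rollback step, which is what a tool like Octopus Deploy adds.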

Ben Halpern

Any triumphs, fails, or other stories you'd like to share on this topic?

I think the idea of CI/CD is pretty well understood and adopted, but IMO the fails come when you do CI/CD just because it's become standard, without actually practicing the continuous part or accepting the tradeoffs. I think effective CI means you will push out bugs, and you need processes and cultures for accepting those scenarios and dealing with them appropriately.

CI/CD doesn't work if practiced only as lip service.

Bervianto Leo Pratama

I host my open-source projects on GitHub, so I use GitHub Actions for CI/CD. Mostly I use CI: unit testing and integration testing. It helps me verify things when upgrading dependencies.
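A minimal workflow along these lines might look like the following; this is a generic sketch assuming a Node project, and the commands are illustrative rather than taken from the comment:

```yaml
# Minimal CI workflow: runs the test suite on every push and pull request,
# which is what catches breakage from dependency upgrades.
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci        # clean install from the lockfile
      - run: npm test      # unit + integration tests
```

Combined with a bot like Dependabot opening upgrade PRs, this gives an automatic pass/fail signal on every dependency bump.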

Jean-Phi Baconnais

I use GitLab CI for my personal projects; it's so great. It's easy to create good jobs, it performs well, and I can execute actions (lint, test, build, deploy) after each commit.
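The lint/test/build/deploy sequence mentioned here maps directly onto GitLab CI stages. This is a generic sketch assuming a Node project; the images, commands, and `deploy.sh` script are illustrative:

```yaml
# .gitlab-ci.yml sketch: four stages, run in order after each commit.
stages: [lint, test, build, deploy]

lint:
  stage: lint
  image: node:20
  script:
    - npm ci
    - npx eslint .

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: node:20
  script:
    - npm ci
    - npm run build

deploy:
  stage: deploy
  script:
    - ./deploy.sh            # hypothetical deploy script
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # only deploy from the default branch
```

Each stage only starts if the previous one passed, so a lint failure stops the pipeline before any deploy can happen.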

Alexandre Nédélec

The only con is that it takes time to set up CI/CD pipelines. Some tools I've used:

  • Azure Pipelines: easy to use thanks to a GUI assistant for choosing pre-made tasks for your pipeline; lots of features
  • GitHub Actions: very similar to Azure Pipelines but without the assistant; more community actions than Azure Pipelines
  • GitLab CI: very powerful, but the developer experience is not as good as the others'; no pre-made tasks, only scripts

All three use YAML for editing and are very specific to their platform. Using a platform-agnostic tool and a programming language to author pipelines is a better way in my opinion; check out tools like NUKE Build.
Mihnea Simian • Edited

CI/CD should be a hard requirement on any project that involves more than one contributor, although I would practice it even on individual/side projects. It's hygienic.

Steps should be automated as much as possible (sample steps: build, test, apply DB changes, asset deployment, cache invalidation, etc.)
The complexity and number of stages and checks depend on your functional and non-functional requirements, and on how much they matter to you (read: to the stakeholders): what's the minimum test coverage you want? Do you care about linting? How about Sonar/code quality? The four-eyes principle, a protected Git branch, any change-management steps? Accessibility tests, load tests? A pre-production environment for smoke tests? A canary deploy on 5-10% of traffic before moving forward?
The list can go on and on - the sky is the limit. But the principles remain:

  • as much as possible, everything should be versioned and automated (automate anything that's replay-able)
  • make it run fast, keep the feedback loop short
  • every change should trigger the feedback process
  • broken pipeline = everybody should focus on making it work again (although I've heard there are some alternatives to this)
  • include the checks / quality gates that matter to your stakeholders

IMHO, CI/CD is a craft in itself. I read the first parts of Continuous Delivery by Jez Humble and David Farley, as advised by my lead, and I can't recommend it enough. It's great stuff, regardless of your career path in software products or engineering.

In my org, I've studied the frontend build & deploy process, and we use the full harness for building (npm, webpack) and running tests (unit), plus vulnerability scans and dependency version checks (again, for security). They're all triggered when merging to designated protected branches. A subset is run when opening the PR (so a merge does not break the integration pipeline).
All pipeline configurations are stored in versioned .yml files inside the repo of the deliverable, since we use Azure DevOps for the whole ecosystem.
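The "subset on PR, full run on merge" split described above can be expressed in a single Azure Pipelines YAML file by gating steps on the built-in `Build.Reason` variable. This is a generic sketch, not the org's actual pipeline; the npm commands are illustrative:

```yaml
# azure-pipelines.yml sketch: full run on merge to the protected branch,
# a faster subset when a PR targets it.
trigger:
  branches:
    include: [main]       # full pipeline on merge
pr:
  branches:
    include: [main]       # validation run on PRs

pool:
  vmImage: ubuntu-latest

steps:
  - script: npm ci && npm test
    displayName: Build and unit tests (always)
  - script: npm audit --audit-level=high
    displayName: Vulnerability scan (skipped on PRs)
    condition: ne(variables['Build.Reason'], 'PullRequest')
```

Keeping the PR subset fast preserves the short feedback loop, while the full run on merge protects the integration pipeline.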

Mkay, maybe I should keep this short. I wish you all the peaceful, in-control moments the CI/CD process brings to teams :)!