DevStories: how did I start programming?

It all seemed so logical: I type in a few lines of code from a book and my game can already run on the borrowed TVC, and I can even save it to tape or modify it. It started completely innocently, at around eight years old.

But soon there was a bit of programming on a C4 in primary school, then another borrowed machine, a 286, where qBasic ruled everything. Even for checking my solved math homework: coordinate geometry problems and systems of equations, solved with little scripts just out of curiosity.

My uncle inspired me a lot; I met him maybe every six months, but he always had something interesting to say, even though this was not even his profession.

The first spectacular things came with HTML and JavaScript coding in 1995. The earlier procedures were turned on their head, and AlieN helped me a lot in high school with understanding the abstraction in the documentation – we always keep the SHIFT key pressed 😊

Since then I have managed to dabble in all sorts of things. I regretted once going off to be a consultant; in a real development team there was much more to code and to learn.

Today it does not matter which language or architecture I sit down in front of. In fact, the best part is learning a new framework together with a coachee.

DevOps health radar

OK, you have your value stream; you know how you deliver value to your users.

The DevOps mindset is about continuous learning, and we need that across the full delivery.

SAFe has published a useful DevOps assessment tool to review your current situation and to start a conversation about where you would like to go.

You don't have to follow SAFe to review your current situation.

I would recommend using the online tool to get proper graphics and also a benchmark report.


What are the aspects?


Continuous Learning Loop Aspects

Each activity below comes with a description and rating criteria; give each one a score from 1 to 10.

Continuous Exploration

Hypothesize

Hypothesizing entails expressing a business idea (or epic) in terms of the business value it is expected to deliver. This hypothesis is then implemented as a Minimum Viable Product (MVP) through the Continuous Delivery Pipeline and evaluated for accuracy when released.

Rate your team’s ability to translate business ideas into hypothesis statements that are clear and measurable.

Sit (1-2): Ideas are vague or not defined.

Crawl (3-4): Ideas are defined (example: as epics) but do not include hypothesis statements.

Walk (5-6): Some ideas are expressed as hypothesis statements with measurable outcomes.

Run (7-8): Most ideas are expressed as hypothesis statements with measurable outcomes and include MVPs.

Fly (9-10): All ideas are expressed as hypothesis statements with measurable outcomes and include MVPs.


Collaborate & Research

Collaborate and Research involves Product Management working directly with end-users, stakeholders, and subject matter experts to understand customers' needs. Product Management also identifies specific business outcomes and associated metrics to guide solution delivery.

Rate your team's ability to collaborate with customer experts and IT experts to define Minimum Marketable Features (MMF) in support of the hypothesis.

Sit (1-2): Product Management roles and responsibilities are not defined or followed.

Crawl (3-4): Product Management creates requirements in large batches with little customer or development collaboration.

Walk (5-6): Product Management collaborates with business-side or development-side experts, but not both, when defining requirements.

Run (7-8): Product Management regularly collaborates with business-side, development-side, and operation-side experts but does not define MMFs.

Fly (9-10): Product Management always collaborates with business-side, development-side, and operation-side experts and defines MMFs.


Architect

Architecting for continuous delivery involves applying "just enough" intentional architecture to assure policy compliance without sacrificing product development flow, to ensure solutions are loosely coupled, and to continuously pay down technical debt.

Rate your team's effectiveness at architecting for continuous delivery.

Sit (1-2): Architecture is monolithic and fragile; it is difficult to change and involves managing complex dependencies across many components and systems.

Crawl (3-4): Architecture is predominantly monolithic but some applications and systems are loosely coupled.

Walk (5-6): Architectures is mostly decoupled but doesn't allow Release on Demand.

Run (7-8): Architecture is aligned around value delivery and with few dependencies across components and systems.

Fly (9-10): Architecture is built for Release on Demand and operability.


Synthesize

Synthesizing involves combining the outputs of Hypothesize, Collaborate & Research and Architect to produce well-formed, prioritized features. These features then become the primary vehicle of value delivery through the remainder of the Continuous Delivery Pipeline.

Rate your team's ability to synthesize the results of Continuous Exploration activities into a well-crafted, prioritized, actionable feature backlog.

Sit (1-2): The program backlog does not exist or is not shared.

Crawl (3-4): The program backlog exists but the Features are incomplete and prioritization is an afterthought.

Walk (5-6): The program backlog contains fully defined features, but they are not prioritized using Weighted Shortest Job First (WSJF).

Run (7-8): Features in the program backlog are complete, prioritized using WSJF and calibrated to the delivery capacity of the agile release train (ART).

Fly (9-10): The program backlog is a collection of Minimum Marketable Features created using Behavior Driven Development (BDD) and prioritized using WSJF.
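WSJF itself is a simple calculation: Cost of Delay (user-business value + time criticality + risk reduction/opportunity enablement) divided by job size. A sketch of ranking a backlog with it, with invented feature names and relative estimates:

```python
def wsjf(user_value: int, time_criticality: int, risk_opportunity: int, job_size: int) -> float:
    """Weighted Shortest Job First: Cost of Delay / Job Size.
    Components are relative estimates (e.g. modified Fibonacci scale)."""
    cost_of_delay = user_value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

backlog = [
    # (feature, user-business value, time criticality, risk/opportunity, job size)
    ("Single sign-on", 8, 3, 5, 8),
    ("Audit logging",  3, 8, 3, 2),
    ("Dark-mode UI",   5, 1, 1, 3),
]

# Highest WSJF first: the most valuable, most urgent, smallest jobs win.
ranked = sorted(backlog, key=lambda f: wsjf(*f[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: WSJF = {wsjf(*scores):.2f}")
```

Note how the small audit-logging job jumps ahead of the bigger single-sign-on epic: that is exactly the effect WSJF is designed to have.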


Continuous Integration

Develop

Developing in the Continuous Delivery Pipeline involves splitting features into stories, implementing stories in vertical slices using Test-Driven Development (TDD), and committing changes to version control as they are made.

Rate your team's ability to quickly and reliably define and implement stories.

Sit (1-2): The team backlog does not exist or is not used to manage daily work.

Crawl (3-4): Stories are either incomplete or too verbose; unit tests are generally not written; peer reviews are not conducted.

Walk (5-6): Stories are complete; most changes have unit tests; peer reviews are usually conducted.

Run (7-8): Code is checked in daily; unit test coverage is 80%+; peer reviews are always conducted.

Fly (9-10): Code is checked in multiple times per day; tests are written before code (TDD); pair work and other Built-in quality practices are the norm.
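The test-first rhythm mentioned above fits in a few lines: write a failing test, then just enough code to make it pass, then refactor. A toy sketch (the `discount` function and its rule are invented for illustration):

```python
# Step 1 (red): the test is written first and fails until the code exists.
def test_discount():
    assert abs(discount(200.0) - 180.0) < 1e-9   # 10% off orders over 100
    assert discount(50.0) == 50.0                # no discount at or under 100

# Step 2 (green): write just enough production code to make the test pass.
def discount(total: float) -> float:
    """Apply a 10% discount to orders over 100 (an invented example rule)."""
    return total * 0.9 if total > 100 else total

# Step 3 (refactor): improve the code with the test as a safety net.
test_discount()
print("tests pass")
```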


Build

Build is triggered at the moment of check-in and involves compiling, unit testing (and other forms of component-level validation), successfully merging to trunk/main, committing to the repository, and producing deployable artifacts.

Rate your team's effectiveness at building and integrating continuously.

Sit (1-2): Builds are run fewer than once per iteration and/or are completely manual.

Crawl (3-4): Builds are run once per iteration and are partially automated. Dev branches are open for a month or more and builds break often.

Walk (5-6): Automated builds run once a day. Broken builds are corrected in 2-4 hours. Manual unit tests are run against each build. Dev branches are open for 2-4 weeks.

Run (7-8): Builds run automatically upon code commit; broken builds are corrected within 1 hour; automated unit tests are run against each build; dev branches are merged to trunk every iteration.

Fly (9-10): Builds run on every commit; builds include static code analysis and security testing; gated commits prevent defects from entering version control; dev branches are merged to trunk on every commit.


Test End-to-End

Testing involves validating feature-level functionality in production-like environments. End-to-end testing typically includes functional testing, integration testing, regression testing, performance testing and exploratory testing.

Rate your team's effectiveness at testing continuously, end-to-end in production-like environments.

Sit (1-2): Testing is performed manually in environments that do not mimic production; testing occurs in large batches during a scheduled "testing" phase.

Crawl (3-4): Testing is mostly manual in non-production-like environments; stories are implemented and tested independently within a single PI.

Walk (5-6): Half the testing is automated and performed in production-like, or production-simulated, environments every PI.

Run (7-8): The majority of tests are automated and run in production-like environments; stories are implemented and fully tested every iteration.

Fly (9-10): Successful builds trigger automatic deployment to production-like test environments; all tests are automated; tests run in parallel and changes are fully validated after every commit.


Stage

Staging involves deploying features to a full copy of the production environment, from where they can be demonstrated to stakeholders, user acceptance tested and hosted for training purposes prior to production launch.

Rate your team's ability to stage features in full production-like (non-test) environments for final validation prior to production deployment.

Sit (1-2): No staging environment exists or we use a test environment for staging.

Crawl (3-4): Features are deployed manually to a staging environment once every PI.

Walk (5-6): Features are deployed to a staging environment once per month and demonstrated to Product Management.

Run (7-8): Features and infrastructure are auto-deployed to a staging environment every iteration and accepted by Product Management.

Fly (9-10): Stories, changes and infrastructure are auto-deployed to a staging environment, validated, and immediately proceed to deployment.


Continuous Deployment

Deploy

Deployment is the actual migration of features into the production environment. Because the Continuous Delivery Pipeline separates deployment from release, deployed features are not assumed to be live to end users.

Rate your team's ability to continuously deploy features to production as well as the ability to control their visibility using feature toggles and/or other means.

Sit (1-2): Features are deployed to production every 3+ months; deployments are manual and painful; "deployed" implies "released".

Crawl (3-4): Features are deployed to production at PI boundaries; deployments are mostly manual; "deployed" implies "released".

Walk (5-6): Features are deployed to production every iteration; deployments are mostly automated; some features can be deployed without being released.

Run (7-8): Features are deployed to production every iteration and fully automated through the pipeline; dark releases are common.

Fly (9-10): Features are deployed continuously throughout each iteration; Dev teams initiate deployments directly via pipeline tools; release is completely decoupled from deployment; dark releases are the norm.
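Decoupling deployment from release usually comes down to a toggle check at runtime: code ships dark and is switched on later without a new deployment. A minimal in-process sketch; real setups read flags from a feature-flag service rather than a plain dict, and the flag and function names here are made up:

```python
# Flags would normally come from a config service, refreshed at runtime;
# a plain dict stands in for that here.
feature_flags = {
    "new_checkout": False,   # deployed dark: code is in production, users don't see it
}

def is_enabled(flag: str) -> bool:
    return feature_flags.get(flag, False)

def checkout():
    if is_enabled("new_checkout"):
        return "new one-click checkout"
    return "legacy checkout"

print(checkout())                     # legacy path: feature deployed but not released
feature_flags["new_checkout"] = True  # "release" is just a flag flip, no redeploy
print(checkout())                     # new path now live
```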


Verify

Deployments must be verified for completeness and integrity before releasing to end users.

Rate your team's ability to accurately determine deployment success or failure and ability to roll back or fix forward as appropriate to correct deployment issues.

Sit (1-2): Deployments are not verified in production before being released to end users.

Crawl (3-4): Deployments are verified with manual smoke tests and/or user acceptance testing (UAT); we address deployment issues within a stated grace/triage/warranty period; we often correct issues directly in production.

Walk (5-6): Deployments are verified with manual tests prior to releasing to end users; rolling back is painful or impossible; we do not make changes directly in production.

Run (7-8): Deployments are verified using automated smoke tests, synthetic transactions and penetration tests prior to release; we can easily roll back or fix forward to recover from failed deployments.

Fly (9-10): Automated production tests run on an ongoing basis and feed monitoring systems; failed deployments can be rolled back instantly or fixed forward through the entire pipeline.
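An automated smoke test can be as small as hitting a health endpoint right after deployment and failing the pipeline step on anything unexpected. A sketch using only the standard library; the `/health` path and the JSON shape are assumptions, not a real API:

```python
import json
import urllib.request

def smoke_test(base_url: str, timeout: float = 5.0) -> bool:
    """Verify a deployment by checking a hypothetical /health endpoint.
    Returns True only when the service reports itself healthy."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            if resp.status != 200:
                return False
            body = json.loads(resp.read())
            return body.get("status") == "ok"
    except OSError:
        # Connection refused, timeouts, HTTP errors: treat the deployment as failed.
        return False

# In a pipeline step, a non-zero exit on False would block the release, e.g.:
# sys.exit(0 if smoke_test("https://staging.example.com") else 1)
```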


Monitor

Monitoring implies that full-stack telemetry is active for all features deployed through the Continuous Delivery Pipeline so that system performance, end-user behavior, incidents and business value can be determined quickly and accurately in production.

Rate your team's effectiveness at monitoring the full solution stack and ability to analyze feature value based on these events.

Sit (1-2): No feature level production monitoring exists; only infrastructure monitoring is in place.

Crawl (3-4): Features only log faults and exceptions; analyzing events involves manually correlating logs from multiple systems.

Walk (5-6): Features log faults, user activity and other events; data is analyzed manually to investigate incidents and measure business value of Features.

Run (7-8): Full-stack monitoring is in place; events can be correlated throughout the architecture; data is presented through system-specific dashboards.

Fly (9-10): Federated monitoring platform provides one-stop access to full-stack insights; data is used to gauge system performance and business value.


Respond

Responding to unforeseen production incidents is critical to the Continuous Delivery Pipeline.

Rate your team's effectiveness at proactively detecting high severity production issues, identifying root causes using monitoring systems and quickly resolving issues by building, testing and deploying fixes through the pipeline (versus applying changes directly in production).

Sit (1-2): Customers find issues before we do; resolving high priority issues is time consuming and reactive; customers have low confidence in our ability to recover from production issues.

Crawl (3-4): Operations owns production issues; development involvement requires significant escalation; teams blame each other in times of crisis.

Walk (5-6): Development and Operations collectively own the incident resolution process; recovering from major incidents is reactive but a team effort.

Run (7-8): Our monitoring systems detect most issues before our customers do; Dev and Ops work proactively to recover from major incidents.

Fly (9-10): Our monitoring systems alert us to dangerous conditions based on carefully-designed tolerance thresholds; Developers are responsible for supporting their own code and proactively issue fixes through the pipeline before users are affected.


Release on Demand

Release

Releasing involves making deployed features available to end users.

Rate your team's ability to release features to users on demand using feature toggles, blue/green environments, canary releases, and so on.

Sit (1-2): Releases are tightly coupled to deployments and customers are extremely dissatisfied with the frequency of releases.

Crawl (3-4): Releases are tightly coupled to deployments but customers are somewhat dissatisfied with the frequency of releases.

Walk (5-6): Release and deployment are coupled but both occur continuously or on demand.

Run (7-8): Release is decoupled from deployment; deployed features are released to the end user population based on business readiness.

Fly (9-10): Deployed Features can be released to individual segments of the user population; feature toggles are refactored when no longer used.
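Releasing to "individual segments of the user population" is often implemented with deterministic bucketing: hash the user id together with the feature name and enable the feature for a percentage of buckets. A sketch; the feature name, ids and rollout percentage are made up:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in or out of a gradual rollout.
    The same user always gets the same answer for the same feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # stable bucket in 0..99
    return bucket < percent

# Canary at 10%: roughly one user in ten sees the new feature.
users = [f"user-{i}" for i in range(1000)]
canary = [u for u in users if in_rollout(u, "new_search", 10)]
print(f"{len(canary)} of {len(users)} users in the 10% canary")
```

Because the bucketing is deterministic, ramping from 10% to 50% only adds users; no one who already has the feature loses it.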


Stabilize

The Continuous Delivery Pipeline requires the production environment to be continually stable, reliable, available, supportable and secure.

Rate your team's effectiveness at maintaining stable solutions that avoid unplanned down time and security breaches.

Sit (1-2): We experience frequent unplanned outages and/or security breaches with long recovery times.

Crawl (3-4): We experience occasional unplanned outages but recover within our service level agreements.

Walk (5-6): We have very few unplanned outages; availability, security, and disaster recovery measures are effective.

Run (7-8): We have no unplanned outages; we plan and rehearse failure and recovery.

Fly (9-10): We maximize resiliency by deliberately injecting faults into our production environment and rehearsing recovery procedures.


Measure

Measurement involves collecting factual information about the value of a deployed feature and evaluating it against the original hypothesis statement.

Rate your team's ability to collect objective information about the actual value realized by deployed features so that it can inform strategic financial decisions.

Sit (1-2): We don’t define or measure the value of Features.

Crawl (3-4): We’ve defined what "value" is but don’t know how to measure it.

Walk (5-6): We capture qualitative feedback from the business about the value of our Features.

Run (7-8): We capture qualitative and quantitative feedback from the business and our monitoring systems about the value of our features.

Fly (9-10): We aggregate the quantitative and qualitative feedback to objectively validate the original hypothesis and inform pivot-or-persevere decisions.


Learn

Learning entails making a judgment call to validate or invalidate the original hypothesis based on objective measures of business value, system performance and customer feedback.

Please rate your team's ability to make strategic, pivot-or-persevere decisions based on empirical performance data and commitment to actively applying those insights to continuously improve the pipeline.

Sit (1-2): Features are never evaluated post-release.

Crawl (3-4): Features are sometimes evaluated using subjective information and/or unilateral opinions.

Walk (5-6): Hypotheses are evaluated using objective measures but actions are heavily influenced by corporate politics.

Run (7-8): Hypotheses are objectively evaluated; pivot-or-persevere decisions are made without mercy or guilt.

Fly (9-10): Continuous learning and experimentation are ingrained in the DNA of the organization.