Climate for Learning: Building Generative Engineering Cultures
At the start of the DevSecOps transformation at a Tier-1 bank, the assumption was that the hardest problems would be technical — migrating legacy pipelines, integrating SAST tooling, automating compliance gates. That assumption was wrong. The hardest problem was culture. Engineers had spent years in an environment where production incidents triggered blame, where raising concerns about technical debt was treated as disloyalty, and where the safest career move was to stay quiet and follow the prescribed process. The tooling was fixable. The learned helplessness was not — at least not quickly.
It took two years to understand that culture change is not a programme you run alongside the technical transformation. It is the transformation. Ron Westrum's research on organisational typologies provided the language, and Amy Edmondson's work on psychological safety provided the framework. The DORA research programme then supplied the empirical evidence to make the business case: generative culture is not a "nice to have" — it is a statistically significant predictor of software delivery performance, organisational performance, and employee wellbeing. Every technical capability invested in — CI/CD, observability, infrastructure as code — delivered returns only to the degree that the surrounding culture allowed engineers to use them effectively.
Why Culture Is an Engineering Capability
A positive climate for learning fosters psychological safety, encourages experimentation, and promotes continuous improvement. In engineering organisations, culture is not separate from technical capability — it is the substrate on which technical capability grows. You can deploy the most sophisticated CI/CD platform in the industry, but if engineers are afraid to push code to production because a failed deployment means a career-limiting incident report, your investment is wasted.
The Westrum Organisational Culture Model
In 2004, sociologist Ron Westrum published "A Typology of Organisational Cultures" in BMJ Quality & Safety. He identified three types of organisational culture, distinguished by how information flows:
| Characteristic | Pathological (Power-Oriented) | Bureaucratic (Rule-Oriented) | Generative (Performance-Oriented) |
|---|---|---|---|
| Information | Hidden | Ignored | Actively sought |
| Messengers | Shot | Neglected | Trained |
| Responsibilities | Shirked | Narrow | Shared |
| Bridging | Discouraged | Tolerated | Encouraged |
| Failure | Scapegoating | Justice | Inquiry |
| Novelty | Crushed | Creates problems | Implemented |
The Westrum model is powerful because it connects culture directly to outcomes. In generative cultures, information flows freely, failure leads to learning rather than punishment, and novelty is implemented rather than crushed. The DORA research confirmed this empirically: Westrum culture is predictive of both software delivery performance and organisational performance.
What Generative Culture Looks Like in Practice
A generative culture is one where individuals feel safe to take risks, learn from failures, and openly share ideas. This type of culture is essential for fostering innovation and continuous improvement. In practice, when leading engineering teams in banking, generative culture manifests in specific, observable behaviours:
- Blameless Postmortems: Conducting postmortems without blaming individuals to learn from failures and prevent future incidents. At the bank, we introduced a strict rule: postmortem documents could not contain individual names. The focus was always on systemic factors — what process, tooling, or environmental condition allowed the failure to occur?
- Open Communication: Encouraging open and honest communication among team members, including the ability to challenge decisions made by senior leaders without fear of retaliation.
- Psychological Safety: Creating an environment where team members feel safe to express their thoughts, admit mistakes, and ask questions without fear of retribution.
- Shared Ownership: Moving away from siloed "my code, your problem" thinking toward collective ownership of quality, reliability, and security.
Example: Transforming Incident Response Culture in Banking
At the bank, the incident response process was deeply pathological. A production incident would trigger a "war room" where the first 30 minutes were spent assigning blame rather than restoring service. Engineers had learned to hide information, delay incident declarations, and write postmortem reports that were exercises in self-protection rather than learning.
The team replaced this with a structured, blameless incident response process modelled on Google's SRE practices. The key changes were:
- Incident Commander role separated from the engineering teams involved, eliminating the conflict of interest in declaring and managing your own incidents.
- Postmortem template that explicitly prohibited individual blame and required identification of systemic contributing factors.
- Postmortem review sessions that were open to the entire engineering organisation, normalising the idea that incidents are learning opportunities.
- "Excellent Postmortem" awards that celebrated the most insightful analyses, creating positive incentives for candour.
Within 18 months, the mean time to declare an incident dropped by 40% — engineers were no longer afraid to raise the alarm. The mean time to recovery improved in parallel, because information was flowing freely during incidents rather than being hoarded.
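Both of these metrics are straightforward to compute from incident timestamps. A minimal sketch, assuming each incident record carries detection, declaration, and resolution times (the field names here are illustrative, not the bank's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class Incident:
    detected_at: datetime   # first alert or observation
    declared_at: datetime   # formal incident declaration
    resolved_at: datetime   # service restored

def mean_time_to_declare(incidents: list[Incident]) -> timedelta:
    # Gap between detection and declaration: a proxy for whether
    # engineers feel safe raising the alarm early.
    return timedelta(seconds=mean(
        (i.declared_at - i.detected_at).total_seconds() for i in incidents))

def mean_time_to_recovery(incidents: list[Incident]) -> timedelta:
    # Gap between detection and restoration of service.
    return timedelta(seconds=mean(
        (i.resolved_at - i.detected_at).total_seconds() for i in incidents))

t0 = datetime(2024, 1, 1, 9, 0)
incidents = [
    Incident(t0, t0 + timedelta(minutes=10), t0 + timedelta(hours=1)),
    Incident(t0, t0 + timedelta(minutes=20), t0 + timedelta(hours=2)),
]
print(mean_time_to_declare(incidents))   # 0:15:00
print(mean_time_to_recovery(incidents))  # 1:30:00
```

Tracking the declaration gap separately from recovery time is deliberate: the first measures willingness to speak up, the second measures operational effectiveness, and a pathological culture can degrade the first long before the second shows any movement.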
Psychological Safety
Amy Edmondson, the Novartis Professor of Leadership and Management at Harvard Business School, defined psychological safety as "a shared belief held by members of a team that the team is safe for interpersonal risk-taking." In The Fearless Organization (2018), she demonstrated through two decades of research that psychological safety is the foundation of high-performing teams.
Edmondson's research shows that psychological safety does not mean an absence of accountability. High-performing teams combine high psychological safety with high performance standards. The absence of safety with high standards produces anxiety. The absence of both produces apathy. Safety without standards produces comfort — but not excellence.
Emerging empirical research reinforces the connection between psychological safety and sustained engineering performance. Sesari, Sarro, and Rastogi (2025) studied over 60,000 pull requests across multiple open-source repositories and found that contributors show greater likelihood of sustained participation — both short-term (1 year) and long-term (4-5 years) — in repositories with higher psychological safety levels, as measured through merge decisions, comment patterns, and interaction quality. Their data-driven methodology for assessing psychological safety at scale provides a model for how engineering organisations can move beyond survey-based measurement to continuous, objective assessment of team health.
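The study's actual feature set is far richer than anything reproduced here, but repository-level proxies in the spirit it describes — merge decisions, discussion patterns, how newcomers fare — can be sketched from pull-request data. The field names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    merged: bool
    review_comments: int
    author_is_newcomer: bool

def safety_proxies(prs: list[PullRequest]) -> dict[str, float]:
    # Coarse repository-level signals: how often contributions are
    # accepted, how much discussion they receive, and whether
    # newcomers are merged at a similar rate to established authors.
    total = len(prs)
    newcomers = [p for p in prs if p.author_is_newcomer]
    return {
        "merge_rate": sum(p.merged for p in prs) / total,
        "newcomer_merge_rate": (
            sum(p.merged for p in newcomers) / len(newcomers)
            if newcomers else float("nan")),
        "avg_review_comments": sum(p.review_comments for p in prs) / total,
    }

prs = [
    PullRequest(True, 3, False),
    PullRequest(True, 1, True),
    PullRequest(False, 5, True),
    PullRequest(True, 2, False),
]
print(safety_proxies(prs))
```

A persistent gap between the overall merge rate and the newcomer merge rate is one of the cheaper early-warning signals a team can monitor continuously rather than waiting for an annual survey.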
The Psychological Safety and Performance Standards Matrix
| | Low Performance Standards | High Performance Standards |
|---|---|---|
| High Psychological Safety | Comfort Zone | Learning and High Performance Zone |
| Low Psychological Safety | Apathy Zone | Anxiety Zone |
The goal is the top-right quadrant: teams that feel safe to take risks, ask questions, and admit mistakes, while simultaneously being held to exacting standards of quality, reliability, and delivery.
Example: Psychological Safety in Regulated Environments
One of the paradoxes of building psychological safety in banking is that you are operating in one of the most heavily regulated environments on earth. There are genuine consequences for compliance failures. The temptation is to create a culture of fear around compliance — but fear produces concealment, not compliance.
We addressed this by separating two distinct concerns: the system is accountable for compliance (automated controls, pipeline gates, audit trails), while people are accountable for learning and improvement. When an engineer bypassed a security control, the response was not punishment — it was inquiry. Why did they feel the need to bypass it? Was the control creating friction without adding value? Was there a gap in training? This approach consistently uncovered systemic issues that, once fixed, prevented entire categories of future violations.
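Making the system accountable for compliance can be as simple as a pipeline check that reports gaps mechanically, leaving humans free to ask why the gap arose. A minimal sketch — the gate names are illustrative, not the bank's actual control set:

```python
# Required controls every pipeline run must satisfy; names are illustrative.
REQUIRED_GATES = {"sast", "sca", "secrets_scan", "audit_trail"}

def compliance_gaps(gate_results: dict[str, bool]) -> set[str]:
    # A gate counts as a gap if it was skipped (absent from the
    # results) or failed (False). The system reports *what* is
    # missing; the blameless inquiry then asks *why*.
    passed = {gate for gate, ok in gate_results.items() if ok}
    return REQUIRED_GATES - passed

run = {"sast": True, "sca": True, "secrets_scan": False}  # audit_trail skipped
print(sorted(compliance_gaps(run)))  # ['audit_trail', 'secrets_scan']
```

Because the check is deterministic and runs on every pipeline, the compliance record exists regardless of who is involved, which removes the incentive to conceal a bypass and shifts the conversation to the systemic question of why the bypass was needed.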
Empowering Teams to Choose Tools
Empowering teams to choose the tools that best suit their needs promotes autonomy and efficiency. The DORA research found that teams that can choose their own tools are more likely to be high performers. This does not mean anarchy — it means providing sensible defaults (golden paths) while allowing teams to deviate when they have a justified reason.
In banking, this is a delicate balance. Enterprise architecture teams often mandate tooling for legitimate governance and procurement reasons. The key is distinguishing between constraints that serve genuine risk management purposes (approved container registries, mandated SAST tools) and constraints that exist purely out of organisational inertia (everyone must use the same IDE, all projects must use the same test framework).
Practices that support team autonomy:
- Tool Evaluation: Allowing teams to evaluate and select tools based on their specific requirements, within the bounds of security and compliance policy.
- Autonomy in Decision-Making: Granting teams the authority to make architectural and tooling decisions that affect their own domain, while maintaining alignment on cross-cutting concerns.
- Support for Experimentation: Encouraging teams to experiment with new tools and technologies through time-boxed spikes and proof-of-concept work. At the bank, we created a formal "Innovation Time" allocation — two days per sprint for experimentation and learning.
- Golden Paths, Not Golden Cages: Providing well-supported default toolchains that make the right thing easy, while keeping the door open for teams that need to diverge.
Example: Tool Autonomy in Practice
When we rolled out the DevSecOps pipeline platform, we offered a golden-path CI/CD template that included compilation, testing, SAST, DAST, SCA, and deployment. Most teams adopted it gratefully — it saved them weeks of pipeline engineering. But one team working on a high-frequency trading system needed sub-second build times that the standard template could not deliver. Rather than forcing them into a one-size-fits-all solution, we gave them the freedom to build a custom pipeline, with the constraint that it must satisfy the same security and compliance gates. They delivered a pipeline that was 10x faster than the standard and eventually contributed optimisations back to the golden path that benefited everyone.
Measuring Culture
Culture can feel intangible, but it can be measured. The most effective approaches include:
- Westrum survey instruments embedded in quarterly engineering health checks, measuring information flow, collaboration, and novelty acceptance.
- Incident metrics as cultural proxies: Time to declare incidents, postmortem completion rates, and the ratio of systemic vs. individual findings in postmortem reports.
- eNPS (Employee Net Promoter Score) for engineering teams, tracked over time to detect trends.
- Voluntary attrition rates as a lagging indicator — by the time this metric moves, the cultural damage has already been done.
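Of these, eNPS has a simple arithmetic definition worth pinning down: the percentage of promoters (scores of 9-10 on the standard 0-10 recommendation scale) minus the percentage of detractors (0-6). A minimal sketch:

```python
def enps(scores: list[int]) -> float:
    # Standard NPS arithmetic on the 0-10 "would you recommend
    # working here?" scale: promoters score 9-10, detractors 0-6,
    # and passives (7-8) count only in the denominator.
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(enps([10, 10, 9, 7, 5]))  # 40.0
```

The absolute number matters less than the trend: a falling eNPS across consecutive quarters is an earlier signal than attrition, which only moves after the damage is done.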
Benefits
- Increased Innovation and Creativity: When people are safe to experiment, they try more things, and some of those things work brilliantly.
- Improved Employee Satisfaction and Retention: Engineers stay at organisations where they feel respected, heard, and able to grow. In a competitive talent market, culture is a retention strategy.
- Faster Adaptation to Change: Generative cultures respond to change — regulatory changes, market shifts, technology disruptions — faster because information flows freely and novelty is implemented rather than crushed.
- Better Software Delivery Performance: The DORA research demonstrates a statistically significant relationship between Westrum generative culture and software delivery performance metrics.
References
- Westrum, R. (2004). "A Typology of Organisational Cultures." BMJ Quality & Safety, 13(suppl 2), ii22-ii27. The foundational paper establishing the pathological-bureaucratic-generative culture typology and its relationship to information flow and safety outcomes.
- Edmondson, A.C. (2018). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley. Two decades of research on psychological safety, with practical frameworks for leaders seeking to build high-performing teams.
- Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press. Provides the empirical evidence linking Westrum generative culture to software delivery performance and organisational outcomes.
- Edmondson, A.C. (1999). "Psychological Safety and Learning Behavior in Work Teams." Administrative Science Quarterly, 44(2), 350-383. The original academic paper establishing the construct of psychological safety and its relationship to team learning behaviour.
- Dekker, S. (2014). The Field Guide to Understanding 'Human Error'. CRC Press. Essential reading on just culture, systems thinking in incident analysis, and why blame is both unjust and counterproductive.
- Google re:Work. "Guide: Understand Team Effectiveness." Available at rework.withgoogle.com. Google's Project Aristotle research, which independently confirmed psychological safety as the most important factor in team effectiveness.
- Sesari, E., Sarro, F., & Rastogi, A. (2025). "Safe to Stay: Psychological Safety Sustains Participation in Pull-based Open Source Projects." arXiv:2504.17510. https://arxiv.org/abs/2504.17510