Contents
1. Lessons in Scrum
1.1. Honour Your Retrospectives
1.2. Conducting Comprehensive Sprint Demos
1.3. Product Ownership and Scrum
1.4. QA and UI Test Automation
1.5. Product Representatives as Scrum Masters
1.6. Don’t Let Engineers Define the Product
1.7. Don’t Skimp on Wireframes for Interactive Applications
Sprint demos are a well-known ceremony in Scrum, but the typical guidance on how to conduct them can be thin, and I see several missed opportunities. This is especially true for technical tasks that are not user-journey centric, such as code quality and refactoring efforts, or the specific activities of QA and DevOps. Demoing working software is meaningful, but how can we demo that it was built with quality in mind, that it was appropriately QAed, or that it was done with an eye to the "ilities"?
Some of the most effective sprint demos I’ve seen were run by the Scrum Master, because they focused on the way the business likes to see product development. They did not highlight the individual contributions of developers and QA, because that tends toward a level of detail the business need not care about; if the software works as expected, they’ll consider the sprint a success and move on. There is nothing wrong with this. However, agile software development is incremental, and although the aim should be to produce demo-able increments, this isn’t always possible; forcing the team into an unnatural mode just for the sake of demoing working software every sprint may not be optimal for their needs. An accountability mechanism is still needed when this happens, and in my view it can still be generated using an unorthodox approach.
Whether the work consists of technical sub-tasks broken down from a user story, or technical tasks related to refactoring or architectural frameworks, we can be creative about the way we show progress in the right direction, so that developers can prove they are doing what they said they would. On these occasions the accountability needs to be proven not to the business, but to engineering leadership. As the technology executive, seeing incremental progress on technical items should be one of your key goals.
1.2.1. Demoing Front End Only
As a general practice, within a team in a vertical (my preference), but especially if teams are horizontal, I’ve liked front-end developers to code to mock interfaces or web services so that their development can proceed in parallel with, and at a different pace from, the back-end work. This not only makes it possible to demo a working (if mocked) front end, it also enables Quality Engineers (hereafter QEs) to commence writing their automated UI tests.
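As a sketch of the mock-service idea, a throwaway stub like the one below gives front-end developers and QEs something stable to code against before the real back end exists. The endpoint path and payload here are hypothetical, standing in for whatever contract the team has agreed:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned payload standing in for the agreed /api/orders contract.
CANNED_ORDERS = [{"id": 1, "status": "SHIPPED"}, {"id": 2, "status": "PENDING"}]

class MockApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/orders":
            body = json.dumps(CANNED_ORDERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep demo output quiet
        pass

# Port 0 asks the OS for any free port; run the stub on a background thread.
server = HTTPServer(("127.0.0.1", 0), MockApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The front end (or a UI test) consumes the mock exactly as it would the real API.
url = f"http://127.0.0.1:{server.server_port}/api/orders"
with urllib.request.urlopen(url) as resp:
    orders = json.load(resp)
server.shutdown()
```

Because the stub honours the agreed contract, swapping it for the real service later should be a configuration change rather than a rewrite.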
1.2.2. Demoing Functionally Driven Back End Work Only
On the flip side of the coin, API work can be demoed to engineering leadership using endpoint-invoking tools such as Postman, and if the endpoints are Swagger-enabled, they can be invoked interactively during the demo. A wonderfully meaningful goal of a back-end-only demo is to demonstrate passing API tests (which should almost always have been written) by actually running them during the ceremony.
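A demo-friendly API test can be as small as a contract check run live during the ceremony. The endpoint shape and field names below are hypothetical; a real suite would fetch the response from the Swagger-documented URL rather than use a simulated payload:

```python
# Fields the hypothetical /users/{id} endpoint has promised to return.
REQUIRED_FIELDS = {"id", "email", "created_at"}

def check_user_contract(payload: dict) -> list:
    """Return a list of contract violations for a /users/{id} response."""
    problems = []
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if not isinstance(payload.get("id"), int):
        problems.append("id must be an integer")
    return problems

# Simulated response, standing in for json.load(urlopen(url)) in the demo.
response = {"id": 42, "email": "a@example.com", "created_at": "2023-01-01"}
assert check_user_contract(response) == []          # contract satisfied
assert check_user_contract({"id": "42"}) != []      # violations are reported
```

Running checks like this against the live endpoint during the demo turns "the API works" from a claim into something the audience watches pass.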
1.2.3. Demoing Framework and Non-Functionally Driven Work
By its very nature, framework code can be harder to demonstrate than functionally driven work, but that doesn’t mean it should escape accountability to promised delivery milestones and code quality. Framework projects can quickly get away from you: they tend to be more difficult tasks, it’s harder to know out of the gate what reasonable assumptions are, and from time to time you might find that a particular approach just won’t work. This can introduce long delays as developers wrestle with trying to make their original plan work before finally giving up and researching or experimenting with alternatives.
Also, the true intellectual owner of this work is usually not a business-minded Product Owner; it tends to be the Architect, operating in a capacity that I describe as the Technical Product Owner. For an Architect with solid sensibilities about accountability to the business this might be fine, but for less inclined Architects (purely engineering-minded ones) they may not place as much importance on project planning and visibility.
It therefore becomes particularly important to plan these initiatives as tightly as is practicable, meaning that the natural break points of an increment should be very well defined, and defined in such a way that they can be explained to you as the technology executive sponsoring the project. The Architect should be on the hook to find ways to help their developers demonstrate that they have achieved their milestones, at least by way of code reviews if nothing else. If they cannot prove that the initiative is working as planned because a developer just cannot get something to work, they must help you make a balanced choice about how much of a sunk cost you’re prepared to incur before trying alternatives. Some examples of demonstrable framework increments:
- Proving that an event publisher is actually pushing events into your event framework (Kafka, AWS EventBridge or SNS, for example), even if a downstream consumer isn’t configured
- Proving that an event consumer can consume events by publishing mock events
- Demonstrating that an AWS Lambda function fires in response to some event by configuring a mock implementation that, for example, writes to your logging implementation
- Demonstrating that your JWT tokens are populated with all the appropriate roles and permissions from your authorisation apparatus even if the front-end app isn’t ready to use the tokens yet
- Demonstrating that your database sharding mechanism is working by mocking an implementation that uses, for example, a mock user identifier to direct queries to separate databases
- Demonstrating your circuit breaker implementation is working by deliberately killing a dependency service and bringing it back up again
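The last item above can be shown live with a toy breaker: "kill" the dependency, watch the breaker open, then show calls failing fast. This is a minimal sketch of the pattern, not any particular library's implementation:

```python
import time

class CircuitOpenError(RuntimeError):
    pass

class CircuitBreaker:
    """Toy circuit breaker: opens after `threshold` consecutive failures,
    then lets one call through once the `reset_after` cool-down elapses."""
    def __init__(self, threshold=3, reset_after=5.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open; failing fast")
            self.opened_at = None  # half-open: allow one probe call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Demo: the dependency is "down", so two failures trip the breaker...
breaker = CircuitBreaker(threshold=2, reset_after=60)

def flaky_service():
    raise ConnectionError("dependency down")

for _ in range(2):
    try:
        breaker.call(flaky_service)
    except ConnectionError:
        pass

# ...and the next call fails fast without touching the dependency at all.
try:
    breaker.call(flaky_service)
    state = "closed"
except CircuitOpenError:
    state = "open"
```

In a real demo the `flaky_service` stand-in would be an actual downstream service you stop and restart, but the observable behaviour is the same.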
More than anything else, you’re going to depend very heavily on your Architect to make sure this happens with full transparency, so don’t let them off the hook if things start to slip. Ensure they’re explaining the risks of a given approach and making you aware of when their effort starts to yield diminishing returns. As stated earlier, the decision to bail on an approach is one that you should be making, because you’ll be the most sensitive to cost and risk; engineers might be happy to continue slogging it out, but you have to be able to make the call to try something else when you feel the time is right.
1.2.4. Demoing Code Quality Work and Metrics
Code quality counts, and if we’re going to insist that developers pay attention to it every sprint, we need an accountability mechanism to ensure they are. Again, CodeScene can come to the rescue here, as it enables developers to show that their commits are contributing to a higher (or at least not lower) code health score. Additionally, for a given architectural component (as configured in CodeScene) they can demonstrate that, as a team, the overall health remained level or improved. If a decline took place, it needs to be justified, and the team should be prepared to speak to their plan for bringing it back up. They should also be able to speak to the amount of time they spent on code quality work; this is often best handled as an administrative activity, with management monitoring the number of code quality tickets (work items in Jira, for example) and their total estimates. Demoing CodeScene code health scores to the business can also be meaningful to them, as it builds confidence that you’re extending their platform responsibly.
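The "level or improved" rule can even be automated as a simple gate. The component names and scores below are hypothetical stand-ins for whatever your quality tooling reports (CodeScene health scores run 1 to 10); in practice a script would pull them from the tool rather than hard-code them:

```python
# Hypothetical per-component code-health scores at sprint start and end.
baseline = {"billing": 8.2, "auth": 9.1, "reporting": 6.5}
current  = {"billing": 8.4, "auth": 9.1, "reporting": 6.1}

def health_gate(baseline, current, tolerance=0.2):
    """Return the components whose health dropped by more than `tolerance`;
    an empty list means the sprint kept code health level or better."""
    return sorted(
        name for name, score in current.items()
        if baseline.get(name, score) - score > tolerance
    )

declined = health_gate(baseline, current)
# Here only "reporting" declined beyond tolerance: a drop the team
# should be prepared to justify in the demo.
```

A non-empty result doesn't have to fail the build; surfacing it in the demo so the team speaks to it may be accountability enough.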
1.2.5. Demoing QA Tasks
Oftentimes the work of QEs is behind the scenes, but one of the most fun and meaningful things that can be demonstrated is running end-to-end UI tests during the demo. Not only does it show the software works, it shows that you’ve created test coverage, again building confidence that you’re extending the system responsibly. Take every opportunity to build trust with the business. As a technology executive it is really satisfying to see an automation suite run, because it’s a sign that your organisation is working properly; appropriate, running tests are evidence that the relationship between Product and Engineering is functioning. Not only is this confidence-building for you as an executive, it also gives the QEs a chance to tout their work.
1.3. Product Ownership and Scrum
Engineering can do their level best to implement a solid Scrum process, performing all the required ceremonies and maintaining good engineering discipline and principles, but without the Product part of the team truly integrated, they will struggle to get beyond a friction-fraught experience and things will never truly feel “right”. A lot of engineers don’t know what this relationship should look like in its optimal form. This is usually no one person's fault on either side; it is an under-developed relationship.
It’s a worn-out refrain from engineers that “the business doesn’t state written requirements well enough”, and in my distant past I have been guilty of it, yet I now recognise it for the cop-out I feel it is. It’s like blaming the driver in the next lane for not braking to prevent the crash you caused by swerving into their lane without looking over your shoulder. As a driver you have an accountability to prevent the preventable, and so does a development team. It is on the team to ensure that expectations are unequivocally clear and the output testable. This can be achieved if product ownership, Scrum Masters, developers, and QEs all understand the optimal roles they should play. If the engineers are not coming back with engaging questions during refinement sessions, it's fair for the Product Owner to assume the engineers have what they need. This is a make-or-break consideration.
Solving this can be tough, but make no mistake: if the reasons this problem exists are not understood, your engineering organisation will struggle to succeed. I’ve seen engineers who were not even particularly talented turn out relevant, quality product when Product was an organic part of their team. Loading a development team up with super-talented senior engineers will not alone solve this problem; they tend to speak up and push back much earlier that they don’t know what they’re building, but possibly without being articulate about what they need from Product, either. As a technology executive it falls to you to support the integration of the Product function by being responsive to Product ownership’s needs and ensuring that the customer-service relationship that should exist between the two functions is well defined and executed as designed.
As it relates to functional code, the answers to this problem are stated directly in the Agile Manifesto, so I won’t describe them here. In my experience, if there is one portion of the Manifesto that you must make sure is working properly, it’s this one. I don’t feel the Manifesto holds up as well when dealing with pure technical tasks, or sometimes even sub-tasks of a user story, but I’ve already covered this in the section “Demoing Framework and Non-Functionally Driven Work”.
1.4. QA and UI Test Automation
It takes a disciplined team to complete UI test automation within the same sprint in which functionality is developed; very good UI wireframes need to be developed up-front, and user journeys need to be thoroughly thought through. This is the absolute underpinning of your quality engineering efforts for UI-centric functionality.
If this function isn’t well oiled, developers often try to work out the details themselves. This practice is poison to a QE; by its very nature, it forces the QA function into a reactive, pipelined mode of execution relative to the development effort itself, because they cannot write tests for something that is inadequately described and that they cannot see until it’s developed. Not everything is lost if this isn’t working optimally, because writing tests in the sprint after is an option too; it just takes longer for functionality to get out the door once it is started. If this is all you can manage it’s a good first step, but you should aim to shift this left. Doing so can take a lot of effort, though, and if the gains aren't significant at first for the effort invested, you might get more value from an improvement initiative elsewhere and come back to this later.
Another effect of inadequately specified stories is that you’ll often find engineers trying to re-engage with the business for clarification partway through a sprint, when they work out that there were aspects that weren’t fully thought through – after the delivery estimates were given. This can happen from time to time to the best of teams, but it is really easy to get lazy and skip this discipline, causing it to occur routinely. That precipitously drops the predictability and the trustworthiness of the engineering team.
It is entirely incumbent on Scrum masters and QA management to ensure that QEs are fully enabled to create automated UI tests by way of properly specified user stories. Tests should be considered first-class artifacts in a team’s sprint backlog and need to be estimated during sprint planning. Failure here will cast a long shadow and cause functionality to get well ahead of test coverage.
In contrast to UI tests, and especially as it relates to back-end web service work, the data models and interface semantics are usually very well defined, and there is generally little reason that API tests cannot be developed concurrently with the functionality itself. The team shouldn’t be overloaded with functional work at such a pace that API test development cannot keep up. It can be helpful to have other developers on the team write the back-end tests, to ensure neutrality and aid knowledge management.
1.5. Product Representatives as Scrum Masters
Especially in smaller organisations it isn’t uncommon to see product- and analyst-focused staff sitting over a development team, fulfilling the function of Scrum Master. On the face of it this makes a lot of sense, because their affinity to Product can inform their Scrum Master persona and give them a good sense of what needs to be developed. However, except with disciplined individuals, I believe there are risks here, as there is a built-in conflict of interest between the two functions.
On one hand you have the Scrum Master persona, whose job is to represent developers by playing devil's advocate and brokering agreement with Product to ensure user stories are thoroughly specified, and to own the accountability to Product for their execution. On the other hand, you have the analyst and product persona, who implicitly knows the functional expectations. If both these personas are held by an undisciplined person, there can be a natural tendency to say “I know what I mean by this” and to skimp on the appropriate details of a user story. I've seen this leave developers and QEs guessing (especially new team members, who have fewer operating assumptions), and it will precipitate the situation described in the last section.
Moreover, if the headcount exists to permit it, Scrum Masters must report to Engineering, not to Product. This may seem intuitively obvious, but not every organisation I’ve worked with has seen it this way. Ultimately, engineers need a direct line of accountability to technology leadership. When Scrum Masters live in a different organisation this accountability is broken, and in effect engineers report to Product. The arrangement limits the influence of technology leadership to change outcomes; technology (especially Architecture) tries to enforce standards, but necessarily less frequently than the engineers’ daily interactions with their Scrum Master. A Scrum Master owned by Product is motivated by fast delivery, while Architecture is motivated by correct delivery. Optimally these would be in balance, but in practice this arrangement dilutes the patterns-and-standards message; under pressure to deliver, what developers are told to do on a daily basis will override the way in which they are supposed to do it.
A last perspective on this assertion: fundamentally, once a sprint is kicked off, it is the Scrum Master function that is intended to shepherd the engineers through their day-to-day execution, to help manage technical and project dependencies across multiple teams, and to prepare the team to demonstrate its work at sprint end. Product should be managing product backlogs, not engineers.
1.6. Don’t Let Engineers Define the Product
Sometimes really simple things that, in retrospect, seem kind of obvious can, if not done well, cause ripples of expense throughout your entire SDLC and release cycle.
I’ve spent enough time writing UI code in my engineering past to know I was a back-end engineer at heart, and really focused my efforts on excelling there instead. Earlier in my career I had been of the mindset that I could architect a solid backend, throw some properly granular, performant APIs at the UI team, and 75% of my job was done. Naturally this wasn’t done with complete insensitivity to the needs of the UI team but providing a great UX wasn’t forefront in my mind, either.
It took some introspection, but in time I cottoned onto the idea that although I could architect and lead the development effort of a large team of engineers in the implementation of an eminently scalable, performant, message-driven platform of which I could be proud, these platform attributes are amongst the most disposable of concepts to the way the average business describes their ideas. Of course these attributes had to be there no matter what the business proposition, and I was never going to architect a platform that failed on that front.
Yet of course the business never describes a product in terms of those things; that’s the speak of engineers describing a platform. What was important to me wasn’t remotely coupled to the way the business would describe the value proposition, and the actual business rules themselves were only important to me in as much as I needed to create an extensible and flexible way to model them without backing the product into a functionality corner within two years.
The purpose of taking the reader through this entirely self-evident journey is to make a single point: a lot of engineers are all too happy to create a platform and work for months on the architecture without it being remotely tied to business value. It’s super obvious that for an interactive experience, the user journey is what best describes what’s required of the system, and it drives right to the heart of defining MVP. But here’s the kicker: without a solid Product Owner’s leadership to focus your engineers, a back-end and front-end framework is exactly what you’ll get. With proper teaming between Product and Architecture, you’ll get to value faster, and the rest will come out in the wash.
1.7. Don’t Skimp on Wireframes for Interactive Applications
Further to my point in the last section, I’m going to advocate for wireframing as a starting point for a product; yes, it is essential for visualising the business’ high-level expectations, but critically, it is also the absolute underpinning of your quality engineering effort.
I’ve lost count of the number of front-end efforts I’ve witnessed in which a user interface screen or element has been described in words in a “user story”, with perhaps a rudimentary wireframe attached to the ticket showing general layout and the data required. Yet the actual sequence of user interactions within that UI – click sequences, error handling, required fields, conditionally visible elements and the like – is left to the interpretation of the Scrum Master and engineers. As described in the QA and UI Test Automation section, this is unequivocally poison to a QE: it forces the QA function into a reactive, pipelined mode of execution, leaves engineers re-engaging with the business mid-sprint after estimates were given, and if it becomes routine, precipitously drops the predictability and trustworthiness of the engineering team.
Next Up
In this post I spent some time discussing why the Product and Technology organisations must be intimately connected, and their relationship well defined and executed as defined. In the next post I am going to expand on this topic: if Product does not feel they have a good partner in Technology, they will struggle to succeed.
You can view my next post titled "Technology and Product as Partner Organisations" here.