Write user stories – IBM Garage Practices
In the Garage Method for Cloud and many other agile methods, one of the key tools to communicate between the product owner (the customer) and the development team is the user story. Martin Fowler and Kent Beck define a user story as “…a chunk of functionality (some people use the word feature) that is of value to the customer… The shorter the story the better. The story represents a concept and not a detailed specification. A user story is nothing more than an agreement that the customer and the developers will talk together about a feature.” 1 (Author’s emphasis.)
How to write a user story
User stories act as the common language between all participants in the development process: product owners, architects, designers, and developers must share a common understanding of stories. You can achieve this common language by focusing on the value that each story provides to users. This idea of user value can be understood across disciplines and is key in prioritization, design, and implementation decisions.
A user story envisions and describes a future in which a small increment of user value is delivered. Approach writing stories by asking three questions:
- Who is the user that benefits? You can use a persona name to concisely identify the user.
- What can the user do that they couldn’t do before? Maybe they can see some important information or take an action that helps them to achieve something.
- How does this change benefit the user? Sometimes the benefit is obvious. If it isn’t, state it explicitly.
After you have the answers to those three questions, write them down in a sentence or two.
For example, Adam sees a summary with the status of each service, which allows him to identify any outages at a glance.
In this example, Adam is a site administrator. His name is in many stories, with each one using the team’s shared knowledge of the persona.
In the Garage Method for Cloud, this concise format is preferred over the “As a <role>, I want to <capability> so that I can <benefit>” template, which is popular elsewhere. The Garage style is more straightforward and direct, which increases the signal-to-noise ratio. This style makes it easier to quickly extract meaning from a list of stories (a backlog) and harder to disguise bad stories with boilerplate.
In addition to a concise statement of user value, stories must have acceptance criteria and a definition of done:
- Acceptance criteria are the set of requirements that must be met for a user story to be considered complete.
- The definition of done is a set of criteria that is common across related user stories. The criteria must be met to close the stories.
For example, every story must have automated test cases that validate the functional and nonfunctional requirements. Acceptance criteria and a definition of done provide a clear view to the whole team of which conditions must be met to declare that the user story is done.
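As an illustration, the acceptance criteria for the earlier story about Adam might be captured as automated tests. This is a minimal sketch; the function name and data shapes are assumptions, not part of the method:

```python
# Sketch of acceptance criteria as automated tests for the story:
# "Adam sees a summary with the status of each service."
# The summary function and its data shapes are illustrative assumptions.

def service_status_summary(services):
    """Summarize {service: healthy?} as {service: 'ok' | 'outage'}."""
    return {name: ("ok" if healthy else "outage") for name, healthy in services.items()}

def test_all_services_listed():
    # Acceptance criterion: every monitored service appears in the summary.
    summary = service_status_summary({"auth": True, "billing": False})
    assert set(summary) == {"auth", "billing"}

def test_outage_identifiable_at_a_glance():
    # Acceptance criterion: an outage is flagged distinctly from a healthy service.
    summary = service_status_summary({"auth": True, "billing": False})
    assert summary["billing"] == "outage"

test_all_services_listed()
test_outage_identifiable_at_a_glance()
```

Because each criterion maps to one test, the whole team can see at a glance which conditions are met and which remain open.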
Examples of well-written user stories
To learn how to help your team write good user stories, review the following examples of poorly written user stories and see how they can be improved.
Doug is a web developer. He needs to know the areas of his website that aren’t being accessed so that he can improve the website.
- Bad story:
As a web developer, Doug needs a report on website usage.
This story is too general. A developer wouldn't know what to implement. The story doesn't indicate why Doug needs the report. Because that information is excluded, the development team might build a function that doesn't meet Doug's requirement.
- Good story:
Doug views a report displaying usage statistics about each page of his website. He can see which pages are not visited frequently and make improvements.
Carly is a customer who uses Doug’s website. She wants to provide text feedback and a rating to the website development team when she encounters a problem or likes a page.
- Bad story:
Carly can report problems and rate pages on the website.
This story is too general, contains two different functions to implement, and might be too large to complete in one iteration. The team can break this type of requirement into a few stories that can be worked on in parallel.
- Good story 1:
Carly can rate each web page using a 1 to 5 star system so that she can indicate which pages she likes and which she thinks need improvement.
- Good story 2:
Carly provides text feedback to the development team when she encounters a problem with the website. Her feedback is delivered by email.
Mike is a mechanic. He needs to use a website to look up parts for the vehicles that he is working on.
- Bad story:
As a mechanic, I want to see a list of parts suitable for my vehicle so that I don't have to look at irrelevant parts.
The bad story has a poor signal-to-noise ratio. The actual information is hidden behind the “as a” construct, which adds no meaning, and the “so that” construct is redundant. In the better version, the persona Mike is called out directly, and the “can” phrase is more story-like, describing directly what Mike can do.
- Good story:
Mike the mechanic can see a list of parts suitable for his vehicle.
Mary is an integration developer who needs to use documentation to do her job.
- Bad story:
As a developer, I need to write documentation.
The boilerplate might look like a user story, but it isn’t. The user shouldn’t be the person who implements the story. The feature that is described in the story must be a delightful capability, not a burdensome obligation. In the better story, Mary benefits from the feature for the reason that is stated in the story.
- Good story:
Mary, the integration developer, can create new flows by referencing documentation.
Although a good user story is expressed concisely, it also needs enough detail for a developer to act on it.
- Bad story:
Make a dashboard.
This story gives the developer no idea of what to implement or how. The better story includes a concise description of the user, the function that he needs, and why he needs it.
- Good story:
Pat, the platform specialist, can track and monitor stats of the plan to check that a flow is performing its function.
Sometimes a user-visible feature is triggered by something that happens in the system rather than by a specific user action. Capture this type of work as a task, not a user story.
- Task:
When an order is received, Frank sees a "sparkler" on his dashboard.
This task reflects that when the system receives an order event, Frank is notified with a sparkler on his dashboard. The task mentions that the event has a visual indicator, but it doesn’t specify the design for what is shown.
Each good example includes all of the facets that make up a complete user story: a persona, a capability, and the resulting benefit.
Manage your user stories
After you learn to write good user stories, you need a place to track and manage them. Store user stories in a feature tracking system such as GitHub Issues. By using a tracking system, your team can rank the stories and track them through to completion.
No tool can replace human-to-human communication. Don’t let the tool get in the way of that communication and reduce it to human-to-system interaction only. People must talk to each other; the stories are placeholders for deeper communication.
As user stories rise in priority in the backlog, the team must communicate with the stakeholders—both architects and customers who want the user story—to ensure that when the story is delivered, it satisfies the stakeholder requirements.
Your team can store the results of stakeholder communications directly in the tracking system. That way, all the information that is needed to design, implement, test, and deliver the user story is in one place.
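For example, a story and its acceptance criteria can be filed together as a single GitHub issue through the GitHub REST API. The repository, label, and body template below are illustrative assumptions, not part of the method:

```python
# Sketch: building a GitHub issue payload that keeps a user story and its
# acceptance criteria in one place. The label and body template are assumptions.
import json

def story_issue_payload(persona, capability, benefit, acceptance_criteria):
    """Build a GitHub issue payload for a user story."""
    body = (
        f"{persona} {capability}, {benefit}\n\n"
        "## Acceptance criteria\n"
        + "\n".join(f"- [ ] {item}" for item in acceptance_criteria)
    )
    return {"title": f"{persona}: {capability}", "body": body, "labels": ["user-story"]}

payload = story_issue_payload(
    "Adam",
    "sees a summary with the status of each service",
    "which allows him to identify any outages at a glance",
    ["Every monitored service appears in the summary",
     "Outages are visually distinct from healthy services"],
)
print(json.dumps(payload, indent=2))

# To file the issue, POST this payload to the Issues endpoint:
#   POST https://api.github.com/repos/<owner>/<repo>/issues
# with an Authorization header carrying a token that has repo scope.
```

Keeping the criteria as a checklist in the issue body means the results of later stakeholder conversations can be appended as comments on the same issue.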
The repository where you manage user stories must be readily accessible to the entire team. To track user stories, use tools such as kanban boards and dashboards. By doing so, you always know where each story stands: first in the ranked backlog, and then on its path through development and testing to deployment.
Test your user stories
Each user story must be testable. According to Kent Beck, tests must be “isolated and automatic.” 2 Beck expanded on his idea of automatic testing when he introduced the process of doing test-driven development with the JUnit framework. 3 Throughout Beck’s examples, the tests are always what testers call functional tests.
Developers and product owners often stop after they implement functional tests, as though functional tests are the only tests that must be done. As a result, they test no more and no less than what is defined in the phrasing of the user stories. That mindset isn’t the original intent of what a user story represents or what agile testing is supposed to cover.
Next, follow a practical example of how to develop user stories for a minimum viable product (MVP) and see how both functional and nonfunctional facets are handled throughout the process.
Define user stories for an MVP: A practical example
The Garage Method for Cloud follows a process that begins by using Enterprise Design Thinking to produce an MVP statement. An MVP statement is the absolute bare minimum in a delightful experience that your target persona accepts to accomplish a goal. After the MVP is defined, the inception process begins. In the inception process, the MVP statement is expanded into user stories.
The MVP statement must explicitly state a measurable goal, and it often includes other facets as well: a persona, the actions that the persona takes, and targets on those actions.
For example, consider a recent MVP statement from the airline industry:
Polly the Passenger should be able to rebook her cancelled flight on her phone within one minute without having to speak to a human at the gate.
Notice that this MVP statement has all the mentioned facets, including a measurable goal and targets on the actions. In the inception process, when this MVP statement is turned into user stories, the resulting stories might include a functional story, such as displaying a list of up to 20 alternative flights, and a nonfunctional story, such as showing that list within 2 seconds.
Notice the progression in the examples. The action in the MVP (rebook a canceled flight) is expanded into a set of user stories that have either a functional or a nonfunctional (performance) facet. User stories must be as small as possible, which is why the performance tests are broken out. Each user story has one or more tests, such as a test for whether you can display 20 flights, or an automated performance test to ensure that the display is shown within 2 seconds. These tests ensure that after a functional or nonfunctional story is done, it stays done. The tests that are delivered alongside the story act as a guarantee. For example, even if the page already loads within 2 seconds, marking the performance story complete means adding tests to detect future regressions.
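A performance story of that kind can be pinned down by a small automated check that runs in the delivery pipeline. This is a sketch; the renderer is a stand-in assumption, while the 20-flight limit and 2-second budget come from the example:

```python
# Sketch of a combined functional/performance check for the rebooking stories.
# render_flight_list is an illustrative stand-in for the real rendering path.
import time

def render_flight_list(flights, limit=20):
    """Return at most `limit` flights for display."""
    return flights[:limit]

def test_flight_list_meets_its_budget():
    flights = [f"FL{n:03d}" for n in range(100)]
    start = time.perf_counter()
    shown = render_flight_list(flights)
    elapsed = time.perf_counter() - start
    assert len(shown) == 20   # functional facet: up to 20 flights are displayed
    assert elapsed < 2.0      # nonfunctional facet: the story's 2-second budget

test_flight_list_meets_its_budget()
```

Once this check is in the pipeline, a later change that blows the 2-second budget fails the build, which is exactly the "stays done" guarantee described above.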
What user stories are not
Now that you know what user stories are, how to write them, and how squads manage them, make sure that you know what isn’t included as part of a user story. User stories are not a design specification for the function. The initial version of the user story focuses on what is needed, by whom, why, and a measurable result. The story doesn’t focus on how to implement it. The implementation is determined when the story is under development. Then, the developer can enhance the story with design information and decisions that were made as part of the interactions with stakeholders and the development process.
User stories don’t define when to implement the function. After a user story is written, it’s added to a product backlog and ranked against all of the other known user stories.
Go deeper: User stories must cover more than what the product does
A whole set of user stories is often forgotten in the inception process. Those user stories are the ones that are in between the obvious actions. Consider internal stakeholders when you write stories because stakeholders are often critical to measuring the business hypothesis. For example, a team might have stories for Bob the business owner, who needs a dashboard to track sales or to coordinate product dispatch.
User stories around the nonfunctional facets are equally important. These stories capture what is required to delight the user, such as performance and security. As shown, performance requirements can be expressed as response-time targets and validated by the product owner.
Expressing security requirements as a user story requires more domain knowledge. Something like “As a shopper, the site is secure” is a poor user story because it doesn’t have a clear definition of done. What would a product owner do to test it? Instead, discussions about security might lead to the development of new personas, such as offensive security researchers and attackers.
For example, no user wants to log in, but users care about whether other people can see their data. You might express this requirement as “Hank the hacker cannot see Polly’s flight details.” In practice, this story is implemented as a login page. The PO script (acceptance steps) for this story is a set of actions that shows how Polly continues to see her flight details by logging in. A second set shows how Hank can’t log in with an incorrect password. For a more security-critical application, you might have a number of deeper, but still concrete, security stories. For example, you might write “Blake the attacker cannot use an SQL injection attack on any of the APIs” or “Hank the hacker cannot hack the site by using the OWASP (Open Web Application Security Project) Top 10 vulnerabilities.” These stories are good because they’re measurable.
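Because these stories are measurable, each one translates directly into an automated check. The data model and names below are illustrative assumptions; in a real system the check would drive the login page itself:

```python
# Sketch of the "Hank the hacker cannot see Polly's flight details" story
# as an automated check. The in-memory data and names are assumptions.
FLIGHTS = {"polly": {"flight": "BA117", "seat": "12A"}}

def get_flight_details(requesting_user, owner):
    """Return flight details only when the requester owns them."""
    if requesting_user != owner:
        raise PermissionError("not authorized")
    return FLIGHTS[owner]

# Polly, authenticated as herself, continues to see her flight details.
assert get_flight_details("polly", "polly")["flight"] == "BA117"

# Hank the hacker cannot see Polly's flight details.
try:
    get_flight_details("hank", "polly")
    raise AssertionError("Hank was not blocked")
except PermissionError:
    pass
```

Both halves of the check mirror the two sets of acceptance steps: one shows what the legitimate user can still do, the other shows what the attacker persona cannot.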
Work items for nonfunctional facets
Some nonfunctional requirements shouldn’t be expressed as user stories but should still be in the same ranked backlog as the user stories. For example, in the flight-booking MVP, the measurable goal was to be able to rebook without having to talk to a human. If the expectation is that no human interaction is required 100% of the time, continuous availability is required. That requirement is expensive to implement.
To offset cost, you can scale back that availability requirement to a percentage of the time, such as 99%, which allows for a cheaper solution that falls short of continuous availability. Meeting an availability target is important, but it’s not a user story. A product owner can’t validate a story such as “Polly should be able to access the mobile application and avoid talking to a human during no less than 99.9% of the year. Outages should last no longer than 30 minutes at a time.”
Although the story has numeric success criteria, it describes events that happen (or don’t happen) in the future. A product owner can’t validate it by using normal “given/when/then” criteria unless they watched the site for a year and timed outages. Instead, the team can convert the requirements into actions that help to achieve them and track the actions in the backlog as tasks. For example, activities to support a high-availability requirement might include “Perform chaos testing to ensure system resilience” or “Mirror infrastructure to a second region for HA/DR.” Nonfunctional work items emerge as part of discussions between the developers, the squad lead, and the product owner.
Stories can be satisfied by implementing new product features or by changing the nonfunctional characteristics of an application. Use tasks only for plumbing work or for nonfunctional requirements that are not directly verifiable, that is, where direct measurement is either too difficult or requires more time than is practical before sign-off. All stories must be verifiable.
Tasks that track availability work are only one type of nonfunctional work item. Others include operational concerns such as maintenance, logging, and alerting. Cloud service management and operations practices, such as shifting operations left and building in observability, need work from the development team. This work is important and must be tracked, but it isn’t visible to a user. Another type of work item is plumbing (infrastructure) to support new capability.
Work items for plumbing
In the Garage Method for Cloud, the ideal user story takes about a day to complete. In practice, some stories are larger. You might need substantial invisible plumbing to make them work, or they might have complex dependencies. For example, setting up the ledger infrastructure and chain code in a blockchain project might take several days, but that effort isn’t directly visible to a user or even a product owner.
To limit work in progress and avoid multiday user stories, pull substantial plumbing or dependency work out of these user stories. This plumbing work can be tracked on the backlog as a task; the product owner doesn’t own or accept it. Normally, this split into tasks happens close to when work starts on the user story.
The following table summarizes the different types of work items that can appear in a backlog:
Work item | User story | Task | Defect |
---|---|---|---|
Has a persona | Yes | No | Maybe |
Who creates it | Product owner | Delivery team | Product owner, delivery team, or users |
Who prioritizes it | Product owner | Delivery team | Product owner |
Who accepts it | Product owner | No one | Originator |
Measurable (directly verifiable) | Yes | Maybe | Yes |
Creates capability | Yes | No | No |
Must have points | Yes | No | No |
Who owns the work items that cover nonfunctional facets?
After you write your nonfunctional work items (user stories or tasks), make sure that they’re not ranked so far down the backlog that they’re never addressed. Because product owners often think in functional terms, they must be educated in nonfunctional thinking and persuaded that nonfunctional work items are as important to the business as functional user stories. Consider the airline example. If you implemented the ability to reschedule a canceled flight from a cell phone and that function was available only 50% of the time, your customers would be unhappy.
In the Garage Method for Cloud, an ongoing negotiation occurs between the squad lead and the product owner to ensure that the backlog doesn’t swing too far in one direction. A key responsibility of the squad lead is to make sure that both the nonfunctional and functional facets of the application are addressed. An architect is another role that can help rank user stories. Some agile methods maintain two different owners: an architecture owner and a product owner. Your team might or might not choose to go that far, but make sure that all three roles have a seat at the table in ranking discussions.
Implications for squads: Testing
Planning Extreme Programming makes this key point:
Eventually the customer will have to specify acceptance tests whose execution will determine whether the user stories have been successfully implemented.
Performance testing is an important part of developing a system in the Garage Method for Cloud. Individual functional user stories often have performance constraints that must be tested as part of the unit testing process, especially when the team is using a microservices architecture. In a microservices architecture, small sets of user stories often map directly to specific microservices. In addition to those user-story-specific tests, the team must also develop two types of acceptance tests: functional acceptance tests and nonfunctional acceptance tests.
In the airline MVP statement, one agreement between the team and the product owner was that the entire rebooking process must occur within a minute. The total performance budget of all the different steps of the rebooking process must fit within that minute. The application must pass the functional acceptance tests that are defined by the product owners and pass an end-to-end performance test that demonstrates that the entire process can occur within 1 minute.
The availability work item must have a set of tests defined. In availability testing, you shut down all or part of the system (destructive testing). Then, you either restore it or fail over to a standby region to ensure that the process completes within the time allotted. Similarly, you can conduct chaos testing by using a framework like Chaos Monkey to ensure that the system meets the requirements that are defined by the availability tasks even when components unexpectedly fail.
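A destructive availability test of that kind can be sketched as follows. The regions, the routing logic, and the failover budget (scaled here to seconds for a test run) are all illustrative assumptions:

```python
# Sketch of a destructive availability test: shut down the primary region and
# assert that the system still answers from a standby within the allotted time.
# Region names, routing, and the budget are illustrative assumptions.
import time

class Region:
    """Illustrative stand-in for a deployment region."""
    def __init__(self, name):
        self.name = name
        self.up = True

    def handle(self, request):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def route(request, regions):
    # Try each region in order; the first healthy one answers.
    for region in regions:
        try:
            return region.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("no region available")

def test_failover_within_budget():
    primary, standby = Region("us-east"), Region("us-west")
    primary.up = False                      # destructive step: shut down the primary
    start = time.perf_counter()
    response = route("rebook-flight", [primary, standby])
    elapsed = time.perf_counter() - start
    assert "us-west" in response            # the standby answered
    assert elapsed < 30.0                   # within the (illustrative) failover budget

test_failover_within_budget()
```

The same structure applies at full scale: the destructive step becomes shutting down real infrastructure, and the assertion becomes the recovery-time objective from the availability task.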
What’s next
- Needs Statements
- Stakeholder Map
- User Stories, Epics and Themes
- 10 Tips for Writing Good User Stories
References
1 Fowler, Martin and Kent Beck. Planning Extreme Programming. Boston: Addison-Wesley, 2001.
2 Beck, Kent and Cynthia Andres. Extreme Programming Explained. Boston: Addison-Wesley, 2005.
3 Beck, Kent. Test-Driven Development: By Example. Boston: Addison-Wesley, 2003.