18 July 2012

Unsolicited advice for all of us, too

If you haven't heard yet, Marissa Mayer, Google's 20th employee, is the new CEO of Yahoo! (don't ever forget the exclamation point; it's that important!). I've always been impressed with her work at Google and am glad to see her get a shot to run the show, even at a company as troubled as Yahoo! With that said, she's got a lot of work waiting for her and more than a few pundits waiting to give her advice on what she should be doing.

Along those lines, a person who until recently worked at Yahoo! decided to give her a bit of unsolicited advice. As I read the list, it cracked me up because there are soooo many items on it that our companies could probably learn from as well. Item #7 especially caught my eye:
No roadmaps from product teams looking out over 6 months. Make a poster with the words “We ship code, not slide decks” and circulate it with product management.
What items could you add to this list?

16 July 2012

'Paper' Requirements

For those of you who have been reading this blog for a while, you know what a fan I am of using Apple iOS devices in our jobs. At my company, nearly every activity manager on every project has an iPhone or iPad (oftentimes both) and uses them for email, taking notes and tracking deliverables.

One area where they have not (yet) been as useful is requirements. We are now starting to see some apps that, while not strictly requirements apps, can definitely be used to help elicit requirements. One of those is Paper by FiftyThree.com. This app is sold as a drawing app, but even for people like myself who draw so poorly that they embarrass their own stick figures, it is an awesome requirements tool.

A month or so ago, while in a working session with our marketing team, we ran across a situation where no one could quite nail exactly how a requirement should look in the system when we implemented it. We knew what it should do, but every idea we had come up with so far had some kind of limitation that made it a no-go.

I had this idea that I thought would work, but needed to get it out of my head so that everyone in the room could see it. I opened up Paper, did a quick 30-second sketch, showed it to the meeting participants, and we nailed the requirement. Everyone said 'save that picture!' because it hit the exact feel we were looking for with this functionality.

Now, I know what some of you are thinking... Ted, it's called a pencil and a piece of paper. Yes, this could have been done that way, but what you are not seeing yet is what I did next... I emailed my graphic to the BA in charge of the meeting. Sure, I could have handed him the piece of paper, but his requirements documentation was not, and never would be, on a piece of paper. His requirements lived a digital life. By creating this drawing digitally, the BA didn't have to do any translation between hand drawing and digital copy. It removed a barrier, albeit a small one, between mediums to make the BA's life just a slight bit easier.

So I challenge you all to pick up a digital drawing app and see how you can incorporate it into your next requirements elicitation session.

7 July 2012

LAST Conference programme published

Hey everyone. There is going to be no better place to be on July 27th than the LAST Conference in Melbourne.

I just finished putting the schedule together for the majority of sessions. There is a good range of planned sessions and a couple of open space sessions so there is room for the important conversations we don't yet know about.

The details are here.

There is a wide range of topics being presented and we are delivering in a range of formats including games, presentations, workshops, fishbowl conversations, case studies, lightning talks and more.

You'll be stimulated, challenged and rewarded. I hope to see you there.

6 July 2012

Testing Conference Notes

Last week I spent a couple of days in Boston, MA attending a testing conference that a vendor of mine puts on every year. I don't often get to attend events like this as work is normally insanely busy (notice the six weeks since I last blogged here). It was great to take some time out and learn about ways one of my departments (I manage BA and QA teams) could have a bigger impact on the organization.

(Note: the picture to the right shows me driving the water taxi across Boston Harbor, from the airport to the hotel. To say that this was an AWESOME way to start the trip would be the biggest of understatements. If you are ever flying into Boston and staying downtown, I highly recommend this relatively inexpensive and gorgeous method of travel. I can't guarantee you'll have the option of driving the boat, but it is awesome if you can swing it.)

What you will see here is a slightly redacted version of my notes from the two days. I've removed any reference to the vendor who hosted the event and to any of the clients who also presented at the conference. I thought this information was incredibly valuable and wanted to share it with all of you, but since I didn't ask beforehand if this was OK, I have decided to depersonalize it a bit. This doesn't in any way change the content; I just wanted to say all this up front so you all know why it's so 'generic'.

Session Highlights

  • Imagine Apple decides to enter our business. What would they change? What would they keep the same? These kinds of questions can be used to ensure we keep our edge. Think about how Apple tests software and the level of quality they achieve across their product lines. We don't know their processes, but we need to be thinking about what we can do to achieve similar success.
  • Continuous integration is the way to improve QA. Push the testing process left. Don't test during QA, but start testing when requirements are elicited. Test the requirements themselves, create test scripts and have the business sign off on those at the same time as the requirements.
  • Always need to know the state of the code at any point in the project. Use continuous integration dashboards to see when issues have crept into the code. Always make sure that at the start of the cycle the automation is as solid as possible to get a good baseline. Check status regularly to identify issues prior to the manual tests beginning.
  • Lots of open defects cause lots of management work to track and resolve. Big backlogs are bad because they need more people to do the management of the backlog. If you are not going to fix something anytime soon, close the defect and search for it if you need it in the future.
  • A stable code base is one of the most valuable assets in your organization. Delaying testing to the end of the project destroys code stability.
  • Effectiveness of your lab is the number of scripts you can run each day. The more efficiently you test, the faster you can code and release.
  • Gartner analyst who spoke knows a highly productive dev team that has zero defects and processes millions in transactions every night. New release every month.
  • Done, done and done. It's coded, it's tested and all tests passed. It's important to know what 'done' is for your organization.
  • Don't track partially complete work; it's either done or it's not. Tracking partials gives a false sense of progress. Focus should be on working software. Similar point to using continuous integration from earlier.
  • Behavior-driven development requires writing high-level test scripts that the user can read and run themselves. These should be written before coding, at the same time as requirements, and approved by the business.
  • QA should focus on automating tests and validating the solution, not 'doing testing'.
  • Should focus on 'being agile' and not necessarily 'Agile.' We can get these results without Agile development processes, but it is probably a good idea to transition to Agile development anyway.
  • Start out with writing automated tests for code that is changing, new code and for code that is broken. If the code doesn't change and it is working fine, don't write tests for it.
  • Start measuring days of QA downtime due to environment and code issues. Very important metric that can help build a business case for implementing automation.
  • Less than 25% of the organizations represented here have any continuous integration environment set up and running. About 60% have no automation at all, another 30% have some automation and the remaining 10% have a significant amount of their tests automated. No one had more than 60% of their test suite automated.
  • Even large companies don't have any experience in mobile or cloud. Even a large financial services company didn't really have any experience with this. Most people had issues with testing mobile, especially Android, due to device fragmentation.
  • Spend more time on defect prevention than detection. Spend more time teaching developers how to better test. Fewer QA resources and more devs who do testing while writing code. QA needs to become evangelists for quality.
  • A test package should be completed before a development item can be dropped into the dev backlog. This consists of what scripts need to be run and if any new scripts need to be created.
  • T-shaped individual.  Deep knowledge in testing but broad in other areas of project roles. 
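To make the shift-left and behavior-driven points above concrete, here is a minimal sketch of a business-readable test written before the implementation. Everything in it (the loyalty discount rule, `apply_discount`) is a hypothetical example I've invented, not anything presented at the conference:

```python
# Hypothetical example: a test written up front, in given/when/then
# style, that a business stakeholder could read and sign off alongside
# the requirement. The discount rule itself is invented.

def apply_discount(order_total, customer_years):
    """Toy implementation so the test is runnable:
    customers of five or more years get 10% off."""
    return order_total * 0.9 if customer_years >= 5 else order_total

def test_loyal_customer_gets_ten_percent_discount():
    # Given a customer who has been with us for five years
    customer_years = 5
    # When they place a $100.00 order
    total = apply_discount(100.00, customer_years)
    # Then they pay $90.00
    assert round(total, 2) == 90.00

test_loyal_customer_gets_ten_percent_discount()
```

A BDD tool like Cucumber or behave expresses the same idea in plain English; the point is that the script exists, and is approved by the business, before coding starts.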

A large online travel company

  • It takes 1-3 testers 3-4 days to certify a release. It used to take a large team 6-8 weeks to do the same. 
  • Set standards for cross-browser and load testing across all teams.
  • They now use a common tool to see defect rates and environment health across teams.
  • They use a Jenkins server with Selenium for automation testing, JUnit for Java unit tests, Chibot for cross-browser testing and TestRail for test case management.
  • They use a hosted data provider for mocked services. This helps them test integrations without being reliant on their partners to have active test environments.
  • Got rid of HP Quality Center. It was not a good fit for agile processes.
  • They do multiple daily releases to production. Did over 800 releases to production last year.
  • Started this process change in 2008. 
  • Collocation of quality team with developers. If you're developing elsewhere, you need to have testing with the dev team.
  • The feature teams are responsible for quality, not the QA department head. If the teams say quality is high, then the release is done.
  • With less, do more. Positive version of the saying 'do more with less'.
  • They have gone from 200+ QA people down to 130 and are trying to get down to 30 by the end of this year. Vendor will have additional testing resources offshore with the offshore devs, but the company will only have 30 internal people who will focus on quality measurement, not actual testing.
  • They use Git for code management. They have multiple code repositories; they don't have a single, large code base but multiple smaller code bases. Jenkins will do integration testing between code bases.
  • $500M in revenue. 1400 employees worldwide, 700 in IT. 130 currently in QA.
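The mocked-services point is worth a sketch. This hypothetical Python example stubs out a partner API with `unittest.mock` so an integration test can run even when the partner's test environment is down. The company used a hosted data provider rather than code-level mocks, and `FlightSearch`, the routes and the fares are all invented here:

```python
# Hypothetical integration test with the partner service stubbed out.
from unittest.mock import Mock

class FlightSearch:
    """Invented example of code that depends on a partner API."""
    def __init__(self, partner_client):
        self.partner = partner_client

    def cheapest_fare(self, origin, dest):
        fares = self.partner.search(origin, dest)  # remote call in production
        return min(fares) if fares else None

# In the test, replace the partner with a canned response so the test
# passes or fails on our logic, not on the partner's uptime:
partner = Mock()
partner.search.return_value = [129.0, 99.0, 240.0]

search = FlightSearch(partner)
assert search.cheapest_fare("MEL", "BOS") == 99.0
partner.search.assert_called_once_with("MEL", "BOS")
```

The same isolation can come from a stub server or a hosted mock provider; the design choice is that the integration contract is exercised without a live dependency.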

A large financial services firm

  • Set up a governance committee to define timelines for setting up better quality processes. Have advisors from the business and set metrics and practices in business terms.
  • Give the status of systems as integrated, not individually. Take a platform approach to measuring value to the business.
  • Create a quality performance index. Need to be able to always answer the questions, "What would work if we release tomorrow? What are our gaps?"
  • Need to do regular quality reviews of our releases.
  • Need to measure how long it takes to fix all defects after the code for a release lands.
  • Find quality champions in all roles in the organization.
  • Break functional scripts into smaller chunks and make them into unit tests. Easier to maintain and less chance of failure when code changes.
  • Better metrics when functional testing with devs, prior to regression. Better root cause analysis.
  • Achieved a reduced testing duration of 2.5 weeks. 95% pass rate on first day of functional testing. 10% reduction in cost of quality for release.
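The 'break functional scripts into smaller chunks' item can be illustrated with a hypothetical example. Instead of one long script asserting a whole fee-calculation flow end to end, each rule gets its own small test, so a code change breaks one focused test rather than the entire script. All names and rules below are invented, not from the firm's presentation:

```python
# Invented fee rules, each covered by its own tiny unit test.

def base_fee(channel):
    """Wire transfers cost more to process than ACH."""
    return 25.0 if channel == "wire" else 1.0

def apply_waiver(fee, is_premium_client):
    """Premium clients have their transfer fees waived."""
    return 0.0 if is_premium_client else fee

def total_fee(channel, is_premium_client):
    return apply_waiver(base_fee(channel), is_premium_client)

# One small assertion per rule; a failure points straight at the rule
# that regressed instead of failing a monolithic functional script.
assert base_fee("wire") == 25.0
assert base_fee("ach") == 1.0
assert apply_waiver(25.0, True) == 0.0
assert total_fee("wire", False) == 25.0
```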

A medium-sized health insurance company 

  • Enterprise quality score. The PMO tracks results but the quality division defines the score. It is reported weekly to the executive leadership team via a dashboard built for this purpose. They track pre- and post-production to know if any quality issues are impacting customers.
  • Measurements they use: Number of open defects, fix time for defects, budget run rate, defect severity.
  • The weights for each measure are different for each project, but each project uses the same measures.
  • Each measure is scored 1-5, with a larger number being best.
  • They track this with a field in the defect system. The business participates in the defect process and helps to set this score.
  • Use ReplayDirector to see how defects occur. 
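The weighted-score idea above can be sketched in a few lines. The four measures come from my notes; the weights and the 1-5 scores below are invented for illustration, and a real project would pull its scores from the defect system:

```python
# Hypothetical enterprise quality score: every project uses the same
# measures, weights vary per project, each measure is scored 1-5
# (higher is better).

def quality_score(scores, weights):
    """Weighted average of the 1-5 measure scores."""
    assert scores.keys() == weights.keys()
    total_weight = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Example project (scores and weights are invented):
scores = {"open_defects": 4, "defect_fix_time": 3,
          "budget_run_rate": 5, "defect_severity": 2}
weights = {"open_defects": 0.3, "defect_fix_time": 0.2,
           "budget_run_rate": 0.2, "defect_severity": 0.3}

print(round(quality_score(scores, weights), 2))  # overall score out of 5
```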


If I could sum the whole two days up, it would be to say three things: 1) move testing to as early in the project lifecycle as you can, 2) always be looking for ways to innovate your testing strategy and 3) be advocates for quality in your organizations.

3 July 2012

Lean and learning vs Buying a solution

A while back I had a coffee with Karel Schuller and he drew a diagram that clarified something that I knew but struggled to explain.  It's amazing the power of a good diagram.

When managing change we make trade-offs. They essentially boil down to understanding whether a change initiative is urgent and whether it is important.

If a change is important, don't leave it until the last minute or else you'll be caught in a loop of reactive and shallow changes. True change will remain elusive until you can find the time to nurture it over time.

2 July 2012

Re: Are the Kanban practices in the right order

The Juggler's Assistant By Pryere, @Flickr

[Edited after some feedback from David Anderson and Kurt Hausler]

First read
Hakan led a discussion at the recent #klrat meetup where the participants discussed the order of Kanban implementations and the depth of application. Sequence, avoiding maturity models and spider maps were all discussed. I wasn't there, and the blog post probably only skims the content.

Jason takes the conversation and reflects on the order of adoption of the Kanban method practices, discussing the order in which he approaches people and teams with the techniques. There is a thing here where, as with Scrum, applying the whole system leads to exponentially increased benefits, but the focus of Jason's post and this one is on the introduction phase of Kanban practices.

My experience usually matches Jason's, but I also have exceptions I can point to.

In summary, Hakan and Jason's variation from David Anderson's model is that you are more likely to make policies explicit before you get around to addressing WIP limits. Both of the blog posts above provide a good (and concise) explanation of the variance in their (and my) experience.

I also think limiting WIP has a bigger pay-off than explicit policies. At least it has in my experience. And there is a likely scenario where you will go to limiting WIP before explicit policies. Here is my thinking, based upon my experience and observations:

Typically, acknowledge the current situation before improving
Typically in an enterprise we are conditioned to expect and work with defined processes and standards. In my work I have often invested a good amount of work in stripping away process and standards to free up the team to do the work.  When I do this everyone gets nervous, especially the people who are now accountable and responsible for the outcomes of the work.

Process is like a blanket on a cold winter day. (Hello Melbourne!) It protects you from the elements.  "I was just following process!" is a legitimate excuse for shipping shoddy product on time. Or even late. But when the safety blanket is removed people need to take action and make sure they are actually doing something valuable. Mindfulness becomes a responsibility the staff member can't shirk.

In this context explicit policies feel like process. They give some comfort that the team members are not being offered up as sacrifices to the elements. 

And in this context you are almost trading "visualising the work and flow", which gives us all understanding and transparency, for explicit policies for the team's sake. The team then get to explain the current mess by pointing to the policies that enabled it to emerge. The team can then get on with the business of improving without having to deal with any blame or guilt that might be a legacy of their past.

There is also the culture of busyness, and it is a hard nut to crack, so it is sometimes easier to deal with things that are simpler to execute than to try to break entrained thinking and behaviours. Frankly, despite the learning games and the general acknowledgement that "we are all trying to do too much", people find it really hard to break their habits, let alone speak with their customers about this change in work practices.

Exception to the common case
The exception is the nicely improving Scrum or XP team. These teams, depending on their heritage, might simply be looking for the next most valuable thing to introduce to their team practices. Given the potentially massive pay-off from limiting WIP, you'd have a hard time trying to convince these teams to deal with much else before tackling WIP limits.

Have you read David Anderson's book or blog, or heard him talk on the topic? I wonder what he has to say.

1 July 2012

Learning organisations

I really admire teams and organisations that value learning. Last Friday I had the good fortune to sit in two sessions where team mates shared lessons they have been learning with their colleagues.

The first was a brown bag session on current thinking around CRM tools and social media and what it might mean for a university. An interesting discussion that ended with a challenge to the non-IT folk in the room to start thinking about how the organisation should adapt to these new opportunities (with the offer of our support). The people that came were thankful to the IT department for bringing up the issue and bringing people together. There was much chatter as people headed off to the elevators.

After lunch, some of the members of one of our agile software development teams presented a narrative of their six-month agile journey. It was nice to hear the constant refrain of "we are just beginners" and "we have a lot to learn" peppered throughout the presentation. At the same time, each of the three presenters' narrations was classic "benefits of agile" bundled with the importance of discipline. I couldn't have scripted it better.

Delivering product is nice, but collaboration and learning are the real success criteria.

The picture above comes from one of Andrea's slides. A story map on day 0 and a week or two later; pink notes for issues to be resolved; one of the stories torn in half to indicate it is now two stories, themed by process steps. Stickies changing as new information emerges. All very nice.