
Estimate the Minimum Viable Product Backlog

The estimated size needs to include all work done by any siloed team to get the next iteration of the vertical slice to a done-done state in an agreed-upon production or production-like environment. That means the estimate includes the QA, DBA, and sysadmin effort to get the slice out the door, not just what a siloed dev team does. That way it accounts for the time it takes to wade through the red tape of database change requests, glitches in continuous integration, system config changes, QA effort including customer feedback, and anything else that creeps into the scope of delivering your Minimum Viable Product.
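As a rough sketch of the idea, the snippet below rolls one vertical slice's estimate up across every silo that touches it. The silo names and day counts are hypothetical, purely for illustration; the point is that the backlog number is the cross-silo total, not the dev-only figure.

```python
# A minimal sketch (assumed names and numbers, not from the original post) of
# rolling up a vertical-slice estimate across every silo that touches the work,
# so the backlog reflects the full cost of reaching done-done in a
# production-like environment.

# Hypothetical effort estimates, in ideal days, for one vertical slice.
slice_estimate = {
    "dev": 3,        # feature code on the dev team
    "qa": 2,         # test effort, including customer feedback loops
    "dba": 1,        # database change requests and their red tape
    "sysadmin": 1,   # system config changes, CI glitches, deployment
}

# The backlog estimate is the sum across silos, not just the dev number.
total = sum(slice_estimate.values())
print(f"dev-only estimate: {slice_estimate['dev']} days")
print(f"cross-silo estimate to done-done: {total} days")
```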

Estimating across silos has several additional benefits. First, metrics gathered from the estimates can be used to retrospect on the entire process and effort through delivery. Second, it's a start toward deeper collaboration and breaking down knowledge silos: anyone can call BS on an estimate as they learn more about what it takes to execute a change in a part of the system they are unfamiliar with.
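To make the first point concrete, here is a hedged illustration (hypothetical numbers and names, not from the original post) of the kind of metric a retrospective could pull from those estimates: estimated versus actual effort per silo for a delivered slice, showing where delivery time actually went.

```python
# Hypothetical per-silo estimate vs. actual effort (in days) for one delivered slice.
estimated = {"dev": 3, "qa": 2, "dba": 1, "sysadmin": 1}
actual    = {"dev": 4, "qa": 3, "dba": 3, "sysadmin": 1}

# Variance per silo highlights where the process, not the code, ate the time
# (e.g. red tape around database change requests).
for silo in estimated:
    variance = actual[silo] - estimated[silo]
    print(f"{silo:>8}: estimated {estimated[silo]}d, actual {actual[silo]}d, variance {variance:+d}d")
```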

This is a break from the conventional hand-off approach. Here the teams are unified and share an investment in the commitment to deliver value. Your silos are not customers of each other (e.g., delivering a database implementation to the internal organization); they are collaborators on a value stream.

