Posts Tagged Agile
Preparing for Your Organization’s DevOps Journey
Posted by Gary A. Stafford in DevOps, Software Development, Technology Consulting on May 10, 2017
Introduction
Recently, I was asked two questions regarding DevOps. The first, ‘How do you get started implementing DevOps in an organization?’, is a question I am asked, and answer, fairly frequently. The second was a bit more challenging to answer: ‘How do you prepare your organization to implement DevOps?’
Getting Started
The first question, ‘How do you get started implementing DevOps in an organization?’, is a popular question many companies ask. The answer varies depending on who you ask, but the process is fairly well practiced and documented by a number of well-known and respected industry pundits. A successful DevOps implementation is a combination of strategic planning and effective execution.
Most commonly, an organization starts with some form of a DevOps maturity assessment. The concept of a DevOps maturity model was introduced by Jez Humble and David Farley, in their ground-breaking book, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series), circa 2011.
Humble and Farley presented their ‘Maturity Model for Configuration and Release Management’ (page 419). This model, which encompassed much more than just CM and RM, was created as a means of evaluating and improving an organization’s DevOps practices.
Although there are several variations, all maturity models provide some means of ranking the relative maturity of an organization’s DevOps practices. Less sophisticated models focus primarily on tooling and processes. More holistic models, such as Accenture’s DevOps Maturity Assessment, focus on tooling, processes, people, and culture.
Following the analysis, most industry experts recommend a strategic plan, followed by an implementation plan. The plans set milestones for reaching higher levels of maturity, according to the model. Experts also identify key performance indicators, such as release frequency, defect rates, production downtime, and mean time to recovery from failures, which are used to measure DevOps success.
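To make the output of such an assessment concrete, here is a minimal PowerShell sketch (keeping with the scripting language used later in this collection) of scoring practice areas against maturity levels. The practice areas and the 0-4 numeric scale are purely illustrative, not Humble and Farley’s actual model:

```powershell
# Hypothetical maturity assessment: each practice area is scored against an
# illustrative 0-4 maturity scale (these areas and scores are examples only).
$assessment = @{
    'Build Management and Continuous Integration' = 2
    'Environments and Deployment'                 = 1
    'Release Management'                          = 1
    'Testing'                                     = 2
    'Data Management'                             = 0
}

# List each area, lowest maturity first, to highlight where to focus next
$assessment.GetEnumerator() |
    Sort-Object Value |
    ForEach-Object { Write-Host ("{0,-45} Level {1}" -f $_.Key, $_.Value) }

# A simple overall score; real models weigh people and culture, not just averages
$average = ($assessment.Values | Measure-Object -Average).Average
Write-Host ("Overall maturity: Level {0:N1}" -f $average)
```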
Preparing for the Journey
As I said, the second question, ‘How do you prepare your organization to implement DevOps?’, is a bit more challenging to answer. And, as any good consultant would respond, it depends.
The exact answer depends on many factors. How engaged is management in wanting to transform their organization? How mature are the organization’s current IT practices? Are the other parts of the organization, such as sales, marketing, training, product documentation, and customer support, aligned with IT? Is IT aligned with them?
Even the basics matter, such as the organization’s size, both physical and financial, as well as its age and its industry. Are they in a highly regulated industry? Are they a global organization with distributed IT resources? Have they tried DevOps before and failed? Why did they fail?
As overwhelming as those questions might seem, I managed to break down my answer to the question, “How do you prepare your organization to implement DevOps?”, into five key areas. In my experience, each of these is critical for any DevOps transformation to succeed. Before the journey starts, these are five areas an organization needs to consider:
- Have an Agile Mindset
- Break Down Silos
- Know Your Business
- Take the Long View
- Be Introspective
Have an Agile Mindset
It is commonly accepted that DevOps was born from the need of Agile software development to increase the frequency of releases. More releases required faster feedback loops, better quality control methods, and the increased use of automation, amongst other necessities. DevOps practices evolved to meet those challenges.
If an organization is considering DevOps, it should have already successfully embraced Agile, or be well along in its Agile transformation. An outgrowth of Agile software development, DevOps follows many Agile practices. Practices such as cross-team collaboration, continuous and rapid feedback loops, continuous improvement, test-driven development, continuous integration, scheduling work in sprints, and breaking down business requirements into epics, stories, and tasks are usually all part of a successful DevOps implementation.
If your organization cannot adopt Agile, it will likely fail to successfully embrace DevOps. Imagine a typical scenario in which DevOps enables an organization to release more frequently: monthly instead of quarterly, weekly instead of monthly. However, if the rest of the organization (sales, marketing, training, product documentation, and customer support) is still working in a non-Agile manner, it will not be able to match the improved cycle time DevOps provides.
Break Down Silos
Closely associated with an Agile mindset is breaking down departmental silos. If your organization has already made an Agile transformation, then one should assume those ‘silos’, the physical or, more often, process-induced ‘walls’ between departments, have been torn down. Having embraced Agile, we can assume Development and Testing are working side by side as part of an Agile software development team.
Implementing DevOps requires closing the often wide gap between Development and Operations. If your organization cannot tear down the typically shorter wall between Development and Testing, then tearing down the larger walls between Development and Operations will be impossible.
Know Your Business
Before starting a DevOps journey, an organization needs to know itself. Most organizations establish business metrics, such as sales quotas, profit targets, employee retention objectives, and client acquisition goals. However, many organizations have not formalized their IT-related Key Performance Indicators (KPIs) or Service Level Agreements (SLAs).
DevOps is all about measurement: application response time; incident volume, severity, and impact; defect density; Mean Time To Recovery (MTTR); downtime; uptime; and so forth. Establishing meaningful and measurable metrics is one of the best ways to evaluate the continuous improvement achieved by a maturing DevOps practice.
To successfully implement DevOps, an organization should first identify its business-critical performance metrics and service level expectations. Additionally, an organization must accurately and honestly measure itself against those metrics, before beginning the DevOps journey.
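As an illustration, the following PowerShell sketch baselines two of those metrics, MTTR and uptime, from a set of incident records. The incident data and the 30-day measurement window are hypothetical; real figures would come from an incident-tracking system:

```powershell
# A sketch of baselining two common IT KPIs from incident records. The data
# and the 30-day measurement window are hypothetical.
$windowMinutes = 30 * 24 * 60   # 30-day measurement window

$incidents = @(
    [pscustomobject]@{ Id = 101; DowntimeMinutes = 45 },
    [pscustomobject]@{ Id = 102; DowntimeMinutes = 120 },
    [pscustomobject]@{ Id = 103; DowntimeMinutes = 30 }
)

# Mean Time To Recovery (MTTR): average downtime per incident
$mttr = ($incidents | Measure-Object -Property DowntimeMinutes -Average).Average

# Uptime: percentage of the window the system was in a functioning condition
$totalDowntime = ($incidents | Measure-Object -Property DowntimeMinutes -Sum).Sum
$uptimePercent = (1 - ($totalDowntime / $windowMinutes)) * 100

Write-Host ("MTTR: {0:N0} minutes; Uptime: {1:N3}%" -f $mttr, $uptimePercent)
```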
Take the Long View
Rome was not built in a day, organizations don’t transform overnight, and DevOps is a journey, not a time-boxed task in a team’s backlog. Before an organization sets out on its journey, it must be willing to take the long view on DevOps. There is a reason DevOps maturity models exist. Like most engineering practices, cultural and organizational transformations, and skill-building exercises, DevOps takes time to become successfully entrenched in a company.
Organizations need to value quick, small wins, followed by more small wins. They should not expect a big bang with DevOps. Achieving high levels of DevOps performance is similar to the Agile practice of delivering small pieces of valuable functionality, in an incremental fashion.
Getting a ‘Hello World’ application successfully through a simple continuous integration pipeline might seem small, but think of all the barriers that were overcome to achieve that task: source control, a continuous integration server, unit testing, an artifact repository, and so on. Your next win: deploy that ‘Hello World’ application to your Test environment, automatically, through a continuous deployment pipeline…
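For the curious, a continuous integration pipeline at this stage can be as simple as a script of sequential stages. The sketch below assumes a hypothetical .NET ‘Hello World’ project; the paths and tool choices are placeholders, and a real pipeline would run these steps on a CI server, triggered by a source-control commit:

```powershell
# A sketch of a minimal CI pipeline as sequential stages. Project paths are
# hypothetical placeholders for your own 'Hello World' application.
$ErrorActionPreference = 'Stop'

Write-Host 'Stage 1: Build'
dotnet build .\src\HelloWorld.csproj -c Release
if ($LASTEXITCODE -ne 0) { throw 'Build failed' }

Write-Host 'Stage 2: Unit tests'
dotnet test .\tests\HelloWorld.Tests.csproj -c Release
if ($LASTEXITCODE -ne 0) { throw 'Unit tests failed' }

Write-Host 'Stage 3: Package and publish artifact'
Compress-Archive -Path .\src\bin\Release\* -DestinationPath .\artifacts\HelloWorld.zip -Force
# Next win: push the versioned artifact to a repository and deploy it to Test
```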
This practice reminds me of an adage. Would you prefer a dollar, every day for the next week, or seven dollars at the end of the week? Most people prefer the immediacy of a dollar each day (small wins), as well as the satisfaction of seeing the value build consistently, day after day. Exercise the same philosophy with DevOps.
Be Introspective
As stated earlier, generally, the first step in creating a strategic plan for implementing DevOps is analyzing your organization’s current level of IT maturity. Individual departments must be willing to be open, honest, and objective when assessing their current state.
The inability of organizations to be transparent about their practices, challenges, and performance is a sign of an unhealthy corporate culture. Not only is an accurate perspective critical for a maturity analysis and strategic planning, but the existence of an unhealthy culture can also be fatal to most DevOps transformations. DevOps only thrives in an open, collaborative, and supportive culture.
Conclusion
As Alexander Graham Bell once famously said, ‘before anything else, preparation is the key to success.’ Although not a guarantee, properly preparing for a DevOps transformation by addressing these five key areas, should greatly improve an organization’s chances of success.
All opinions in this post are my own and not necessarily the views of my current employer or their clients.
Operational Readiness Analysis
Posted by Gary A. Stafford in Continuous Delivery, DevOps, Software Development on October 12, 2016
“Analysis: a detailed examination of the elements or structure of something, typically as a basis for discussion or interpretation.” — Google
Introduction
Recently, I had the opportunity, along with a colleague, to perform an operational readiness analysis of a client’s new application platform. The platform was in late-stage development, only a few weeks from going live. Being relatively new to the project team, our objective was to determine the amount of work remaining to make the platform operational, from a technical perspective. As well, we wanted to ensure there were no gaps in the project’s scope, as it related to technical operational readiness.
There were various approaches we could have taken to establish the amount of unfinished work and any existing gaps. We could have reviewed the state of the project in the client’s Agile project management system. We could have discussed the project’s state with the Product Owner, Project Manager, Team Lead, and Business Analyst. Neither of these methods, on its own, would have provided us with a complete picture.
The approach we ultimately took is what we’ve coined an Operational Readiness Analysis, or by its trendier title, the ‘State of the DevOps’. This approach is particularly effective and highly interactive. The analysis may be conducted at any stage of the Software Development Lifecycle (SDLC) to review a project’s operational state of readiness. The analysis involves five simple steps:
- Categorize
- Itemize
- Organize
- Prioritize
- Document
Grab Your Post-It Notes
We began, where all good ThoughtWorkers tend to begin, with a meeting room, whiteboard, dry-erase markers, and lots of brightly colored Post-It notes. ThoughtWorkers do love their Post-It notes. We gathered a small subset of the project team, including the Product Owner, Project Manager, and the team’s DevOps Engineer.
Step 1: Categorize
To start, we broadly categorized the operational requirements into buckets of work. In my experience, categorization typically takes one of two forms, either the use of role-based descriptors, like ‘Security’ and ‘Monitoring’, or the use of the ‘ilities’, such as ‘Scalability’ and ‘Maintainability’.
The ‘ilities’ are often referenced in regard to software architecture and a project’s functional and nonfunctional requirements (NFRs). Although there are many ‘ilities’, project requirements frequently include Usability, Scalability, Maintainability, Reliability, Testability, Availability, and Serviceability. While I enjoy referring to the ‘ilities’ when drafting project requirements, I find a broad audience more readily understands the high-level category descriptions.
Role-based categories tend to include the following:
- Security and Compliance
- Monitoring and Alerting
- Logging
- Continuous Integration and Delivery
- Release Management
- Configuration
- Environment Management
- Infrastructure
- Networking
- Data Management
- High Availability and Fault Tolerance
- Performance
- Backup and Restoration
- Support and Maintenance
- Documentation and Training
- Technical Debt
- Other

We collectively agreed on eight of these categories, in addition to the catch-all ‘Other’ category, and distributed them across the whiteboard.
Step 2: Itemize
Next, each person placed as many Post-It notes as they wanted below the categories on the whiteboard. Each Post-It represented an item someone felt needed to be addressed as part of the analysis. Items might represent requirements currently in progress, or requirements still untouched in the project’s backlog. A Post-It note might contain a new requirement, a question about an aspect of the project’s relative importance, or a concern about how an aspect of the project had been implemented.
Participants placed an average of five to ten Post-It notes on the whiteboard. The end product was a collective mind-map of the project’s operational state.
Whereas the categories tend to be generic in nature, the items are usually particular to the project and its technology choices. The ‘Monitoring/Logging’ category might include individual items such as ‘Log Rotation Policy’ or ‘Splunk vs. ELK?’. The ‘Security’ category might include individual items such as ‘Need Password Rotation’ or ‘SSL Certificate Management’.
Participants often use one or two words to summarize more complex thoughts. For example, ‘Rollback’ might mean, ‘we still don’t have a good rollback strategy for the database’. ‘Release Frequency’ may be asking, ‘why can’t we release more frequently?’. Make sure to discuss and capture the participants’ full thoughts behind each Post-It during the Organize, Prioritize, and Document stages.
Don’t let participants give up too soon when placing Post-It notes on the board. Often, one participant’s Post-It will spark additional thoughts from other team members. Don’t be afraid to suggest a few items if the group needs some initial motivation. Lastly, fear not; it’s never too late to go back and add missing items uncovered in later stages of the analysis.
Step 3: Organize
Quickly and non-judgmentally, review all items on the whiteboard. Group duplicates together. Ask for clarification on items where necessary. Move miscategorized items, if the author and team agree there is a more appropriate category.
Examine the ‘Other’ category. If several items in the ‘Other’ category are similar, consider adding a new category to capture them; maybe the team missed identifying a key category earlier. Conversely, there is nothing wrong with leaving a few stragglers in the ‘Other’ category; don’t overthink the exercise.
Participants should leave this stage with a general understanding of each item on the whiteboard.
Step 4: Prioritize
With all items identified, organized, and understood, discuss each item’s status. Does the item represent incomplete requirements? Does everyone agree an item’s requirements are complete? Team members may not agree on the completeness of particular items. Inconsistencies may reflect unclear requirements. More often, I find, disagreements are a result of an inconsistent or incomplete implementation of requirements across environments — Development, QA, Performance, Staging, and Production.
Why are operational requirements frequently implemented inconsistently? Often, a story is played early in the development stage, prior to all environments being available. For example, a story is played to add monitoring to Development and QA. The story is completed, tested, and closed. Afterward, the Performance environment is created. Later yet, the Staging and Production environments are created. But, there are no new stories to address monitoring the new environments.
One way to avoid this common issue with operational requirements is an effective DevOps automation strategy. In this example, an effective strategy would mean all new environments would get monitoring, automatically. The story should broadly address monitoring, and not, short-sightedly, monitoring of specific environments.
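As a sketch of what ‘broadly address monitoring’ might look like in practice, the script below drives monitoring from a single list of environments, so a newly added environment is picked up automatically. The Install-Monitoring function is a hypothetical placeholder for whatever your monitoring tool’s setup actually requires:

```powershell
# A sketch of environment-agnostic automation: one list drives monitoring for
# every environment, so adding 'Staging' or 'Production' later requires no new
# story. Install-Monitoring is a hypothetical placeholder function.
$environments = @('Development', 'QA', 'Performance', 'Staging', 'Production')

function Install-Monitoring {
    param([string] $EnvironmentName)
    # Placeholder: register hosts, deploy agents, and configure alerts here
    Write-Host "Configuring monitoring for the $EnvironmentName environment..."
}

foreach ($environment in $environments) {
    Install-Monitoring -EnvironmentName $environment
}
```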
Often, only one or two project team members are responsible for ‘all the DevOps stuff’. They alone possess an accurate perspective on an item’s current status.
Gain agreement from the Product Owner, Project Manager, and key team members on the priority of incomplete requirements. Are they ‘Day One’ must-haves, or ‘Day Two’ and can be completed after the application goes live? Flag all high-priority items.
If the status of an item is unknown, such as ‘why is the QA environment always down?!’, note it as an open question and move on. If the item simply requires a quick answer to resolve, like ‘do we backup the database?’, answer it (‘the database is replicated and backed up daily’) and move on.
Step 5: Document
Snap a picture of the whiteboard and document its contents. We chose Confluence, the client’s knowledge management system, as the vehicle to share the whiteboard’s results. We created a table with columns for Category, Item, Description, Priority, Owner, and Notes; it’s also helpful to have a column for the project’s corresponding Story ID or Defect ID. Along with the whiteboard’s contents, we captured the key discussion points, questions, and concerns raised by the participants.
Make the analysis results available to all team members. Let the team ask questions and poke holes in the results. Adjust the results if necessary.
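If you want to script the documentation step, here is a small PowerShell sketch that renders captured items as a pipe-delimited table, ready to paste into a wiki page. The items shown are illustrative, not results from a real analysis:

```powershell
# A sketch of rendering captured analysis items as a pipe-delimited table,
# suitable for pasting into a wiki page. The items are illustrative examples.
$items = @(
    [pscustomobject]@{ Category = 'Monitoring and Alerting'; Item = 'Log Rotation Policy';
                       Priority = 'Day One'; Owner = 'DevOps'; StoryId = 'TBD' },
    [pscustomobject]@{ Category = 'Security and Compliance'; Item = 'SSL Certificate Management';
                       Priority = 'Day Two'; Owner = 'Ops'; StoryId = 'TBD' }
)

# Emit the table header, then one row per item
'| Category | Item | Priority | Owner | Story ID |'
'|----------|------|----------|-------|----------|'
foreach ($i in $items) {
    '| {0} | {1} | {2} | {3} | {4} |' -f $i.Category, $i.Item, $i.Priority, $i.Owner, $i.StoryId
}
```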
Next Steps
The actions you take, given the results of the analysis, are up to you. In our case, we ensured all high-priority items were called out on the team’s Agile board. Any new items were captured in the client’s Agile project management system. Finally, we ensured open questions and concerns were addressed in a timely fashion. We continue to track each item’s status weekly, throughout the launch and post-launch periods.
We anticipate conducting a follow-up analysis, thirty days after launch. The goal will be to evaluate the effectiveness of the first analysis and identify additional operational needs as the application enters a business-as-usual (BAU) application lifecycle phase.
Postscript: the ‘ilities’
The ‘ilities’, courtesy of codesqueeze.com and en.wikipedia.org
- Scalability: The capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.
- Testability: The practical feasibility of observing a reproducible series of such counterexamples if they do exist.
- Reliability (Resilience): The ability of a system or component to perform its required functions under stated conditions for a specified time.
- Usability (Performance): The degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.
- Serviceability (Supportability): The ability of technical support personnel to install, configure, and monitor computer products, identify exceptions or faults, debug or isolate faults to root cause analysis, and provide hardware or software maintenance in pursuit of solving a problem and restoring the product into service.
- Availability: The proportion of time a system is in a functioning condition. Defined in the service-level agreement (SLA).
- Maintainability: The ease with which a product can be maintained: isolating and correcting defects or their cause, preventing unexpected breakdowns, maximizing a product’s useful life, maximizing efficiency, reliability, and safety, meeting new requirements, making future maintenance easier, and coping with a changed environment.
All opinions in this post are my own and not necessarily the views of my current employer or their clients.
Software Delivery: Evaluating Risk within the Enterprise
Posted by Gary A. Stafford in Build Automation, DevOps, Enterprise Software Development, Software Development on November 9, 2014
Introduction
Many vendor whitepapers, industry publications, blog posts, podcasts, and e-books extol best practices in software development and delivery. Best practices include industry-standard concepts, such as Agile, DevOps, TDD, continuous integration, and continuous delivery. Generally, these best practices all strive to improve the process of delivering software enhancements and bug fixes to customers.
Rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. – Wikipedia
Most learning resources present one of two idealized environments: ‘applications as islands’ and the ‘utopian enterprise’. I am also often guilty of tailoring my own materials to one of these two idealized environments. Neither ‘applications as islands’ nor the ‘utopian enterprise’ best models the typical enterprise software environment in which many of us work.
Applications as Islands
The ‘applications as islands’ environment is one of completely isolated application stacks. These types of environments have multiple application stacks, consisting of web, mobile, and desktop components, services, data sources, utilities and scripts, messaging and reporting components, and so forth. Unrealistically, each application stack is completely isolated from the other application stacks within the same environment.
The Utopian Enterprise
The ‘utopian enterprise’ environments have multiple application stacks with multiple shared components. However, they are built, unrealistically, using consistent and modern architectural patterns and compatible technology stacks. They are designed from the ground up to be compartmentalized, scalable, and highly risk-tolerant to changes. They often avoid the challenges of monolithic legacy applications. The closest things in the real world are probably industry trendsetters, such as Facebook, Etsy, Amazon, and Twitter. We all probably wish we could evolve our own software environments into one of these Utopias.
Complexity and Risk
As an organization continues to evolve its software, it naturally increases the overall complexity, and thereby the challenge, of effectively delivering reliable and performant software. In this post, I will explore the challenges of software delivery as a software environment grows in complexity. Specifically, I will focus on how to evaluate the level of risk based on software changes made to various components within the software environment.
Sensitivity and Impact
As we examine the level of risk introduced by software changes within the environment, two aspects of risk are inescapable: sensitivity and impact. Sensitivity will be defined as the potential degree to which one component, such as an application, service, or data source, is affected by changes to other components within the same software environment. How sensitive is ‘Application A’ to changes made to other components within the same software environment, on which ‘Application A’ is directly or indirectly dependent?

Impact will be defined as the potential effect a component’s changes have on other components within the software environment. Teams tend to evaluate only the impact of changes to the immediate component or application stack. They do not sufficiently consider how those changes impact the components that are directly and indirectly dependent on them. What level of impact do changes to ‘Service B’ have on all other components within the software environment that are directly and indirectly dependent on ‘Service B’?
Notice I use the word potential. Any change has the potential to introduce risk. The level of risk varies based on the type and volume of changes. A few simple changes should have a low potential for impact, as opposed to a high number of changes, or more complex changes. For example, changing an internal error message logged by a particular service operation should present a very low risk, as opposed to rewriting that operation’s complex algorithm for calculating a customer’s creditworthiness. The potential impact of those two types of changes on dependent components varies significantly.
Measuring Risk
For both sensitivity to change and impact of change, I will use a color-coded scale to subjectively assign a level of potential risk to each component within a given software environment. The scale ranges from ‘Low’, to ‘Moderate’, to ‘High’, to ‘Very High’. Using the scale, it is possible to ‘heat map’ a software environment, based on the level of risk from changes.
Independent Aspects of Risk
Sensitivity and impact are two independent aspects of risk. Changes to one component may have a ‘Low’ potential impact on all other components within the environment, while, at the same time, that same component may have a ‘High’ sensitivity to changes made to other components within the environment. Alternatively, a component may have a ‘Very High’ potential impact on multiple components within the environment, while having a ‘Low’ potential sensitivity to changes made to other components. Sensitivity and impact do not parallel each other.
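To make the two independent aspects concrete, here is a brief PowerShell sketch that models a few components with separate sensitivity and impact ratings and sorts them into a simple textual ‘heat map’. The component names and ratings are illustrative:

```powershell
# A sketch of rating components on the four-level scale used in this post.
# Names and ratings are illustrative; sensitivity and impact are independent.
$scale = @('Low', 'Moderate', 'High', 'Very High')

$components = @(
    [pscustomobject]@{ Name = 'Application A';  Sensitivity = 'High';     Impact = 'Low' },
    [pscustomobject]@{ Name = 'Services Layer'; Sensitivity = 'Moderate'; Impact = 'High' },
    [pscustomobject]@{ Name = 'Data Layer';     Sensitivity = 'Low';      Impact = 'Very High' }
)

# A simple 'heat map': sort by impact severity, most severe first
$components |
    Sort-Object { $scale.IndexOf($_.Impact) } -Descending |
    Format-Table Name, Sensitivity, Impact -AutoSize
```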
Growing Complexity
Let’s look at how sensitivity and impact change as we increase the software environment’s complexity. In the first example, we will look at one of the two environments I described earlier, isolated applications. Applications may have their own web and mobile components, SOAP or RESTful services, data sources, utilities, scheduled tasks, and so forth. However, the applications do not depend on each other or components outside their own immediate application stack; the applications are self-contained.
When making changes in this type of environment, the real potential impact is to the overall stability, security, and performance of the individual applications themselves. As long as they are in isolation, the applications will have no impact on each other. Therefore, both the applications’ potential sensitivity to changes and their potential impact on other applications are ‘Low’.
Shared Components
A slightly more complex example is a software environment in which one or more applications have a dependency on a component outside their immediate application stack. For example, a healthcare provider develops a Windows-based application to track their employee’s work schedules (Application A). In addition, they develop a web application to track patient appointments (Application B). Lastly, they offer a client-facing mobile application for patients to track personal fitness and nutrition goals (Application C). Applications B and C share a common set of services and a database for managing patient data.
Software changes made to Applications A, B, and C should have no effect on other components within the software environment. However, Applications B and C are potentially impacted by changes made to either the Services Layer or the Data Layer. The Services Layer has a ‘High’ potential impact on the software environment. Lastly, the Data Layer should not be directly impacted by changes made to the Services Layer or the Applications. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B and C. Therefore, the Data Layer’s potential impact on other dependent components within the environment is ‘Very High’.
Multiple Shared Components
An even more complex example is a software environment in which multiple applications have one or more dependencies on multiple components outside their immediate application stack (many-to-many).
Take, for example, a small financial institution. They have a ‘legacy’ COBOL-based application for managing their commercial mortgage business (Application A). They also have an older J2EE-based application, acquired through a business merger, for managing their commercial banking relationships (Application B). Next, they have a relatively new Java EE-based investment banking application to manage their retail customers (Application C). Lastly, they have a web-based, client-facing application for secure, online retail banking (Application D).
Since both Application A and B serve commercial clients, it is necessary to send financial data between the two application stacks. Since both applications are built on different, older technologies, the development team built a Custom Messaging Middleware component to connect the two applications. The Custom Messaging Middleware component receives, transforms, and delivers messages between the two applications.
Changes made to Applications C and D should have no impact on other components within the software environment. However, changes made to either Application A or B have the potential to indirectly affect the ability to successfully communicate with the other application, via the Custom Messaging Middleware. Changes to the Custom Messaging Middleware have the potential to affect both Applications A and B. The Custom Messaging Middleware has a ‘Moderate’ potential sensitivity to risk, versus ‘Low’, because one could argue that changes to either Application A or Application B’s messaging format could impact the Custom Messaging Middleware’s ability to properly process that application’s messages and successfully deliver them to the opposite application.
Applications B, C, and D have a direct dependency on the Services Layer, and indirectly on the Data Layer. Therefore, the potential impact of changes to the Services Layer on other components is arguably higher than in the last example. The Services Layer’s potential impact on other components is ‘Very High’.
Since Application B has a direct dependency on both the Messaging Middleware and the Services Layer, it has a higher sensitivity to changes than the other three applications. Application B’s potential sensitivity to changes made by other components is ‘Very High’.
Changes made to the Services Layer or the Applications will not affect the Data Layer. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B, C, and D. Therefore, the Data Layer’s potential impact on the software environment is ‘Very High’.
Small Enterprise
The last example of increasing complexity is an environment in which even more applications are dependent on even more components. Additionally, there may be different types of components, such as a common UI and third-party APIs, which only increase the complexity of the dependencies. Although this example is nowhere near as complex as many enterprise software environments, it does begin to reflect their intricate, interdependent structure.
Let’s use an example of a large web-based retailer. The retailer has a standalone ERM application for managing their wholesale purchasing and product distribution (Application A). Next, they have their primary client-facing storefront (Application B). They also have a separate application to handle customer accounts (Application C). Lastly, they have an application that manages their online media retail business and media storage (Application D).
In addition to the Common Services Layer, Common Data Layer, and Custom Messaging Middleware, as seen in earlier examples, the retailer has two other components in their environment, a Common Web User Interface (UI) and a Web API. The Web UI provides the customer with a seamless branded experience, no matter which application they use – Application B, C, or D. The customer enters the Common Web UI and has all three application’s features seamlessly available to them.
The retailer also exposes a RESTful Web API for its marketing affiliates. Third parties can develop a variety of applications that drive sales to the retailer, in return for a sales commission.
In the earlier examples, individual applications had separate points of entry. However, in this example, the Common Web UI provides a single point of entry for users of Applications B, C, and D. Having a single point of entry also introduces a single point of failure for all three applications. Thus, the potential risk to the retailer and their customers is much greater. The Common Web UI’s potential impact on other components is ‘Very High’.
The potential sensitivity of the Common Web UI to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. Additionally, one could argue, since the Common Web UI displays the three Applications, it is also sensitive to changes made by those applications. If one of those applications becomes impaired due to a bad change, that application would seem to affect the Web UI’s functionality. The Common UI’s potential sensitivity to change is ‘High’.
The Web API is similar to the Common Web UI, in terms of potential sensitivity and impact. The potential impact of changes to the Web API is ‘Very High’, since a defect there could result in the potential impairment of the retailer’s affiliate applications. The potential sensitivity of the Web API to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. The Web API’s potential sensitivity to change is ‘High’. There is very little chance of potential impact to the Web API from the retailer’s affiliate applications.
Impact of Key Components
Lastly, as systems grow in complexity, certain components often become so key that they have the potential to impact the entire environment, a true single point of failure. Consider the potential impact of changes to the Common Services Layer on all other components. As the software environment has grown in complexity, the Common Services Layer sits at the heart of the system. The Services Layer has multiple components directly dependent on it (i.e., Application C), as well as other components indirectly dependent on it (i.e., Third-Party Applications). It is also the only point of access to and from the Common Data Layer.
There are steps organizations can take to mitigate the potential risk caused by changes to key components, like the Services Layer. Areas organizations commonly focus on to reduce risk are higher code quality, increased test coverage, and improved performance, fault tolerance, system redundancy, and rollback capabilities. Additionally, management should more thoroughly scrutinize proposed software changes to key components, balancing new features with the need for stability, availability, and performance.
Specific to services, organizations often look to decouple larger services, creating smaller, more focused services. Better separation of concerns increases the likelihood that potential impairments caused by code defects are isolated to a smaller subset of functionality.
Conclusion
In this brief post, we examined a potential risk to delivering reliable software, the impact of software changes. There are many risks to delivering reliable software. Once all sources of risk are identified and quantified, the overall level of risk to delivering reliable software can be assessed, and steps taken to reduce the potential impact.
Automating Task Creation in Team Foundation Server with PowerShell
Posted by Gary A. Stafford in .NET Development, PowerShell Scripting, Software Development, Team Foundation Server (TFS) Development on April 15, 2012
Administrating Team Foundation Server often involves repeating the same tasks over and over with only slight variation in the details. This is especially true if your team adheres to an Agile software development methodology. Every few weeks a new Iteration begins, which means inputting new Change Requests into Team Foundation Server along with their associated Tasks*.
Repetition equals automation equals PowerShell. If you have to repeat the same task in Windows more than a few times, consider automating it with PowerShell. Microsoft has done an outstanding job equipping PowerShell to access the majority of the functionality of their primary applications; Team Foundation Server 2010 (TFS) is no exception.
Microsoft’s latest release of Team Foundation Server Power Tools, December 2011, includes Windows PowerShell Cmdlets for Visual Studio Team System Team Foundation Server. According to Microsoft, Power Tools are a set of enhancements, tools, and command-line utilities that increase the productivity of Team Foundation Server scenarios. Power Tools’ TFS PowerShell Cmdlets give you control of common version control commands in TFS.
One gotcha with TFS Power Tools: it doesn’t install the PowerShell extras by default. Yes, I agree, it makes no sense. If you already have Power Tools installed, you must rerun the installer, select the Modify Install option, and add the PowerShell features. If you are installing Power Tools for the first time, make sure to select the Custom install option and add the PowerShell features.
*Tasks are a type of TFS Work Item. Work Item types can also include Bugs, Defects, Test Cases, Risks, QoS Requirements, or whatever your team decides to define as Work Items. There is a comprehensive explanation of Work Items in chapter 12 of Microsoft’s Patterns & Practices, available to review on Codeplex.
Automating Task Creation
Working with different teams during my career that practice SCRUM, a variation of Agile, we usually start a new Sprint (Iteration) every four to six weeks, with an average Sprint Backlog of 15-25 items. Each item in the backlog translates into an individual Change Request (CR) in TFS. Each CR has several boilerplate Tasks associated with it. Many Tasks are common to all CRs. Common Tasks often include analysis, design, coding, unit testing, and administration. Nothing is more mind-numbing as a Manager than having to input a hundred or more Tasks into TFS every few weeks, with each Task requiring an average of ten or more fields of data. In addition to the time requirement, there is the opportunity for human error.
The following PowerShell script creates a series of five different Tasks for a specific CR, which has been previously created in TFS. Once the Tasks are created, I use a separate method to link the Tasks to the CR. Every team’s development methodologies are different; every team’s use of TFS is different. Don’t get hung up on exactly which fields I’ve chosen to populate. Your processes will undoubtedly require different fields.
There are many fields in a Work Item template that can be populated with data using PowerShell. Understanding each field’s definition (name, data type, and rules for use, such as the range of input values or whether the field is required) is essential. To review the field definitions in Visual Studio 2010, select the Tools tab -> Process Editor -> Work Item Types -> Open WIT from Server. Select your Work Item Template (WIT) from the list of available templates. The template you choose will be the same template defined in the PowerShell script, with the variable $workItemType. To change the fields, you will need the necessary TFS privileges.
Avoiding Errors
When developing the script for this article, I was stuck for a number of hours with a generic error on some of the Tasks the script tried to create: ‘…Work Item is not ready to save’. I tried repeatedly debugging and altering the script to resolve the error, without luck. In the end, the error was not in the script, but in my lack of understanding of the Task Work Item Template (WIT) and its field definitions.
By trial and error, I discovered this error usually means either that the data being input into a field is invalid, based on the field’s definition, or that a required field is missing data. Both were true in my case, at different points in the development of the script. First, I failed to include the Completed Time field, which was a required field in our Task template. Second, I tried to set the Priority of the Tasks to a number between 1 and 5. Unbeknownst to me, the existing Task template only allowed values between 1 and 3. The best way to solve these types of errors is to create a new Task in TFS manually, and try inputting the same data as you tried to inject with the script. The cause of the error should quickly become clear.
The Script
For simplicity’s sake, I have presented a simple PowerShell script. The script could easily be optimized by wrapping the logic into a function with input parameters, further automating the process. I’ve placed a lot of comments in the script to explain what each part does and to make customization easier. The script explicitly declares all variables and adheres to PowerShell’s Strict Mode (Set-StrictMode -Version 2.0). I feel this makes the script easier to understand and reduces the number of runtime errors.
```powershell
#############################################################
#
# Description: Automatically creates (5) standard Task-type
#              Work Items in TFS for a given Change Request.
#
# Author: Gary A. Stafford
# Created: 04/12/2012
# Modified: 04/14/2012
#
#############################################################

# Clear Output Pane
clear

# Load Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null ) {
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Set Strict Mode - optional
Set-StrictMode -Version 2.0

# Usually changes for each Sprint - both specific to your environment
[string] $areaPath = "Development\PowerShell"
[string] $iterationPath = "PowerShell\TFS2010"

# Usually changes for each CR
[string] $changeRequestName = "Create Task Automation PowerShell Script"
[string] $assignee = "Stafford, Gary"

# Values represent units of work, often 'man-hours'
[decimal[]] $taskEstimateArray = @(2, 3, 10, 3, .5)
# Remaining Time is usually set to Estimated time at start (optional use of this array)
[decimal[]] $taskRemainingArray = @(2, 3, 10, 3, .5)
# Completed Time is usually set to zero at start (optional use of this array)
[decimal[]] $taskCompletedArray = @(0, 0, 0, 0, 0)

# Usually remains constant
# TFS Server address - specific to your environment
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
# Work Item Type - specific to your environment
[string] $workItemType = "Development\Task"
[string[]] $taskNameArray = @("Analysis", "Design", "Coding", "Unit Testing", "Resolve Tasks")
[string[]] $taskDisciplineArray = @("Analysis", "Development", "Development", "Test", $null)

# Loop and create each of the (5) Tasks in prioritized order
[int] $i = 0

Write-Host "`n`r**** Script started...`n`r"

while ($i -le 4) {
    # Concatenate name of task with CR name for Title and Description fields
    $taskTitle = $taskNameArray[$i] + ": " + $changeRequestName

    # Build string of field parameters (key/value pairs)
    [string] $fields = "Title=$($taskTitle);Description=$($taskTitle);Assigned To=$($assignee);"
    $fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Discipline=$($taskDisciplineArray[$i]);Priority=$($i+1);"
    $fields += "Estimate=$($taskEstimateArray[$i]);Remaining Work=$($taskRemainingArray[$i]);Completed Work=$($taskCompletedArray[$i])"

    # For debugging - optional console output
    Write-Host $fields

    # Create the Task (Work Item)
    tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

    $i++
}

Write-Host "`n`r**** Script completed..."
```
The script begins by setting up a series of variables. Some variables will not change once they are set, such as the path to the TFS server, unless you work with multiple TFS instances. Some variables will only change at the beginning of each Iteration (Sprint), such as the Iteration Path. Other variables will change for each CR or for each Task. These include the CR title and the Estimated, Completed, and Remaining Time. Again, your process will dictate different fields with different variables. Once you have set up the script to your requirements and run it successfully, you should see the field string for each of the five Tasks echoed to the console.
Deleting Work Items after Developing and Testing the Script
TFS Administrators know there is no Work Item delete button in TFS. So, how do you delete the Tasks you may have created during developing and testing this script? The quickest way is from the command line or from PowerShell. You can also delete Work Items programmatically in .NET. I usually use the command line, as follows:
- Open the Visual Studio 2010 Command Prompt.
- Change the directory to the location of witadmin.exe. My default location is: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE.
- Run the following command, substituting your own Task Id, or multiple Task Ids, comma-delimited without spaces, for the Tasks you want to delete:

witadmin destroywi /collection:[Your TFS Collection Path Here] /id:12930 /noprompt
Almost the same command can be run in PowerShell by including the path to witadmin.exe in the script. I found this method at the Goshoom.NET Dev Blog, where you can read more. Be warned: there is no undoing the delete command. The /noprompt switch is optional; using it speeds up the deletion of Tasks. However, leaving out /noprompt means you are given a chance to confirm each Task’s deletion. Not a bad idea when you’re busy doing a dozen other things.
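For reference, a minimal PowerShell sketch of the batch-delete approach follows. The witadmin.exe path matches the default location above; the collection URL and Task Ids are placeholders for your own environment. Remember, destroywi cannot be undone:

```powershell
# A sketch of batch-deleting test Work Items from PowerShell by calling
# witadmin.exe directly. The collection URL and Task Ids are placeholders.
# Warning: destroywi is permanent; there is no undo.
$witadmin = 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\witadmin.exe'
$collection = 'http://[YourServerNameGoesHere]/[PathToCollection]'
$taskIds = @(12930, 12931, 12932)   # hypothetical Task Ids

# Ids are passed comma-delimited, without spaces; omit /noprompt to confirm each
& $witadmin destroywi /collection:$collection ('/id:' + ($taskIds -join ',')) /noprompt
```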
Further PowerShell Automation
By creating Tasks with PowerShell, I save at least two hours each Sprint cycle and greatly reduce my chance for errors. Beyond Tasks, there are many more mundane TFS-related chores that can be automated using PowerShell. These chores include the bulk import of CRs and Tasks from Excel or other Project Management programs, the creation and distribution of Agile reports, and turnover and release management automation, to name but a few. I’ll explore some of these topics in future blog posts.