Posts Tagged Software Development

Operational Readiness Analysis

“Analysis: a detailed examination of the elements or structure of something, typically as a basis for discussion or interpretation.” — Google

Introduction

Recently, a colleague and I had the opportunity to perform an operational readiness analysis of a client’s new application platform. The platform was in late-stage development, only a few weeks from going live. As relative newcomers to the project team, our objective was to determine, from a technical perspective, the amount of work remaining to make the platform operational. We also wanted to ensure there were no gaps in the project’s scope as it related to technical operational readiness.

There were various approaches we could have taken to establish the amount of unfinished work and any existing gaps. We could have reviewed the state of the project in the client’s Agile project management system. We could have discussed the project’s state with the Product Owner, Project Manager, Team Lead, and Business Analyst. Neither of these methods, on its own, would have provided us with a complete picture.

The approach we ultimately took is what we’ve coined an Operational Readiness Analysis or, by its trendier title, the ‘State of the DevOps’. This approach is particularly effective and highly interactive. The analysis may be conducted at any stage of the Software Development Lifecycle (SDLC) to review a project’s operational state of readiness. The analysis involves five simple steps:

  1. Categorize
  2. Itemize
  3. Organize
  4. Prioritize
  5. Document

Grab Your Post-It Notes

We began, where all good ThoughtWorkers tend to begin, with a meeting room, whiteboard, dry-erase markers, and lots of brightly colored Post-It notes. ThoughtWorkers do love their Post-It notes. We gathered a small subset of the project team, including the Product Owner, Project Manager, and the team’s DevOps Engineer.

Step 1: Categorize

To start, we broadly categorized the operational requirements into buckets of work. In my experience, categorization typically takes one of two forms, either the use of role-based descriptors, like ‘Security’ and ‘Monitoring’, or the use of the ‘ilities’, such as ‘Scalability’ and ‘Maintainability’.

The ‘ilities’ are often referenced in regard to software architecture and a project’s nonfunctional requirements (NFRs). Although there are many ‘ilities’, project requirements frequently include Usability, Scalability, Maintainability, Reliability, Testability, Availability, and Serviceability. While I enjoy referring to the ‘ilities’ when drafting project requirements, I find a broad audience more readily understands the high-level category descriptions.

We collectively agreed on eight categories, in addition to a catch-all ‘Other’ category, and distributed them on the whiteboard.

whiteboard-step-1

Role-based categories tend to include the following:

  • Security and Compliance
  • Monitoring and Alerting
  • Logging
  • Continuous Integration and Delivery
  • Release Management
  • Configuration
  • Environment Management
  • Infrastructure
  • Networking
  • Data Management
  • High Availability and Fault Tolerance
  • Performance
  • Backup and Restoration
  • Support and Maintenance
  • Documentation and Training
  • Technical Debt
  • Other

Step 2: Itemize

Next, each person placed as many Post-It notes as they wanted below the categories on the whiteboard. Each Post-It represented an item someone felt needed to be addressed as part of the analysis. Items might represent requirements currently in progress, or requirements still untouched in the project’s backlog. A Post-It note might contain a new requirement, a question about the relative importance of some aspect of the project, or a concern about how an aspect of the project had been implemented.

Participants placed an average of five to ten Post-It notes on the whiteboard. The end product was a collective mind-map of the project’s operational state (the following is only an example and doesn’t represent the actual results of any real client analysis).

whiteboard-step-2

Whereas categories tend to be generic in nature, the items are usually particular to the project and its technology choices. The ‘Monitoring/Logging’ category might include individual items such as ‘Log Rotation Policy’ or ‘Splunk vs. ELK?’. The ‘Security’ category might include individual items such as ‘Need Password Rotation’ or ‘SSL Certificate Management’.

Participants often use one or two words to summarize more complex thoughts. For example, ‘Rollback’ might be stating, ‘we still don’t have a good rollback strategy for the database’. ‘Release Frequency’ may be questioning, ‘why can’t we release more frequently?’. Make sure to discuss and capture the participants’ full thoughts behind each Post-It during the Organize, Prioritize, and Document stages.

Don’t let participants give up too soon placing Post-It notes on the board. Often, one participant’s Post-It will spark additional thoughts in other team members. Don’t be afraid to suggest a few items if the group needs some initial motivation. Lastly, fear not, it’s never too late to go back and add missing items uncovered in later stages of the analysis.

Step 3: Organize

Quickly and non-judgmentally, review all items on the whiteboard. Group duplicates together. Ask for clarification on items where necessary. Move miscategorized items, if the author and team agree there is a more appropriate category.

Examine the ‘Other’ category. If there are items in the ‘Other’ category that are similar, consider adding a new category to capture them; maybe the team missed identifying a key category earlier. Conversely, there is nothing wrong with leaving a few stragglers in the ‘Other’ category; don’t overthink the exercise.

Participants should leave this stage with a general understanding of each item on the whiteboard.

Step 4: Prioritize

With all items identified, organized, and understood, discuss each item’s status. Does the item represent incomplete requirements? Does everyone agree an item’s requirements are complete? Team members may not agree on the completeness of particular items. Inconsistencies may reflect unclear requirements. More often, I find, disagreements are a result of an inconsistent or incomplete implementation of requirements across environments — Development, QA, Performance, Staging, and Production.

Why are operational requirements frequently implemented inconsistently? Often, a story is played early in the development stage, prior to all environments being available. For example, a story is played to add monitoring to Development and QA. The story is completed, tested, and closed. Afterward, the Performance environment is created. Later still, the Staging and Production environments are created. But there are no new stories to address monitoring in the new environments.

One way to avoid this common issue with operational requirements is an effective DevOps automation strategy. In this example, an effective strategy would mean all new environments would get monitoring, automatically. The story should broadly address monitoring, and not, short-sightedly, monitoring of specific environments.
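To make the idea concrete, below is a minimal PowerShell sketch of an environment-agnostic approach: monitoring is driven from a single list of environments, so a newly created environment is covered automatically. The Add-MonitoringAgent function is purely hypothetical, a stand-in for your monitoring tool’s actual installation or registration step.

# Drive monitoring setup from a single list of environments, so a
# newly created environment automatically receives monitoring.
# Add-MonitoringAgent is hypothetical; substitute your monitoring
# tool's actual installation or registration command.
[string[]] $environments = 'Development', 'QA', 'Performance', 'Staging', 'Production'

function Add-MonitoringAgent {
    param ([string] $EnvironmentName)
    # Hypothetical: call the monitoring tool's API or installer here.
    Write-Host "Configuring monitoring for the $EnvironmentName environment..."
}

foreach ($environment in $environments) {
    Add-MonitoringAgent -EnvironmentName $environment
}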

Often, only one or two project team members are responsible for ‘all the DevOps stuff’. They alone possess an accurate perspective on an item’s current status.

Gain agreement from the Product Owner, Project Manager, and key team members on the priority of incomplete requirements. Are they ‘Day One’ must-haves, or ‘Day Two’ and can be completed after the application goes live? Flag all high-priority items.

If the status of an item is unknown, such as ‘why is the QA environment always down?!’, note it as an open question and move on. If the item simply requires a quick answer to resolve, like ‘do we backup the database?’, answer it (‘the database is replicated and backed up daily’) and move on.

whiteboard-step-4

Step 5: Document

Snap a picture of the whiteboard and document its contents. We chose the client’s knowledge management system, Confluence, as the vehicle to share the whiteboard’s results, using a table with columns for Category, Item, Description, Priority, Owner, and Notes. It’s also helpful to have a column for the project’s corresponding Story ID or Defect ID. Along with the whiteboard’s contents, capture the key discussion points, questions, and concerns raised by the participants.

operational-readiness-spreadsheet
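For teams that prefer to script this step, below is a minimal PowerShell sketch that captures the same columns as objects and exports them to CSV, ready for import into a wiki page or spreadsheet. The two items shown are illustrative only and not from any real client analysis.

# Capture the whiteboard results as objects and export them to CSV.
# The items below are illustrative, not from a real client analysis.
$items = @(
    [PSCustomObject]@{
        Category = 'Monitoring and Alerting'; Item = 'Log Rotation Policy';
        Description = 'No rotation policy defined for application logs';
        Priority = 'Day One'; Owner = 'DevOps'; Notes = 'Open question: Splunk vs. ELK?'
    }
    [PSCustomObject]@{
        Category = 'Release Management'; Item = 'Rollback';
        Description = 'No database rollback strategy';
        Priority = 'Day One'; Owner = 'DBA'; Notes = 'Pending backup and restore review'
    }
)

$items | Export-Csv -Path 'operational-readiness.csv' -NoTypeInformation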

Make the analysis results available to all team members. Let the team ask questions and poke holes in the results. Adjust the results if necessary.

Next Steps

The actions you take, given the results of the analysis, are up to you. In our case, we ensured all high-priority items were called out on the team’s Agile board. Any new items were captured in the client’s Agile project management system. Finally, we ensured open questions and concerns were addressed in a timely fashion. We continue to track each item’s status weekly, throughout the launch and post-launch periods.

We anticipate conducting a follow-up analysis, thirty days after launch. The goal will be to evaluate the effectiveness of the first analysis and identify additional operational needs as the application enters a business-as-usual (BAU) application lifecycle phase.


Postscript: the ‘ilities’

The ‘ilities’, courtesy of codesqueeze.com and en.wikipedia.org

  • Scalability
    The capability of a system, network, or process to handle a growing amount of work or its potential to be enlarged to accommodate that growth.
  • Testability
    The degree to which a software artifact (a system, module, or document) supports testing in a given test context.
  • Reliability (Resilience)
    The ability of a system or component to perform its required functions under stated conditions for a specified time.
  • Usability (Performance)
    The degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.
  • Serviceability (Supportability)
    The ability of technical support personnel to install, configure, and monitor computer products, identify exceptions or faults, debug or isolate faults to root cause analysis, and provide hardware or software maintenance in pursuit of solving a problem and restoring the product into service.
  • Availability
    The proportion of time a system is in a functioning condition. Defined in the service-level agreement (SLA).
  • Maintainability
    The ease with which a product can be maintained: isolating and correcting defects or their cause, preventing unexpected breakdowns, maximizing the product’s useful life, maximizing efficiency, reliability, and safety, meeting new requirements, making future maintenance easier, and coping with a changed environment.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.


Software Delivery: Evaluating Risk within the Enterprise

As software evolves from less-complex applications to enterprise platforms, how does increasing complexity raise the potential risk of delivering unreliable software?
  

Cover Drawing

Introduction

Many vendor whitepapers, industry publications, blog posts, podcasts, and e-books extol the best practices in software development and delivery. Best practices include industry-standard concepts, such as Agile, DevOps, TDD, continuous integration, and continuous delivery. Generally, these best practices all strive to improve the process of delivering software enhancements and bug fixes to customers.

Rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. — Wikipedia

Most learning resources present one of two idealized environments: ‘applications as islands’ and the ‘utopian enterprise’. I am also often guilty of tailoring my own materials to one of these two idealized environments. Neither ‘applications as islands’ nor the ‘utopian enterprise’ accurately models the typical enterprise software environments in which many of us work.

Applications as Islands

The ‘applications as islands’ environment is one of completely isolated application stacks. These types of environments have multiple application stacks, consisting of web, mobile, and desktop components, services, data sources, utilities and scripts, messaging and reporting components, and so forth. Unrealistically, each application stack is completely isolated from the other application stacks within the same environment.

The Utopian Enterprise

The ‘utopian enterprise’ environments have multiple application stacks with multiple shared components. However, they are built, unrealistically, using consistent and modern architectural patterns and compatible technology stacks. They are designed from the ground up to be compartmentalized, scalable, and highly risk-tolerant to changes. They often avoid the challenges of monolithic legacy applications. The closest things in the real world are probably industry trendsetters, such as Facebook, Etsy, Amazon, and Twitter. We all probably wish we could evolve our own software environments into one of these Utopias.

Complexity and Risk

As an organization continues to evolve its software, it naturally increases the overall complexity, and thereby the challenge, of effectively delivering reliable and performant software. In this post, I will explore the challenges of software delivery as a software environment grows in complexity. Specifically, I will focus on how to evaluate the level of risk based on software changes made to various components within the software environment.

Sensitivity and Impact

As we examine the level of risk introduced by software changes within the environment, two aspects of risk are inescapable: sensitivity and impact. Sensitivity will be defined as the potential degree to which one component, such as an application, service, or data source, is affected by changes to other components within the same software environment. How sensitive is ‘Application A’ to changes made to other components within the same software environment, on which ‘Application A’ is directly or indirectly dependent?

Impact will be defined as the potential effect a component’s changes have on other components within the software environment. Teams tend to evaluate only the impact of changes to the immediate component or application stack. They do not sufficiently consider how those changes impact the components that are directly and indirectly dependent on them. What level of impact do changes to ‘Service B’ have on all other components within the software environment that are directly and indirectly dependent on ‘Service B’?

Notice I use the word potential. Any change has the potential to introduce risk. The level of risk varies, based on the type and volume of changes. A few simple changes should have a low potential for impact, as opposed to a high number of changes, or more complex changes. For example, changing an internal error message logged by a particular service operation should present very low risk, as opposed to rewriting that operation’s complex algorithm for calculating a customer’s creditworthiness. The potential impact of those two types of changes on dependent components varies significantly.

Measuring Risk

For both sensitivity to change and impact of change, I will use a color-coded scale to subjectively assign a level of potential risk to each component within a given software environment. The scale ranges from ‘Low’, to ‘Moderate’, to ‘High’, to ‘Very High’. Using the scale, it is possible to ‘heat map’ a software environment, based on the level of risk from changes.

Independent Aspects of Risk

Sensitivity and impact are two independent aspects of risk. Changes to one component may have a ‘Low’ potential impact on all other components within the environment, while at the same time that component may have a ‘High’ sensitivity to changes made to other components. Alternatively, a component may have a ‘Very High’ potential impact on multiple components within the environment, while having a ‘Low’ potential sensitivity to changes made to other components. Sensitivity and impact do not parallel each other.
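To make the two independent ratings concrete, below is a minimal PowerShell sketch that records sensitivity and impact separately for each component and sorts the resulting ‘heat map’ by impact. The components and ratings shown are illustrative only.

# Rate each component's sensitivity and impact independently on the
# four-level scale, then display the 'heat map', highest impact first.
# Components and ratings below are illustrative only.
$components = @(
    [PSCustomObject]@{ Component = 'Application A';  Sensitivity = 'Low';      Impact = 'Low' }
    [PSCustomObject]@{ Component = 'Services Layer'; Sensitivity = 'Moderate'; Impact = 'Very High' }
    [PSCustomObject]@{ Component = 'Data Layer';     Sensitivity = 'Low';      Impact = 'Very High' }
)

# Map the subjective scale to numbers so the table can be sorted by risk.
$rank = @{ 'Low' = 0; 'Moderate' = 1; 'High' = 2; 'Very High' = 3 }

$components | Sort-Object { $rank[$_.Impact] } -Descending | Format-Table -AutoSize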

Growing Complexity

Let’s look at how sensitivity and impact change as we increase the software environment’s complexity. In the first example, we will look at one of the two environments I described earlier, isolated applications. Applications may have their own web and mobile components, SOAP or RESTful services, data sources, utilities, scheduled tasks, and so forth. However, the applications do not depend on each other or components outside their own immediate application stack; the applications are self-contained.


When making changes in this type of environment, the real potential impact is to the overall stability, security, and performance of the individual applications themselves. As long as they remain in isolation, the applications will have no impact on each other. Therefore, the applications’ potential sensitivity to changes, and their potential impact on other applications, are both ‘Low’.

Shared Components

A slightly more complex example is a software environment in which one or more applications have a dependency on a component outside their immediate application stack. For example, a healthcare provider develops a Windows-based application to track their employees’ work schedules (Application A). In addition, they develop a web application to track patient appointments (Application B). Lastly, they offer a client-facing mobile application for patients to track personal fitness and nutrition goals (Application C). Applications B and C share a common set of services and a database for managing patient data.

Software changes made to Applications A, B, and C should have no effect on other components within the software environment. However, Applications B and C are potentially impacted by changes made to either the Services Layer or the Data Layer. The Services Layer, therefore, has a ‘High’ potential impact on the software environment. Lastly, the Data Layer should not be directly impacted by changes made to the Services Layer or the Applications. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B and C. Therefore, the Data Layer’s potential impact on other dependent components within the environment is ‘Very High’.
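A dependency graph makes this sort of reasoning mechanical. Below is a minimal PowerShell sketch that mirrors the healthcare-provider example above; the Get-Dependents function is my own illustration, not part of any framework. Given the direct dependencies, it walks the graph to list every component directly or indirectly affected by a change.

# Direct dependencies: component -> the components it depends on.
$dependsOn = @{
    'Application B'  = @('Services Layer')
    'Application C'  = @('Services Layer')
    'Services Layer' = @('Data Layer')
}

function Get-Dependents {
    param ([string] $Component)
    # Components that depend directly on $Component...
    $direct = @($dependsOn.Keys | Where-Object { $dependsOn[$_] -contains $Component })
    # ...plus, recursively, the components that depend on those.
    $all = @($direct)
    foreach ($d in $direct) { $all += Get-Dependents -Component $d }
    $all | Select-Object -Unique
}

# Everything potentially affected by a change to the Data Layer:
# Services Layer, Application B, Application C
Get-Dependents -Component 'Data Layer'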

Multiple Shared Components

An even more complex example is a software environment in which multiple applications have one or more dependencies on multiple components outside their immediate application stack (many-to-many).

Take, for example, a small financial institution. They have a ‘legacy’ COBOL-based application for managing their commercial mortgage business (Application A). They also have an older J2EE-based application, acquired through a business merger, for managing their commercial banking relationships (Application B). Next, they have a relatively new Java EE-based investment banking application to manage their retail customers (Application C). Lastly, they have a web-based, client-facing application for secure, online retail banking (Application D).

Since both Application A and B serve commercial clients, it is necessary to send financial data between the two application stacks. Since both applications are built on different, older technologies, the development team built a Custom Messaging Middleware component to connect the two applications. The Custom Messaging Middleware component receives, transforms, and delivers messages between the two applications.

Changes made to Applications C and D should have no impact on other components within the software environment. However, changes made to either Application A or B have the potential to indirectly affect the ability to successfully communicate with the other application, via the Custom Messaging Middleware. Changes to the Custom Messaging Middleware have the potential to affect both Applications A and B. The Custom Messaging Middleware has a ‘Moderate’ potential sensitivity to change, versus ‘Low’, because one could argue that changes to either Application A or Application B’s messaging format could impair the Middleware’s ability to properly process that application’s messages and successfully deliver them to the opposite application.

Applications B, C, and D have a direct dependency on the Services Layer, and indirectly on the Data Layer. Therefore, the potential impact of changes to the Services Layer on other components is arguably higher than in the last example. The Services Layer’s potential impact on other components is ‘Very High’.

Since Application B has a direct dependency on both the Custom Messaging Middleware and the Services Layer, it has a higher sensitivity to changes than the other three applications. Application B’s potential sensitivity to changes made by other components is ‘Very High’.

Changes made to the Services Layer or the Applications will not affect the Data Layer. However, the Data Layer has the potential to directly affect the Services Layer, and indirectly affect Applications B, C, and D. Therefore, the Data Layer’s potential impact on the software environment is ‘Very High’.

Small Enterprise

The last example of increasing complexity is an environment in which even more applications are dependent on even more components. Additionally, there may be different types of components, such as a common UI and third-party APIs, which only increase the complexity of the dependencies. Although this example is nowhere near as complex as many enterprise software environments, it does begin to reflect their intricate, interdependent structure.

Let’s use an example of a large web-based retailer. The retailer has a standalone ERM application for managing their wholesale purchasing and product distribution (Application A). Next, they have their primary client-facing storefront (Application B). They also have a separate application to handle customer accounts (Application C). Lastly, they have an application that manages their online media retail business and media storage (Application D).

In addition to the Common Services Layer, Common Data Layer, and Custom Messaging Middleware, as seen in earlier examples, the retailer has two other components in their environment, a Common Web User Interface (UI) and a Web API. The Web UI provides the customer with a seamless branded experience, no matter which application they use – Application B, C, or D. The customer enters the Common Web UI and has all three application’s features seamlessly available to them.

The retailer also exposes a RESTful Web API for its marketing affiliates. Third parties can develop a variety of applications that drive sales to the retailer, in return for a sales commission.

In the earlier examples, individual applications had separate points of entry. However, in this example, the Common Web UI provides a single point of entry for users of Applications B, C, and D. Having a single point of entry also introduces a single point of failure for all three applications. Thus, the potential risk to the retailer and their customers is much greater. The Common Web UI’s potential impact on other components is ‘Very High’.

A single point of entry also introduces a single point of failure.

The potential sensitivity of the Common Web UI to changes comes from its direct dependency on the Services Layer, and indirectly, on the Data Layer. Additionally, one could argue that since the Common Web UI presents the three applications, it is also sensitive to changes made by those applications. If one of those applications becomes impaired due to a bad change, the impairment would affect the Common Web UI’s functionality. The Common Web UI’s potential sensitivity to change is ‘High’.

The Web API is similar to the Common Web UI, in terms of potential sensitivity and impact. The potential impact of changes to the Web API is ‘Very High’, since a defect there could result in the potential impairment of the retailer’s affiliate applications. The potential sensitivity of the Web API to changes comes from its direct dependency on the Services Layer, and indirectly on the Data Layer. The Web API’s potential sensitivity to change is ‘High’. There is very little chance of potential impact to the Web API from the retailer’s affiliate applications.

Impact of Key Components

Lastly, as systems grow in complexity, certain components often become so key, they have the potential to impact the entire environment, a true single point of failure. Below, note the potential impact of changes to the Common Services Layer on all other components. As the software environment has grown in complexity, the Common Services Layer sits at the heart of the system. The Services Layer has multiple components directly dependent on it (i.e. Application C), as well as other components indirectly dependent on it (i.e. Third-Party Applications). It is also the only point of access to and from the Common Data Layer.

There are steps organizations can take to mitigate the potential risk caused by changes to key components, like the Services Layer. Areas organizations commonly focus on to reduce risk are higher code quality, increased test coverage, and improved performance, fault tolerance, system redundancy, and rollback capabilities. Additionally, management should more thoroughly scrutinize proposed software changes to key components, balancing new features with the need for stability, availability, and performance.

Management must balance the need for new features with need for stability, availability, and performance.

Specific to services, organizations often look to decouple larger services, creating smaller, more focused services. Better separation of concerns increases the likelihood that potential impairments caused by code defects are isolated to a smaller subset of functionality.

Conclusion

In this brief post, we examined a potential risk to delivering reliable software, the impact of software changes. There are many risks to delivering reliable software. Once all sources of risk are identified and quantified, the overall level of risk to delivering reliable software can be assessed, and steps taken to reduce the potential impact.


Using PowerShell to Generate TFS Changed File List for Build Artifact Delivery

Delivering Artifacts for Deployment

In many enterprise-software development environments, it is common practice to deliver release-ready code to an Operations or Release team for deployment, as opposed to deploying the code directly. A developer ‘kicks off’ a build of the project using a build automation system like Hudson, Jenkins, CruiseControl, TeamCity, or Bamboo. The result is a set of build artifacts that are delivered and deployed as part of the release cycle. Build artifacts are logical collections of deployable code and other files, which form the application. Artifacts are often segregated by type, such as database, web code, services, configuration files, and so forth. Each type of artifact may require a different deployment method.

There are two approaches to delivering artifacts for deployment. Some organizations deliver all the artifacts from each build for deployment. Alternately, others follow a partial delivery and release model, delivering only the artifacts that contain changes since the last delivery. The entire application is not re-deployed, only what changed. This is considered by many to be a quicker and safer method of software release.

The challenge of partial delivery is knowing precisely what changed since the last delivery. Almost all source control systems keep a history of changes (‘changesets’). Based on the time of the last build, a developer can check the history and decide which artifacts to deliver. If you have daily releases, the changes between deliveries are likely few. However, if your development cycle spans a few weeks, or you have multiple developers working on the same project, there will likely be many changesets to examine. Figuring out which artifacts to deliver is tedious and error-prone. Missing one small change out of hundreds can jeopardize a whole release. Having to perform this laborious task every few weeks myself, I was eager to automate the process!

Microsoft Team Foundation PowerShell Snap-In

The solution, of course, is PowerShell and the Microsoft Team Foundation PowerShell Snap-In. Using these two tools, I was able to write a very simple script that does the work for me. If you are unfamiliar with the Team Foundation Server (TFS) snap-in, review my earlier post, Automating Task Creation in Team Foundation Server with PowerShell. That post discusses the snap-in and explains how to install it on your Windows computer.

The PowerShell script begins with a series of variables. The first two are based on your specific TFS environment. Variables include:

  1. Team Project Collection path;
  2. Source location within the collection to search for changes;
  3. Date and time range to search for changes;
  4. Location of text file that will contain a list of changed files;
  5. Option to open the text file when the script is complete.

Given the Team Project Collection path, source location, and date range, the script returns a sorted list of all files that changed. Making sure the list is distinct is important. A file may change many times over the course of a development cycle; you only want to know that the file changed. How many times it changed, or when, is irrelevant. The file list is saved to a text file, a manifest, for review. The values of the script’s variables are also included in the manifest.

Excluding Certain Changes

Testing the initial script, I found it returned too much information. There were three main reasons:

  1. Unrelated Changes – Not every file that changes within the selected location is directly associated with the project being deployed. There may be multiple, related projects in that location’s subdirectories (child nodes).
  2. Secondary Project Files – Not every file that changes is deployed. For example, build definitions files, database publishing profiles, and manual test documents, are important parts of any project, but are not directly part of the applications within the project being deployed. These are often files in the project used by the build system or required by TFS.
  3. Certain Change Types – Changes in TFS include several types (Microsoft.TeamFoundation.VersionControl.Client.ChangeType) that you may not want to include on the list. For example, you may not care about deleted or renamed files. See the post script about how to get a list of all ChangeTypes using PowerShell.

To solve the problem of too much information, we can filter the results of the Get-TfsItemHistory command, using the Where-Object and Select-Object commands within the Get-TfsItemHistory pipeline. Using the -notlike comparison operator, which accepts wildcards, in the Where-Object script blocks, we exclude certain ChangeTypes, exclude files by name and size, and exclude groups of files based on file path. You will obviously need to change the example’s exclusions to meet your own project’s needs.

Below is the PowerShell script, along with sample contents of the file change manifest text file, based on an earlier post’s SSDT database Solution:

###############################################################
#                                                             
# Search for all unique file changes in TFS 
# for a given date/time range and collection location. 
# Write results to a manifest file.                                              
#                                                             
# Author:  Gary A. Stafford
# Created: 2012-04-18
# Revised: 2012-08-11                          
#                                                             
###############################################################

# Clear Output Pane
clear

# Enforce coding rules
Set-StrictMode -version 2.0

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Variables - CHECK EACH TIME
[string] $tfsCollectionPath = "http://tfs2010/tfsCollection"
[string] $locationToSearch = "$/Development/AdventureWorks/"
[string] $outputFile = "c:\ChangesToTFS.txt"
[string] $dateRange = "D2012-07-08 00:00:00Z~"
[bool]   $openOutputFile = $true # Accepts $false or $true

# For a date/time range: 'D2012-08-06 00:00:00Z~D2012-08-09 23:59:59Z'
# For everything including and after a date/time: 'D2012-07-21 00:00:00Z~'

[Microsoft.TeamFoundation.Client.TfsTeamProjectCollection] $tfs = get-tfsserver $tfsCollectionPath

# Add informational header to file manifest
[string] $outputHeader =
    "TFS Collection: " + $tfsCollectionPath + "`r`n" + 
    "Source Location: " + $locationToSearch + "`r`n" + 
    "Date Range: " + $dateRange + "`r`n" +
    "Created: " + (Get-Date).ToString() + "`r`n" +
    "======================================================================"

$outputHeader | Out-File $outputFile

Get-TfsItemHistory $locationToSearch -Server $tfs -Version $dateRange `
-Recurse -IncludeItems | 

Select-Object -Expand "Changes" | 
    Where-Object { $_.ChangeType -notlike '*Delete*'} | 
    Where-Object { $_.ChangeType -notlike '*Rename*'} | 

Select-Object -Expand "Item" | 
    Where-Object { $_.ContentLength -gt 0} | 

    Where-Object { $_.ServerItem -notlike '*/sql/*' } | 
    Where-Object { $_.ServerItem -notlike '*/documentation/*' } | 
    Where-Object { $_.ServerItem -notlike '*/buildtargets/*' } | 

    Where-Object { $_.ServerItem -notlike 'build.xml'} | 
    Where-Object { $_.ServerItem -notlike '*.proj'} | 
    Where-Object { $_.ServerItem -notlike '*.publish.xml'} | 

Select -Unique ServerItem | Sort ServerItem | 
Format-Table -Property * -AutoSize | Out-String -Width 4096 | 
Out-File $outputFile -append

Write-Host `n`r**** Script complete and file written ****

If ($openOutputFile) { Invoke-Item $outputFile }

Contents of the file change manifest text file, based on my previous post’s SSDT database Visual Studio Solution:

TFS Collection: http://tfs2010/tfsCollection
Source Location: $/Development/AdventureWorks2008/
Date Range: D2012-08-02 00:00:00Z~
Created: 8/10/2012 10:28:46 AM
======================================================================

ServerItem
----------
$/Development/AdventureWorks2008/AdventureWorks2008.sln
$/Development/AdventureWorks2008/Development/Development.sln
$/Development/AdventureWorks2008/Development/Development.sqlproj
$/Development/AdventureWorks2008/Development/Schema Objects/Server Level Objects/Security/Logins/aw_dev.login.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/AdventureWorksSSDT.sqlproj
$/Development/AdventureWorks2008/AdventureWorksSSDT/dbo/Stored Procedures/uspGetBillOfMaterials.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/dbo/Stored Procedures/uspLogError.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/HumanResources/Tables/EmployeePayHistory.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Purchasing/Tables/ShipMethod.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Purchasing/Views/vVendorWithContacts.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Security/aw_dev.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Security/jenkins.sql

Conclusion

This script saves considerable time, especially for longer release cycles, and eliminates potential errors from missed changes. To take this script a step further, I would like it to determine which artifacts to deliver based on the files that changed, rather than leaving that to the developer. As a further step, I would also have it generate an artifact manifest to pass to the build. The build would use the manifest to deliver those artifacts to the release team. That would make it a true end-to-end solution. Challenge accepted…

Post Script, PowerShell Enumeration

Suppose you couldn’t find a resource on the web that listed all the ChangeType values. How would you use PowerShell to get a list of all the enumerated ChangeType values (Microsoft.TeamFoundation.VersionControl.Client.ChangeType)? It only takes one line of code, once the TFS snap-in and assembly are loaded.

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

[Void][Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.VersionControl.Client")
[Enum]::GetNames( [Microsoft.TeamFoundation.VersionControl.Client.ChangeType] )
