Archive for category PowerShell Scripting
Managing Windows Servers with Chef, Book Review
Posted by Gary A. Stafford in .NET Development, Build Automation, DevOps, Enterprise Software Development, PowerShell Scripting, Software Development on August 11, 2014
Harness the power of Chef to automate management of Windows-based systems using hands-on examples.
Recently, I had the opportunity to read ‘Managing Windows Servers with Chef’, authored by John Ewart and published in May 2014 by Packt Publishing. At a svelte 110 pages in paperback form, ‘Managing Windows Servers with Chef’ is a quick read, packed with concise information, relevant examples, and excellent code samples. Available on Packt Publishing’s website for a mere $11.90 for the ebook, it is a worthwhile investment for anyone considering Chef Software’s Chef product for automating their Windows-based infrastructure.
As an IT professional, I use Chef for both Windows- and Linux-based IT automation on a regular basis. In my experience, there is a plethora of information on the Internet about properly implementing and scaling Chef; there is seldom a topic I can’t find the answer to online. However, it has also been my experience that the information is often Linux-centric. That is one reason I really appreciated Ewart’s book, which concentrates almost exclusively on Windows-based implementations of Chef.
IT professionals just getting started with Chef, or migrating from Puppet, will find ‘Managing Windows Servers with Chef’ invaluable. Ewart does a good job building the reader’s understanding of the Chef ecosystem before explaining its application to a Windows-based environment. If you are considering Chef versus Puppet Labs’ Puppet for Windows-based IT automation, reading this book will give you a solid overview of Chef.
Seasoned users of Chef will also find ‘Managing Windows Servers with Chef’ useful. Professionals quickly master Chef’s principles and develop the means to automate their specific tasks with Chef. But inevitably, there comes a day when they must automate something new. That is where the book can serve as a handy reference.
Of all the book’s topics, I especially found value in Chapter 5 (Managing Cloud Services with Chef) and Chapter 6 (Going Beyond the Basics – Testing Recipes). Even large enterprise-scale corporations are moving infrastructure to cloud providers. Ewart demonstrates Chef’s Windows-based integration with Microsoft’s Azure, Amazon’s EC2, and Rackspace’s Cloud offerings. Also, Ewart’s section on testing is a reminder to all of us of the importance of unit testing. I admit I more often practice TAD (‘Testing After Development’) than TDD (Test-Driven Development), LOL. Ewart introduces both RSpec and ChefSpec for testing Chef recipes.
I recommend ‘Managing Windows Servers with Chef’ for anyone considering Chef, or who is seeking a good introductory guide to getting started with Chef for Windows-based systems.
Cloud-based Continuous Integration and Deployment for .NET Development
Posted by Gary A. Stafford in .NET Development, Build Automation, Client-Side Development, DevOps, Enterprise Software Development, PowerShell Scripting, Software Development on May 25, 2014
Introduction
Whether you are part of a large enterprise development environment, or a member of a small start-up, you are likely working with remote team members. You may be remote, yourself. Developers, testers, web designers, and other team members, commonly work remotely on software projects. Distributed teams, comprised of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.
If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premise tooling via VPN tunnels and Remote Desktop Connections.
There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools, and will scale to fit most development teams.
Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully-integrated Git support and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third party tools available to manage Git and GitHub repositories on Windows. These include Git Bash (my favorite), Git GUI, and GitHub for Windows.
GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except that AppVeyor is provided only as SaaS. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor, including CloudBees (hosted Jenkins) and Travis CI. While CloudBees and Travis CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.
Like GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support. AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.
Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and the other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as with other cloud-based hosting providers, such as AWS or Rackspace, which also support the .NET stack.
Sample Solution
The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities to communicate with a RESTful WCF service, cross-domains, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.
The updated VS Solution contains the following four Projects:
- Restaurant – C# Class Library
- RestaurantUnitTests – Unit Test Project
- RestaurantWcfService – C# WCF Service Application
- RestaurantDemoSite – Web Site (JS/HTML5)
The Visual Studio Solution Explorer tab, here, shows all projects contained in the Solution, and the primary files and directories they contain.
As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations, one that returns the menu (‘GetCurrentMenu’), and the other that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.
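Once the services are running, a quick smoke test from PowerShell is an easy way to verify the endpoints. The sketch below is my own, not part of the Solution; it assumes the WCF service is listening on port 9250, as the install scripts configure, and the operation URI shown is illustrative – the actual REST URI depends on the service’s UriTemplate attributes.

# Hypothetical smoke test of the deployed WCF REST service.
# Assumes the service is hosted at http://localhost:9250/RestaurantService.svc;
# adjust the operation URI to match the service's actual UriTemplate.
$serviceBase = "http://localhost:9250/RestaurantService.svc"

# Retrieve the current menu as JSON and display the returned objects
$menu = Invoke-RestMethod -Uri "$serviceBase/GetCurrentMenu" -Method Get
$menu | Format-Table -AutoSize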
Installing and Configuring the Solution
The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the script. For reference, the contents of several of the supplied scripts are shown below; use the scripts supplied in the repository.
# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
    "{YOUR HOSTNAME HERE}", "User")
[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
    "{YOUR USERNAME HERE}", "User")
[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
    "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9250 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9255 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}
Cloud-Based Continuous Integration and Delivery
Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. In order for AppVeyor to work with GitHub, we use a webhook. Webhooks are widely used to communicate events between systems over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.‘ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the GitHub repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, and building, testing, and deploying the Projects. Below is an example of a webhook for AppVeyor, in GitHub.
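AppVeyor normally creates this webhook for you when you authorize it against your GitHub account, but a webhook can also be registered programmatically through the GitHub API. The following is a minimal, hypothetical sketch; the payload URL is a placeholder for the value AppVeyor supplies, and a GitHub personal access token with repo scope is assumed to be in the GITHUB_TOKEN environment variable.

# Hypothetical example: register a push webhook on a GitHub repository.
# '{owner}/{repo}' and the payload URL are placeholders; AppVeyor supplies the real URL.
$body = @{
    name   = "web"
    active = $true
    events = @("push")
    config = @{
        url          = "https://ci.appveyor.com/api/github/webhook?id={YOUR WEBHOOK ID HERE}"
        content_type = "json"
    }
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri "https://api.github.com/repos/{owner}/{repo}/hooks" `
    -Method Post -Body $body `
    -Headers @{ Authorization = "token $env:GITHUB_TOKEN" }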
Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is committed to GitHub, AppVeyor will clone and build the Solution, followed by running the set of unit tests shown below. The project’s unit tests test the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:
AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console calls the unit test Project’s assembly (‘restaurantunittests.dll’). As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.
VSTest.Console has several command line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which pre-defined test sets to run. Test sets need to be set up in the Solution, in advance.
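For reference, a typical invocation looks something like the sketch below; the assembly path and the ‘Unit’ test category are assumptions on my part, and the logger and filter switches are optional.

# Run the Solution's unit tests from the command line (or an AppVeyor build step).
# The assembly path and the 'Unit' test category are illustrative assumptions.
vstest.console.exe RestaurantUnitTests\bin\Release\restaurantunittests.dll `
    /Logger:trx `
    /TestCaseFilter:"TestCategory=Unit"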
Configuring Azure VM
Before we publish the Solution from AppVeyor to Azure, we need to configure the VM. Again, we can use PowerShell to script most of the configuration. Most of the scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy user account, and assigning it in IIS. For reference, the contents of several of the supplied scripts are shown below; use the scripts supplied in the repository.
# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
    New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
    "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9250 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create demo website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
    New-Website -Name $newSite -Port 9255 -PhysicalPath `
        c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy

# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)
$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600

$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")

[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
    $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
    $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)
Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure. Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments. These are the credentials in the username and password environment variables we created.
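For completeness, enabling remote, non-administrator deployments on the server side involves installing the IIS Management Service, switching on remote management, and starting WMSVC. A minimal sketch, run on the Azure VM (this is standard IIS configuration, not one of the post’s supplied scripts):

# Install the IIS Management Service and allow remote management connections
Import-Module ServerManager
Add-WindowsFeature Web-Mgmt-Service

# Enable remote management and start WMSVC automatically
Set-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\WebManagement\Server `
    -Name EnableRemoteManagement -Value 1
Set-Service -Name WMSVC -StartupType Automatic
Start-Service -Name WMSVC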
To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.
The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.
Each profile in the above Profile drop-down represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The .pubxml profile files can be found in the Project Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).
As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note that since we are deploying locally, the profile is configured to open the main page of the website in a browser after deployment; a helpful time-saver during development.
<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project. You can customize the behavior of this process
by editing this MSBuild file. In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
    <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
    <ExcludeApp_Data>True</ExcludeApp_Data>
    <publishUrl>C:\MenuWcfRestService</publishUrl>
    <DeleteExistingFiles>True</DeleteExistingFiles>
  </PropertyGroup>
</Project>
A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Web Publish wizard does not allow this change. To do this, we must edit the ‘AzureVM.pubxml’ file for both the website and the WCF service. We will replace the hostname of the server where we will deploy the projects with a variable (i.e. AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone accessing the Solution’s source code won’t be able to obtain any sensitive information that would give them the ability to compromise your site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.
<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project. You can customize the behavior of this process
by editing this MSBuild file. In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish />
    <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
    <ExcludeApp_Data>True</ExcludeApp_Data>
    <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
    <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
    <RemoteSitePhysicalPath />
    <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
    <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    <EnableMSDeployBackup>True</EnableMSDeployBackup>
    <UserName>$(AZURE_VM_USERNAME)</UserName>
    <_SavePWD>False</_SavePWD>
    <_DestinationType>AzureVirtualMachine</_DestinationType>
  </PropertyGroup>
</Project>
The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer allow us to deploy using those profiles. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values will no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…
We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command line input, as well as PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.
To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands used by AppVeyor to build our Solution, and then deploy our Solution to Azure. The first two MSBuild commands build the WCF service and the website. The second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution; I found these commands to be the most succinct. I have split the build and the deploy functions so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.
# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
    /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
    /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
    /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
    /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
    /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
    /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
    /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
    /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."
Below is the output from AppVeyor showing the WCF Service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process. At this point, the Solution was already built and the automated unit tests completed, successfully.
Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.
Links
- Introduction to Web Deploy (see ‘How does it work?’ diagram on non-admin deployments)
- ASP.NET Web Deployment using Visual Studio: Command Line Deployment
- IIS: Enable IIS remote management
- Sayed Ibrahim Hashimi’s Blog
- Continuous Integration and Continuous Delivery
- How to: Edit Deployment Settings in Publish Profile (.pubxml) Files
Windows PowerShell 4.0 for .NET Developers, Book Review
Posted by Gary A. Stafford in .NET Development, DevOps, PowerShell Scripting, Team Foundation Server (TFS) Development on May 2, 2014
A brief review of ‘Windows PowerShell 4.0 for .NET Developers’, a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.
Introduction
Recently, I had the opportunity to review ‘Windows PowerShell 4.0 for .NET Developers‘, published by Packt Publishing. According to its author, Sherif Talaat, the book is ‘a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.‘ Working in a large and complex software development organization, technologies such as PowerShell, which enable increased speed and automation, are essential to our success. Having used PowerShell on a regular basis as a .NET developer for the past few years, I was excited to see what Sherif’s newest book offered.
Requirements
The book recommends the following minimal software configuration to work through the code samples:
- Windows Server 2012 R2 (includes PowerShell 4.0 and .NET 4.5)
- SQL Server 2012
- Visual Studio 2012/2013
- Visual Studio Team Foundation Server (TFS) 2012/2013
To test the book’s samples, I provisioned a fresh VM, and using my MSDN subscription, installed the required Windows Server, SQL Server, and Team Foundation Server. I worked directly on the VM, as well as remotely from a Windows 7 Enterprise-based development machine with Visual Studio 2012 installed. The code samples worked fairly well; I found only a few minor problems. No errata had been published for the book as of the time of this review.
A key aspect many authors do not address is the complexity of using PowerShell in a corporate environment. Working individually or on a small network, developers don’t always experience the added burden of restrictive network security, LDAP, proxy servers, proxy authentication, XML gateways, firewalls, and centralized computer administration. Any code that requires access to remote servers and systems often requires additional coding to work within a corporate environment. It can be frustrating to debug and extend simple examples to work successfully within an enterprise setting.
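For example, something as simple as a web request from PowerShell may fail behind an authenticating corporate proxy until proxy settings are supplied explicitly. A minimal sketch (the proxy address is hypothetical):

# Hypothetical example: route PowerShell web requests through an authenticating corporate proxy
$proxyUri = "http://proxy.example.com:8080"

# Option 1: supply proxy settings per request
Invoke-WebRequest -Uri "https://www.microsoft.com" `
    -Proxy $proxyUri -ProxyUseDefaultCredentials

# Option 2: set a default proxy for the session (affects .NET-based web requests)
$proxy = New-Object System.Net.WebProxy($proxyUri)
$proxy.UseDefaultCredentials = $true
[System.Net.WebRequest]::DefaultWebProxy = $proxy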
Contents
Windows PowerShell 4.0 for .NET Developers, at 115 pages in length, is divided into five chapters:
- Chapter 1: Getting Started with Windows PowerShell
- Chapter 2: Unleashing Your Development Skills with PowerShell
- Chapter 3: PowerShell for Your Daily Administration Tasks
- Chapter 4: PowerShell and Web Technologies
- Chapter 5: PowerShell and Team Foundation Server
Chapter 1 provides a brief introduction to PowerShell. At a scant 30 pages, it is not a way for a beginner to learn PowerShell, and I would not recommend the book for that purpose. For learning PowerShell, I recommend Instant Windows PowerShell, by Vinith Menon, also published by Packt Publishing. Alternatively, I recommend a few books by Manning Publications, including Learn Windows PowerShell in a Month of Lunches, Second Edition.
Chapter 2 discusses PowerShell in relationship to several key Microsoft technologies, including Windows Management Instrumentation (WMI), Common Information Model (CIM), Component Object Model (COM), and Extensible Markup Language (XML). As a .NET developer, it’s almost impossible not to have worked with one, or all, of these technologies. Chapter 2 also discusses how PowerShell works with .NET objects and can extend the .NET Framework. The chapter includes an easy-to-follow example of creating, importing, and calling a PowerShell binary module (a compiled .NET class library), using Visual Studio.
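As a trivial illustration of the kind of interoperability the chapter covers, PowerShell can instantiate .NET types, call static framework methods, and query WMI/CIM directly from the prompt:

# Instantiate a .NET object and call instance members
$sb = New-Object System.Text.StringBuilder
[void]$sb.Append("PowerShell ")
[void]$sb.Append("meets .NET")
$sb.ToString()

# Call a static .NET Framework method
[System.IO.Path]::GetTempPath()

# Query WMI/CIM, another of the chapter's topics
Get-CimInstance -ClassName Win32_OperatingSystem |
    Select-Object Caption, Version, LastBootUpTime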
Chapter 3 explores areas where a .NET developer can start leveraging PowerShell for daily administrative tasks. In particular, I found the sections on PowerShell Remoting and administering IIS and SQL Server particularly useful. Being able to easily connect to remote web, application, and database servers from the command line (or, PowerShell prompt) and do basic system administration is a huge time saver in an agile development environment.
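A short sketch of the sort of remote administration the chapter describes, assuming PowerShell Remoting is enabled on the target server and the WebAdministration module is available (the server and application pool names are placeholders):

# Hypothetical example: basic remote IIS administration over PowerShell Remoting
Invoke-Command -ComputerName "WEBSERVER01" -ScriptBlock {
    Import-Module WebAdministration

    # List the sites on the remote web server, then recycle an application pool
    Get-Website | Select-Object Name, State, PhysicalPath
    Restart-WebAppPool -Name "DefaultAppPool"
}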
Chapter 4 focuses on how PowerShell interfaces with SOAP- and REST-based services, web requests, and JSON. Windows Communication Foundation (WCF) based service-oriented application development has been a trend for the last few years. Being able to manage, test, and monitor SOAP and RESTful services and HTTP requests/responses is important to .NET developers. PowerShell can often be quicker and easier than writing and compiling service utilities in Visual Studio, or using proprietary third-party applications.
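For instance, calling a RESTful endpoint and working with its JSON response takes only a couple of lines; the public GitHub rate-limit endpoint is used here purely as a stand-in for whatever service you need to test:

# Call a REST endpoint; Invoke-RestMethod deserializes the JSON response into objects
$response = Invoke-RestMethod -Uri "https://api.github.com/rate_limit" -Method Get
$response.rate.limit

# Round-trip the same data through JSON explicitly
$json = $response | ConvertTo-Json -Depth 5
($json | ConvertFrom-Json).rate.limit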
Chapter 5 is dedicated to Visual Studio Team Foundation Server (TFS), Microsoft’s end-to-end Application Lifecycle Management (ALM) solution. Chapter 5 details the installation and use of TFS Power Tools and the TFS PowerShell snap-in. Having held the roles of lead developer and Scrum Master, I have personally found some of the best uses for PowerShell in automating various aspects of TFS. Managing TFS often requires repetitive tasks, which is exactly where PowerShell excels. You will need to explore additional resources beyond the scope of this book to really start automating TFS with PowerShell.
Conclusion
Overall, I enjoyed the book and felt it was well worth the time to explore. I applaud Sherif for targeting a PowerShell book specifically to developers. Due to its short length, the book did leave me wanting more information on a few subjects that were barely skimmed. I also found myself expecting guidance on a few subjects the book did not touch upon, such as PowerShell for cloud-based development (Azure), test automation, and build and deployment automation. For more information on some of those subjects, I recommend Sherif’s other book, also published by Packt Publishing, PowerShell 3.0 Advanced Administration Handbook.
Instant Oracle Database and PowerShell How-to Book
Posted by Gary A. Stafford in Oracle Database Development, PowerShell Scripting on March 30, 2013
Recently, I finished reading Geoffrey Hudik‘s Instant Oracle Database and PowerShell How-to ebook, part of Packt Publishing‘s Instant book series. Packt’s Instant book series promises short, fast, and focused information; Hudik’s book delivers. Its eighty pages deliver hundreds of pages worth of the author’s knowledge in a highly-condensed format, perfect for today’s multi-tasking technical professionals.
Hudik’s book is ideal for anyone experienced with the Oracle Database 11g platform, and interested in leveraging Microsoft’s PowerShell scripting technology to automate their day-to-day database tasks. Even a seasoned developer, experienced in both technologies, will gain from the author’s insight on overcoming several PowerShell and .NET framework related integration intricacies.
As a busy developer, I was able to immediately start implementing many of the book’s recipes to improve my productivity. I especially enjoyed the way the book builds upon previous lessons, resulting in a useful collection of foundational PowerShell scripts and modules. Building on top of these saves time when automating a new task.
Getting started with the book’s examples required a few free downloads from Oracle and Microsoft. Starting with a Windows 7 Enterprise laptop, I downloaded and installed Oracle Database 11g Express Edition (Oracle Database XE) Release 2,
Oracle Developer Tools for Visual Studio, and Oracle SQL Developer IDE. I also made sure my laptop was up-to-date with the latest Visual Studio 2012, .NET framework, and PowerShell updates.
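As a hedged illustration of the style of recipe the book builds toward (not an example from the book itself), the sketch below connects to the local XE instance through ODP.NET and lists the current schema’s tables; the credentials and assembly-loading approach are placeholders for whatever your environment provides.

# Hypothetical example: query Oracle Database XE from PowerShell via ODP.NET.
# Assumes ODP.NET (Oracle.DataAccess) is installed and registered in the GAC.
[void][System.Reflection.Assembly]::LoadWithPartialName("Oracle.DataAccess")

$connectionString = "User Id=hr;Password={YOUR PASSWORD HERE};Data Source=localhost:1521/XE"
$connection = New-Object Oracle.DataAccess.Client.OracleConnection($connectionString)

try {
    $connection.Open()
    $command = $connection.CreateCommand()
    $command.CommandText = "SELECT table_name FROM user_tables ORDER BY table_name"
    $reader = $command.ExecuteReader()
    while ($reader.Read()) { $reader.GetString(0) }
}
finally {
    $connection.Dispose()
}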
If you are new to administering Oracle databases or using Oracle SQL Developer IDE, Oracle has some excellent interactive tutorials online to help you get started.
Quick and Easy File Backup Using PowerShell and DotNetZip
Posted by Gary A. Stafford in PowerShell Scripting, Software Development on September 8, 2012
Backup your files easily, using PowerShell and DotNetZip, from the command line.
Backing Up
There is no shortage of file backup utilities, so there is no excuse not to back up your files. However, over the course of a typical workday, many of us create and edit files on our own computer, as well as files on multiple networked computers. Although these networked computers usually have their own backup processes, restoring lost files from them often requires contacting Support, filling out paperwork, and waiting, and waiting, and…
As a result, I prefer to create my own backup of important files I am working with on networked computers, using a simple PowerShell script. I call the PowerShell script from the command line on an ad-hoc basis, and nightly using a scheduled task. When creating the backup, to save space, the script compresses the files using the free DotNetZip Library, available on CodePlex. This is a popular library used by .NET and PowerShell developers. There are many code examples on the Internet. The script also appends the backup file’s name with a descriptive suffix and timestamp, making the backup file unique.
Using the Script
The script’s main function, Create-ZipBackup, takes three parameters:
- $target – Target directory or file to be backed up (i.e. ‘\\RemoteServer\ShareName\MyProject’)
- $destination – Destination directory for the backup file (i.e. ‘c:\My Backups’)
- $fileNameSuffix – File suffix used to name the backup file (i.e. ‘ProjectPlan’ – ‘ProjectPlan.BU.20120908_070913.zip’)
Here is an example of calling the script from the command line, using the above example parameters. To save time when calling the script multiple times, I’ve placed the path to the script into a temporary variable:
SET script=C:\Users\gstaffor\Documents\PowerShell\BackupAndZip.ps1

powershell -command "& { . %script%; Create-ZipBackup -target '\\RemoteServer\ShareName\MyProject' -destination 'c:\My Backups' -fileNameSuffix 'ProjectPlan'}"
Alternately, to call the Create-ZipBackup function from within the script directly, you would use the following PowerShell command:

Create-ZipBackup -target '\\RemoteServer\ShareName\MyProject' -destination 'c:\My Backups' -fileNameSuffix 'ProjectPlan'
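For the nightly run mentioned above, the same call can be wrapped in a small script and registered as a Windows scheduled task. The wrapper file name, task name, and schedule below are arbitrary assumptions:

# Hypothetical wrapper script, NightlyBackup.ps1, that dot-sources the backup script
. C:\Users\gstaffor\Documents\PowerShell\BackupAndZip.ps1
Create-ZipBackup -target '\\RemoteServer\ShareName\MyProject' `
    -destination 'c:\My Backups' -fileNameSuffix 'ProjectPlan'

# Register the wrapper as a nightly scheduled task (run once, from an elevated prompt)
schtasks /create /tn "NightlyProjectBackup" /sc DAILY /st 23:00 `
    /tr "powershell -NoProfile -File C:\Users\gstaffor\Documents\PowerShell\NightlyBackup.ps1"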
The Script
################################################
#                                              #
# Compress and backup files using DotNetZip    #
#                                              #
# Gary A. Stafford - rev. 09/08/2012           #
# www.programmaticponderings.com               #
#                                              #
################################################

# Enforce coding rules in expressions & scripts
Set-StrictMode -version 2.0

# Location of Ionic.Zip.dll
[Void] [System.Reflection.Assembly]::LoadFrom("C:\Ionic.Zip.dll")

function Create-ZipBackup
{
    param (
        $target,
        $destination,
        $fileNameSuffix
    )

    [string] $date = Get-Date -format yyyyMMdd_HHmmss
    [string] $fileName = "{0}\$fileNameSuffix.BU.{1}.zip" -f $destination, $date
    [IO.FileInfo] $outputFile = [IO.FileInfo] $fileName

    [Ionic.Zip.ZipFile] $zipfile = new-object Ionic.Zip.ZipFile

    [Ionic.Zip.SelfExtractorSaveOptions] $selfExtractOptions = New-Object Ionic.Zip.SelfExtractorSaveOptions
    $selfExtractOptions.Flavor = [Ionic.Zip.SelfExtractorFlavor]::ConsoleApplication
    $selfExtractOptions.DefaultExtractDirectory = $outputFile.Directory.FullName
    $selfExtractOptions.RemoveUnpackedFilesAfterExecute = $false

    $zipfile.AddDirectory("$target")
    $zipfile.UseZip64WhenSaving = [Ionic.Zip.Zip64Option]::Always
    $zipfile.SaveSelfExtractor($outputFile.FullName, $selfExtractOptions)
    $zipfile.Dispose();

    If (!(Test-Path $fileName))
    {
        Write-Host ("ERROR: Backup file '{0}' not created!" -f $fileName)
        break
    }

    Write-Host ("SUCCESS: Backup file '{0}' created." -f $fileName)
}
Error Handling
Note, this basic script does not contain much in the way of error handling. There are some common reasons the script can fail. For example, a file whose full path exceeds the maximum length of 260 characters will throw an error. Trying to back up files to which you (the logged-on user account) do not have permission will also throw an error. To catch these types of errors, you would need to add functionality to iterate recursively through all the target files first, before compressing.
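A hedged sketch of such a pre-flight check, run before calling Create-ZipBackup, might look like the following; it simply reports full paths over 260 characters and files the current account cannot read:

# Hypothetical pre-flight check: report files likely to fail during compression
function Test-BackupSource
{
    param ([string] $target)

    Get-ChildItem -Path $target -Recurse -Force -ErrorAction SilentlyContinue |
        Where-Object { -not $_.PSIsContainer } |
        ForEach-Object {
            $file = $_

            # Paths longer than 260 characters throw in most .NET file APIs
            if ($file.FullName.Length -gt 260)
            {
                Write-Warning ("Path too long ({0} chars): {1}" -f $file.FullName.Length, $file.FullName)
            }

            # Files the current account cannot read will fail when zipped
            try
            {
                $stream = [System.IO.File]::OpenRead($file.FullName)
                $stream.Dispose()
            }
            catch
            {
                Write-Warning ("Cannot read file: {0}" -f $file.FullName)
            }
        }
}

# Example usage
Test-BackupSource -target '\\RemoteServer\ShareName\MyProject'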
Using PowerShell to Generate TFS Changed File List for Build Artifact Delivery
Posted by Gary A. Stafford in PowerShell Scripting, Software Development, Team Foundation Server (TFS) Development on August 10, 2012
Delivering Artifacts for Deployment
In many enterprise-software development environments, delivering release-ready code to an Operations or Release team for deployment, as opposed to deploying the code directly, is common practice. A developer ‘kicks off’ a build of the project using a build automation system like Hudson, Jenkins, CruiseControl, TeamCity, or Bamboo. The result is a set of build artifacts that are delivered and deployed as part of the release cycle. Build artifacts are logical collections of deployable code and other files, which form the application. Artifacts are often segregated by type, such as database, web code, services, configuration files, and so forth. Each type of artifact may require a different deployment method.
There are two approaches to delivering artifacts for deployment. Some organizations deliver all the artifacts from each build for deployment. Alternately, others follow a partial delivery and release model, delivering only the artifacts that contain changes since the last delivery. The entire application is not re-deployed, only what changed. This is considered by many to be a quicker and safer method of software release.
The challenge of partial delivery is knowing precisely what changed since the last delivery. Almost all source control systems keep a history of changes (‘changesets’). Based on the time of the last build, a developer can check the history and decide which artifacts to deliver based on the changes. If you have daily releases, changes between deliveries are likely few. However, if your development cycle spans a few weeks or you have multiple developers working on the same project, there will likely be many changesets to examine. Figuring out what artifacts to deliver is tedious and error prone. Missing one small change out of hundreds can jeopardize a whole release. Having to perform this laborious task every few weeks myself, I was eager to automate this process!
Microsoft Team Foundation PowerShell Snap-In
The solution is of course PowerShell and the Microsoft Team Foundation PowerShell Snap-In. Using these two tools, I was able to write a very simple script that does the work for me. If you are unfamiliar with the Team Foundation Server (TFS) snap-in, review my earlier post, Automating Task Creation in Team Foundation Server with PowerShell. That post discusses the snap-in and explains how to install on your Windows computer.
The PowerShell script begins with a series of variables. The first two are based on your specific TFS environment. Variables include:
- Team Project Collection path;
- Source location within the collection to search for changes;
- Date and time range to search for changes;
- Location of text file that will contain a list of changed files;
- Option to open the text file when the script is complete.
Given the Team Project Collection path, source location, and the date range, the script returns a sorted list of all files that changed. Making sure the list is distinct is important. A file may change many times over the course of a development cycle. You only want to know if the file changed; how many times it changed, or when, is irrelevant. The file list is saved to a text file, a manifest, for review. The values of the script’s variables are also included in the manifest.
Excluding Certain Changes
Testing the initial script, I found it returned too much information. There were three main reasons:
- Unrelated Changes – Not every file that changes within the selected location is directly associated with the project being deployed. There may be multiple, related projects in that location’s subdirectories (child nodes).
- Secondary Project Files – Not every file that changes is deployed. For example, build definition files, database publishing profiles, and manual test documents are important parts of any project, but are not directly part of the applications within the project being deployed. These are often files in the project used by the build system or required by TFS.
- Certain Change Types – Changes in TFS include several types (Microsoft.TeamFoundation.VersionControl.Client.ChangeType) that you may not want to include on the list. For example, you may not care about deleted or renamed files. See the post script about how to get a list of all ChangeTypes using PowerShell.
To solve the problem of too much information, we can filter the results of the Get-TfsItemHistory command, using the Where-Object and Select-Object commands in the Get-TfsItemHistory command pipeline. Using the -notlike comparison operator within Where-Object, which accepts wildcards, we exclude certain ChangeTypes, we exclude files by name and size, and we exclude groups of files based on file path. You will obviously need to change the example’s exclusions to meet your own project’s needs.
Below is the PowerShell script, along with some sample contents of file change manifest text file, based on an earlier post’s SSDT database Solution:
###############################################################
#
# Search for all unique file changes in TFS
# for a given date/time range and collection location.
# Write results to a manifest file.
#
# Author: Gary A. Stafford
# Created: 2012-04-18
# Revised: 2012-08-11
#
###############################################################

# Clear Output Pane
clear

# Enforce coding rules
Set-StrictMode -version 2.0

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Variables - CHECK EACH TIME
[string] $tfsCollectionPath = "http://tfs2010/tfsCollection"
[string] $locationToSearch = "$/Development/AdventureWorks/"
[string] $outputFile = "c:\ChangesToTFS.txt"
[string] $dateRange = "D2012-07-08 00:00:00Z~"
[bool] $openOutputFile = $true # Accepts $false or $true

# For a date/time range: 'D2012-08-06 00:00:00Z~D2012-08-09 23:59:59Z'
# For everything including and after a date/time: 'D2012-07-21 00:00:00Z~'

[Microsoft.TeamFoundation.Client.TfsTeamProjectCollection] $tfs = get-tfsserver $tfsCollectionPath

# Add informational header to file manifest
[string] $outputHeader = "TFS Collection: " + $tfsCollectionPath + "`r`n" +
    "Source Location: " + $locationToSearch + "`r`n" +
    "Date Range: " + $dateRange + "`r`n" +
    "Created: " + (Get-Date).ToString() + "`r`n" +
    "======================================================================"

$outputHeader | Out-File $outputFile

Get-TfsItemHistory $locationToSearch -Server $tfs -Version $dateRange `
    -Recurse -IncludeItems |
    Select-Object -Expand "Changes" |
    Where-Object { $_.ChangeType -notlike '*Delete*'} |
    Where-Object { $_.ChangeType -notlike '*Rename*'} |
    Select-Object -Expand "Item" |
    Where-Object { $_.ContentLength -gt 0} |
    Where-Object { $_.ServerItem -notlike '*/sql/*' } |
    Where-Object { $_.ServerItem -notlike '*/documentation/*' } |
    Where-Object { $_.ServerItem -notlike '*/buildtargets/*' } |
    Where-Object { $_.ServerItem -notlike 'build.xml'} |
    Where-Object { $_.ServerItem -notlike '*.proj'} |
    Where-Object { $_.ServerItem -notlike '*.publish.xml'} |
    Select -Unique ServerItem |
    Sort ServerItem |
    Format-Table -Property * -AutoSize |
    Out-String -Width 4096 |
    Out-File $outputFile -append

Write-Host `n`r**** Script complete and file written ****

If ($openOutputFile)
{
    Invoke-Item $outputFile
}
Contents of file change manifest text file, based on my previous post’s SSDT database Visual Studio Solution:
TFS Collection: http://tfs2010/tfsCollection
Source Location: $/Development/AdventureWorks2008/
Date Range: D2012-08-02 00:00:00Z~
Created: 8/10/2012 10:28:46 AM
======================================================================

ServerItem
----------
$/Development/AdventureWorks2008/AdventureWorks2008.sln
$/Development/AdventureWorks2008/Development/Development.sln
$/Development/AdventureWorks2008/Development/Development.sqlproj
$/Development/AdventureWorks2008/Development/Schema Objects/Server Level Objects/Security/Logins/aw_dev.login.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/AdventureWorksSSDT.sqlproj
$/Development/AdventureWorks2008/AdventureWorksSSDT/dbo/Stored Procedures/uspGetBillOfMaterials.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/dbo/Stored Procedures/uspLogError.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/HumanResources/Tables/EmployeePayHistory.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Purchasing/Tables/ShipMethod.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Purchasing/Views/vVendorWithContacts.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Security/aw_dev.sql
$/Development/AdventureWorks2008/AdventureWorksSSDT/Security/jenkins.sql
Conclusion
This script saves considerable time, especially for longer release cycles, and eliminates potential errors from missing changes. To take this script a step further, I would like to have it determine which artifacts to deliver based on the files that changed, not leaving it up to the developer to figure out. As a further step, I would also have it generate an artifact manifest that would be passed to the build. The build would use the manifest to deliver those artifacts to the release team. This would really make it an end-to-end solution. Challenge accepted…
Post Script, PowerShell Enumeration
Suppose you couldn’t find a resource on the web that listed all the ChangeType values. How would you use PowerShell to get a list of all the enumerated ChangeType values (Microsoft.TeamFoundation.VersionControl.Client.ChangeType)? It only takes one line of code, once the TFS snap-in and assembly are loaded.
# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

[Void][Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.VersionControl.Client")

[Enum]::GetNames( [Microsoft.TeamFoundation.VersionControl.Client.ChangeType] )
Automating Work Item Creation in TFS 2010 with PowerShell, Continued
Posted by Gary A. Stafford in .NET Development, PowerShell Scripting, Software Development, Team Foundation Server (TFS) Development on July 18, 2012
In a previous post, Automating Task Creation in Team Foundation Server with PowerShell, I demonstrated how to automate the creation of TFS Task-type Work Items using PowerShell. After writing that post, I decided to go back and further automate my own processes. I combined two separate scripts that I use on a regular basis, one that creates the initial Change Request (CR) Work Item, and a second that creates the Task Work Items associated with the CR. Since I usually run both scripts successively and both share many of the same variables, combining the scripts made sense. I now have a single PowerShell script that will create the parent Change Request and the associated Tasks in TFS. The script reduces my overall time to create the Work Items by a few minutes for each new CR. The script also greatly reduces the risk of input errors from typing the same information multiple times in Visual Studio. The only remaining manual step is to link the Tasks to the Change Request in TFS.
The Script
Similar to the previous post, for simplicity’s sake, I have presented a basic PowerShell script. The script could easily be optimized by wrapping the logic into a function with input parameters, further automating the process. I’ve placed a lot of comments in the script to explain what each part does, and to help make customization easier. The script explicitly declares all variables, adhering to PowerShell’s Strict Mode (Set-StrictMode -Version 2.0). I feel this makes the script easier to understand and reduces the possibility of runtime errors.
#############################################################
#
# Description: Automatically creates
#              (1) Change Request-type Work Item and
#              (5) Task-type Work Items in TFS.
#
# Author: Gary A. Stafford
# Created: 07/18/2012
# Modified: 07/18/2012
#
#############################################################

# Clear Output Pane
clear

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Set Strict Mode - optional
Set-StrictMode -Version 2.0

#############################################################

# Usually remains constant
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $areaPath = "Development\PowerShell"
[string] $workItemType = "Development\Change Request"
[string] $description = "Create Task Automation PowerShell Script"

# Usually changes for each Sprint - both specific to your environment
[string] $iterationPath = "PowerShell\TFS2010"

# Usually changes for each CR and Tasks
[string] $requestName = "Name of CR from Service Manager"
[string] $crId = "000000"
[string] $priority = "1"
[string] $totalEstimate = "10" # Total of $taskEstimateArray
[string] $assignee = "Doe, John"
[string] $testType = "Unit Test"

# Task values represent units of work, often 'man-hours'
[decimal[]] $taskEstimateArray = @(2,3,10,3,.5)
[string[]] $taskNameArray = @("Analysis", "Design", "Coding", "Unit Testing", "Resolve Tasks")
[string[]] $taskDisciplineArray = @("Analysis", "Development", "Development", "Test", $null)

#############################################################

Write-Host `n`r**** Create CR started...`n`r

# Build string of field parameters (key/value pairs)
[string] $fields = "Title=$($requestName);Description=$($description);CR Id=$($crId);"
$fields += "Estimate=$($totalEstimate);Assigned To=$($assignee);Test Type=$($testType);"
$fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Priority=$($priority);"

# For debugging - optional console output
Write-Host `n`r $fields

# Create the CR (Work Item)
tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

Write-Host `n`r**** Create CR completed...`n`r

#############################################################

# Loop and create each of the (5) Tasks in prioritized order
[int] $i = 0

Write-Host `n`r**** Create Tasks started...`n`r

# Usually remains constant
$workItemType = "Development\Task"

while ($i -le 4)
{
    # Concatenate name of task with CR name for Title and Description fields
    $taskTitle = $taskNameArray[$i] + " - " + $requestName

    # Build string of field parameters (key/value pairs)
    [string] $fields = "Title=$($taskTitle);Description=$($taskTitle);Assigned To=$($assignee);"
    $fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Discipline=$($taskDisciplineArray[$i]);Priority=$($i+1);"
    $fields += "Estimate=$($taskEstimateArray[$i]);Remaining Work=$($taskEstimateArray[$i]);Completed Work=0"

    # For debugging - optional console output
    Write-Host `n`r $fields

    # Create the Task (Work Item)
    tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

    $i++
}

Write-Host `n`r**** Create Tasks completed...`n`r
Deleting Work Items with PowerShell
Team Foundation Server Administrators know there is no delete button for Work Items in TFS. So, how do you delete (destroy, as TFS calls it) a Work Item? One way is from the command line, as demonstrated in the previous post. You can also call the witadmin command-line tool from within PowerShell, as follows:
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $tfsWorkItemId = "00000"

$env:path += ";C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE"

witadmin destroywi /collection:$tfsServerString /id:$tfsWorkItemId /noprompt
First, use PowerShell to set your path environment variable to include your local path to witadmin.exe. Then set your TFS Server path and the TFS Work Item ID of the Work Item you want to delete. Or, you can call witadmin with its full file path, avoiding setting the path environment variable. True, you could simplify the above to a single line of code, but I feel using variables is easier for readers to understand than one long line of code.
Automating Task Creation in Team Foundation Server with PowerShell
Posted by Gary A. Stafford in .NET Development, PowerShell Scripting, Software Development, Team Foundation Server (TFS) Development on April 15, 2012
Administrating Team Foundation Server often involves repeating the same tasks over and over with only slight variation in the details. This is especially true if your team adheres to an Agile software development methodology. Every few weeks a new Iteration begins, which means inputting new Change Requests into Team Foundation Server along with their associated Tasks*.
Repetition equals Automation equals PowerShell. If you have to repeat the same task in Windows more than a few times, consider automating it with PowerShell. Microsoft has done an outstanding job equipping PowerShell to access a majority of the functionality of their primary applications; Team Foundation Server 2010 (TFS) is no exception.
Microsoft’s latest release of Team Foundation Server Power Tools December 2011 includes Windows PowerShell Cmdlets for Visual Studio Team System Team Foundation Server. According to Microsoft, Power Tools are a set of enhancements, tools, and command-line utilities that increase productivity of Team Foundation Server scenarios. Power Tools’ TFS PowerShell Cmdlets give you control of common version control commands in TFS.
One gotcha with TFS Power Tools: it doesn’t install the PowerShell extras by default. Yes, I agree, it makes no sense. If you already have Power Tools installed, you must rerun the installer, select the Modify Install option, and add the PowerShell features. If you are installing Power Tools for the first time, make sure to select the Custom install option and add the PowerShell features.
*Tasks are a type of TFS Work Item. Work Item types can also include Bugs, Defects, Test Cases, Risks, QoS Requirements, or whatever your team decides to define as Work Items. There is a comprehensive explanation of Work Items in chapter 12 of Microsoft’s Patterns & Practices, available to review on CodePlex.
Automating Task Creation
Working with different teams during my career that practice SCRUM, a variation of Agile, we usually start a new Sprint (Iteration) every four to six weeks with an average Sprint Backlog of 15-25 items. Each item in the backlog translates into an individual CR in TFS. Each CR has several boilerplate Tasks associated with it. Many Tasks are common to all Change Requests (CRs). Common Tasks often include analysis, design, coding, unit testing, and administration. Nothing is more mind-numbing as a Manager than having to input a hundred or more Tasks into TFS every few weeks, with each Task requiring an average of ten or more fields of data. In addition to the time requirement, there is the opportunity for human error.
The following PowerShell script creates a series of five different Tasks for a specific CR, which has been previously created in TFS. Once the Tasks are created, I use a separate method to link the Tasks to the CR. Every team’s development methodologies are different; every team’s use of TFS is different. Don’t get hung up on exactly which fields I’ve chosen to populate. Your processes will undoubtedly require different fields.
There are many fields in a Work Item template that can be populated with data using PowerShell. Understanding each field’s definition – name, data type, and rules for use (range of input values, required field, etc.) – is essential. To review the field definitions in Visual Studio 2010, select the Tools menu -> Process Editor -> Work Item Types -> Open WIT from Server. Select your Work Item Template (WIT) from the list of available templates. The template you choose should be the same template defined in the PowerShell script, with the variable $workItemType. To change the fields, you will need the necessary TFS privileges.
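If you prefer the command line, you can also dump a Work Item Type definition to XML and review the field rules there. This is only a sketch using witadmin’s exportwitd command; the collection URL, team project name, and output path are placeholders for your own environment:

# Hypothetical example - exports the Task Work Item Type definition to an XML file for review
witadmin exportwitd /collection:http://[YourServerNameGoesHere]/[PathToCollection] /p:[YourTeamProject] /n:Task /f:C:\Temp\Task.xml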
Avoiding Errors
When developing the script for this article, I was stuck for a number of hours with a generic error (shown below) on some of the Tasks the script tried to create – “…Work Item is not ready to save”. I tried repeatedly debugging and altering the script to resolve the error, without luck. In the end, the error was not in the script, but in my lack of understanding of the Task Work Item Template (WIT) and its field definitions.
By trial and error, I discovered this error usually means that either the data being input into a field is invalid, based on the field’s definition, or that a required field was left without data. Both were true in my case, at different points in the development of the script. First, I failed to include the Completed Time field, which was a required field in our Task template. Second, I tried to set the Priority of the Tasks to a number between 1 and 5. Unbeknownst to me, the existing Task template only allowed values between 1 and 3. The best way to solve these types of errors is to create a new Task in TFS manually, and try inputting the same data you tried to inject with the script. The cause of the error should quickly become clear.
The Script
For simplicity’s sake, I have presented a simple PowerShell script. The script could easily be optimized by wrapping the logic into a function with input parameters, further automating the process. I’ve placed a lot of comments in the script to explain what each part does, and to make customization easier. The script explicitly declares all variables and adheres to PowerShell’s Strict Mode (Set-StrictMode -Version 2.0). I feel this makes the script easier to understand and reduces the number of runtime errors.
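As a rough sketch of the function-based approach mentioned above (the function name, parameter names, and sample values here are purely illustrative, not part of the script below), the per-Task logic could be wrapped up something like this:

# Hypothetical sketch - wraps the tfpt call in a reusable function with input parameters
function New-TfsTask {
    param (
        [string] $TfsCollection,   # TFS collection URL
        [string] $WorkItemType,    # e.g. "Development\Task"
        [string] $Title,
        [string] $Assignee,
        [string] $AreaPath,
        [string] $IterationPath,
        [decimal] $Estimate
    )

    # Build the key/value pairs expected by tfpt workitem /new
    [string] $fields = "Title=$($Title);Description=$($Title);Assigned To=$($Assignee);"
    $fields += "Area Path=$($AreaPath);Iteration Path=$($IterationPath);"
    $fields += "Estimate=$($Estimate);Remaining Work=$($Estimate);Completed Work=0"

    # Create the Task (Work Item)
    tfpt workitem /new $WorkItemType /collection:$TfsCollection /fields:$fields
}

# Example call - all values are placeholders for your own environment
New-TfsTask -TfsCollection "http://[YourServerNameGoesHere]/[PathToCollection]" `
    -WorkItemType "Development\Task" -Title "Coding: Create Task Automation PowerShell Script" `
    -Assignee "Stafford, Gary" -AreaPath "Development\PowerShell" `
    -IterationPath "PowerShell\TFS2010" -Estimate 10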
#############################################################
#
# Description: Automatically creates (5) standard Task-type
#              Work Items in TFS for a given Change Request.
#
# Author: Gary A. Stafford
# Created: 04/12/2012
# Modified: 04/14/2012
#
#############################################################

# Clear Output Pane
clear

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Set Strict Mode - optional
Set-StrictMode -Version 2.0

# Usually changes for each Sprint - both specific to your environment
[string] $areaPath = "Development\PowerShell"
[string] $iterationPath = "PowerShell\TFS2010"

# Usually changes for each CR
[string] $changeRequestName = "Create Task Automation PowerShell Script"
[string] $assignee = "Stafford, Gary"

# Values represent units of work, often 'man-hours'
[decimal[]] $taskEstimateArray = @(2,3,10,3,.5)
# Remaining Time is usually set to Estimated time at start (optional use of this array)
[decimal[]] $taskRemainingArray = @(2,3,10,3,.5)
# Completed Time is usually set to zero at start (optional use of this array)
[decimal[]] $tasktaskCompletedArray = @(0,0,0,0,0)

# Usually remains constant
# TFS Server address - specific to your environment
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
# Work Item Type - specific to your environment
[string] $workItemType = "Development\Task"

[string[]] $taskNameArray = @("Analysis", "Design", "Coding", "Unit Testing", "Resolve Tasks")
[string[]] $taskDisciplineArray = @("Analysis", "Development", "Development", "Test", $null)

# Loop and create each of the (5) Tasks in prioritized order
[int] $i = 0

Write-Host `n`r**** Script started...`n`r

while ($i -le 4)
{
    # Concatenate name of task with CR name for Title and Description fields
    $taskTitle = $taskNameArray[$i] + ": " + $changeRequestName

    # Build string of field parameters (key/value pairs)
    [string] $fields = "Title=$($taskTitle);Description=$($taskTitle);Assigned To=$($assignee);"
    $fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Discipline=$($taskDisciplineArray[$i]);Priority=$($i+1);"
    $fields += "Estimate=$($taskEstimateArray[$i]);Remaining Work=$($taskRemainingArray[$i]);Completed Work=$($tasktaskCompletedArray[$i])"

    # For debugging - optional console output
    Write-Host $fields

    # Create the Task (Work Item)
    tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

    $i++
}

Write-Host `n`r**** Script completed...
The script begins by setting up a series of variables. Some variables will not change once they are set, such as the path to the TFS server, unless you work with multiple TFS instances. Some variables will only change at the beginning of each Iteration (Sprint), such as the Iteration Path. Other variables will change for each CR or for each Task. These include the CR title and the Estimated, Completed, and Remaining Time. Again, your process will dictate different fields with different variables. Once you have set up the script to your requirements and run it successfully, you should see output similar to the following:
In TFS, the resulting Tasks produced by the script look like the Task below:
Deleting Work Items after Developing and Testing the Script
TFS Administrators know there is no Work Item delete button in TFS. So, how do you delete the Tasks you may have created while developing and testing this script? The quickest way is from the command line or from PowerShell. You can also delete Work Items programmatically in .NET. I usually use the command line, as follows:
- Open the Visual Studio 2010 Command Prompt.
- Change the directory to the location of witadmin.exe. My default location is: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE.
- Run the following command, substituting your Task ID, or Task IDs (comma-delimited, without spaces), for the one shown:
witadmin destroywi /collection:[Your TFS Collection Path Here] /id:12930 /noprompt
Almost the same command can be run in PowerShell by including the full path to witadmin.exe in the script. I found this method at the Goshoom.NET Dev Blog, where you can read more.
Be warned, there is no undoing the delete command. The noprompt switch is optional; using it speeds up the deletion of Tasks. However, leaving out noprompt means you are given a chance to confirm each Task’s deletion. Not a bad idea when you’re busy doing a dozen other things.
Further PowerShell Automation
By creating Tasks with PowerShell, I save at least two hours each Sprint cycle, and I greatly reduce the chance for errors. Beyond Tasks, there are many more mundane TFS-related chores that can be automated using PowerShell. These chores include bulk importing CRs and Tasks from Excel or other project management programs, creating and distributing Agile reports, and automating turnover and release management, to name but a few. I’ll explore some of these topics in future blog posts.