Posts Tagged .NET

Cloud-based Continuous Integration and Deployment for .NET Development

Create a cloud-based, continuous integration and deployment toolchain for distributed .NET development teams, using GitHub, AppVeyor, and Microsoft Azure.

Introduction

Whether you are part of a large enterprise development environment or a member of a small start-up, you are likely working with remote team members. You may be remote yourself. Developers, testers, web designers, and other team members commonly work remotely on software projects. Distributed teams, composed of full-time staff, contractors, and third-party vendors, often work in different buildings, different cities, and even different countries.

If software is no longer strictly developed in-house, why should our software development and integration tools be located in-house? We live in a quickly evolving world of SaaS, PaaS, and IaaS. Popular SaaS development tools include Visual Studio Online, GitHub, BitBucket, Travis CI, AppVeyor, CloudBees, JIRA, AWS, Microsoft Azure, Nodejitsu, and Heroku, to name just a few. With all these ‘cord-cutting’ tools, there is no longer a need for distributed development teams to be tethered to on-premise tooling via VPN tunnels and Remote Desktop Connections.

There are many combinations of hosted software development and integration tools available, depending on your technology stack, team size, and budget. In this post, we will explore one such toolchain for .NET development. Using Git, GitHub, AppVeyor, and Microsoft Azure, we will continuously build, test, and deploy a multi-tier .NET solution, without ever leaving Visual Studio. This particular toolchain has strong integration between tools, and will scale to fit most development teams.

Git and GitHub
Git and GitHub are widely used in development today. Visual Studio 2013 has fully integrated Git support, and Visual Studio 2012 has supported Git via a plug-in since early last year. Git is fully compatible with Windows. Additionally, there are several third-party tools available for managing Git and GitHub repositories on Windows. These include Git Bash (my favorite), Git GUI, and GitHub for Windows.

GitHub acts as a replacement for your in-house Git server. Developers commit code to their individual local Git project repositories. They then push, pull, and merge code to and from a hosted GitHub repository. For security, GitHub requires a registered username and password to push code. Data transfer between the local Git repository and GitHub is done using HTTPS with SSL certificates or SSH with public-key encryption. GitHub also offers two-factor authentication (2FA). Additionally, for those companies concerned about privacy and added security, GitHub offers private repositories. These plans range in price from $25 to $200 per month, currently.
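
For day-to-day work, the flow from a local repository to GitHub looks something like the following. This is a minimal sketch, run from Git Bash or a PowerShell prompt; the repository URL is a placeholder, and ‘rev2014’ is the updated branch used later in this post.

# Clone the hosted repository, work on a branch, then push changes back to GitHub
git clone https://github.com/{your-account}/{your-repo}.git
cd {your-repo}
git checkout rev2014                 # switch to the updated branch
git add .
git commit -m "Refactor restaurant menu service"
git pull origin rev2014              # merge any remote changes first
git push origin rev2014              # push local commits to GitHub over HTTPS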


GitHub View of Solution

AppVeyor
AppVeyor’s tagline is ‘Continuous Integration for busy developers’. AppVeyor automates the building, testing, and deployment of .NET applications. AppVeyor is similar to Jenkins and Hudson in terms of basic functionality, except that AppVeyor is only provided as SaaS. There are several hosted solutions in the continuous integration and delivery space similar to AppVeyor. They include CloudBees (hosted Jenkins) and Travis CI. While CloudBees and Travis CI work with several technology stacks, AppVeyor focuses specifically on .NET. Its closest competitor may be Microsoft’s new Visual Studio Online.

Like GitHub, AppVeyor also offers private repositories (spaces for building and testing code). Prices for private repositories currently range from $39 to $319 per month. Private repositories offer both added security and support. AppVeyor integrates nicely with several cloud-based code repositories, including GitHub, BitBucket, Visual Studio Online, and Fog Creek’s Kiln.


AppVeyor View of Latest Build of Solution

Azure
This post demonstrates continuous deployment from AppVeyor to a Windows Server 2012-based Azure VM. The VM has IIS 8.5, Web Deploy 3.5, IIS Web Management Service (WMSVC), and the other components and configuration necessary to host the post’s sample Solution. AppVeyor would work just as well with Azure’s other hosting options, as well as with other cloud-based hosting providers that support the .NET stack, such as AWS or Rackspace.


New Microsoft Azure Portal View of VM

Sample Solution

The Visual Studio Solution used for this post was originally developed as part of an earlier post, Consuming Cross-Domain WCF REST Services with jQuery using JSONP. The original Solution, from 2011, demonstrated jQuery’s AJAX capabilities for communicating with a RESTful WCF service across domains, using JSONP. I have since updated and modernized the Solution for this post. The revised Solution is on a new branch (‘rev2014’) on GitHub. Major changes to the Solution include an upgrade from VS2010 to VS2013, the use of Git DVCS, NuGet package management, Web Publish Profiles, Web Essentials for bundling JS and CSS, Twitter Bootstrap, unit testing, and a lot of code refactoring.


Revised Restaurant Menu Demo Viewed on Android Tablet

The updated VS Solution contains the following four Projects:

  1. Restaurant – C# Class Library
  2. RestaurantUnitTests – Unit Test Project
  3. RestaurantWcfService – C# WCF Service Application
  4. RestaurantDemoSite – Web Site (JS/HTML5)

VS 2013 View of Solution

The Visual Studio Solution Explorer tab, shown above, lists all the projects contained in the Solution, along with the primary files and directories they contain.

As explained in the earlier post, the ‘RestaurantDemoSite’ web site makes calls to the ‘RestaurantWcfService’ WCF service. The WCF service exposes two operations, one that returns the menu (‘GetCurrentMenu’), and another that accepts an order (‘SendOrder’). For simplicity, orders are stored in the file system as JSON files. No database is required for the Solution. All business logic is contained in the ‘Restaurant’ class library, which is referenced by the WCF service. This architecture is illustrated in this Visual Studio Assembly Dependencies Diagram.

Installing and Configuring the Solution

The README.md file in the GitHub repository contains instructions for installing and configuring this Solution. In addition, a set of PowerShell scripts, part of the Solution’s repository, makes the installation and configuration process quick and easy. The scripts handle creating the necessary file directories and environment variables, setting file access permissions, and configuring IIS websites. Make sure to change the values of the environment variables before running the script. For reference, the contents of several of the supplied scripts are shown below; use the scripts supplied in the repository.

# Create environment variables
[Environment]::SetEnvironmentVariable("AZURE_VM_HOSTNAME", `
  "{YOUR HOSTNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_USERNAME", `
  "{YOUR USERNAME HERE}", "User")

[Environment]::SetEnvironmentVariable("AZURE_VM_PASSWORD", `
  "{YOUR PASSWORD HERE}", "User")

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create main website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

Cloud-Based Continuous Integration and Delivery

Webhooks
The first point of integration in our hosted toolchain is between GitHub and AppVeyor. In order for AppVeyor to work with GitHub, we use a webhook. Webhooks are widely used to communicate events between systems, over HTTP. According to GitHub, ‘every GitHub repository has the option to communicate with a web server whenever the repository is pushed to. These webhooks can be used to update an external issue tracker, trigger CI builds, update a backup mirror, or even deploy to your production server.‘ Basically, we give GitHub permission to tell AppVeyor every time code is pushed to the GitHub repository. GitHub sends an HTTP POST to a specific URL, provided by AppVeyor. AppVeyor responds to the POST by cloning the GitHub repository, and building, testing, and deploying the Projects. Below is an example of a webhook for AppVeyor, in GitHub.


GitHub’s AppVeyor Webhook Configuration
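
To make the webhook mechanics a little more concrete, the following PowerShell sketch approximates what GitHub does on each push: it POSTs a JSON payload describing the push to a URL provided by AppVeyor. The payload fields and webhook URL shown here are illustrative placeholders, not the exact values GitHub and AppVeyor use.

# Rough approximation of a GitHub push-event delivery (all values are placeholders)
$payload = @{
  ref        = "refs/heads/rev2014"
  repository = @{ name = "{your-repo}"; owner = "{your-account}" }
  commits    = @( @{ id = "abc1234"; message = "Update WCF service" } )
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri "{webhook-url-provided-by-appveyor}" `
  -Method Post -ContentType "application/json" -Body $payload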

Unit Tests
To help illustrate the use of AppVeyor for automated unit testing, the updated Solution contains a Unit Test Project. Every time code is pushed to GitHub, AppVeyor will clone and build the Solution, and then run the set of unit tests shown below. The project’s unit tests exercise the Restaurant class library (‘restaurant.dll’). The unit tests provide 100% code coverage, as shown in the Visual Studio Code Coverage Results tab, below:


Code Coverage Results for Restaurant Class Library

AppVeyor runs the Solution’s automated unit tests using VSTest.Console.exe. VSTest.Console calls the unit test Project’s assembly (‘restaurantunittests.dll’).  As shown below, the VSTest command (in light blue) runs all tests, and then displays individual test results, a results summary, and the total test execution time.


AppVeyor Running Automated Unit Tests Using VSTest.Console

VSTest.Console has several command-line options, similar to MSBuild. They can be adjusted to output various levels of feedback on test results. For larger projects, you can selectively choose which pre-defined test sets to run. Test sets need to be set up in the Solution, in advance.
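
For reference, a command along the lines of the one AppVeyor runs might look like the following. This is only a sketch; the assembly path assumes the ‘AppVeyor’ build configuration used later in this post, and the logger option is optional.

# Run the unit test assembly with VSTest.Console (path and options are illustrative)
& "vstest.console.exe" `
  "Restaurant\RestaurantUnitTests\bin\AppVeyor\restaurantunittests.dll" `
  /InIsolation /Logger:trx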

Configuring Azure VM
Before we publish the Solution from AppVeyor to the Azure VM, we need to configure the VM. Again, we can use PowerShell to script most of the configuration. Most of the scripts are the same ones we used to configure our local environment. The README.md file in the GitHub repository contains instructions. The scripts handle creating the necessary file directories, setting file access permissions, configuring the IIS websites, creating the Web Deploy User account, and assigning it in IIS. For reference, the contents of several of the supplied scripts are shown below; use the scripts supplied in the repository.

# Create new restaurant orders JSON file directory
$newDirectory = "c:\RestaurantOrders"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "INTERACTIVE","Modify","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new website directory
$newDirectory = "c:\RestaurantDemoSite"

if (-not (Test-Path $newDirectory)){
  New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
  "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create new WCF service directory
$newDirectory = "c:\MenuWcfRestService"

if (-not (Test-Path $newDirectory)){
 New-Item -Type directory -Path $newDirectory
}

$acl = Get-Acl $newDirectory
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IUSR","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)

Set-Acl $newDirectory $acl
$ar = New-Object System.Security.AccessControl.FileSystemAccessRule(`
 "IIS_IUSRS","ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
$acl.SetAccessRule($ar)
Set-Acl $newDirectory $acl

# Create WCF service website in IIS
$newSite = "MenuWcfRestService"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9250 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create main website in IIS
$newSite = "RestaurantDemoSite"

if (-not (Test-Path IIS:\Sites\$newSite)){
  New-Website -Name $newSite -Port 9255 -PhysicalPath `
    c:\$newSite -ApplicationPool "DefaultAppPool"
}

# Create new local non-admin User and Group for Web Deploy

# Main variables (Change these!)
[string]$userName = "USER_NAME_HERE" # mjones
[string]$fullName = "FULL USER NAME HERE" # Mike Jones
[string]$password = "USER_PASSWORD_HERE" # pa$$w0RD!
[string]$groupName = "GROUP_NAME_HERE" # Development

# Create new local user account
[ADSI]$server = "WinNT://$Env:COMPUTERNAME"
$newUser = $server.Create("User", $userName)
$newUser.SetPassword($password)

$newUser.Put("FullName", "$fullName")
$newUser.Put("Description", "$fullName User Account")

# Assign flags to user
[int]$ADS_UF_PASSWD_CANT_CHANGE = 64
[int]$ADS_UF_DONT_EXPIRE_PASSWD = 65536
[int]$COMBINED_FLAG_VALUE = 65600

$flags = $newUser.UserFlags.value -bor $COMBINED_FLAG_VALUE
$newUser.put("userFlags", $flags)
$newUser.SetInfo()

# Create new local group
$newGroup=$server.Create("Group", $groupName)
$newGroup.Put("Description","$groupName Group")
$newGroup.SetInfo()

# Assign user to group
[string]$serverPath = $server.Path
$group = [ADSI]"$serverPath/$groupName, group"
$group.Add("$serverPath/$userName, user")

# Assign local non-admin User in IIS for Web Deploy
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\MenuWcfRestService", $FALSE)
[Microsoft.Web.Management.Server.ManagementAuthorization]::Grant(`
  $userName, "$Env:COMPUTERNAME\RestaurantDemoSite", $FALSE)

Publish Profiles
The second point of integration in our toolchain is between AppVeyor and the Azure VM. We will be using Microsoft’s Web Deploy to deploy our Solution from AppVeyor to Azure. Web Deploy integrates with the IIS Web Management Service (WMSVC) for remote deployment by non-administrators. I have already configured Web Deploy and created a non-administrative user on the Azure VM. This user’s credentials will be used for deployments. These are the credentials stored in the username and password environment variables we created earlier.

To continuously deploy to Azure, we will use Web Publish Profiles with Microsoft’s Web Deploy technology. Both the website and WCF service projects contain individual profiles for local development (‘LocalMachine’), as well as deployment to Azure (‘AzureVM’). The ‘AzureVM’ profiles contain all the configuration information AppVeyor needs to connect to the Azure VM and deploy the website and WCF service.

The easiest way to create a profile is by right-clicking on the project and selecting the ‘Publish…’ and ‘Publish Web Site’ menu items. Using the Publish Web wizard, you can quickly build and validate a profile.


Publish Web Profile Tab

Each profile in the above Profile drop-down represents a ‘.pubxml’ file. The Publish Web wizard is merely a visual interface to many of the basic configurable options found in the Publish Profile’s ‘.pubxml’ file. The ‘.pubxml’ profile files can be found in the Solution Explorer. For the website, profiles are in the ‘App_Data’ directory (i.e. ‘Restaurant\RestaurantDemoSite\App_Data\PublishProfiles\AzureVM.pubxml’). For the WCF service, profiles are in the ‘Properties’ directory (i.e. ‘Restaurant\RestaurantWcfService\Properties\PublishProfiles\AzureVM.pubxml’).

As an example, below are the contents of the ‘LocalMachine’ profile for the WCF service (‘LocalMachine.pubxml’). This is about as simple as a profile gets. Note that, since we are deploying locally, the profile is configured to open the main page of the website in a browser after deployment; a helpful time-saver during development.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>FileSystem</WebPublishMethod>
        <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish>http://localhost:9250/RestaurantService.svc/help</SiteUrlToLaunchAfterPublish>
        <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <publishUrl>C:\MenuWcfRestService</publishUrl>
        <DeleteExistingFiles>True</DeleteExistingFiles>
    </PropertyGroup>
</Project>

A key change we will make is to use environment variables in place of sensitive configuration values in the ‘AzureVM’ Publish Profiles. The Publish Web wizard does not allow this change. To do this, we must edit the ‘AzureVM.pubxml’ file for both the website and the WCF service. We will replace the hostname of the server where we will deploy the projects with a variable (i.e. AZURE_VM_HOSTNAME = ‘MyAzurePublicServer.net’). We will also replace the username and password used to access the deployment destination. This way, someone with access to the Solution’s source code cannot obtain sensitive information that would give them the ability to compromise your site. Note the use of the ‘AZURE_VM_HOSTNAME’ and ‘AZURE_VM_USERNAME’ environment variables, shown below.

<?xml version="1.0" encoding="utf-8"?>
<!--
This file is used by the publish/package process of your Web project.
You can customize the behavior of this process by editing this MSBuild file.
In order to learn more about this please visit http://go.microsoft.com/fwlink/?LinkID=208121.
-->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <WebPublishMethod>MSDeploy</WebPublishMethod>
        <LastUsedBuildConfiguration>AppVeyor</LastUsedBuildConfiguration>
        <LastUsedPlatform>Any CPU</LastUsedPlatform>
        <SiteUrlToLaunchAfterPublish />
        <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
        <ExcludeApp_Data>True</ExcludeApp_Data>
        <MSDeployServiceURL>https://$(AZURE_VM_HOSTNAME):8172/msdeploy.axd</MSDeployServiceURL>
        <DeployIisAppPath>MenuWcfRestService</DeployIisAppPath>
        <RemoteSitePhysicalPath />
        <SkipExtraFilesOnServer>False</SkipExtraFilesOnServer>
        <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
        <EnableMSDeployBackup>True</EnableMSDeployBackup>
        <UserName>$(AZURE_VM_USERNAME)</UserName>
        <_SavePWD>False</_SavePWD>
        <_DestinationType>AzureVirtualMachine</_DestinationType>
    </PropertyGroup>
</Project>

The downside of adding environment variables to the ‘AzureVM’ profiles is that the Publish Web wizard within Visual Studio will no longer allow us to deploy using those profiles. As demonstrated below, after substituting variables for actual values, the ‘Server’ and ‘User name’ values no longer display properly. We can confirm this by trying to validate the connection, which fails. This does not indicate your environment variable values are incorrect, only that Visual Studio can no longer correctly parse the ‘AzureVM.pubxml’ file and display it properly in the IDE. No big deal…


Publish Web Connection Tab – Failed Validation

We can use the command line or PowerShell to deploy with the ‘AzureVM’ profiles. AppVeyor accepts both command-line input and PowerShell for most tasks. All examples in this post and in the GitHub repository use PowerShell.

To build and deploy (publish) to Azure from the command line or PowerShell, we will use MSBuild. Below are the MSBuild commands used by AppVeyor to build our Solution, and then deploy it to Azure. The first two MSBuild commands build the WCF service and the website. The second two deploy them to Azure. There are several ways you could construct these commands to successfully build and deploy this Solution. I found these commands to be the most succinct. I have split the build and deploy steps so that AppVeyor can run the automated unit tests in between. If the tests don’t pass, we don’t want to deploy the code.

# Build WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:Configuration=AppVeyor /verbosity:minimal /nologo

# Build website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:Configuration=Release /verbosity:minimal /nologo

Write-Host "*** Solution builds complete."
# Deploy WCF service
# (AppVeyor config ignores website Project in Solution)
msbuild Restaurant\Restaurant.sln `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=AppVeyor `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

# Deploy website
msbuild Restaurant\RestaurantDemoSite\website.publishproj `
 /p:DeployOnBuild=true /p:PublishProfile=AzureVM /p:Configuration=Release `
 /p:AllowUntrustedCertificate=true /p:Password=$env:AZURE_VM_PASSWORD `
 /verbosity:minimal /nologo

Write-Host "*** Solution deployments complete."

Below is the output from AppVeyor showing the WCF Service and website’s deployment to Azure. Deployment is the last step in the continuous delivery process. At this point, the Solution was already built and the automated unit tests completed, successfully.


AppVeyor Output from Deployments to Azure.

Below is the final view of the sample Solution’s WCF service and web site deployed to IIS 8.5 on the Azure VM.


Final View of IIS Sites Running on Azure VM


Windows PowerShell 4.0 for .NET Developers, Book Review

A brief review of ‘Windows PowerShell 4.0 for .NET Developers’, a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.

Windows PowerShell 4.0 for .NET Developers

Introduction

Recently, I had the opportunity to review ‘Windows PowerShell 4.0 for .NET Developers‘, published by Packt Publishing. According to its author, Sherif Talaat, the book is ‘a fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment.‘ Working in a large and complex software development organization, I find technologies such as PowerShell, which enable increased speed and automation, essential to our success. Having used PowerShell on a regular basis as a .NET developer for the past few years, I was excited to see what Sherif’s newest book offered.

Requirements

The book recommends the following minimal software configuration to work through the code samples:

  • Windows Server 2012 R2 (includes PowerShell 4.0 and .NET 4.5)
  • SQL Server 2012
  • Visual Studio 2012/2013
  • Visual Studio Team Foundation Server (TFS) 2012/2013

To test the book’s samples, I provisioned a fresh VM, and using my MSDN subscription, installed the required Windows Server, SQL Server, and Team Foundation Server. I worked directly on the VM, as well as remotely from a Windows 7 Enterprise-based development machine with Visual Studio 2012 installed. The code samples worked fairly well; I found only a few minor problems. As of the time of this review, no errata had been published for the book.

A key aspect many authors do not address is the complexity of using PowerShell in a corporate environment. Working individually or on a small network, developers don’t always experience the added burden of restrictive network security, LDAP, proxy servers, proxy authentication, XML gateways, firewalls, and centralized computer administration. Any code that requires access to remote servers and systems often requires additional work to function within a corporate environment. It can be frustrating to debug and extend simple examples to work successfully within an enterprise setting.
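
For example, a web request that works fine on a home network may need explicit proxy settings and credentials inside a corporate firewall. A minimal, hypothetical PowerShell sketch (the proxy address is a placeholder):

# Corporate networks often require explicit proxy settings and credentials
Invoke-RestMethod -Uri "https://api.github.com" `
  -Proxy "http://proxy.corp.example.com:8080" `
  -ProxyUseDefaultCredentials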

Contents

Windows PowerShell 4.0 for .NET Developers, at 115 pages in length, is divided into five chapters:

  • Chapter 1: Getting Started with Windows PowerShell
  • Chapter 2: Unleashing Your Development Skills with PowerShell
  • Chapter 3: PowerShell for Your Daily Administration Tasks
  • Chapter 4: PowerShell and Web Technologies
  • Chapter 5: PowerShell and Team Foundation Server

Chapter 1 provides a brief introduction to PowerShell. At a scant 30 pages, I would not recommend this book as a way to learn PowerShell for the beginner. For learning PowerShell, I recommend Instant Windows PowerShell, by Vinith Menon, also published by Packt Publishing. Alternatively, I recommend a few books by Manning Publications, including Learn Windows PowerShell in a Month of Lunches, Second Edition.

Chapter 2 discusses PowerShell in relation to several key Microsoft technologies, including Windows Management Instrumentation (WMI), Common Information Model (CIM), Component Object Model (COM), and Extensible Markup Language (XML). As a .NET developer, it’s almost impossible not to have worked with one, or all, of these technologies. Chapter 2 also discusses how PowerShell works with .NET objects and extends the .NET Framework. The chapter includes an easy-to-follow example of creating, importing, and calling a PowerShell binary module (a compiled .NET class library), using Visual Studio.
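
As a rough illustration of the import-and-call side of that example, the PowerShell below loads a compiled binary module and invokes one of its cmdlets. The module path and cmdlet name are hypothetical, not taken from the book.

# Import a compiled binary module (a .NET class library) and list its cmdlets
Import-Module "C:\Dev\MyBinaryModule\bin\Debug\MyBinaryModule.dll"
Get-Command -Module MyBinaryModule

# Call one of the module's cmdlets
Get-Greeting -Name "Developer"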

Chapter 3 explores areas where .NET developers can start leveraging PowerShell for daily administrative tasks. In particular, I found the sections on PowerShell Remoting and administering IIS and SQL Server particularly useful. Being able to easily connect to remote web, application, and database servers from the command line (or PowerShell prompt) and do basic system administration is a huge time savings in an agile development environment.
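
As a small, hypothetical example of the kind of task the chapter covers, the snippet below queries IIS on a remote web server using PowerShell Remoting; the server name is a placeholder.

# Query IIS sites on a remote server using PowerShell Remoting
Invoke-Command -ComputerName "WEBSRV01" -ScriptBlock {
  Import-Module WebAdministration
  Get-Website | Select-Object Name, State, PhysicalPath
}

# Or open an interactive session for ad hoc administration
Enter-PSSession -ComputerName "WEBSRV01"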

Chapter 4 focuses on how PowerShell interfaces with SOAP and REST based services, web requests, and JSON. Windows Communication Foundation (WCF) based service-oriented application development has been a trend for the last few years. Being able to manage, test, and monitor SOAP and RESTful services and HTTP requests/responses is important to .NET developers. PowerShell can often be quicker and easier than writing and compiling service utilities in Visual Studio, or using proprietary third-party applications.
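
For instance, a hypothetical one-liner like the following retrieves a RESTful resource and deserializes the JSON response into PowerShell objects, with no Visual Studio project required; the URL is a placeholder.

# Call a RESTful endpoint; Invoke-RestMethod deserializes the JSON response automatically
$orders = Invoke-RestMethod -Uri "http://example.com/api/orders" -Method Get
$orders | Format-Table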

Chapter 5 is dedicated to Visual Studio Team Foundation Server (TFS), Microsoft’s end-to-end Application Lifecycle Management (ALM) solution. Chapter 5 details the installation and use of the TFS Power Tools and the TFS PowerShell snap-in. Having held the roles of lead developer and Scrum Master, I have personally found some of the best uses for PowerShell in automating various aspects of TFS. Managing TFS often requires repetitive tasks, an area where PowerShell excels. You will need to explore additional resources beyond the scope of this book to really start automating TFS with PowerShell.

Conclusion

Overall, I enjoyed the book and felt it was well worth the time to explore. I applaud Sherif for targeting a PowerShell book specifically to developers. Due to its short length, the book did leave me wanting more information on a few subjects that were barely skimmed. I also found myself expecting guidance on a few subjects the book did not touch upon, such as PowerShell for cloud-based development (Azure), test automation, and build and deployment automation. For more information on some of those subjects, I recommend Sherif’s other book, also published by Packt Publishing, PowerShell 3.0 Advanced Administration Handbook.


Using the WCF Web HTTP Programming Model with Entity Framework 5

Build an IIS-hosted WCF Service using the WCF Web HTTP Programming Model. Use basic HTTP Methods with the WCF Service to perform CRUD operations on a SQL Server database, using a Data Access Layer built with Entity Framework 5 and the Database First Development Model.

You can download a complete copy of this Post’s source code from DropBox.

Introduction

In the two previous Posts, we used the new Entity Framework 5 to create a Data Access Layer, using both the Code First and Database First Development Models. In this Post, we will create a Windows Communication Foundation (WCF) Service. The service will sit between the client application and our previous Post’s Data Access Layer (DAL), built with an ADO.NET Entity Data Model (EDM). Using the WCF Web HTTP Programming Model, we will expose the WCF Service’s operations to a non-SOAP endpoint, and call them using HTTP Methods.

Why use the WCF Web HTTP Programming Model? WCF is a well-established, reliable, secure, enterprise technology. Many large, as well as small, organizations use WCF to build service-oriented applications. However, as communications become increasingly Internet-enabled and mobile, the WCF Web HTTP Programming Model allows us to add the use of simple HTTP methods, such as POST, GET, DELETE, and PUT, to existing WCF services. Adding a web endpoint to an existing WCF service extends its reach to modern end-user platforms with minimal effort. Lastly, using the WCF Web HTTP Programming Model allows us to move toward the increasingly popular RESTful Web Service Model, which so many organizations are finally starting to embrace in the enterprise.

Creating the WCF Service

The major steps involved in this example are as follows:

  1. Create a new WCF Service Application Project;
  2. Add the Entity Framework package via NuGet;
  3. Add a Reference to the previous Post’s DAL project;
  4. Add a Connection String to the project’s configuration;
  5. Create the WCF Service Contract;
  6. Create the operations the service will expose via a web endpoint;
  7. Configure the service’s behaviors, binding, and web endpoint;
  8. Publish the WCF Service to IIS using VS2012’s Web Project Publishing Tool;
  9. Test the service’s operations with Fiddler.

The WCF Service Application Project


Solution Explorer View of New Solution for Reference

Start by creating a new Visual Studio 2012 WCF Service Application Project, named ‘HealthTracker.WcfService’. Add it to a new Solution, named ‘HealthTracker’. The WCF Service Application Project type is specifically designed to be hosted by Microsoft’s Internet Information Services (IIS).


Create New WCF Service Application Project

Once the Project and Solution are created, install Entity Framework (‘System.Data.Entity’) into the Solution by right-clicking on the Solution and selecting ‘Manage NuGet Packages for Solution…’ Install the ‘EntityFramework’ package. If you haven’t discovered the power of NuGet for Visual Studio, check out their site.


Manage NuGet Packages – Add Entity Framework to Solution
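
Alternatively, assuming the Package Manager Console (a PowerShell host inside Visual Studio) is available, the package can be installed with a single command; pinning to version 5.0.0 reflects the EF5 release used in this post.

# Install Entity Framework 5 from the Package Manager Console
Install-Package EntityFramework -Version 5.0.0 -ProjectName HealthTracker.WcfService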

Next, add a Reference in the new Project, to the previous ‘HealthTracker.DataAccess.DbFirst’ Project. When the WCF Service Application Project is built, a copy of the ‘HealthTracker.DataAccess.DbFirst.dll’ assembly will be placed into the ‘bin’ folder of the ‘HealthTracker.WcfService’ Project.


Add a Reference to Previous EF5 Database First Project

Next, copy the connection string from the previous project’s ‘App.Config’ file and paste it into the new WCF Service Application Project’s ‘Web.config’ file. The connection string is required by the ‘HealthTracker.DataAccess.DbFirst.dll’ assembly. The connection string should look similar to the code below.

<connectionStrings>
    <add name="HealthTrackerEntities" connectionString="metadata=res://*/HealthTracker.csdl|res://*/HealthTracker.ssdl|res://*/HealthTracker.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=[Your_Server]\[Your_SQL_Instance];initial catalog=HealthTracker;persist security info=True;user id=DemoLogin;password=[Your_Password];MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
</connectionStrings>

The WCF Service

Delete the default ‘Service.svc’ and ‘IService.cs’ files created by the Project Template. You can also delete the default ‘App_Data’ folder. Add a new WCF Service, named ‘HealthTrackerWcfService.svc’. Adding a new service creates both the WCF Service file (.svc) and a WCF Service Contract file (.cs), an Interface named ‘IHealthTrackerWcfService.cs’. The ‘HealthTrackerWcfService’ class implements the ‘IHealthTrackerWcfService’ Interface (‘public class HealthTrackerWcfService : IHealthTrackerWcfService’).


Add New WCF Service to Project

The WCF Service file contains the public methods, called service operations, which the service will expose through a web endpoint. The second file, an Interface class, is referred to as the Service Contract. The Service Contract contains the method signatures of all the operations the service’s web endpoint exposes. The Service Contract contains attributes, part of the ‘System.ServiceModel’ and ‘System.ServiceModel.Web’ Namespaces, describing how the service and its operations will be exposed. To create the Service Contract, replace the default code in the file, ‘IHealthTrackerWcfService.cs’, with the following code.

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;
using HealthTracker.DataAccess.DbFirst;

namespace HealthTracker.WcfService
{
    [ServiceContract]
    public interface IHealthTrackerWcfService
    {
        [OperationContract]
        [WebInvoke(UriTemplate = "GetPersonId?name={personName}",
            Method = "GET")]
        int GetPersonId(string personName);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetPeople",
            Method = "GET")]
        List<Person> GetPeople();

        [OperationContract]
        [WebInvoke(UriTemplate = "GetPersonSummaryStoredProc?id={personId}",
            Method = "GET")]
        List<GetPersonSummary_Result> GetPersonSummaryStoredProc(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "InsertPerson",
            Method = "POST")]
        bool InsertPerson(Person person);

        [OperationContract]
        [WebInvoke(UriTemplate = "UpdatePerson",
            Method = "PUT")]
        bool UpdatePerson(Person person);

        [OperationContract]
        [WebInvoke(UriTemplate = "DeletePerson?id={personId}",
            Method = "DELETE")]
        bool DeletePerson(int personId);
        
        [OperationContract]
        [WebInvoke(UriTemplate = "UpdateOrInsertHydration?id={personId}",
            Method = "POST")]
        bool UpdateOrInsertHydration(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "InsertActivity",
            Method = "POST")]
        bool InsertActivity(Activity activity);

        [OperationContract]
        [WebInvoke(UriTemplate = "DeleteActivity?id={activityId}",
            Method = "DELETE")]
        bool DeleteActivity(int activityId);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetActivities?id={personId}",
            Method = "GET")]
        List<ActivityDetail> GetActivities(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "InsertMeal",
            Method = "POST")]
        bool InsertMeal(Meal meal);

        [OperationContract]
        [WebInvoke(UriTemplate = "DeleteMeal?id={mealId}",
            Method = "DELETE")]
        bool DeleteMeal(int mealId);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetMeals?id={personId}",
            Method = "GET")]
        List<MealDetail> GetMeals(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetPersonSummaryView?id={personId}",
            Method = "GET")]
        List<PersonSummaryView> GetPersonSummaryView(int personId);
    }
}

The service’s operations use a variety of HTTP Methods, including GET, POST, PUT, and DELETE. The operations take a mix of primitive data types, as well as complex objects as arguments. The operations also return the same variety of simple data types, as well as complex objects. Note the operation ‘InsertActivity’ for example. It takes a complex object, an ‘Activity’, as an argument, and returns a Boolean. All the CRUD operations dealing with inserting, updating, or deleting data return a Boolean, indicating success or failure of the operation’s execution. This makes unit testing and error handling on the client-side easier.

Next, we will create the WCF Service. Replace the existing contents of the ‘HealthTrackerWcfService.svc’ file with the following code.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.ServiceModel;
using HealthTracker.DataAccess.DbFirst;

namespace HealthTracker.WcfService
{
    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
    public class HealthTrackerWcfService : IHealthTrackerWcfService
    {
        private readonly DateTime _today = DateTime.Now.Date;

        #region Service Operations
        /// <summary>
        /// Example of Adding a new Person.
        /// </summary>
        /// <param name="person">New Person Object</param>
        /// <returns>True if successful</returns>
        public bool InsertPerson(Person person)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    dbContext.People.Add(new DataAccess.DbFirst.Person { Name = person.Name });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of Updating a Person.
        /// </summary>
        /// <param name="person">New Person Object</param>
        /// <returns>True if successful</returns>
        public bool UpdatePerson(Person person)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personToUpdate = dbContext.People.FirstOrDefault(p => p.PersonId == person.PersonId);
                    if (personToUpdate == null) return false;
                    personToUpdate.Name = person.Name;
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of deleting a Person.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>True if successful</returns>
        public bool DeletePerson(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personToDelete = dbContext.People.FirstOrDefault(p => p.PersonId == personId);
                    if (personToDelete == null) return false;
                    dbContext.People.Remove(personToDelete);
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of finding a Person's Id.
        /// </summary>
        /// <param name="personName">Name of the Person to find</param>
        /// <returns>Person's unique Id (PersonId)</returns>
        public int GetPersonId(string personName)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personId = dbContext.People
                                            .Where(person => person.Name == personName)
                                            .Select(person => person.PersonId)
                                            .First();
                    return personId;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return -1;
            }
        }

        /// <summary>
        /// Returns a list of all People.
        /// </summary>
        /// <returns>List of People</returns>
        public List<Person> GetPeople()
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var people = (dbContext.People.Select(p => p));
                    var peopleList = people.Select(p => new Person
                                                            {
                                                                PersonId = p.PersonId,
                                                                Name = p.Name
                                                            }).ToList();

                    return peopleList;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }
        /// <summary>
        /// Example of adding a Meal.
        /// </summary>
        /// <param name="meal">New Meal Object</param>
        /// <returns>True if successful</returns>
        public bool InsertMeal(Meal meal)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    dbContext.Meals.Add(new DataAccess.DbFirst.Meal
                                            {
                                                PersonId = meal.PersonId,
                                                Date = _today,
                                                MealTypeId = meal.MealTypeId,
                                                Description = meal.Description
                                            });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of deleting a Meal.
        /// </summary>
        /// <param name="mealId">MealId</param>
        /// <returns>True if successful</returns>
        public bool DeleteMeal(int mealId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var mealToDelete = dbContext.Meals.FirstOrDefault(m => m.MealId == mealId);
                    if (mealToDelete == null) return false;
                    dbContext.Meals.Remove(mealToDelete);
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Return all Meals for a Person.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns></returns>
        public List<MealDetail> GetMeals(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var meals = dbContext.Meals.Where(m => m.PersonId == personId)
                                         .Select(m => new MealDetail
                                                          {
                                                              MealId = m.MealId,
                                                              Date = m.Date,
                                                              Type = m.MealType.Description,
                                                              Description = m.Description
                                                          }).ToList();
                    return meals;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Example of adding an Activity.
        /// </summary>
        /// <param name="activity">New Activity Object</param>
        /// <returns>True if successful</returns>
        public bool InsertActivity(Activity activity)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    dbContext.Activities.Add(new DataAccess.DbFirst.Activity
                                                 {
                                                     PersonId = activity.PersonId,
                                                     Date = _today,
                                                     ActivityTypeId = activity.ActivityTypeId,
                                                     Notes = activity.Notes
                                                 });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }

        }

        /// <summary>
        /// Example of deleting a Activity.
        /// </summary>
        /// <param name="activityId">ActivityId</param>
        /// <returns>True if successful</returns>
        public bool DeleteActivity(int activityId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var activityToDelete = dbContext.Activities.FirstOrDefault(a => a.ActivityId == activityId);
                    if (activityToDelete == null) return false;
                    dbContext.Activities.Remove(activityToDelete);
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Return all Activities for a Person.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>List of Activities</returns>
        public List<ActivityDetail> GetActivities(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var activities = dbContext.Activities.Where(a => a.PersonId == personId)
                                              .Select(a => new ActivityDetail
                                                               {
                                                                   ActivityId = a.ActivityId,
                                                                   Date = a.Date,
                                                                   Type = a.ActivityType.Description,
                                                                   Notes = a.Notes
                                                               }).ToList();
                    return activities;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Example of updating existing Hydration count.
        /// Else adding new Hydration if it doesn't exist.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>True if successful</returns>
        public bool UpdateOrInsertHydration(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var existingHydration = dbContext.Hydrations.FirstOrDefault(
                        hydration => hydration.PersonId == personId
                                     && hydration.Date == _today);

                    if (existingHydration != null && existingHydration.HydrationId > 0)
                    {
                        existingHydration.Count++;
                        dbContext.SaveChanges();
                        return true;
                    }

                    dbContext.Hydrations.Add(new Hydration
                                                 {
                                                     PersonId = personId,
                                                     Date = _today,
                                                     Count = 1
                                                 });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Return a count of all Meals, Hydrations, and Activities for a Person.
        /// Based on a Database View (virtual table).
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>Summary for a Person</returns>
        public List<PersonSummaryView> GetPersonSummaryView(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personView = (dbContext.PersonSummaryViews
                                               .Where(p => p.PersonId == personId))
                                               .ToList();
                    return personView;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Return a count of all Meals, Hydrations, and Activities for a Person.
        /// Based on a Stored Procedure.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>Summary for a Person</returns>
        public List<GetPersonSummary_Result> GetPersonSummaryStoredProc(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personView = (dbContext.GetPersonSummary(personId)
                                               .Where(p => p.PersonId == personId))
                                               .ToList();
                    return personView;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        #endregion
    }

    #region POCO Classes

    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
    }

    public class Meal
    {
        public int PersonId { get; set; }
        public int MealTypeId { get; set; }
        public string Description { get; set; }
    }

    public class MealDetail
    {
        public int MealId { get; set; }
        public DateTime Date { get; set; }
        public string Type { get; set; }
        public string Description { get; set; }
    }

    public class Activity
    {
        public int PersonId { get; set; }
        public int ActivityTypeId { get; set; }
        public string Notes { get; set; }
    }

    public class ActivityDetail
    {
        public int ActivityId { get; set; }
        public DateTime Date { get; set; }
        public string Type { get; set; }
        public string Notes { get; set; }
    }

    #endregion
}

Each method instantiates an instance of ‘HealthTrackerEntities’, referenced by the project and accessible to the class via the ‘using HealthTracker.DataAccess.DbFirst;’ statement. ‘HealthTrackerEntities’ implements ‘System.Data.Entity.DbContext’. Each method uses LINQ to Entities to interact with the Entity Data Model, through the ‘HealthTrackerEntities’ object.

In addition to the methods (service operations) contained in the HealthTrackerWcfService class, there are several POCO classes. Some of these POCO classes, such as ‘Meal’ and ‘Activity’, are instantiated to hold data passed in the operation’s arguments by the client Request message. Other POCO classes, such as ‘MealDetail’ and ‘ActivityDetail’, are instantiated to hold data passed back to the client by the operations, in the Response message. These POCO instances are serialized to and deserialized from JSON or XML.

The WCF Service’s Configuration

The most complex, and potentially the most confusing, part of creating a WCF Service, at least for me, is always the service’s configuration. Due in part to the flexibility of WCF Services to accommodate many types of client, server, network, and security situations, configuring a service takes an in-depth understanding of bindings, behaviors, endpoints, security, and associated settings. The best book I’ve found on configuring WCF Services is Pro WCF 4: Practical Microsoft SOA Implementation, by Nishith Pathak. The book goes into great detail on all aspects of configuring WCF Services to meet your particular project’s needs.

Since we are only using the WCF Web HTTP Programming Model to build and expose our service, the ‘webHttpBinding’ binding is the only binding we need to configure. I have made an effort to strip out all the unnecessary boilerplate settings from our service’s configuration.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <configSections>
        <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    </configSections>
    <appSettings>
        <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
    </appSettings>
    <system.web>
        <compilation debug="true" targetFramework="4.5" />
        <httpRuntime targetFramework="4.5" />
    </system.web>
    <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        <behaviors>
            <endpointBehaviors>
                <behavior name="webHttpBehavior">
                    <webHttp helpEnabled="true" defaultOutgoingResponseFormat="Json"
                             defaultBodyStyle="Bare" automaticFormatSelectionEnabled="true"/>
                </behavior>
            </endpointBehaviors>
        </behaviors>
        <services>
            <service name="HealthTracker.WcfService.HealthTrackerWcfService">
                <endpoint address="web" binding="webHttpBinding" behaviorConfiguration="webHttpBehavior"
                          contract="HealthTracker.WcfService.IHealthTrackerWcfService" />
            </service>
        </services>
    </system.serviceModel>
    <system.webServer>
        <modules runAllManagedModulesForAllRequests="true" />
        <directoryBrowse enabled="false" />
    </system.webServer>
    <entityFramework>
        <defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework" />
    </entityFramework>
    <connectionStrings>
        <add name="HealthTrackerEntities" connectionString="metadata=res://*/HealthTracker.csdl|res://*/HealthTracker.ssdl|res://*/HealthTracker.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=gstafford-windows-laptop\DEVELOPMENT;initial catalog=HealthTracker;persist security info=True;user id=DemoLogin;password=DemoLogin123;MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
    </connectionStrings>
</configuration>

Some items to note in the configuration:

  • Line 4: Entity Framework – The Entity Framework 5 reference you added earlier via NuGet.
  • Line 18: Help – This enables an automatically generated Help page, displaying all the service’s operations for the endpoint, with details on how to call each operation.
  • Lines 18-19: Request and Response Message Formats – Default settings for message format and body style of Request and Response messages. In this case, JSON and Bare. Setting defaults saves lots of time, not having to add attributes to each individual operation.
  • Lines 25-26: Endpoint – The service’s single endpoint, with a single binding and behavior. For this Post, we are only using the ‘webHttpBinding’ binding type.
  • Line 38: Connection String – The SQL Server Connection String you copied from the previous Post’s Project. Required by the DAL Project Reference you added, earlier.

Deploying the Service to IIS

Now that the service is complete, we will deploy and host it in IIS. There are many options when it comes to creating and configuring a new website – setting up domain names, choosing ports, configuring firewalls, defining bindings, setting permissions, and so forth. This Post will not go into that level of detail. I will demonstrate how I chose to set up my website and publish my WCF Service.

We need a physical location to deploy the WCF Service’s contents. I recommend a location outside of the IIS root directory, such as ‘C:\HealthTrackerWfcService’. Create this folder on the server where you will be running IIS, either locally or remotely. This is the folder to which we will publish the service’s contents from Visual Studio, next.

Create a new website in IIS to host the service. Name the site ‘HealthTracker’. You can configure and use a domain name or a specific port, from which to call the service. I chose to configure a domain name on my IIS Server, ‘WcfService.HealthTracker.com’. If you are unsure how to set up a new domain name on your network, then a local, open port is probably easier for you. Pick any random port, like 15678.

Create New Website in IIS to Host Service

Create New Website in IIS to Host Service

Publish the WCF Service to the deployment location, explained above, using Visual Studio 2012’s Web Project Publishing Tool. Exactly how and where you set up your website, and any security considerations, will affect the configuration of the Publishing Tool’s Profile. My options will not necessarily work for your specific environment.

Testing the WCF Service

Congratulations, your service is deployed. Now, let’s see if it works. Before we test the individual operations, we will ensure the service is being hosted correctly. Open the service’s Help page. This page automatically shows details on all operations of a particular endpoint. The address should follow the convention of http://[your_domain]:[your_port]/[your_service]/[your_endpoint_address]/help. In my case, ‘http://wcfservice.healthtracker.com/HealthTrackerWcfService.svc/web/help’. If this page displays, then the service is deployed correctly and its web endpoint is responding as expected.

WCF Service Operations Displayed at Help URL

WCF Service Help Page – Service Endpoint Operations

While on the Help page, click on any of the HTTP Methods to see a further explanation of that particular operation. This page is especially useful for copying the URL of the operation for use in Fiddler. It is even more useful for grabbing the sample JSON or XML Request messages. Just substitute your test values for the default values, in Fiddler. It saves a lot of typing and many potential errors.

WCF Service Help Page - Example Request Body

WCF Service Help Page – Example Request Body

Fiddler

The easiest way to test each of the service’s operations is Fiddler. Download and install Fiddler, if you don’t already have it. Using Fiddler, construct a Request message and call the operations by executing the operation’s associated HTTP Method. Below is an example of calling the ‘InsertActivity’ operation. This CRUD operation accepts a new Activity object as an argument, inserts it into the database via the Entity Data Model, and returns a Boolean value indicating success.

To call the ‘InsertActivity’ operation, 1) select the ‘POST’ HTTP method, 2) input the URL for the ‘InsertActivity’ operation, 3) select the HTTP version (1.1), 4) input the Content-Type (JSON or XML) in the Request Headers section, 5) input the body of the Request, a new ‘Activity’ as JSON, in the Request Body section, and 6) select ‘Execute’. The 7) Response should appear in the Web Sessions window.
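
If you would rather exercise the operation from code than from Fiddler, the same Request can be constructed with a few lines of C#. Below is a minimal sketch using ‘HttpClient’ from .NET 4.5’s System.Net.Http; the service URL and operation address are assumptions based on the examples in this Post – copy the exact URL from the service’s Help page.

using System;
using System.Net.Http;
using System.Text;

class InsertActivityExample
{
    static void Main()
    {
        // Assumed operation address; substitute your own domain or port
        const string url =
            "http://wcfservice.healthtracker.com/HealthTrackerWcfService.svc/web/InsertActivity";

        // JSON Request body matching the Activity POCO (PersonId, ActivityTypeId, Notes)
        const string json = "{\"PersonId\":1,\"ActivityTypeId\":2,\"Notes\":\"Posted from HttpClient\"}";

        using (var client = new HttpClient())
        {
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var response = client.PostAsync(url, content).Result;

            // The operation returns a Boolean indicating success, e.g. 'true'
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}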

Fiddler Example - InsertActivity Operation Request

Fiddler Example – InsertActivity Operation Request

Executing the 1) Request (constructed above) should result in a 2) Response in the Web Sessions window. Double-clicking on the Web Session should result in the display of the 3) Response message in the lower right-hand window. The operation returns a Boolean indicating whether the operation succeeded or failed. In this case, we received a value of ‘true’.

Fiddler Example - InsertActivity Operation Response

Fiddler Example – InsertActivity Operation Response

To view the Activity we just inserted, we need to call the ‘GetActivities’ operation, passing it the same ‘PersonId’ value we used when inserting the Activity. In Fiddler, 1) select the ‘GET’ HTTP method, 2) input the URL for the ‘GetActivities’ operation, including a value for the ‘PersonId’ argument, 3) select the HTTP version (1.1), 4) input a Content-Type (JSON or XML) in the Request Headers section, and 5) select ‘Execute’. Same as before, the 6) Response should appear in the Web Sessions window. This time there is no Request body content.
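
The equivalent GET call can also be made from code. A short sketch, again using ‘HttpClient’ and an assumed URL (the exact address, including how the ‘PersonId’ argument is passed, is shown on the operation’s Help page):

using (var client = new HttpClient())
{
    // Assumed operation address; verify the actual UriTemplate on the Help page
    var url = "http://wcfservice.healthtracker.com/HealthTrackerWcfService.svc/web/GetActivities/1";

    // Returns a JSON array of ActivityDetail objects for the Person
    string json = client.GetStringAsync(url).Result;
    Console.WriteLine(json);
}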

Fiddler Example - GetActivities Operation Request

Fiddler Example – GetActivities Operation Request

As before, executing the 1) Request should result in a 2) Response in the Web Sessions window. Double-clicking on the Web Session should result in the display of the 3) Response in the lower left window. This method returns a JSON payload with each Activity associated with the PersonId argument.

Fiddler Example - GetActivities Operation Response

Fiddler Example – GetActivities Operation Response

You can use this same process to test all the other operations at the WCF Service’s endpoint. You can also save the Request message or complete Web Sessions in Fiddler should you need to re-test.

Conclusion

We now have a WCF Service deployed to IIS, running, and tested. The service’s operations can be called from any application capable of making an HTTP call. Thank you for taking the time to read this Post. I hope you found it beneficial.


Database First Development with Entity Framework 5 in Visual Studio 2012

Build and test a Data Access Layer (DAL) using Entity Framework 5 and Database First Development in Visual Studio 2012. Use the Entity Framework Designer to build an ADO.NET Entity Data Model containing database tables, views, stored procedures, and scalar-valued functions. An updated version of this project’s source code, using EF6 is now available on GitHub. The GitHub repository contains all three Entity Framework blog posts.

HealthTracker EDMX Diagram

Introduction

In the last post, we explored Microsoft’s new Entity Framework 5 with Code First Development. In this post, we will explore Entity Framework 5 with Database First Development. We will be using the same data model as before. However, this time instead of POCOs, we will start with a SQL Server 2008 R2 database and use the Entity Framework Designer to build an ADO.NET Entity Data Model (EDM). In addition to database tables, we will look at Entity Framework’s ability to support database views (virtual tables), stored procedures, and scalar-valued functions.

Download a complete copy of the post’s source code, with SQL scripts to create the database objects and populate the database with sample data, from DropBox.

Entity Framework’s Code First and Model First development offer many great options for .NET developers. However, in my experience, most enterprise-level application developers work with a Database First Development model. Using Database First Development, Entity Framework 5 (EF5) provides the ability to construct a powerful yet easy-to-implement data access layer (DAL) between the database and the business logic.

The steps involved in this example are as follows:

  1. Create the new SQL Server database;
  2. Create the database objects;
  3. Create a new C# Class Library Project in Visual Studio 2012 Solution;
  4. Add a new ADO.NET Entity Data Model to project;
  5. Create a new Database Connection;
  6. Import the database objects into the EDM;
  7. Modify the EDM to accommodate the scalar-valued functions;
  8. Populate the database with sample data;
  9. Validate the EDM using a Unit Test Project;

Below is a final view of the entire Solution for reference as you work through the post.

Solution Explorer View of Final Solution

Solution Explorer View of Final Solution

The Database

Using SQL Server 2008 R2 Management Studio (SSMS), Toad for SQL, or similar application, create a new database, named ‘HealthTracker’. I left all the default database settings unchanged for this post.

Create the New Health Tracker Database

Create the New Health Tracker Database

Next, execute the supplied sql script to populate the HealthTracker database with the necessary database objects. The script should create the following objects: (6) tables, (1) view, (1) stored procedure, (3) scalar-valued functions, and all the necessary table relationships. All objects will be members of the default ‘dbo’ schema.

Database in SQL Server Object Viewer

Database in SQL Server Object Viewer

Barring a few minor changes, this data model is identical to the one we built in the last post using Code First Development with POCOs. The below Database Diagram illustrates the one-to-many relationships between the tables. The tables are pluralized in the database, as opposed to singular in the ADO.NET Entity Data Model (Meals vs. Meal, People vs. Person, etc.). This is a common pattern with Entity Framework.

Database Diagram of Table Relationships

Database Diagram of Table Relationships

Optional: Setting Up Database Credentials

For security and simplicity, I chose to add a new Login, User, and Role to the database. This step is not necessary for this post. However, it is good to get into the habit of securing your database, using database Logins, Users, Roles, and Permissions. In addition, if you are planning to deploy the database and the DAL to other environments such as Test or Production, don’t tie your Solution to personal credentials, a machine-specific account, or to an administrative role in the database with overly broad permissions.

The Database User, DemoUser, is associated with the Login, DemoLogin. DemoUser is a member of the Database Role, DemoRole. I will use the DemoLogin account to connect the EDM to the database. DemoRole has the minimal database permissions the Entity Framework will need to function: Alter, Insert, Delete, Execute, Select, Update, and View definition. DemoUser only needs Connect permission. Again, this step is optional. You can use your own credentials if you choose.

Included with the downloadable code is a third sql script that should create the User, Role, Login, and required Permissions, if you choose to use them to follow along with the post.

Database Permissions for User and Role

Database Permissions for User and Role

The Data Access Layer

Following good software design principles, we will separate our concerns between Projects. We want to create a Data Access Layer (DAL), to act as an interface between our database and our business logic. We don’t want to interact with the data directly in our DAL Project. By separating the DAL into its own project, we can reference that project’s assembly (.dll) from any other project, be it another class library (our business logic), a WCF service, WPF, Silverlight or console application, or an ASP.NET site. To start, create a new Visual C# Class Library. Name it ‘HealthTracker.DataAccess.DbFirst’. Create a new Solution for the Project in the same dialog box, named ‘HealthTracker’.

New Visual C# Windows Class Library Project

New Visual C# Windows Class Library Project

First, install Entity Framework (System.Data.Entity namespace classes) into the Solution by right clicking on the Solution and selecting ‘Manage NuGet Packages for Solution…’. Install the ‘EntityFramework’ package. If you haven’t discovered the power of NuGet with Visual Studio, check out their site.

Manage NuGet Packages - Install EntityFramework Package

Manage NuGet Packages – Install EntityFramework Package

Next, add a new ‘ADO.NET Entity Data Model’ item, named ‘HealthTracker.edmx’, to the HealthTracker.DataAccess.DbFirst project. According to Microsoft, an .edmx file also contains information used by the ADO.NET Entity Data Model Designer (Entity Designer) to render a model graphically. An .edmx file is the combination of three metadata files: the conceptual schema definition language (CSDL), store schema definition language (SSDL), and mapping specification language (MSL) files. For more information, see .edmx File Overview (Entity Framework).

Adding the ADO.NET Entity Data Model to Project

Adding the ADO.NET Entity Data Model to Project

Adding the ADO.NET Entity Data Model item will start the Entity Data Model Wizard. Since we are exploring Database First Development, select ‘Generate from Database’.

Entity Data Model Wizard - Generate from Database

Entity Data Model Wizard – Generate from Database

Next, we will be prompted to choose a data connection. Since this is the first time we are accessing our newly created HealthTracker database, we need to create a new data connection. Select ‘New Connection…’

Entity Data Model Wizard - Choose Your Data Connection

The options you choose in the ‘Connection Properties’ dialog window, such as the server and instance name, will depend on your own SQL Server configuration and the method you chose to log onto the server. As mentioned before, I will use the ‘DemoLogin’ account. The connection string will reside in the project’s app.config file. Make sure to always choose ‘Test Connection’ to verify you have configured the Data Connection properly.

New Connection - Database Connection Properties

New Connection – Database Connection Properties

Once the data connection is established, we are prompted to add the database objects to the EDM. Only add the objects that we created earlier with the sql script.

Entity Data Model Wizard - Choose Your Database Objects

Entity Data Model Wizard – Choose Your Database Objects

When the import is complete, the EDM should look like the following in the Entity Designer. You should see the six table entities, with one-to-many associations between them, as well as the one view entity, ‘PersonSummaryView’. Each database object you imported is referred to as an entity. Drag the entities into any position you want on the Design surface.

HealthTracker Entity Data Model Diagram

HealthTracker Entity Data Model Diagram

Similarly, when the import is complete, the EDM should look like the following in the Model Browser.

Model Browser - View of the Entity Data Model

Model Browser – View of the Entity Data Model
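
At this point, the generated ‘HealthTrackerEntities’ context can already be queried with LINQ to Entities. Below is a quick sanity-check sketch; the ‘People’ entity set and ‘Name’ property follow the naming shown in the EDM above. Remember the database has no sample data yet, so counts will be zero until the sample data script is run later in this post.

using (var db = new HealthTrackerEntities())
{
    // Simple LINQ to Entities queries against the imported entities
    var personCount = db.People.Count();
    var firstPerson = db.People.OrderBy(p => p.PersonId).FirstOrDefault();

    Console.WriteLine("People in database: {0}", personCount);
    if (firstPerson != null)
    {
        Console.WriteLine("First person: {0}", firstPerson.Name);
    }
}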

Stored Procedures

You will recall we imported a stored procedure, ‘GetPersonSummary’. What happened to that object? In the Model Browser, double-click on the GetPersonSummary item under Function Imports. The stored procedure was imported into the EDM by EF. The results the procedure returns from the database are associated with a new complex object type, ‘GetPersonSummary_Result’.

Function Import - Stored Procedure

Function Import – Stored Procedure
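
Once imported, the stored procedure is exposed as a method on the context, returning a collection of ‘GetPersonSummary_Result’ objects. A brief usage sketch is below; I am assuming the import takes a single ‘personId’ parameter, like the underlying scalar functions it calls – verify the generated method’s actual signature under the Function Imports folder.

using (var db = new HealthTrackerEntities())
{
    // Call the imported stored procedure; the personId parameter is an assumption
    var results = db.GetPersonSummary(1).ToList();

    // Each row is a 'GetPersonSummary_Result', the complex type EF generated for the result set
    Console.WriteLine("Rows returned: {0}", results.Count);
}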

Scalar Functions

If you view the sql code for the above stored procedure, ‘GetPersonSummary’, you will note it calls three scalar-valued functions. These three happen to be the three functions we imported into the EDM. Each function takes a single input parameter, ‘personId’, and returns an integer value – the count of Meals, Activities, or Hydrations for that Person, based on their Id.

We can also call the scalar-valued functions directly. Unfortunately, in my experience, working with scalar-valued functions in Entity Framework is still not as easy as working with tables, views, and stored procedures. I have found two methods to work with scalar-valued functions. The first method is a bit of a hack in my opinion, but it works. The method is documented in several Internet posts, including this one on Stack Overflow.

This method requires making some minor changes to the .edmx file’s xml, directly. To do so, right-click on the .edmx file and select ‘Open With…’, ‘XML (Text) Editor’. This is how the functions look in the .edmx file before changes:

<Function Name="CountActivities" ReturnType="int" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo">
    <Parameter Name="personId" Type="int" Mode="In" />
</Function>
<Function Name="CountHydrations" ReturnType="int" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo">
    <Parameter Name="personId" Type="int" Mode="In" />
</Function>
<Function Name="CountMeals" ReturnType="int" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo">
    <Parameter Name="personId" Type="int" Mode="In" />
</Function>

Remove the ‘ReturnType’ attribute from the <Function /> element. Then, add a <CommandText /> element to each of the <Function /> elements. See the modified .edmx file below for the contents of the <CommandText /> elements.

<Function Name="CountActivities" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo">
    <CommandText>
        SELECT [dbo].[CountActivities] (@personId)
    </CommandText>
    <Parameter Name="personId" Type="int" Mode="In" />
</Function>
<Function Name="CountHydrations" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo">
    <CommandText>
        SELECT [dbo].[CountHydrations] (@personId)
    </CommandText>
    <Parameter Name="personId" Type="int" Mode="In" />
</Function>
<Function Name="CountMeals" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo">
    <CommandText>
        SELECT [dbo].[CountMeals] (@personId)
    </CommandText>
    <Parameter Name="personId" Type="int" Mode="In" />
</Function>

Next, in the Model Browser, right-click on the ‘Function Imports’ folder and select ‘Add Function Import…’ This brings up the ‘Add Function Import’ dialog window. We will import the ‘CountActivities’ scalar-valued function to show this method. Enter the following information in the dialog window and select Save:

Function Import - Scalar-valued Function

Function Import – Scalar-valued Function

You can do the same for the other two scalar-valued functions, if you choose. We will only test the ‘CountActivities’ function next, with our Unit Tests. The downside of this method is that the edits to the .edmx file will be lost when you update the EDM from the database. You will have to re-edit the .edmx file each time. This is not a great solution.
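
For completeness, here is how the imported ‘CountActivities’ function is called once the Function Import exists. This mirrors the call made in the Unit Tests later in this post; the result comes back as a nullable integer.

using (var db = new HealthTrackerEntities())
{
    // CountActivities was imported with a scalar (Int32) return type
    int? activityCount = db.CountActivities(1).First();
    Console.WriteLine("Activities for PersonId 1: {0}", activityCount.GetValueOrDefault());
}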

The second method of calling a scalar-valued function uses a feature of Entity Framework 5, the ‘Database.SqlQuery(String, Object[])’ method. According to Microsoft, an instance of the Database class is obtained from a DbContext object and can be used to manage the actual database backing a DbContext or connection. Using the ‘SqlQuery’ method, we create a raw SQL query that will return elements of the given generic type. The type can be any type that has properties matching the names of the columns returned from the query, or can be a simple primitive type.

Below is an example of the method’s use, similar to code in the Unit Test Project we will create next, to test our EDM.

using (HealthTrackerEntities context = new HealthTrackerEntities())
{
    // Raw SQL query that calls the scalar-valued function directly against the database
    string sqlQuery = "SELECT [dbo].[CountMeals] ({0})";
    object[] parameters = { 1 };
    int mealCount = context.Database.SqlQuery<int>(sqlQuery, parameters).FirstOrDefault();
}

This method allows us to call the scalar-valued function directly against the database, just as we could query any other object with a sql query. The downside of this method is that we are not really taking advantage of the EDM we constructed. It is easier than the first method, though, and it doesn’t require repeated changes if we update the EDM. Undoubtedly, there are better methods out there than I have presented here.

Testing the Entity Data Model

To confirm that the EDM is functioning properly, we will create and execute a series of Unit Tests. In reality, although we will be using Visual Studio’s Unit Test Project type, the tests are more like functional tests than true unit tests. This is especially true because we are writing the tests after we have completed development of our DAL’s EDM.

We will perform minimal testing of the EDM’s tables, view, stored procedure, and scalar-valued functions, with a series of several simple tests. The tests are only meant to demonstrate the type of tests you could use across all entities in the Model to confirm various functions.

Sample Data

In SSMS or VS2012, execute the supplied sql script that populates the database with test data. The script contains a variety of Meal Types, Activity Types, People, Meals, Activities, and Hydrations table records. Note the script deletes all existing data from those tables. Below is a sample of the Meal table’s sample data.

Sample Meal Data

Sample Meal Data

The Unit Test Project

After adding the sample data to the HealthTracker database, add a new Visual Studio 2012 Unit Test Project, named ‘HealthTracker.UnitTests’, to the ‘HealthTracker’ Solution.

Add New Unit Test Project to Solution

Add New Unit Test Project to Solution

Next, add a Reference to the Unit Test Project from our DAL, the ‘HealthTracker.DataAccess.DbFirst’ Project. This step adds the ‘HealthTracker.DataAccess.DbFirst.dll’ assembly to our Unit Test Project.

Add Entity Data Model Project Reference to Unit Test Project

Add Entity Data Model Project Reference to Unit Test Project

Next, we need to add the same Database Connection we used in the ‘HealthTracker.DataAccess.DbFirst’ Project, to this Project. I always forget this step and end up with a database connection error the first time I try to run a new project. Right-click on the Unit Test Project and select ‘Add New Item…’ Add an ‘Application Configuration File’ item, named ‘app.config’, to the Unit Test Project.

Add Application Configuration File to Unit Test Project

Add Application Configuration File to Unit Test Project

Open the corresponding Application Configuration File in the ‘HealthTracker.DataAccess.DbFirst’ Project and copy the <connectionStrings /> element to our Unit Test Project’s app.config file. The file’s contents should look similar to the following when complete (note, your ‘connectionString’ attribute will have different values).

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <connectionStrings>
        <add name="HealthTrackerEntities" connectionString="metadata=res://*/HealthTracker.csdl|res://*/HealthTracker.ssdl|res://*/HealthTracker.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=gstafford-windows-laptop\DEVELOPMENT;initial catalog=HealthTracker;persist security info=True;user id=DemoLogin;password=DemoLogin123;MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
    </connectionStrings>
</configuration>

Lastly, rename the default ‘UnitTest’ class in the Unit Test Project to ‘HealthTrackerUnitTests’. Enter or copy and paste the contents of the supplied HealthTrackerUnitTests.cs file to this file. The supplied file contains all the unit tests.

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using HealthTracker.DataAccess.DbFirst;

namespace HealthTracker.UnitTests
{
    [TestClass]
    public class HealthTrackerUnitTests
    {
        private const string PersonOriginal = "John Doe";
        private const string PersonNew = "New Person";
        private const string PersonNameUpdated = "Updated Name";

        /// <summary>
        /// Delete any non-sample People from the database created by previous tests
        /// </summary>
        [TestInitialize]
        public void RemoveNonSamplePeople()
        {
            using (var db = new HealthTrackerEntities())
            {
                var peopleToDelete = db.People
                    .Where(person => person.PersonId > 4);

                foreach (var personToDelete in peopleToDelete)
                {
                    db.People.Remove(personToDelete);
                }
                db.SaveChanges();
            }
        }

        /// <summary>
        /// Return the count of People in the database, which should be 4.
        /// </summary>
        [TestMethod]
        public void PersonCountTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                var personCount = (db.People.Select(p => p)).Count();
                Assert.IsTrue(personCount > 0);
            }
        }

        /// <summary>
        /// Return the PersonId of 'John Doe', which should be 1.
        /// </summary>
        [TestMethod]
        public void PersonIdTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                var personId = db.People
                    .Where(person => person.Name == PersonOriginal)
                    .Select(person => person.PersonId)
                    .First();
                Assert.AreEqual(1, personId);
            }
        }
        /// <summary>
        /// Insert a new Person into the database.
        /// </summary>
        [TestMethod]
        public void PersonAddNewTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                // Setup test
                db.People.Add(new Person { Name = PersonNew });
                db.SaveChanges();

                // Test 1
                var personCount = (db.People.Select(p => p)).Count();
                Assert.AreEqual(5, personCount);

                // Test 2
                var newPersonFound = db.People.FirstOrDefault(
                    person => person.Name == PersonNew);
                Assert.IsNotNull(newPersonFound);
            }
        }

        /// <summary>
        /// Update a Person's name in the database.
        /// </summary>
        [TestMethod]
        public void PersonUpdateNameTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                // Setup test
                var personToUpdate = db.People.FirstOrDefault(
                    person => person.Name == PersonOriginal);

                if (personToUpdate != null) personToUpdate.Name = PersonNameUpdated;
                db.SaveChanges();

                // Test
                var updatedPerson = db.People.FirstOrDefault(
                    person => person.Name == PersonNameUpdated);
                Assert.IsNotNull(updatedPerson);

                // Tear down test
                var personToRevert = db.People.FirstOrDefault(
                    person => person.Name == PersonNameUpdated);

                if (personToRevert != null) personToRevert.Name = PersonOriginal;
                db.SaveChanges();
            }
        }

        /// <summary>
        /// Return the Meal count from the 'PersonSummaryView' database view, which should be 21.
        /// </summary>
        [TestMethod]
        public void PersonSummaryViewTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                var mealCount = (db.PersonSummaryViews
                    .Where(p => p.PersonId == 1)
                    .Select(p => p.MealsCount))
                    .First();
                Assert.AreEqual(21, mealCount);
            }
        }

        /// <summary>
        /// Call the CountActivities scalar-valued function directly in the database.
        /// </summary>
        [TestMethod]
        public void ActivtyCountFunctionFromDatabaseTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                object[] parameters = { 1 };
                var activityCount = db.Database.SqlQuery<int>(
                    "SELECT [dbo].[CountActivities] ({0})",
                    parameters).FirstOrDefault();
                Assert.AreEqual(7, activityCount);
            }
        }

        /// <summary>
        /// Call CountActivities scalar-valued function from the Entity Data Model.
        /// </summary>
        [TestMethod]
        public void ActivtyCountFunctionFromEntityTest()
        {
            using (var db = new HealthTrackerEntities())
            {
                var activityCount = db.CountActivities(1).First();
                if (activityCount != null) activityCount = activityCount.Value;
                Assert.AreEqual(7, activityCount);
            }
        }
    }
}

When complete, build the Solution. Then finally, from the Test menu in the top Visual Studio menu bar, or by right-clicking on ‘HealthTrackerUnitTests’, run all Unit Tests. The test results should look like the following.

Test Explorer Showing Results of Unit Tests

Test Explorer Showing Results of Unit Tests

Conclusion

Congratulations, we have built and tested a Data Access Layer using Entity Framework 5. The DAL can now be referenced from a middle-tier business assembly, a WCF Service, or directly from a client application.


Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 3/3

Objectives of 3-Part Series:

Part I: Setting up the Example Database and Visual Studio Projects

  • Setup and configure a new instance of SQL Server 2008 R2
  • Setup and configure a copy of Microsoft’s Adventure Works database
  • Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
  • Test the project’s ability to deploy changes to the database

Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT

  • Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
  • Create a second Solution configuration and SSDT publish profile for an additional database environment
  • Test the converted database project’s ability to publish changes to multiple database environments

Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins

  • Automate the build and delivery of a sql change script artifact, for any database environment, to a designated release location using a parameterized build.
  • Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.

Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins

In this last post, we will use Jenkins to automate the publishing of changes from the Adventure Works SSDT database project to the Adventure Works database. Jenkins, formerly Hudson, is an industry-standard, Java-based, open-source continuous integration server.

Jenkins

If you are unfamiliar with Jenkins, I recommend an earlier post, Automated Deployment to GlassFish Using Jenkins and Ant. That post goes into detail on Jenkins and its associated plug-in architecture. Jenkins’ website provides excellent resources for installing and configuring Jenkins on Windows. For this post, I’ll assume that you have Jenkins installed and running as a Windows Service.

The latest available version of Jenkins, at the time of this post, is 1.476. To follow along with the post, you will need to install and configure the following (4) plug-ins, each of which is used later in the post:

  • Jenkins MSBuild Plug-in
  • Jenkins Apache Ant Plug-in
  • Jenkins Artifact Deployer Plug-in
  • Jenkins Email Extension Plug-in

User Authentication

In the first two posts, we connected to the Adventure Works database with the ‘aw_dev’ SQL Server user account, using SQL Authentication. This account was used to perform schema comparisons and publish changes from the Visual Studio project. Although SQL Authentication is an acceptable means of accessing SQL Server, Windows Authentication is more common in corporate and enterprise software environments, especially where Microsoft’s Active Directory is used. Windows Authentication with Active Directory (AD) provides an easier, centralized user account security model. It is considered more secure.

With Windows Authentication, we associate a SQL Server Login with an existing Windows user account. The account may be local to the SQL Server or part of an Active Directory domain. For this post, instead of using SQL Authentication and passing the ‘aw_dev’ user’s credentials to SQL Server in the database project’s connection strings, we will switch to Windows Authentication. Using Windows Authentication will allow Jenkins to connect directly to SQL Server.

Setting up the Jenkins Windows User Account

Let’s outline the process of creating a Jenkins Windows user account and using Windows Authentication with our Adventure Works project:

  1. Create a new ‘jenkins’ Windows user account.
  2. Change the Jenkins Windows service Log On account to the ‘jenkins’ Windows account.
  3. Create a new ‘jenkins’ SQL Server Login, associated with the ‘jenkins’ Windows user account, using Windows Authentication.
  4. Provide privileges in SQL Server to the ‘jenkins’ user identical to the ‘aw_dev’ user.
  5. Change the connection strings in the publishing profiles to use Windows Authentication.

First, create the ‘jenkins’ Windows user account on the computer where you have SQL Server and Jenkins installed. If they are on separate computers, then you will need to create the account on both computers, or use Active Directory. For this demonstration, I have both SQL Server and Jenkins installed on the same computer. I gave the ‘jenkins’ user administrative-level rights on my machine, by assigning it to the Administrators group.

Create New Jenkins User

Create New Jenkins User

Next, change the ‘Log On’ Windows user account for the Jenkins Windows service to the ‘jenkins’ Windows user account. Restart the Jenkins Windows service to apply the change. If the service fails to restart, it is likely you did not give enough rights to the user. I suggest adding the user to the Administrators group, to check whether the problem you are encountering is permissions-related.

Jenkins Windows Service

Jenkins Windows Service

Set Log On Account for Jenkins Windows Service

Set Log On Account for Jenkins Windows Service

Log On Account for Jenkins Windows Service

Log On Account for Jenkins Windows Service

Log On Account for Jenkins Windows Service Granted

Log On Account for Jenkins Windows Service Granted

Setting up the Jenkins SQL Server Login

Next, to use Windows Authentication with SQL Server, create a new ‘jenkins’ Login for the Production instance of SQL Server and associate it with the ‘jenkins’ Windows user account. Replicate the ‘aw_dev’ SQL user’s various permissions for the ‘jenkins’ user. The ‘jenkins’ account will be performing similar tasks to ‘aw_dev’, but this time initiated by Jenkins, not Visual Studio. Repeat this process for the Development instance of SQL Server.

Jenkins Login Added to Development Instance

Jenkins Login Added to Development Instance

Jenkins User Any Definition on Production Instance

Jenkins User Any Definition on Production Instance

Jenkins User View Definition on Production Instance Database

Jenkins User View Definition on Production Instance Database

SSMS View of Jenkins User

SSMS View of Jenkins User

Windows Authentication with the Publishing Profile

In Visual Studio, switch the connection strings in the Development and Production publishing profiles in both the server project and database projects to Windows Authentication with Integrated Security. They should look similar to the code below. Substitute your server name and SQL instance for each profile.

Data Source=[SERVER NAME]\[INSTANCE NAME];Integrated Security=True;Pooling=False

An important note here: once you switch the profile’s connection string to Windows Authentication, the Windows user account that you logged into your computer with is the account that Visual Studio will now use to connect to the database. Make sure your Windows user account has at least the same level of permissions as the ‘aw_dev’ and ‘jenkins’ accounts. As a developer, you would likely have greater permissions than these two accounts.

Configuring Jenkins for Delivery of Script to Release

In many production environments, delivering or ‘turning over’ release-ready code to another team for deployment, as opposed to deploying the code directly, is common practice. A developer starts or ‘kicks off a build’ of the job in Jenkins, which generates artifact(s). Artifacts are usually logical collections of deployable code and other associated components and files, constituting the application being built. Artifacts are often separated by type, such as database, web, Windows services, web services, configuration files, and so forth. Each type may be deployed by a different team or to a different location. Our project will only have one artifact to deliver, the sql change script.

This first Jenkins job we create will just generate the change script, which will then be delivered to a specific remote location for later release. We start by creating what Jenkins refers to as a parameterized build job. It allows us to pass parameters to each build of our job. We pass the name of the configuration (same as our environment name) we want our build to target. With this single parameter, ‘TARGET_ENVIRONMENT’, we can use a single Jenkins job to target any environment we have configured by simply passing its name to the build; a very powerful, time-saving feature of Jenkins.

Step 1 - Parameterized Build Parameter

Step 1 – Parameterized Build Parameter

Let’s outline the steps we will configure our Jenkins job with, to deliver a change script for release:

  1. Copy the Solution from its current location to the Jenkins job’s workspace.
  2. Accept the target environment as a parameterized build parameter (ex. ‘Production’ or ‘Development’).
  3. Build the database project and its dependencies based on the environment parameter.
  4. Generate the sql change script based on the environment parameter.
  5. Compress and name the sql change script based on the environment parameter and build id.
  6. Deliver the compressed script artifact to a designated release location for deployment.
  7. Notify release team that the artifact is ready for release.
  8. Archive the build’s artifact(s).

Copy the Solution to Jenkins

I am not using a revision control system, such as TFS or Subversion, for our example. The Adventure Works Solution resides in a file directory on my development machine. To copy the entire Solution from its current location into the job’s workspace, we add a step in the Jenkins job to execute a simple xcopy command. With source control, you would replace the xcopy step with a similar step to retrieve the project from a specific branch within the revision control system, using one of Jenkins’ many revision control plug-ins.

Step 2 - Copy Solution to Jenkins Workspace

Step 2 – Copy Solution to Jenkins Workspace

echo 'Copying Adventure Works Solution to Jenkins workspace...'
xcopy "[Path to your Project]\AdventureWorks2008" "%WORKSPACE%" /S /E /H /Y /R /EXCLUDE:[Path to exclude file]\[name of exclude file].txt

echo 'Deleting artifacts from previous builds...'
del "%WORKSPACE%\*_publish.zip" /F /Q

Excluding Solution files from the Jenkins job’s workspace that are unnecessary for the job to succeed is good practice. Excluding files saves time during the xcopy and can make troubleshooting build problems easier. To exclude unneeded Solution files, use the xcopy command’s ‘exclude’ parameter. To use exclude, we must first create an exclude text file, listing the directories we don’t need copied, and reference it with the exclude parameter of the xcopy command. Make sure to change the path shown above to reflect the location and name of your exclude file. Here is a list of the directories I chose to exclude. They are either unused by the build, or created as part of the build, for example the sql directories and their subdirectories.

\AdventureWorks2008\sql\
\AdventureWorks2008\Sandbox\
\AdventureWorks2008\_ConversionReport_Files\
\Development\sql\
\Development\Sandbox\
\Development\_ConversionReport_Files\

Build the Solution with Jenkins

Once the Solution’s files are copied into the Jenkins job’s workspace, we perform a build of the database project with an MSBuild build step, using the Jenkins MSBuild Plug-in. Jenkins executes the same MSBuild command Visual Studio would execute to build the project. Jenkins calls MSBuild, which in turn calls the MSBuild ‘Build’ target with parameters that specify the Solution configuration and platform to target.

Generate the Script with Jenkins

After Building the database project, in the same step as the build, we perform a publish of the database project. MSBuild calls the new SSDT’s ‘Publish’ target with parameters that specify the Solution configuration, target platform, publishing profile to use, and whether to only generate a sql change script, or publish the project’s changes directly to the database. In this first example, we are only generating a script. Note the use of the build parameter (%TARGET_ENVIRONMENT%) and environmental variables (%WORKSPACE%) in the MSBuild command. Again, a very powerful feature of Jenkins.

Step 3 - Build and Publish Project

Step 3 – Build and Publish Project

"%WORKSPACE%\AdventureWorks2008\AdventureWorks2008.sqlproj"
/p:Configuration=%TARGET_ENVIRONMENT%
/p:Platform=AnyCPU
/t:Build;Publish
/p:SqlPublishProfilePath="%WORKSPACE%\AdventureWorks2008\%TARGET_ENVIRONMENT%.publish.xml"
/p:UpdateDatabase=False

Compressing Artifacts with Apache Ant

To streamline the delivery, we will add a step to compress the change script using the Jenkins Apache Ant Plug-in. Many consider Ant strictly a build tool for Java development. On the contrary, there are many tasks that can be automated for .NET developers with Ant. One particularly nice feature of Ant is its built-in support for zip compression.

Step 4 - Invoke Ant to Compress Artifact

Step 4 – Invoke Ant to Compress Artifact

configuration=$TARGET_ENVIRONMENT
buildNo=$BUILD_NUMBER

The Ant plug-in calls Ant, which in turn calls an Ant buildfile, passing it the properties we supply. First, create an Ant buildfile with a single task to zip the change script. To avoid confusion during release, Ant will also append the configuration name and the unique Jenkins job build number to the filename. For example, ‘AdventureWorks.publish.sql’ becomes ‘AdventureWorks_Production_123_publish.zip’. This is accomplished by passing the configuration name (Jenkins parameterized build parameter) and the build number (Jenkins environmental variable) as parameters to the buildfile (shown above). The parameters, in the form of key-value-pairs, are treated as properties within the buildfile. Using Ant to zip and name the script took us literally one line of Ant code. The contents of the build.xml buildfile are shown below.

<?xml version="1.0" encoding="utf-8"?>
<project name="AdventureWorks2008" basedir="." default="default">
    <description>SSDT Database Project Type ZIP Example</description>
    <!-- Example configuration ant call with parameter:
         ant -Dconfiguration=Development -DbuildNo=123 -->
    <target name="default" description="ZIP sql deployment script">
        <echo>$${basedir}=${basedir}</echo>
        <echo>$${configuration}=${configuration}</echo>
        <echo>$${buildNo}=${buildNo}</echo>
        <zip basedir="AdventureWorks2008/sql/${configuration}"
             destfile="AdventureWorks_${configuration}_${buildNo}_publish.zip"
             includes="*.publish.sql" />
    </target>
</project>

Delivery of Artifacts

Next, we add a step to deliver the zipped script artifact to a ‘release’ location. Ideally, another team would retrieve and execute the change script against the database. Delivering the artifact to a remote location is easily accomplished using the Jenkins Artifact Deployer Plug-in. First, if it doesn’t already exist, create the location where you will deliver the scripts. Then, ensure Jenkins has permission to manage the location’s contents. In this example, the ‘release’ location is a shared folder I created. In order for Jenkins to access the ‘release’ location, give the ‘jenkins’ Windows user Read/Write (Change) permissions to the shared folder. With the deployment plug-in, you also have the option to delete the previous artifact(s) each time there is a new deployment, or leave them to accumulate.

Sharing Folder for Released Artifacts

Sharing Folder for Released Artifacts

Jenkins User Permissions for Shared Folder

Jenkins User Permissions for Shared Folder

Permissions for Shared Folder

Permissions for Shared Folder

Step 5 - Deploy Artifact to Release Location

Step 5 – Deploy Artifact to Release Location

Multiple Zipped Artifacts in Release Folder

Multiple Zipped Artifacts in Release Folder

Email Notification

Lastly, we want to alert the right team that artifacts have been turned over for release. There are many Jenkins plug-ins available to communicate with end-users or other systems. We will use the Jenkins Email Extension Plug-in to email the release team. Configuring dynamic messages to include the parameterized build parameters and Jenkins’ environmental variables is easy with this plug-in. My sample message includes several variables in the body of the message, including target environment, target database, artifact name, and Jenkins build URL.

I had some trouble passing the Jenkins’ parameterized build parameter (‘TARGET_ENVIRONMENT’) to the email plug-in, until I found this post. The format required by the plug-in for the type of variable is a bit obscure as compared to Ant, MSBuild, or other plug-ins.

Step 6 - Email Notification

Artifact: AdventureWorks_${ENV,var="TARGET_ENVIRONMENT"}_${BUILD_NUMBER}_publish.zip
Environment: ${ENV,var="TARGET_ENVIRONMENT"}
Database: AdventureWorks
Jenkins Build URL: ${BUILD_URL}
Please contact Development for questions or problems regarding this release.

Release Request Notification Email Message

Release Request Notification Email Message

Publishing Directly to the Database

As the last demonstration in this series of posts, we will publish the project changes directly to the database. Good news: we have done 95% of the work already. We merely need to copy the Jenkins job we already created, change one step, remove three other steps, and we’re publishing! Start by creating a new Jenkins job by copying the existing script delivery job. Next, drop the Invoke Ant, Artifact Deployer, and Archive Artifacts steps from the job’s configuration. Lastly, change the last parameter of the MSBuild task, ‘UpdateDatabase’, from False to True. That’s it! Instead of creating the script, compressing it, and sending it to a location to be executed later, the changes are generated and applied to the database in a single step.

Hybrid Solution

If you are not comfortable with the direct approach, there is a middle ground between only generating a script and publishing directly to the database. You can keep a record of the changes made to the database as part of publishing. To do so, change the ‘UpdateDatabase’ parameter to True, and only drop the Artifact Deployer step; leave the Invoke Ant and Archive Artifacts steps. The resulting job generates the change script, publishes the changes to the database, and compresses and archives the script. You now have a record of the changes made to the database.

Conclusion

In this last of three posts, we demonstrated the use of Jenkins and its plug-ins to create three jobs, representing three possible SSDT publishing workflows. Using the parameterized build feature of Jenkins, each job is capable of being executed against any database environment for which we have a configuration and publishing profile defined. Hopefully, one of these three workflows will fit your particular release methodology.

Jenkins SSDT Jobs View

Jenkins SSDT Jobs View


Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 2/3

Objectives of 3-Part Series:

Part I: Setting up the Example Database and Visual Studio Projects

  • Setup and configure a new instance of SQL Server 2008 R2
  • Setup and configure a copy of Microsoft’s Adventure Works database
  • Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
  • Test the project’s ability to deploy changes to the database

Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT

  • Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
  • Create a second Solution configuration and SSDT publish profile for an additional database environment
  • Test the converted database project’s ability to publish changes to multiple database environments

Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins

  • Automate the build and delivery of a sql change script artifact, for any database environment, to a designated release location using a parameterized build.
  • Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.

Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT

Picking up where part one of this three-part series left off, we are ready to convert the Adventure Works VS 2010 database project and associated server project to the new SSDT project type. Once converted, we will create an additional Solution configuration for Production. Finally, we will publish (vs. deploy) changes to the database project’s schema to both the Development and Production environments. Note that Microsoft refers to the new format as either the SSDT project type or the SQL Server Database Project. I chose the former in this post; it seemed clearer.

Convert the Projects to SSDT

Microsoft could not have made the conversion to the new SSDT project-type any simpler. Right-click on the Development server project and select ‘Convert to SQL Server Database project’. Select ‘Yes’, select ‘Backup originals for converted files’, and click ‘OK’. The conversion process should take only a minute or two. Following that, you are presented with a Conversion Report when the process is complete. The report should show the successful conversion to the SSDT project type.

Convert Project to SQL Server Database Project

Convert Project to SQL Server Database Project

Server Project Conversion Report

Server Project Conversion Report

Repeat this process for the AdventureWorks2008 database project. Again, you see a Conversion Report when complete. It should also not contain any errors, nor files marked as ‘Not converted’.

Database Project Conversion Report

Database Project Conversion Report

The New Project File Format

Reviewing the Conversion Report for the database project, note the change to the primary project file. This is the first key difference between the VS 2010 project types and the new SSDT project types. The project file was converted from ‘AdventureWorks2008.dbproj’ to ‘AdventureWorks2008.sqlproj’ (see Conversion Report screen grab, above). Although the earlier project file with the ‘.dbproj’ file suffix is still in the project’s file directory, the Visual Studio Solution is now associated with the new ‘.sqlproj’ project file. This is the same for the server project. The ‘.dbproj’ files are no longer needed. You can drop them from the project’s file directory or from your source control system. This will prevent any confusion going forward.

Publishing Profiles

The second change you will note after the conversion is in the Solution Explorer tab. Each project has three items with the file suffix ‘.publish.xml’. These are publishing profiles. There is a profile for each Solution configuration – Debug, Release, and Development. A publishing profile has all the settings necessary to publish changes made to the SSDT database project to a specific database environment. As part of the conversion to SSDT, all existing project settings migrated into the current project. Portions of the configuration-specific settings stay in the converted Solution configurations, while publish-specific settings are in the publishing profiles. Publishing profiles, like pre- and post-deployment scripts, are not part of the build. Select a profile in the Solution Explorer tab and note the ‘Build Action’ property in the Properties tab is set to ‘None’.

Solution Explorer View of Converted Projects

Solution Explorer View of Converted Projects

Additional Project and Profile Settings

There are also new settings in the converted projects. They support newer technologies like SSDT, SQL 2012, and Azure. As part of our first major conversion to SSDT, we took the opportunity to review all project and publish settings with our database developers and DBAs. We strove to understand each setting’s purpose and to make sure they were correctly configured and documented for each of our many database environments.

Development Publishing Profile Advanced Settings

Development Publishing Profile Advanced Settings

Testing the Converted Projects

To test the successful conversion of both projects to the SSDT project type, select the Development Solution configuration and perform a Rebuild on the Solution. In the Build section of the Output tab, you should see both projects built successfully.

Initial Build Results of SSDT Projects

Initial Build Results of SSDT Projects

Development Publishing Profile

Right-click on the ‘Development.publish.xml’ file in the AdventureWorks2008 project and select ‘Publish…’ Wait for the project to build. Selecting Publish or opening a publishing profile causes the project to build. Afterwards, you should see the ‘Publish Database’ window appear. This is where you change the settings of the Development profile. When converting the Adventure Works project to SSDT, I’ve found the database connection information does not migrate to the profile. Set up the ‘Target database connection’ information in the ‘Connection Properties’ pop-up window by clicking ‘Edit…’. When finished, click ‘OK’ to return to the ‘Publish Database’ window. Finally, save the revised Development publishing profile by clicking ‘Save Profile As…’. I will not cover the specific profile settings, accessed by clicking ‘Advanced…’. Many of these settings will be specific to your environments and workflows. They can be left as default for this demonstration.

Development Publishing Profile

Development Publishing Profile

Development Publishing Profile Connection Properties

Development Publishing Profile Connection Properties

Saving Development Publishing Profile Settings for Server

Saving Development Publishing Profile Settings for Server

Generate Script for Adventure Works Database

Without leaving the ‘Publish Database’ window, click ‘Generate Script’. As in the first post, this action will initiate a schema comparison, resulting in the generation of a script that aggregates all the schema changes in the project not already reflected in the database. The script represents the schema ‘delta’ (the difference) between the project and the database. The script will automatically open in Visual Studio’s main window after being created. In the ‘Data Tools Operations’ tab you should see messages indicating generation of the script was successful.

Generate Script Results

Generate Script Results

Also included in the script, along with the schema changes, are any pre- and post-deployment scripts. You should see the single post-deployment script that we created in part one of this series. Pre- and post-deployment scripts are always included in the script, whether or not they have already been executed. This is why it is imperative that pre- and post-deployment scripts are re-runnable. Scripts must have the ability to be executed more than once without producing unintended changes to the database.
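To illustrate what re-runnable means, below is a minimal, hypothetical sketch of an idempotent post-deployment step, wrapped in PowerShell and sqlcmd for convenience. The department row and the server\instance name are illustrative only and are not part of the example project:

$postDeploySql = @"
-- Guarded INSERT: running this script repeatedly never creates duplicate rows
IF NOT EXISTS (SELECT 1 FROM [HumanResources].[Department] WHERE [Name] = N'Compliance')
BEGIN
    INSERT INTO [HumanResources].[Department] ([Name], [GroupName])
    VALUES (N'Compliance', N'Executive General and Administration');
END
"@

# Execute the re-runnable step against the (assumed) Development instance
sqlcmd -S ".\Development" -d "AdventureWorks" -E -Q $postDeploySql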

Publishing to the Development Database

Next, right-click on the ‘Development.publish.xml’ file, and select ‘Publish…’. This will return you to the same window you were just in to generate the script. Click ‘Publish’. Again, in the ‘Data Tools Operations’ tab you should see messages that the publish operation completed successfully.

Successful Script Generation and Deployment to Development

Successful Script Generation and Deployment to Development

Congratulations, you have completed and tested the conversion of the Adventure Works database project to the SSDT project type.

Note that with SSDT, the term ‘Deploy’, which refers to a specific MSBuild target, is replaced with ‘Publish’, an SSDT-specific build target. Instead of deploying changes to the database, as we did in the first post, we will publish changes with SSDT. To understand how MSBuild is able to call the new SSDT Publish target, open the AdventureWorks2008.sqlproj file by right-clicking on the project and selecting ‘Edit Project File’. In the project file’s XML you will find an ‘Import’ tag that imports the SSDT targets into the project, making them accessible to MSBuild.


<!--Import the settings-->
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" />
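Although the details are saved for Part III, the same Publish target can be invoked from the command line with MSBuild, which is how a continuous integration server such as Jenkins automates publishing. Below is a hedged sketch from PowerShell; the MSBuild path, project path, and the SqlPublishProfilePath property value are assumptions based on this post’s project and profile names:

# Build the SSDT project and publish it using the Development profile
$msbuild = "$env:WINDIR\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"

& $msbuild ".\AdventureWorks2008\AdventureWorks2008.sqlproj" `
    /t:Publish `
    /p:Configuration=Development `
    "/p:SqlPublishProfilePath=Development.publish.xml"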

New Production Environment

Between posts, I installed another instance of SQL Server 2008 R2, named ‘Production’. Into this instance I installed another copy of the Adventure Works database. I added the same ‘aw_dev’ user that we used in Development, with the same permissions. This SQL Server instance and Adventure Works database simulate a second database environment, Production. Normally, this instance would be installed on a separate server, but for simplicity’s sake I installed the Production instance on the same physical server as the Development instance. It makes no difference for the purposes of this post.

If you wish to follow all examples presented in the next two posts, you will need to install and configure the Production instance of SQL Server. Otherwise, you can disregard the portions of the two posts on publishing to Production, and just stick with the single Development environment. The conversion to SSDT doesn’t require the added Production environment.

New Production Configuration and Publish Profile

Next, we will create a new configuration in the SSDT project’s Solution, and then configure a corresponding publishing profile targeting the Production environment. We will use these to publish changes from the project to the Production environment. Using the Solution’s Configuration Manager, create a new Solution configuration. This process is unchanged from the VS 2010 database project type.

New Production Solution Configuration

New Production Solution Configuration

Solution Configuration Manager

Solution Configuration Manager

New Production Configuration Build Settings for Database Project

New Production Configuration Build Settings for Database Project

Right click on the AdventureWorks2008 project and select ‘Publish…’. This will return us to the ‘Publish Database’ window. Like before, with the Development publishing profile, complete the connection string information, this time targeting the Production instance. Change the ‘Publish script name’ setting to ‘Production.sql’. Click ‘Save Profile As…’, and save this profile configuration into the project file path as ‘Production.publish.xml’. Repeat this process for the Development SSDT server project.

Database Project

Production Publishing Profile Settings for Database

Production Publishing Profile Settings for Database

Production Publishing Profile Connection Properties for Database

Production Publishing Profile Connection Properties for Database

Saving Production Publishing Profile Settings for Database

Saving Production Publishing Profile Settings for Database

Server Project

Production Publishing Profile Settings for Server

Production Publishing Profile Settings for Server

Saving Production Publishing Profile Settings for Server

Saving Production Publishing Profile Settings for Server

We now have a new Production Solution configuration and corresponding publishing profiles in each of our two projects.

Solution Explorer View of Production Publishing Profiles

Solution Explorer View of Production Publishing Profiles

We can now target two different database environments from our AdventureWorks2008 project, Development and Production. In a typical production workflow, as a developer, you would make changes to the database project directly, or to a local copy obtained from a source control system like TFS. After testing your changes locally, you execute the publish task to send your schema changes and/or pre- and post-deployment scripts to the Development database instance. This process is repeated by other developers in your department.

After successfully testing your application(s) against the Development database, you are ready to release the database changes to Testing, or in this example, directly to Production. You execute the Publish task again, this time choosing the Production Solution configuration and Production publishing profile. The schema changes and any pre- and post-deployment scripts are now executed against the Production database. You would follow the same process for other environments, such as Testing or Staging.

Making Schema Changes to Multiple Environments

For this test, we will make schema changes to the ‘Employee’ table, part of the ‘HumanResources’ schema. In Visual Studio, open the Employee table and add two new columns to the end of the table, as shown below. If you have not worked with the SSDT project type before, the view of the table will look very different to you. Microsoft has replaced the earlier, SQL CREATE-statement-only view with a friendlier design view, similar to the one in SSMS. There is also a window which details all the keys, indexes, and triggers associated with the table. I consider this light years better, in terms of usability, from the developer’s standpoint. Save the changes to the table object and close it.

New SSDT Table Object View

New SSDT Table Object View

Select the Development Solution configuration. Right-click on the Development profile in the AdventureWorks2008 project and click ‘Publish…’. Wait for the project to build. When the ‘Publish Database’ window appears, click ‘Publish’. You have just deployed the Employee table schema changes to the Development instance of the database.

Schema Changes to Employee Table in Script

Schema Changes to Employee Table in Script

Repeat this same process for Production. Don’t forget to switch to the Production Solution configuration and select the Production publish profile. You have now applied the same schema changes to the Production environment. Your customer will be happy they can now track the drug testing of their employees.

Successful Script Generation and Deployment to Development

Successful Script Generation and Deployment to Development

There are other methods available with SSDT to deploy changes to the database. Using a script is the method I have chosen to show in this post.

Conclusion

In this post we converted the Adventure Works database project and Development server project to SSDT project-types. We created a new Solution Configuration and publishing profiles, targeting Production. We made schema changes to the SSDT database project. Finally, we deployed those changes to both the Development and Production database environments.

In Part III of this series, I will show how to use Jenkins CI Server to automate building, testing, delivering scripts, and publishing to a database from the SSDT database project.


Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 1/3

Objectives of 3-Part Series:

Part I: Setting up the Example Database and Visual Studio Projects

  • Setup and configure a new instance of SQL Server 2008 R2
  • Setup and configure a copy of Microsoft’s Adventure Works database
  • Create and configure both a Visual Studio 2010 server project and Visual Studio 2010 database project
  • Test the project’s ability to deploy changes to the database

Part II: Converting the Visual Studio 2010 Database and Server Projects to SSDT

  • Convert the Adventure Works Visual Studio 2010 database and server projects to SSDT projects
  • Create a second Solution configuration and SSDT publish profile for an additional database environment
  • Test the converted database project’s ability to publish changes to multiple database environments

Part III: Automate the Building and Publishing of the SSDT Database Project Using Jenkins

  • Automate the build and delivery of a SQL change script artifact, for any database environment, to a designated release location, using a parameterized build.
  • Automate the build and publishing of the SSDT project’s changes directly to any database environment using a parameterized build.

Background

Microsoft’s Visual Studio 2010 (VS 2010) IDE has been available to developers since April 2010. Microsoft’s SQL Server 2008 R2 (SQL 2008 R2) has also been available since April 2010. If you are a modern software development shop or in-house corporate development group using the .NET technology stack, you probably use VS 2010 and SQL 2008 R2. Moreover, odds are pretty good that you’ve implemented a Visual Studio 2010 SQL Server Database Project to support what Microsoft terms a Database Development Life Cycle (DDLC).

Now, along comes SSDT. Recently, along with the release of SQL Server 2012, Microsoft released SQL Server Data Tools (SSDT). Microsoft refers to SSDT as “an evolution of the existing Visual Studio Database project type.” According to Microsoft, SSDT offers an integrated environment within Visual Studio 2010 Professional SP1 or higher for developers to carry out all their database design work for any SQL Server platform (both on- and off-premises). The term ‘off-premises’ refers to SSDT’s ability to handle development in the Cloud – SQL Azure. SSDT supports all current versions of SQL Server, including SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Azure.

SSDT offers many advantages over the VS 2010 database project type. SSDT provides a more database-centric user experience, similar to SQL Server Management Studio (SSMS). Anyone who has used the VS 2010 database project type knows the Visual Studio experience offered a less sophisticated user interface than SSMS or similar database IDEs, like Toad for SQL. After installing Microsoft’s free SSDT component (first make sure you have SP1 installed for VS 2010 Professional or higher), you can easily convert your VS 2010 database projects to the new SSDT project type. The conversion offers a better development and deployment experience, and prepares you for an eventual move to SQL Server 2012.

Part I: Setting up the Example Database and Visual Studio Projects

Setting up the Example

To avoid learning SSDT with a copy of your client’s or company’s database, I suggest taking the same route I’ve taken in this post. To demonstrate how to convert from a VS 2010 database project to SSDT, I am using a copy of Microsoft’s Adventure Works 2008 database. Installed, the database only takes up about 180 MB of space, but it is well designed and has enough data to serve as a good training tool, as Microsoft intended. There are several versions of the AdventureWorks2008 database available for download, depending on your interests – OLTP, SSRS, Analysis Services, Azure, or SQL 2012. I chose to download the full database backup of AdventureWorks2008R2, without filestream, for this post.

Creating the SQL Server 2008 R2 Instance and Database

Before installing the database, I used the SQL Server Installation Center to install a new instance of SQL Server 2008 R2, which I named ‘Development’. This represents a development group’s database environment. Other environments in the software release life-cycle commonly include Testing, Staging, and Production. Each environment usually has its own servers, each with its own instance of SQL Server and its own copy of the databases. Environments can also include web servers, application servers, firewalls, routers, and so forth.

After installing the Development instance, I logged into it as an Administrator and created a new, empty database, which I named ‘AdventureWorks’. I then restored the downloaded backup copy of Adventure Works 2008 R2 to the newly created AdventureWorks database.

Creating the Database 01

Creating the New Database

Creating the Database 02

Creating the New Database

Creating the Database 03

Creating the Database: Restoring the Full Backup

Creating the Database 04

Creating the Database: Restoring the Full Backup

Creating the Database 05

Creating the Database: Restoring the Full Backup

You may note some differences between the configuration settings displayed below in the screen grabs and the default configuration of the Adventure Works database. This post is not intended to recommend configuration settings for your SQL Server databases or database projects. Every company is different in the way it configures its databases. The key is to make sure that the configuration settings in your database align with the equivalent configuration settings in the database project in Visual Studio 2010. If not, when you initially publish changes to the database from the database project, the project will script the differences and change the database to align to the project.

Creating the Database 06

Creating the Database: Database Properties

Creating the Server Login and Database User

Lastly in SSMS, I added a new login account to the Development SQL Server instance and a user to the Adventure Works database, named ‘aw_dev’. This user represents a developer who will interact with the database through the SSDT database project in VS 2010. For simplicity, I used SQL authentication for this user versus Windows authentication. I gave the user the minimal permissions necessary for this example. Depending on the types of interactions you have with the database, you may need to extend the rights of the user.

New Database Login 01

New Database Login

New Database Login 02

New Database Login

Two key, explicit permissions must be assigned to the user for SSDT to work properly. First is the ‘view any definition’ permission on the Development instance. Second is the ‘view definition’ permission on the Adventure Works database. These enable the SSDT project to perform a schema comparison against the Adventure Works database, explained later in the post. A missing view definition permission is one of the most common causes of deployment errors I’ve seen. These errors usually occur after adding a new database environment, database, database user, or continuous integration server.
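For reference, the two grants can also be applied from the command line. Below is a minimal sketch using sqlcmd from PowerShell; the server\instance name and the use of Windows authentication for the administrator are assumptions for this example:

$instance = ".\Development"   # assumed server\instance name for the Development instance

# Instance-level permission, granted to the aw_dev login in master
sqlcmd -S $instance -E -Q "USE [master]; GRANT VIEW ANY DEFINITION TO [aw_dev];"

# Database-level permission, granted to the aw_dev user in the Adventure Works database
sqlcmd -S $instance -E -Q "USE [AdventureWorks]; GRANT VIEW DEFINITION TO [aw_dev];"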

New Database Login: ‘View any definition’ Permission

New Database Login: ‘View any definition’ Permission

New Database Login 03

New Database Login: ‘View definition’ Permission

Setting up Visual Studio Database Project

In VS 2010, I created a new SQL Server 2008 database project, named ‘AdventureWorks2008’. In the same Visual Studio Solution, I also created a new SQL Server 2008 server project, named ‘Development’. The database project mirrors the schema of the Adventure Works database, and the server project mirrors the instance of SQL Server 2008 R2 on which the database is housed. The exact details of creating and configuring these two projects are too long for this post, but I have a set of screen grabs, hyperlinked below, to aid in creating these two projects. For the database project, I only included the screens that are significantly different than the server project screens, to save some space.

Server Project

VS 2010 Server Project 01

VS 2010 Server Project: New Project SQL Server 2008 Wizard

VS 2010 Server Project 02

VS 2010 Server Project: New Project SQL Server 2008 Wizard

VS 2010 Server Project 03

VS 2010 Server Project: Project Properties

VS 2010 Server Project 04

VS 2010 Server Project: Set Database Options

VS 2010 Server Project 06

VS 2010 Server Project: Import Database Schema

VS 2010 Server Project 08

VS 2010 Server Project: Database Connection Properties

VS 2010 Server Project 09

VS 2010 Server Project: Configure Build and Deploy

Database Project

VS 2010 Database Project 01

VS 2010 Database Project: New Project SQL Server 2008 Wizard

VS 2010 Database Project 02

VS 2010 Database Project: Project Properties

VS 2010 Database Project 03

VS 2010 Database Project: Summary

Reference from Database Project to Server Project

After creating both projects, I created a reference (dependency) from the Adventure Works 2008 database project to the Development server project. The database project reference to the server project creates the same parent-child relationship that exists between the Development SQL Server instance and the Adventure Works database. Once both projects are created and the reference made, your Solution should look like the second screen grab, below.

Adding a Reference to the Database Project

Adding a Reference to the Database Project

Database Project Dependencies

Database Project Dependencies

Final View of VS 2010 Solution with both Projects and Reference

Final View of VS 2010 Solution with both Projects and Reference

Creating the Development Solution Configuration

Next, I created a new ‘Development’ Solution configuration. This configuration defines the build and deployment parameters for the database project when it targets the Development environment. Again, in a normal production environment, you would have several configurations, each targeting a specific database environment. In the context of this post, a database environment refers to a unique combination of  servers, server instances, databases, data, and users. For this first example we are only setting up one database environment, Development.

New Development Solution Configuration 01

New Development Solution Configuration

New Development Solution Configuration 02

New Development Solution Configuration

The configuration specifies the parameters specific to the Adventure Works database in the Development environment. The connection string, which contains the server, instance, and database names, the user account, and the connection parameters, is specific to the Development environment. These values are different in the other environments – Testing, Staging, and Production.

Testing the Development Configuration

Once the Development configuration was completed, I ran the ‘Rebuild’ command on the Solution, using the Development configuration, to make sure there are no errors or warnings. Next, with the Development configuration set to only create the deployment script, not to create and deploy the changes to the database, I ran the ‘Deploy’ command. This created a deployment script, entitled ‘AdventureWorks2008.sql’, in the ‘sql\Development’ folder of the AdventureWorks2008 database project.
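For reference, the same ‘Deploy’ command can be run outside Visual Studio with MSBuild, which becomes useful when automating builds in Part III. This is a hedged sketch; the MSBuild path and project file path are assumptions based on this post’s project names:

# Build the VS 2010 database project and run its Deploy target; whether a script is
# only generated or also executed depends on the configuration's deploy action
$msbuild = "$env:WINDIR\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"

& $msbuild ".\AdventureWorks2008\AdventureWorks2008.dbproj" /t:Deploy /p:Configuration=Development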

Create a Deployment Script Only

Database Project Configuration Settings to Create a Deployment Script Only

Deployment Script Location in Project

Deployment Script Location in Project

Since I just created both the Adventure Works database and the database project based on that database, there are no schema changes in the deployment script. You will see ‘filler’ code for error checking and so forth, but no real executable schema changes to the database are present at this point. If you do see initial changes included in the script, usually database configuration changes, I suggest modifying the settings of the database project and/or the database to align to one another. For example, you may see code in the script to change the database’s default cursor from global to local, or vice-versa. Or, you may see code in the script to change the database’s recovery model from full to simple, or vice-versa. You should decide whether the project or the database is correct, and change the other one to match. If necessary, re-run the ‘Deploy’ command and re-check the deployment script. Optionally, you can always execute the script with the changes, thus changing the database to match the project, if the project settings are correct.

Testing Deployment

After successfully testing the development configuration and the deployment script, and making any configuration changes necessary to the project and/or the database, I then tested the project’s ability to successfully execute a Deploy command against the database. I changed the Development configuration’s deploy action from ‘create a deployment script (.sql)’ to ‘create a deployment script (.sql) and deploy to the database’. I then ran the ‘Deploy’ command again, and this time the script was created and executed against the database. Since I still had not made any changes to the project, there were no schema changes made to the database. I was just testing the project’s ability to create the script and deploy to the database at this point. Most errors at this stage are caused by insufficient database user permissions (see example, below).

Test Deployment to the Database

Test Deployment to the Database

View Any Definition Permission Error

View Any Definition Permission Error

Testing Changes to the Project

Finally, I tested the project’s ability to make changes to the database as part of the deployment. To do so, I created a simple post-deployment script that changes the first name of a single, existing employee. After adding the post-deployment script to the database project and referencing the script’s path in the project’s post-deployment script file, I ran the ‘Deploy’ command again, still using the Development configuration. This time the deployment script contained the post-deployment script’s contents. When deployed, one record was affected, as indicated in the VS 2010 Output tab. I verified the change was successful in the Adventure Works database table, using SSMS.

Test Changes Deployed to Database 02

Simple Post-Deployment Script

Test Changes Deployed to Database 01

Simple Post-Deployment Script

Test Changes Deployed to Database 03

Post-Deployment Script Location in Project

Test Changes Deployed to Database 04

One Row Affected by Deployment this Time

Before Post-Deployment Script was Run

Before Post-Deployment Script was Deployed

After Post-Deployment Script was Run

After Post-Deployment Script was Deployed

Conclusion

We now have a SQL Server 2008 R2 database instance representing a Development environment, and a copy of the Adventure Works database, being served from that instance. We have corresponding VS 2010 database and server projects. We also have a new Development Solution configuration, targeting the Development environment. Lastly, we tested the database project’s capability to successfully build and deploy a change to the database.

In Part II of this series, I will show how to convert the VS 2010 database and server projects to SSDT.


Automating Work Item Creation in TFS 2010 with PowerShell, Continued

In a previous post, Automating Task Creation in Team Foundation Server with PowerShell, I demonstrated how to automate the creation of TFS Task-type Work Items using PowerShell. After writing that post, I decided to go back and further automate my own processes. I combined two separate scripts that I use on a regular basis, one that creates the initial Change Request (CR) Work Item, and a second that creates the Task Work Items associated with the CR. Since I usually run both scripts successively and both share many of the same variables, combining the scripts made sense. I now have a single PowerShell script that will create the parent Change Request and the associated Tasks in TFS. The script reduces my overall time to create the Work Items by a few minutes for each new CR. The script also greatly reduces the risk of input errors from typing the same information multiple times in Visual Studio. The only remaining manual step is to link the Tasks to the Change Request in TFS.

The Script

Similar to the previous post, for simplicity’s sake, I have presented a basic PowerShell script. The script could easily be optimized by wrapping the logic into a function with input parameters, further automating the process (a minimal sketch of such a function follows the script, below). I’ve placed a lot of comments in the script to explain what each part does and to help make customization easier. The script explicitly declares all variables, adhering to PowerShell’s Strict Mode (Set-StrictMode -Version 2.0). I feel this makes the script easier to understand and reduces the possibility of runtime errors.

#############################################################
#
# Description: Automatically creates
# (1) Change Request-type Work Item and
# (5) Task-type Work Items in TFS.
#
# Author: Gary A. Stafford
# Created: 07/18/2012
# Modified: 07/18/2012
#
#############################################################

# Clear Output Pane
clear

# Loads Windows PowerShell snap-in if not already loaded
if ( (Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
	Add-PSSnapin Microsoft.TeamFoundation.PowerShell
}

# Set Strict Mode - optional
Set-StrictMode -Version 2.0

#############################################################

# Usually remains constant
[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $areaPath = "Development\PowerShell"
[string] $workItemType = "Development\Change Request"
[string] $description = "Create Task Automation PowerShell Script"

# Usually changes for each Sprint - specific to your environment
[string] $iterationPath = "PowerShell\TFS2010"

# Usually changes for each CR and Tasks
[string] $requestName = "Name of CR from Service Manager"
[string] $crId = "000000"
[string] $priority = "1"
[string] $totalEstimate = "10" # Total of $taskEstimateArray
[string] $assignee = "Doe, John"
[string] $testType = "Unit Test"

# Task values represent units of work, often 'man-hours'
[decimal[]] $taskEstimateArray = @(2,3,10,3,.5)
[string[]] $taskNameArray = @("Analysis", "Design", "Coding", "Unit Testing", "Resolve Tasks")
[string[]] $taskDisciplineArray = @("Analysis", "Development", "Development", "Test", $null)

#############################################################

Write-Host `n`r**** Create CR started...`n`r

# Build string of field parameters (key/value pairs)
[string] $fields = "Title=$($requestName);Description=$($description);CR Id=$($crId);"
$fields += "Estimate=$($totalEstimate);Assigned To=$($assignee);Test Type=$($testType);"
$fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Priority=$($priority);"

#For debugging - optional console output
Write-Host `n`r $fields

# Create the CR (Work Item)
tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

Write-Host `n`r**** Create CR completed...`n`r

#############################################################

# Loop and create each of the (5) Tasks in prioritized order
[int] $i = 0

Write-Host `n`r**** Create Tasks started...`n`r

# Usually remains constant
$workItemType = "Development\Task"

while ($i -le 4) {
	# Concatenate name of task with CR name for Title and Description fields
	$taskTitle = $taskNameArray[$i] + " - " + $requestName

	 # Build string of field parameters (key/value pairs)
	 [string] $fields = "Title=$($taskTitle);Description=$($taskTitle);Assigned To=$($assignee);"
	 $fields += "Area Path=$($areaPath);Iteration Path=$($iterationPath);Discipline=$($taskDisciplineArray[$i]);Priority=$($i+1);"
	 $fields += "Estimate=$($taskEstimateArray[$i]);Remaining Work=$($taskEstimateArray[$i]);Completed Work=0"

	 #For debugging - optional console output
	 Write-Host `n`r $fields

	 # Create the Task (Work Item)
	 tfpt workitem /new $workItemType /collection:$tfsServerString /fields:$fields

	 $i++
}

Write-Host `n`r**** Create Tasks completed...`n`r
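As noted above, the script could be wrapped in a parameterized function. Below is a minimal, hypothetical sketch of that refactoring; the function name and parameters are my own and are not part of the original script:

function New-ChangeRequestWithTasks {
    param (
        [Parameter(Mandatory = $true)] [string] $RequestName,
        [Parameter(Mandatory = $true)] [string] $CrId,
        [string] $Assignee = "Doe, John",
        [string] $TfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
    )
    # Build the CR and Task field strings exactly as in the script above,
    # substituting the function parameters, then call 'tfpt workitem /new ...'
}

# Example call
New-ChangeRequestWithTasks -RequestName "Name of CR from Service Manager" -CrId "000000"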

Deleting Work Items with PowerShell

Team Foundation Server Administrators know there is no delete button for Work Items in TFS. So, how do you delete (destroy, as TFS calls it) a Work Item? One way is from the command line, as demonstrated in the previous post. You can also call the witAdmin command-line tool from within PowerShell, as follows:

[string] $tfsServerString = "http://[YourServerNameGoesHere]/[PathToCollection]"
[string] $tfsWorkIemId = "00000"
$env:path += ";C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE"
witadmin destroywi /collection:$tfsServerString /id:$tfsWorkIemId /noprompt

First, use PowerShell to set your path environment variable to include your local path to witadmin.exe. Then set your TFS Server path and the ID of the Work Item you want to delete. Alternatively, you can call witAdmin using its full file path, avoiding the change to the path environment variable, as shown below. True, you could simplify the above to a single line of code, but I feel using variables is easier for readers to understand than one long line of code.
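For example, calling witadmin.exe by its full file path, reusing the $tfsServerString and $tfsWorkIemId variables from the snippet above; the install path shown is the VS 2010 default and may differ on your machine:

# Call witadmin directly, without modifying the path environment variable
$witadmin = "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\witadmin.exe"

& $witadmin destroywi /collection:$tfsServerString /id:$tfsWorkIemId /noprompt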


Consuming Cross-Domain WCF REST Services with jQuery using JSONP

Restaurant Menu / Order Form Preview

Restaurant Menu / Order Form Preview

Introduction

In a previous article, Interactive Form Functionality on the Client-Side Using jQuery, I demonstrated the use of HTML, JavaScript, jQuery, and jQuery’s AJAX API to create a simple restaurant menu/order form. Although the previous article effectively demonstrated the use of these client-side technologies, the source of the restaurant’s menu items, a static XML file, was not intended to represent a true ‘production-class’ data source. Nowadays, to access data and business logic across the Enterprise or across the Internet, developers are more apt to build service-oriented applications that expose RESTful web services, and client applications that consume those services. RESTful services are services which conform to the REST (Representational State Transfer) architectural pattern. More information on REST can be obtained by reading Chapters 5 and 6 of Roy Fielding’s doctoral dissertation, in which he defined REST. Most modern web technologies can communicate with RESTful web services, including Microsoft’s Silverlight, Web Forms, and MVC, JavaFX, Adobe Flash, PHP, Python, and Ruby on Rails.

This article will expand on the restaurant menu/order form example from the previous article, replacing the static XML file with a WCF Service. The article will demonstrate the following:

  • Use of jQuery’s AJAX API to bi-directionally communicate with WCF Services
  • Cross-domain communication with WCF Services using JSONP
  • Serialization of complex, nested .NET objects into JSONP-format HTTP Response Messages
  • Deserialization of JSONP-format HTTP Request Messages into complex, nested .NET objects
  • Optimization of JavaScript and the use of caching to maximize the speed of content delivery to the Client

Source code is now available on GitHub. As of May 2014, there is a revised version of the project on the ‘rev2014’ branch on GitHub. The post below describes the original code on the ‘Master’ branch. All details are posted on GitHub.

Background

WCF

For .NET developers, Windows Communication Foundation (WCF), Microsoft’s platform for Service Oriented Architecture (SOA), is the current preferred choice for building service-oriented applications. According to Microsoft, WCF is part of the .NET Framework that provides a unified programming model for rapidly building service-oriented applications that communicate across the web and the enterprise.

Prior to WCF, Microsoft offered ASP.NET XML Web Services, or ASP.NET Web Services for short. ASP.NET Web Services send and receive messages using Simple Object Access Protocol (SOAP) via HTTP. Data is serialized from instances of .NET objects into XML-format SOAP messages (or ‘XML in a SOAP envelope’, as they are also known), and vice-versa. Metadata about ASP.NET Web Services is contained in the Web Services Description Language (WSDL). Although still prevalent, ASP.NET Web Services is now considered a legacy technology with the advent of WCF, according to Microsoft. SOAP, a protocol for accessing a Web Service, does not conform to REST architecture guidelines.

Hosted on Microsoft’s IIS (Internet Information Services) Web Server, WCF is a complex, yet robust and flexible service-oriented framework. By properly configuring WCF Services, developers can precisely expose business logic and data sources to clients in a variety of ways. WCF Services can send and receive messages as XML in a SOAP envelope, as well as in RESTful formats, including POX (plain old XML), ATOM (an XML language used for web feeds), and JSON (JavaScript Object Notation).

JSON/JSONP

The example in this article uses JSON, more specifically JSONP (JSON with Padding), a specialized type of JSON, to exchange information with WCF Services. JSON is an open, text-based data exchange format that is better suited to AJAX-style web applications than XML. Compared to XML, JSON-formatted messages are smaller. For example, the restaurant menu used in this article, formatted as XML, is 927 bytes. The same message, formatted in JSONP, is only 311 bytes, about one-third the size. The savings when transmitting JSON-format messages over slow connections, to mobile devices, or to potentially millions of simultaneous web-browsers, is significant.

Since the WCF Service will be hosted in a different domain (a different port, in this example) than the web site with the restaurant menu and order form, we must use JSONP. JSONP, based on JSON, allows pages to request data from a server in a different domain, something normally disallowed by the ‘same origin policy’. The same origin policy is an important security concept in browser-side programming languages, such as JavaScript. According to Wikipedia, the same origin policy permits scripts running on pages originating from the same site to access each other’s methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites. JSONP takes advantage of the open policy for HTML <script> elements.

Below is an example of the article’s restaurant menu formatted in JSONP, and returned by the WCF Service as part of the HTTP Response to the client’s HTTP Request’s GET method.

    RestaurantMenu([
    {"Description":"Cheeseburger","Id":1,"Price":3.99},
    {"Description":"Chicken Sandwich","Id":4,"Price":4.99},
    {"Description":"Coffee","Id":7,"Price":0.99},
    {"Description":"French Fries","Id":5,"Price":1.29},
    {"Description":"Hamburger","Id":2,"Price":2.99},
    {"Description":"Hot Dog","Id":3,"Price":2.49},
    {"Description":"Ice Cream Cone","Id":9,"Price":1.99},
    {"Description":"Soft Drink","Id":6,"Price":1.19},
    {"Description":"Water","Id":8,"Price":0}]);

AJAX (well, not really…)

AJAX (Asynchronous JavaScript and XML) asynchronously exchanges data between the browser and web server, avoiding page reloads, using the XMLHttpRequest object. Despite the name, XMLHttpRequest, AJAX can work with JSON in addition to XML message formatting. Other formats include JSONP, JavaScript, HTML, and text. Using jQuery’s AJAX API, we will make HTTP Requests to the server using the GET method. Other HTTP methods include POST, PUT, and DELETE. To access cross-domain resources, in this case the WCF Service, the client makes an HTTP Request using the GET method.

Writing this article, I discovered that using JSONP technically isn’t AJAX because it does not use the XMLHttpRequest object, a primary requirement of AJAX. JSONP-format HTTP Requests are made by inserting the HTML <script> tag into the DOM, dynamically. The Content-Type of the HTTP Response from the WCF Service, as seen with Firebug, is application/x-javascript, not application/json, as with regular JSON. I’m just happy if it all works, AJAX or not.
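If you want to see the ‘padding’ and the Content-Type without a browser, a quick check from PowerShell 3.0 or later looks something like the following; the callback query parameter name and the local URL are assumptions based on this example’s configuration:

$url = "http://localhost/MenuWcfRestService/RestaurantService.svc/GetCurrentMenu?callback=RestaurantMenu"
$response = Invoke-WebRequest -Uri $url

$response.Headers["Content-Type"]   # application/x-javascript, not application/json
$response.Content                   # RestaurantMenu([ ... ]); - the padded JSON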

Using the Code

The Visual Studio 2010 Solution used in this article contains the (3) projects shown below. All code for this article is available for download on The Code Project.

  1. Restaurant – C# Class Library
  2. RestaurantWcfService – C# WCF REST Service Application
  3. RestaurantDemoSite – Existing Web Site

Restaurant Class Library

The C# Class Library Project, Restaurant, contains the primary business objects and business logic. Classes that will be instantiated to hold the restaurant menu and restaurant orders include RestaurantMenu, MenuItem, RestaurantOrder, and OrderItem. Both RestaurantMenu and RestaurantOrder inherit from System.Collections.ObjectModel.Collection<T>. RestaurantMenu contains instances of MenuItem, while RestaurantOrder contains instances of OrderItem.

The business logic for deserializing the JSON-format HTTP Request containing the restaurant order is handled by the ProcessOrder class. I struggled with deserializing the JSONP-formatted HTTP Request into an instance of RestaurantOrder with the standard .NET System.Web.Script.Serialization.JavaScriptSerializer class. I solved the deserialization issue by using Json.NET. This .NET library, described as a flexible JSON serializer to convert .NET objects to JSON and back again, was created by James Newton-King. It was a real lifesaver. Json.NET is available on CodePlex. Before passing the raw JSONP-format HTTP Request to Json.NET, I still had to clean it up using the NormalizeJsonString method I wrote.

Lastly, ProcessOrder includes the method WriteOrderToFile, which writes the restaurant order to a text file. This is intended to demonstrate how orders could be sent from the client to the server, stored, and then reloaded and deserialized later, as needed. In order to use this method successfully, you need to create the ‘c:\RestaurantOrders’ folder and add permissions for the IUSR user account to read and write to that folder.
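A minimal sketch of that setup from an elevated PowerShell prompt follows; the folder path matches the STR_JsonFilePath constant in ProcessOrder, and granting the Modify right is an assumption sufficient for reading and writing:

# Create the orders folder (no error if it already exists)
New-Item -Path "C:\RestaurantOrders" -ItemType Directory -Force | Out-Null

# (OI)(CI)M: inherit to new files and folders, Modify access (read and write)
icacls "C:\RestaurantOrders" /grant "IUSR:(OI)(CI)M"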

The ProcessOrder class (note the reference to Json.NET: Newtonsoft.Json):

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

namespace Restaurant
{
    public class ProcessOrder
    {
        public const string STR_JsonFilePath = @"c:\RestaurantOrders\";

        public string ProcessOrderJSON(string restaurantOrder)
        {
            if (restaurantOrder.Length < 1)
            {
                return "Error: Empty message string...";
            }

            try
            {
                var orderId = Guid.NewGuid();
                NormalizeJsonString(ref restaurantOrder);

                //Json.NET: http://james.newtonking.com/projects/json-net.aspx
                var order =
                    JsonConvert.DeserializeObject<RestaurantOrder>(restaurantOrder);
                WriteOrderToFile(restaurantOrder, orderId);

                return String.Format(
                "ORDER DETAILS{3}Time: {0}{3}Order Id: {1}{3}Items: {2}",
                DateTime.Now.ToLocalTime(), orderId,
                order.Count(), Environment.NewLine);
            }
            catch (Exception ex)
            {
                return "Error: " + ex.Message;
            }
        }

        private void NormalizeJsonString(ref string restaurantOrder)
        {
            restaurantOrder = Uri.UnescapeDataString(restaurantOrder);
            int start = restaurantOrder.IndexOf("[");
            int end = restaurantOrder.IndexOf("]") + 1;
            int length = end - start;
            restaurantOrder = restaurantOrder.Substring(start, length);
        }

        private void WriteOrderToFile(string restaurantOrder, Guid OrderId)
        {
            //Make sure to add permissions for IUSR to folder path
            var fileName =
            String.Format("{0}{1}.txt", STR_JsonFilePath, OrderId);

            using (TextWriter writer = new StreamWriter(fileName))
            {
                writer.Write(restaurantOrder);
            }
        }
    }
}

Restaurant WCF Service

If you’ve built WCF Services before, you’ll be familiar with the file structure of this project. The RestaurantService.svc file, the WCF Service file, contains no actual code, only a pointer to the code-behind RestaurantService.cs file. This file contains each method which will be exposed to the client through the WCF Service. The IRestaurantService.cs Interface file defines the Service Contract between the RestaurantService class and the WCF Service. The IRestaurantService Interface also defines each Operational Contract with the class’s methods. The Operational Contract includes Operational Contract Attributes, which define how the Service Operation (a method with an Operational Contract) will operate as part of the WCF Service. Operational Contract Attributes in this example include the required invocation (HTTP method – GET), the format of the HTTP Request and Response (JSON), and caching (for the restaurant menu). The WCF Service references (has a dependency on) the Restaurant Class Library.

The WCF Web Service Project, RestaurantWcfService, contains two methods that are exposed to the client. The first, GetCurrentMenu, serializes an instance of RestaurantMenu, containing nested instances of MenuItem. It returns the JSONP-format HTTP Response to the client. There are no parameters passed to the method by the HTTP Request.

The second method, SendOrder, accepts the JSONP-format order, through an input parameter of the string data type, from the client’s HTTP Request. SendOrder then passes the order to the ProcessOrderJSON method, part of the Restaurant.ProcessOrder class. ProcessOrderJSON returns a string to SendOrder, containing some order information (Order Id, date/time, and number of order items). This information is serialized and returned in the JSONP-format HTTP Response to the client. The Response verifies that the order was received and understood.

Lastly, the web.config file contains the WCF bindings, behaviors, endpoints, and caching configuration. I always find configuring this file properly to be a challenge due to the almost-infinite number of WCF configuration options. There are many references available on configuring WCF, but be careful, many were written prior to .NET Framework 4. Configuring WCF for REST and JSONP became much easier with .NET Framework 4. Make sure you refer to the latest materials from MSDN on WCF for .NET Framework 4.

The IRestaurantService.cs Interface:

using Restaurant;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Web;

namespace RestaurantWcfService
{
    [ServiceContract]
    public interface IRestaurantService
    {
        [OperationContract]
        [Description("Returns a copy of the restaurant menu.")]
        [WebGet(BodyStyle = WebMessageBodyStyle.Bare,
        RequestFormat = WebMessageFormat.Json,
        ResponseFormat = WebMessageFormat.Json)]
        [AspNetCacheProfile("CacheFor10Seconds")]
        RestaurantMenu GetCurrentMenu();

        [OperationContract]
        [Description("Accepts a menu order and return an order confirmation.")]
        [WebGet(BodyStyle = WebMessageBodyStyle.Bare,
        RequestFormat = WebMessageFormat.Json,
        ResponseFormat = WebMessageFormat.Json,
        UriTemplate = "SendOrder?restaurantOrder={restaurantOrder}")]
        string SendOrder(string restaurantOrder);
    }
}

The RestaurantService.cs Class (inherits from IRestaurantService.cs):

using Restaurant;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.ServiceModel.Activation;

namespace RestaurantWcfService
{
    [AspNetCompatibilityRequirements(RequirementsMode =
    AspNetCompatibilityRequirementsMode.Allowed)]
    public class RestaurantService : IRestaurantService
    {
        public RestaurantMenu GetCurrentMenu()
        {
            //Instantiates new RestaurantMenu object and
            //sorts MenuItem objects by Description using LINQ
            var menuToReturn = new RestaurantMenu();

            var menuToReturnOrdered = (
                from items in menuToReturn
                orderby items.Description
                select items).ToList();

            menuToReturn = new RestaurantMenu(menuToReturnOrdered);
            return menuToReturn;
        }

        public string SendOrder(string restaurantOrder)
        {
            //Instantiates new ProcessOrder object and
            //passes JSON-format order string to ProcessOrderJSON method
            var orderProcessor = new ProcessOrder();
            var orderResponse =
                orderProcessor.ProcessOrderJSON(restaurantOrder);

            return orderResponse;
        }
    }
}

The WCF Service’s web.config File:

<?xml version="1.0"?>
<configuration>
    <system.web>
        <compilation debug="false" targetFramework="4.0" />
        <caching>
            <outputCacheSettings>
                <outputCacheProfiles>
                    <add name="CacheFor10Seconds" duration="10"
                         varyByParam="none" />
                </outputCacheProfiles>
            </outputCacheSettings>
        </caching>
    </system.web>
    <system.serviceModel>
        <bindings>
            <webHttpBinding>
                <binding name="webHttpBindingWithJsonP"
                         crossDomainScriptAccessEnabled="true" />
            </webHttpBinding>
        </bindings>
        <behaviors>
            <endpointBehaviors>
                <behavior name="webHttpBehavior">
                    <webHttp helpEnabled="true"/>
                </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
                <behavior>
                    <serviceMetadata httpGetEnabled="true" />
                </behavior>
            </serviceBehaviors>
        </behaviors>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
            multipleSiteBindingsEnabled="true" />
        <services>
            <service name="RestaurantWcfService.RestaurantService">
                <endpoint address="" behaviorConfiguration="webHttpBehavior"
                    binding="webHttpBinding"
                          bindingConfiguration="webHttpBindingWithJsonP"
                    contract="RestaurantWcfService.IRestaurantService" />
            </service>
        </services>
    </system.serviceModel>
    <system.webServer>
        <modules runAllManagedModulesForAllRequests="true"/>
    </system.webServer>
</configuration>

WCF Web HTTP Service Help

Once you have the article’s code installed and running, you can view more details about the WCF Service’s operations (methods) using the new .NET Framework 4 WCF Web HTTP Service Help Page feature. Depending on your IIS configuration, the local address should be similar to: http://localhost/MenuWcfRestService/RestaurantService.svc/Help.

.NET Framework 4 WCF Web HTTP Service Help Page

.NET Framework 4 WCF Web HTTP Service Help Page

Restaurant Demo Site

RestaurantDemoSite is a non-ASP.NET website, just HTML and JavaScript. For this article, I chose to host the RestaurantDemoSite web site on a different port (2929) than the WCF Service, which is on the default port 80. I did this to demonstrate the necessity of JSONP for cross-domain scripting; hosting them on two different ports is considered hosting on two different domains. Port 2929 is a randomly-selected open port on my particular development machine. Both the WCF Service and the website were set up as Virtual Directories in IIS, and then added to the Visual Studio 2010 Solution, along with the Restaurant Class Library.

Following the format of the first article, the website contains two identical pages, each with the same restaurant menu/order form. The ‘Development’ version is optimized for debugging and demonstration. The other, ‘Production’, with the JavaScript and CSS files minified and packed, is optimized for use in production. The demo uses the latest available jQuery JavaScript Library (jquery-1.6.3.js) and the jQuery plug-in, Format Currency (jquery.formatCurrency-1.4.0.js).

The page contains the new HTML5 <!DOCTYPE> declaration. I used HTML5’s new numeric input type for inputting the number of items to order. I defined a min and max value, also a new HTML5 feature. You can see these HTML5 features working in the latest version of Google Chrome.

All of the client-side business logic is contained in the restaurant.js JavaScript file. This file makes calls to jQuery and Format Currency. I chose the sometimes controversial, static code analysis tool JSLint to help debug and refactor my JavaScript code. Even if you don’t agree with all of JSLint’s warnings, understanding the reason for them will really enhance your overall knowledge of JavaScript. A good alternative to JSLint, which I’ve also tried, is JSHint, a fork of the JSLint project. JSHint advertises itself as a more configurable version of JSLint.

The restaurant.js JavaScript file:

var addMenuItemToOrder, calculateSubtotal, clearForm, clickRemove,
formatRowColor, formatRowCurrency, getRestaurantMenu, handleOrder,
orderTotal, populateDropdown, tableToJson, sendOrder, wcfServiceUrl;

// Populate drop-down box with JSON data (menu)
populateDropdown = function () {
    var id, price, description;
    id = this.Id;
    price = this.Price;
    description = this.Description;
    $("#select_item")
        .append($("<option></option>")
        .val(id)
        .html(description)
        .attr("title", price));
};

// Use strict for all other functions
// Based on post at:
// http://ejohn.org/blog/ecmascript-5-strict-mode-json-and-more/
(function () {
    "use strict";

    wcfServiceUrl =
        "http://localhost/MenuWcfRestService/RestaurantService.svc/";

    // Execute when the DOM is fully loaded
    $(document).ready(function () {
        getRestaurantMenu();
    });

    // Add selected item to order
    $(function () {
        $("#add_btn").click(addMenuItemToOrder);
    });

    // Place order if it contains items
    $(function () {
        $("#order_btn").click(handleOrder);
    });

    // Retrieve JSON data (menu) and loop for each menu item
    getRestaurantMenu = function () {
        $.ajax({
            cache: true,
            url: wcfServiceUrl + "GetCurrentMenu",
            data: "{}",
            type: "GET",
            jsonpCallback: "RestaurantMenu",
            contentType: "application/javascript",
            dataType: "jsonp",
            error: function () {
                alert("Menu failed!");
            },
            success: function (menu) {
                $.each(menu, populateDropdown); // must call function as var
            }
        });
    };

    // Add selected menu item to order table
    addMenuItemToOrder = function () {
        var order_item_selected_quantity, selected_item,
            order_item_selected_id, order_item_selected_description,
            order_item_selected_price, order_item_selected_subtotal;

        // Limit order quantity to between 1-99
        order_item_selected_quantity =
            parseInt($("#select_quantity").val(), 10);

        if (order_item_selected_quantity < 1 ||
        order_item_selected_quantity > 99 ||
        isNaN(order_item_selected_quantity)) {
            return;
        }

        // Can't add 'Select an Item...' to order
        if ($("#select_item").get(0).selectedIndex === 0) {
            return;
        }

        // Get values
        selected_item = $("#select_item option:selected");
        order_item_selected_id = parseInt(selected_item.val(), 10);
        order_item_selected_description = selected_item.text();
        order_item_selected_price = parseFloat(selected_item.attr("title"));

        // Calculate subtotal
        order_item_selected_subtotal =
            calculateSubtotal(order_item_selected_price,
            order_item_selected_quantity);

        // Write out menu selection to table row
        $("<tr class='order_row'></tr>").html("<td>" +
            order_item_selected_quantity +
            "</td><td class='order_item_id'>" +
            order_item_selected_id +
            "</td><td class='order_item_name'>" +
            order_item_selected_description +
            "</td><td class='order_item_price'>" +
            order_item_selected_price +
            "</td><td class='order_item_subtotal'>" +
            order_item_selected_subtotal +
            "</td><td><input type='button' value='remove' /></td>")
            .appendTo("#order_cart").hide();

        // Display grand total of order_item_selected_id
        $("#order_cart tr.order_row:last").fadeIn("medium", function () {
            // Callback once animation is complete
            orderTotal();
        });

        formatRowCurrency();
        formatRowColor();
        clickRemove();
        clearForm();
    };

    // Calculate subtotal
    calculateSubtotal = function (price, quantity) {
        return price * quantity;
    };

    // Create alternating colored rows in order table
    formatRowColor = function () {
        $("#order_cart tr.order_row:odd").css("background-color", "#FAF9F9");
        $("#order_cart tr.order_row:even").css("background-color", "#FFF");
    };

    // Format new order item values to currency
    formatRowCurrency = function () {
        $("#order_cart td.order_item_price:last").formatCurrency();
        $("#order_cart td.order_item_subtotal:last").formatCurrency();
    };

    // Bind a click event to the correct remove button
    clickRemove = function () {
        $("#order_cart tr.order_row:last input").click(function () {
            $(this).parent().parent().children().fadeOut("fast", function () {
                $(this).parent().slideUp("slow", function () { // the row (tr)
                    $(this).remove(); // the row (tr)
                    orderTotal();
                });
            });
        });
    };

    // Clear order input form and re-focus cursor
    clearForm = function () {
        $("#select_quantity").val("");
        $("#select_item option:first-child").attr("selected", "selected");
        $("#select_quantity").focus();
    };

    // Calculate new order total
    orderTotal = function () {
        var order_total = 0;

        $("#order_cart td.order_item_subtotal").each(function () {
            var amount = ($(this).html()).replace("$", "");
            order_total += parseFloat(amount);
        });

        $("#order_total").text(order_total).formatCurrency();
    };

    // Call functions to prepare order and send to WCF Service
    handleOrder = function () {
        if ($("#order_cart tr.order_row:last").length === 0) {
            alert("No items selected...");
        } else {
            var data = tableToJson();
            sendOrder(data);
        }
    };

    // Convert HTML table data into an array
    // Based on code from:
    // http://johndyer.name/post/table-tag-to-json-data.aspx
    tableToJson = function () {
        var data, headers, orderCartTable, myTableRow, rowData, i, j;

        headers = ["Quantity", "Id"];
        data = [];
        orderCartTable = document.getElementById("order_cart");

        // Go through cells
        for (i = 1; i < orderCartTable.rows.length - 1; i++) {
            myTableRow = orderCartTable.rows[i];
            rowData = {};

            for (j = 0; j < 2; j++) {
                rowData[headers[j]] = myTableRow.cells[j].innerHTML;
            }

            data.push(rowData);
        }

        return data;
    };

    // Convert array to JSON and send to WCF Service
    sendOrder = function (data) {
        var jsonString = JSON.stringify({ restaurantOrder: data });

        $.ajax({
            url: wcfServiceUrl + "SendOrder?restaurantOrder=" + jsonString,
            type: "GET",
            contentType: "application/javascript",
            dataType: "jsonp",
            jsonpCallback: "OrderResponse",
            error: function () {
                alert("Order failed!");
            },
            success: function (confirmation) {
                alert(confirmation.toString());
            }
        });
    };
} ());

Using Firebug to Look Behind the Scenes

In real life, a restaurant’s menu changes fairly infrequently. Therefore, to speed page delivery, I chose to cache the restaurant’s menu on the client-side. Caching is configured as part of the Operation Contract in IRestaurantService, as well as in the jQuery AJAX call to GetCurrentMenu in restaurant.js. In this example, I set the cache duration to 10 seconds, which can be confirmed in Firebug by looking at the Cache-Control property in the HTTP Response Header of the call to GetCurrentMenu.
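
The full IRestaurantService contract is not reproduced here, but a minimal sketch of how a 10-second cache can be declared on a WCF GET operation looks roughly like the following. The cache-profile name, the WebGet settings, and the string return type are illustrative assumptions, not the article’s exact code; the 10-second duration itself would live in the Web.config output-cache profile the attribute points to.

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IRestaurantService
{
    // AspNetCacheProfile ties this GET operation to an ASP.NET output-cache
    // profile (hypothetically named "CacheFor10Seconds", with duration="10"
    // in Web.config), which produces the Cache-Control header seen in Firebug.
    // Using it requires aspNetCompatibilityEnabled="true" in Web.config.
    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json)]
    [AspNetCacheProfile("CacheFor10Seconds")]
    List<string> GetCurrentMenu();
}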

Below is a screen grab of the initial load of the restaurant menu/order form page in Firefox with Firebug running. Note the ‘Domain’ of the AJAX call is different from that of the page and its associated files. Also, both the ‘Status’ and ‘Remote IP’ columns indicate the HTTP Response to GetCurrentMenu (the restaurant’s menu) is cached, along with the page and associated files. Firebug is an invaluable tool in the development and debugging of JavaScript, especially when working with AJAX.

Firebug in Firefox – Caching

Points of Interest

Several things stood out to me as a result of writing this article:

  • WCF – No matter how many times I work with WCF Services, getting them configured properly seems like 90% technical knowledge and 10% luck. Ok, maybe 20% luck! Seriously, there are a lot of great resources on the web regarding WCF configuration issues. If you have a specific problem with WCF, odds are someone else has already had it and published a solution. Make sure the information is current for the version of the .NET Framework you are working with.
  • Third-party Libraries, Plug-ins, and Frameworks – Don’t confine yourself to using the out-of-the-box .NET Framework, JavaScript, or jQuery to solve all your coding challenges. There is an endless variety of frameworks, JavaScript libraries, and jQuery plug-ins available. Being a good developer is about providing the best solution to a problem, not necessarily writing each and every line of code yourself. A few minutes of research can be worth hours of coding!
  • Refactoring – Refactoring your code is critical. Just making it work is not good enough. Added bonus? I’ve personally gained a considerable amount of knowledge about software development through refactoring. Forcing yourself to go back and optimize code can be a tremendous learning opportunity. Using third-party refactoring tools such as JSLint/JSHint, FxCop, Refactor! Pro, CodeRush, ReSharper, and others is a great way to improve both your refactoring and coding skills. I use all these tools as much as possible.
  • Cross-Domain with JSONP – Using JSONP is one technique for getting around the limitations imposed by the same origin policy. JSONP has its pros and cons. Spend some time researching other methods that might better fit your project’s requirements; a small hosting sketch of WCF’s built-in JSONP support follows this list.
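
As a point of comparison only (this is not the restaurant service from the article, and every name below is illustrative), .NET 4 added built-in JSONP support to WCF: setting CrossDomainScriptAccessEnabled on WebHttpBinding makes the service wrap GET responses in the callback name jQuery supplies when dataType is "jsonp". A small, self-contained hosting sketch:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Web;

[ServiceContract]
public interface IMenuService
{
    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json)]
    string GetCurrentMenu();
}

public class MenuService : IMenuService
{
    public string GetCurrentMenu()
    {
        return "menu payload goes here";
    }
}

public class JsonpHostSketch
{
    public static void Main()
    {
        // CrossDomainScriptAccessEnabled tells WCF to wrap GET responses in the
        // callback supplied on the query string (e.g. ?callback=OrderResponse),
        // which is what jQuery's dataType: "jsonp" expects.
        var binding = new WebHttpBinding { CrossDomainScriptAccessEnabled = true };

        using (var host = new ServiceHost(typeof(MenuService),
            new Uri("http://localhost:8080/")))
        {
            var endpoint = host.AddServiceEndpoint(typeof(IMenuService), binding, "menu");
            endpoint.Behaviors.Add(new WebHttpBehavior());
            host.Open();

            Console.WriteLine("JSONP-enabled service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}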


Using ImageMagick to Create Previews and Thumbnails from Uploaded Images

ImageMagick Results in Internet Explorer

Introduction

This article and the accompanying ASP.NET Web Application demonstrate the use of ImageMagick, a well-known open-source application from ImageMagick Studio LLC. The application creates a small thumbnail image and a larger preview image, both from a single JPEG or TIFF image uploaded by the user through a web-browser. The application was written in C# using Microsoft Visual Studio 2008 and Developer Express’ CodeRush; Developer Express’ Refactor! Pro was used to optimize and refactor the source code.

Background

Working in the graphic arts industry, much of the development I am involved with deals with the creation, processing, storage, and distribution of images and visually rich documents. One such recent project was the creation of a web-to-print solution for the Fine Art Reproduction division of our company, Lazer Incorporated. Preview the site at Lazer Incorporated Fine Art Gallery. The fine arts web-to-print solution consists of three primary user interfaces, as follows:

  1. Public art gallery to display the works of artists who use our services
  2. Private and secure ordering system for artists to order fine art prints and other services
  3. Customer service interface to administer the artists, artwork, thumbnail/previews, and orders.

The fine arts application is an ASP.NET 2.0 Web Application, written in C# 2.0 using Microsoft Visual Studio 2005. The solution uses SQL Server 2005 for storage of artist and artwork data. The solution integrates XML Flash Slideshow v3, an Adobe Dreamweaver Flash-based extension from dwusers.com, to display the artist’s portfolio. In addition, the solution automatically creates thumbnails and larger preview images of the artwork using ImageMagick. ImageMagick is a free application from ImageMagick Studio LLC. ImageMagick is available for multiple Operating Systems and programming environments.

Using the Code

The Application

The application calls ImageMagick v6.4.5 from the command-line to create thumbnails and previews from a single uploaded image file. It duplicates the approximate functionality of the aforementioned fine art solution’s thumbnail/preview creation routine. The application is written in Visual Studio 2008 using C# 3.0. This simple application contains a single Web Form and an associated class file. The application’s workflow is as follows:

  1. User uploads a TIFF or JPEG image, less than 5 MB in size, through the web browser.
  2. Application verifies the image has been successfully uploaded, and is within the specs above.
  3. Application calls ImageMagick from the command-line to return information about the uploaded image (optional).
  4. Application calls ImageMagick to create a small thumbnail image from the uploaded image, no bigger than 100 pixels in height.
  5. Application calls ImageMagick to create a larger-sized preview image, no bigger than 450 pixels in width or height.
  6. Application deletes the original uploaded file from the server.
  7. Application returns a detailed status report of all the actions to the user.

Preparing to Install the Application

All code for this article is available for download on The Code Project. Before you compile and run the source files provided, you must complete two simple tasks. First, download and install ImageMagick. As of the date of this article, v6.5.2-1 is the most current version. Since the instructions for installing ImageMagick are so well documented at http://www.imagemagick.org/script/binary-releases.php#windows, I won’t repeat them here.

Second, install the folder-set I have provided as part of the source code. Inside the ImageMagick parent folder of the set, there are four folders. They will hold uploaded images, thumbnails, previews, and an ICC color profile. The profile ensures optimal color accuracy during image conversion and display of images in the web-browser. The source code assumes you will install the folder-set at the root-level of your C: drive (c:\imagemagick). You can easily change this path in the source code, if necessary.

After downloading the folders, create a virtual directory within Internet Information Services (IIS) Web Server, with the local path pointing to the ‘imagemagick’ folder (see screen capture below). This step is necessary so the application can display the preview and thumbnail in the web-browser after they are created.

Creating Virtual Directory in IIS

Lastly, you must give the ASP.NET user account read and write access to the folder-set (see screen capture below). The Web Application, and subsequently ImageMagick, will need these privileges to create the new images in the appropriate folders, read the images back into the web-browser, and delete the original uploaded image.

ASP.NET User Account Permissions

Installing the Application

Now, you can download and install the source files and compile the web application. To do so:

  1. Create a new C# ASP.NET Web Application in VS 2008 called ‘IMPreviewDemo’.
  2. Replace the Default.aspx Web Form with the Default.aspx form provided.
  3. Install the Global.aspx, ImageMaker.cs, and Result.cs files in the root directory of the IMPreviewDemo application.
  4. Optionally, open your existing Web.config file and add an <httpRuntime/> tag to the <system.web> section (see screen capture below). This will allow you to adjust the maximum allowable uploaded image size and, just as importantly, set a time limit for uploads to complete.

Modifying the Web.config File Settings

The Code

The Default.aspx Web Form contains the VerifyUpload() method. When the Process button is clicked, VerifyUpload() checks that the file has been uploaded successfully and that it is either a JPEG or TIFF file. VerifyUpload() returns a value indicating whether or not the tests were passed. Only then is an instance of the ImageMaker class created, and a call placed to its ProcessImage(string uploadedFile, int detailLevel) method.

using System;

namespace IMPreviewDemo
{
    public partial class Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if(Request.QueryString["error"] == "filesize")
            {
                Message.CssClass = "textred";
                Message.Text = "File was too large to upload. Must be &lt; 5 MB.";
            }
        }
        protected void Process_Click(object sender, EventArgs e)
        {
            ImagePanel.Visible = false;
            if(VerifyUpload() == Result.Success)
            {
                CreateImages();
            }
        }
        protected Result VerifyUpload()
        {
            if(ImageUpload.HasFile)
            {
                string fileExtension = System.IO.Path
                    .GetExtension(ImageUpload.FileName).ToLower();
                string[] allowedExtensions =
                    new string[] { ".jpeg", ".jpg", ".tif", ".tiff" };
                foreach(string allowedExtension in allowedExtensions)
                    if(fileExtension == allowedExtension)
                        return Result.Success;
            }
            Message.CssClass = "textred";
            Message.Text =
                "Cannot accept this image type. Must be JPEG or TIFF.";
            ImagePanel.Visible = false;
            return Result.Failure;
        }
        private void CreateImages()
        {
            ImageMaker PreviewMaker = new ImageMaker();
            string uploadedFile = String.Format("{0}{1}",
                ImageMaker.PathUploadedFiles,
                ImageUpload.FileName.Replace(" ", "_"));
            ImageUpload.PostedFile.SaveAs(uploadedFile);
            int detailLevel = Convert.ToInt16(DetailLevel.SelectedValue);
            Message.CssClass = "";
            Message.Text = PreviewMaker.ProcessImage(uploadedFile, detailLevel)
                .Replace("\n", "<br />");
            Thumbnail.ImageUrl = PreviewMaker.UrlThumbnail.ToString();
            Preview.ImageUrl = PreviewMaker.UrlPreview.ToString();
            ImagePanel.Visible = true;
        }
    }
}

The latest version of this application, starting with version 1.1, uses the Global.aspx file to check the uploaded file size. I found this technique on Vikram Lakhotia’s Blog. The Application_BeginRequest event performs the check; if the file is larger than the limit I have set, 5 MB (measured in bytes), an error is sent back to the Default.aspx page.
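
The Global.aspx code itself is not reprinted here, but the check is roughly of the following shape. This is a sketch rather than the article’s exact code; the 5 MB limit and the error=filesize query-string value simply mirror the Page_Load handler shown above.

using System;
using System.Web;

public class Global : HttpApplication
{
    // 5 MB limit, expressed in bytes
    private const int MaxUploadBytes = 5 * 1024 * 1024;

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // ContentLength reports the size of the incoming request body in bytes
        if (Request.ContentLength > MaxUploadBytes)
        {
            // Bounce oversized uploads back to the page with an error flag,
            // which Page_Load translates into a friendly message
            Response.Redirect("~/Default.aspx?error=filesize");
        }
    }
}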

Reviewing the ImageMaker class, you will notice that all file paths are stored in a series of properties and fields at the top of the class. You can modify these to suit your own local file environment. The ImageMaker class also has two fields that identify the path to the ImageMagick application you installed. Again, adjust the fields to match your own environment.

The ImageMaker class has two primary methods. The ProcessImage(string uploadedFile, int detailLevel) method formats the file arguments to be sent to the ImageMagick application through the command-line. This method also concatenates and formats the status information returned to the user.

The second method, CallImageMagick(string fileArgs), uses the argument passed to it by ProcessImage(string uploadedFile, int detailLevel) to direct the ImageMagick application from the command-line. Depending on the argument, CallImageMagick(string fileArgs) may or may not return a response string. ImageMagick is called by instantiating the System.Diagnostics.Process class, which is able to start and stop local system processes.

In the source code, the file arguments passed to ImageMagick are specific to our workflow. We convert the uploaded image to an 8-bit RGB file, scale it down, sharpen it, assign a color profile, and finally save it as a compressed JPEG for display in a web-browser. You can change any of the ImageMagick arguments to fit your needs, along with the file size limitations and file types used in this demo. Refer to the ImageMagick website for a complete list of all ImageMagick’s tools and options (arguments).

using System;
using System.Diagnostics;
using System.IO;
using System.Text;

namespace IMPreviewDemo
{
    public class ImageMaker
    {
        //Location of the ImageMagick applications
        private const string pathImageMagick =
            @"C:\Program Files\ImageMagick-6.4.5-Q16\";
        private const string appImageMagick = "MagickCMD.exe";

        //Names for images created by ImageMagick
        private string ticksID;
        private string previewName;
        private string thumbnailName;

        //Paths to all the file folders
        private const string pathRoot = @"C:\imagemagick";
        private const string pathPreviews = pathRoot + @"\previews\";
        private const string pathThumbnails = pathRoot + @"\thumbnails\";
        private const string pathColorProfile = pathRoot + @"\profiles\";

        private const string colorProfile =
            "sRGB_IEC61966-2-1_black_scaled.icc";

        public static string PathUploadedFiles
        { get { return String.Format(@"{0}\uploads\", pathRoot); } }

        //URLs of images created by ImageMagick
        //(assumes imagemagick folder a virtual directory in IIS)
        public Uri UrlThumbnail { get; private set; }
        public Uri UrlPreview { get; private set; }

        StringBuilder sbResults = new StringBuilder();

        public string ProcessImage(string uploadedFile, int detailLevel)
        {
            try
            {
                ticksID = DateTime.Now.Ticks.ToString();
                sbResults.AppendLine(String.Format("Start time: {0}\n",
                    DateTime.Now.ToLongTimeString()));
                if(detailLevel == 1) { GetFileInfo(uploadedFile); }
                CreateThumbnailImage(uploadedFile);
                CreatePreviewImage(uploadedFile);
                File.Delete(uploadedFile);
                sbResults.AppendLine(String.Format("Uploaded file deleted: {0}\n",
                    uploadedFile));
                sbResults.AppendLine(String.Format("Stop time: {0}\n",
                    DateTime.Now.ToLongTimeString()));
                return sbResults.ToString();
            }
            catch(Exception ex)
            {
                return ex.Message;
            }
        }
        private void GetFileInfo(string uploadedFile)
        {
            string fileArgs = String.Format("identify -verbose {0}", uploadedFile);
            sbResults.AppendLine(String.Format("Uploaded file arguments: {0}\n",
                fileArgs));
            sbResults.AppendLine(String.Format("Uploaded file info: {0}\n",
                CallImageMagick(fileArgs)));
        }
        private void CreatePreviewImage(string uploadedFile)
        {
            previewName = String.Format("preview_{0}.jpg", ticksID);
            UrlPreview =
                new Uri(String.Format(@"http://localhost/imagemagick/previews/{0}",
                previewName));
            StringBuilder sbFileArgs = new StringBuilder()
                .Append(String.Format("convert {0}", uploadedFile))
                .Append(@" -intent relative")
                .Append(String.Format(@" -profile {0}{1}",
                    pathColorProfile, colorProfile))
                .Append(@" -filter Sinc -resize 450x450>")
                .Append(@" -unsharp .5x.5+.5+0")
                .Append(@" -depth 8")
                .Append(@" -strip")
                .Append(String.Format(@" -profile {0}{1}",
                    pathColorProfile, colorProfile))
                .Append(@" -quality 80 ")
                .Append(pathPreviews + previewName);
            string fileArgs = sbFileArgs.ToString();
            sbResults.AppendLine(String.Format("Preview file arguments: {0}\n",
                fileArgs));
            sbResults.AppendLine(String.Format("Preview created: {0}\n",
                CallImageMagick(fileArgs)));
        }
        private void CreateThumbnailImage(string uploadedFile)
        {
            thumbnailName = String.Format("thumbnail_{0}.jpg", ticksID);
            UrlThumbnail =
                new Uri(String.Format(@"http://localhost/imagemagick/thumbnails/{0}",
                thumbnailName));
            StringBuilder sbFileArgs = new StringBuilder()
                .Append(String.Format("convert {0}", uploadedFile))
                .Append(@" -intent relative")
                .Append(String.Format(@" -profile {0}{1}",
                    pathColorProfile, colorProfile))
                .Append(@" -filter Sinc -resize x100>")
                .Append(@" -unsharp .5x.5+.5+0")
                .Append(@" -depth 8")
                .Append(@" -strip")
                .Append(String.Format(@" -profile {0}{1}",
                    pathColorProfile, colorProfile))
                .Append(@" -quality 60 ")
                .Append(pathThumbnails + thumbnailName);
            string fileArgs = sbFileArgs.ToString();
            sbResults.AppendLine(String.Format("Thumbnail file arguments: {0}\n",
                fileArgs));
            sbResults.AppendLine(String.Format("Thumbnail created: {0}\n",
                CallImageMagick(fileArgs)));
        }
        private static string CallImageMagick(string fileArgs)
        {
            ProcessStartInfo startInfo = new ProcessStartInfo
            {
                Arguments = fileArgs,
                WorkingDirectory = pathImageMagick,
                FileName = appImageMagick,
                UseShellExecute = false,
                CreateNoWindow = true,
                RedirectStandardOutput = true
            };
            using(Process exeProcess = Process.Start(startInfo))
            {
                string IMResponse = exeProcess.StandardOutput.ReadToEnd();
                exeProcess.WaitForExit();
                exeProcess.Close();
                return !String.IsNullOrEmpty(IMResponse) ? IMResponse : "True";
            }
        }
    }
}

Conclusion

ImageMagick is a powerful image creation and image-processing engine, capable of the most complex tasks. In many respects, it is similar to Adobe Photoshop without the GUI. In addition to the command-line, ImageMagick can be accessed programmatically through its API. There are options available to Windows programmers using COM+ and .NET. You can read more at http://imagemagick.net/script/api.php.
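
For readers who prefer to skip the command-line entirely, a managed wrapper such as Magick.NET exposes similar operations directly from C#. This is a hypothetical sketch, not code from the article, and the file paths are placeholders:

using ImageMagick;

public class PreviewSketch
{
    public static void Main()
    {
        // Hypothetical paths; adjust to your own folder-set
        using (var image = new MagickImage(@"C:\imagemagick\uploads\artwork.tif"))
        {
            image.Resize(450, 450);   // fit within 450x450, preserving aspect ratio
            image.Strip();            // remove metadata, like the -strip argument
            image.Quality = 80;       // JPEG compression quality, like -quality 80
            image.Write(@"C:\imagemagick\previews\preview.jpg");
        }
    }
}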
