Posts Tagged software

Effective DevOps Technical Analysis

Building a successful DevOps team is a challenge. Building a successful Agile DevOps team is an enormous challenge. Many Software Developers have experience working on Agile teams, often starting in college. In contrast, most Operations Engineers and System Administrators have not participated on an Agile team. They frequently cut their teeth in highly reactive, support ticket-driven environments. Agile methodologies can often seem foreign and uncomfortable to them.

Adding to the challenge, most DevOps teams are expected to execute on a broad range of requirements. DevOps user stories can span the entire software development, delivery, and support lifecycle. Stories often include logging, monitoring, alerting, infrastructure provisioning, networking, security, continuous integration and deployment, and support.

Worst of all, many teams don't apply the same rigor to developing high-quality user stories for DevOps-related requirements as they do for software requirements. On teams where DevOps Engineers are integrated with Software Developers, I find this is often due to a lack of understanding of DevOps requirements. On stand-alone DevOps teams, I find DevOps Engineers often believe their stories don't need the same attention to detail as those of their software development peers.

Successful Teams

I've been lucky enough to be part of multiple DevOps teams, both integrated and stand-alone. Those teams were a mix of engineers from software development, system administration, and operations backgrounds. One of the reasons these teams were effective was the presence of a Technical Analyst. The Technical Analyst role falls somewhere in-between an Agile Business Analyst and a traditional Systems Analyst; on a DevOps team, probably closer to the latter. The Technical Analyst ensures that user stories are Ready for Development, meaning they meet the Definition of Ready.

There is a common perception that the Technical Analyst on an Agile team must be an experienced technologist, especially in a discipline as broad and quickly evolving as DevOps. While a good understanding of Agile practices and software development is beneficial, I've found organization and investigation skills to be most advantageous to the Technical Analyst.

In this post, we will examine one particular methodology for optimizing the effectiveness of the Technical Analyst: a consistent approach to functional requirement analysis and story development.

Inadequate Requirements

Let's start with a typical greenfield requirement frequently assigned to DevOps teams: centralized logging.

Title
Centralized Application Logging

User Story
As a Software Developer,
I need to view application logs,
So that I can monitor and troubleshoot running applications.

Acceptance Criterion 1
I can view the latest logs from multiple applications.

Acceptance Criterion 2
I can view application logs without having to log directly into each host on which the applications are running.

Acceptance Criterion 3
I can view the log entries in exact chronological order, based on timestamps.

I know some DevOps engineers who would gladly pick up this user story, make a handful of assumptions based on previous experiences, and manage to hack out a usable centralized application logging system. The ability to meet the end user's expectations would rest entirely on the experience (and luck) of the engineer, certainly not on the vague, incomplete user story and acceptance criteria, which lack any technical background details.

Multi-Faceted Analysis

Consistently delivering well-defined user stories and comprehensive acceptance criteria, which provide maximum business value with minimal resource utilization, requires a systematic analytical approach.

Based on my experience, I have found that certain facets of the software development, delivery, and support lifecycles affect or are affected by DevOps-related functional requirements. Every requirement should be uniformly analyzed against those facets to improve the overall quality of user stories and acceptance criteria. In no particular order, these facets include:

  1. Logging
  2. Monitoring
  3. Alerting
  4. Ownership
  5. Support
  6. Maintenance
  7. Backup and Recovery
  8. Security and Compliance
  9. Continuous Integration and Delivery
  10. Service Level Agreements (SLAs)
  11. Configuration Management
  12. Environment Management
  13. High Availability and Fault Tolerance
  14. Release Management
  15. Life Expectancy
  16. Architectural Approval
  17. Technology Preferences
  18. Infrastructure
  19. Data Management
  20. Training and Documentation
  21. Computer Networking
  22. Licensing and Legal
  23. Cost
  24. Technical Debt
  25. Other

For simplicity, I prefer to organize these facets into five general categories: Technology, Infrastructure, Delivery, Support, and Organizational.

Let's take a closer look at each facet, and how they could impact DevOps requirements.

Logging

Do the requirements introduce changes that would or should produce logs? How should those logs be handled? Do those changes impact existing logging processes and systems?

Monitoring

Do the requirements introduce changes that would or should be monitored? Do those changes impact existing monitoring processes and systems?

Alerting

Do the requirements introduce changes that would or should produce alerts? Do those changes impact existing alerting processes and systems?

Ownership

Do the requirements introduce changes that require an owner? Do those changes impact existing ownership models? An example of this might be who owns the support and maintenance of new infrastructure.

Support

Do the requirements introduce changes that would or should require internal, external, or vendor support? Do those changes impact existing support models? For example, adding new service categories to a support ticketing system, like ServiceNow.

Maintenance

Do the requirements introduce changes that would or should require regular, on-going maintenance? Do those changes impact existing maintenance processes and schedules? An example of maintenance might be how upgrading and patching would be completed for a new monitoring system.

Backup and Recovery

Do the requirements introduce changes that would or should require backup and recovery? Do those changes impact existing backup and recovery processes and systems?

Security and Compliance

Do the requirements introduce changes that have security and compliance implications? Do those changes impact existing security and compliance requirements, processes, and systems? Compliance in IT frequently includes Regulatory Compliance with HIPAA, Sarbanes-Oxley, and PCI-DSS. The Security and Compliance facet is closely associated with Licensing and Legal.

Continuous Integration and Delivery

Do the requirements introduce changes that require continuous integration and delivery of application or infrastructure code? Do those changes impact existing continuous integration and delivery processes and systems?

Service Level Agreements (SLAs)

Do the requirements introduce changes that are or should be covered by Service Level Agreements (SLAs)? Do those changes impact existing SLAs? Often, these requirements are system performance- or support-related, centered around Mean Time Between Failures (MTBF), Mean Time to Repair, and Mean Time to Recovery (MTTR).

Configuration Management

Do the requirements introduce changes that require the management of new configuration or multiple configurations? Do those changes affect existing configurations or configuration management processes and systems? Configuration management also includes feature management (aka feature switches). Will the changes be turned on or off for testing and release? Configuration Management is closely associated with Environment Management and Release Management.

Environment Management

Do the requirements introduce changes that require new software environments? Do those changes affect existing environments? Software environments typically include Development, Test, Performance, Staging, and Production.

High Availability and Fault Tolerance

Do the requirements introduce changes that require highly available and fault-tolerant configurations? Do those changes affect existing highly available and fault-tolerant systems?

Release Management

Do the requirements introduce changes that require releases to Production or other client-facing environments? Do those changes impact existing release management processes and systems?

Life Expectancy

Do the requirements introduce changes that have a defined lifespan? For example, will the changes be part of a temporary fix, a POC, an MVP, a pilot, or a permanent solution? Do those changes impact the lifecycles of existing processes and systems? For example, do the changes replace an existing system?

Architectural Approval

Do the requirements introduce changes that require the approval of the Client, Architectural Review Board (ARB), Change Approval Board (CAB), or other governing bodies? Do those changes impact other existing or pending technology choices? Architectural Approval is closely related to Technology Preferences, below. Often an ARB or similar governing body will regulate technology choices.

Technology Preferences

Do the requirements introduce changes that might be impacted by existing preferred vendor relationships or previously approved technologies? Are the requirements impacted by blocked technologies, including countries or companies blacklisted by the Government, such as the Specially Designated Nationals List (SDN) or Consolidated Sanctions List (CSL)? Does the client approve of the use of Open Source technologies? What is the impact of those technology choices on other facets, such as Licensing and Legal, Cost, and Support? Do those changes impact other existing or pending technology choices?

Infrastructure

Do the requirements introduce changes that would or should impact infrastructure? Is the infrastructure new or existing? Is it physical, virtualized, or in the cloud? If the infrastructure is virtualized, are there coding requirements about how it is provisioned? If the infrastructure is physical, how will it impact other facets, such as Technology Preferences, Cost, Support, Monitoring, and so forth? Do those changes impact existing infrastructure?

Data Management

Do the requirements introduce changes that produce data or require the preservation of state? Do those changes impact existing data management processes and systems? Is the data ephemeral or transient? How is the data persisted? Is the data stored in a SQL database, NoSQL database, distributed key-value store, or flat file? Consider how other facets, such as Security and Compliance, Technology Preferences, and Architectural Approval, impact or are impacted by the various aspects of the data.

Training and Documentation

Do the requirements introduce changes that require new training and documentation? This also includes architectural diagrams and other visuals. Do those changes impact training and documentation processes and practices?

Computer Networking

Do the requirements introduce changes that require networking additions or modifications? Do those changes impact existing networking processes and systems? I have separated Computer Networking from Infrastructure since networking is itself so broad, involving such things as DNS, SDN, VPN, SSL, certificates, load balancing, proxies, firewalls, and so forth. Again, how do networking choices impact other facets, like Security and Compliance?

Licensing and Legal

Do the requirements introduce changes that require licensing? Should or would the requirements require additional legal review? Are there contractual agreements that need authorization? Do those changes impact licensing or other legal agreements or contracts?

Cost

Do the requirements introduce changes that require the purchase of equipment, contracts, outside resources, or other goods that incur a cost? How and where will the funding come from to purchase those items? Are there licensing or other legal agreements and contracts required as part of the purchase? Who approves these purchases? Are they within acceptable budgetary expectations? Do those changes impact existing costs or budgets?

Technical Debt

Do the requirements introduce changes that incur technical debt? What is the cost of that debt? How will it impact other requirements and timelines? Do the requirements eliminate existing technical debt? Do those changes impact existing or future technical debt caused by other processes and systems?

Other

Depending on your organization, industry, and DevOps maturity, you may find other facets critical to your requirements.

Closer Analysis

Let’s take a second look at our logging story, applying what we’ve learned. For each facet, we will ask the following three questions.

Is there any facet of _______ that impacts the logging requirement?
Will the logging requirement impact any facet of _______?
If yes, will the impacts be captured as part of this user story, or elsewhere?

For the sake of brevity, we will only look at half the facets.

Logging

Although the requirement is about centralized logging, there is still analysis that should occur around Logging. For example, we should determine if there is a requirement that the centralized logging application should consume its own logs for easier supportability. If so, we should add an acceptance criterion to the original user story around the logging application’s logs being captured.

Monitoring

If we are building a centralized application logging system, we will assume there will be a centralized logging application, possibly a database, and infrastructure on which the application is running. Therefore, it is reasonable to also assume that the logging application, database, and infrastructure need to be monitored. This would require interfacing with an existing monitoring system or creating a new monitoring system. If and how the system will be monitored needs to be clarified.

Interfacing with a current monitoring system will minimally require additional acceptance criteria. Developing a new monitoring system would undoubtedly require an additional user story or stories, with their own acceptance criteria.

Alerting

If we are monitoring the system, then we should assume there is a requirement to alert someone if monitoring finds the centralized application logging system becomes impaired. Similar to monitoring, this would require interfacing with an existing alerting system or creating a new alerting system. A bigger question: who do the alerts go to? These details should be clarified first.

Interfacing with an existing alerting system will minimally require additional acceptance criteria. Developing a new alerting system would require an additional user story or stories, with their own acceptance criteria.

Ownership

Who will own the new centralized application logging system? Ownership impacts many facets, such as Support, Maintenance, and Alerting. Ownership should be clearly defined before the requirement's stories are played. Don't build something that does not have an owner; it will fail as soon as it is launched.

No additional user stories or acceptance criteria should be required.

Support

We will assume that there are no Support requirements, other than what is already covered in Monitoring, Ownership, and Training and Documentation.

Maintenance

Similarly, we will assume that there are no Maintenance requirements, other than what is already covered in Ownership and Training and Documentation. Requirements regarding automating system maintenance in any way should be an additional user story or stories, with their own acceptance criteria.

Backup and Recovery

Similar to both Support and Maintenance, Backup and Recovery should have a clear owner. Regardless of the owner, integrating backup into the centralized application logging system should be addressed as part of the requirement. To create a critical support system such as logging, with no means of backup or recovery of the system or the data, is unwise.

Backup and Recovery should be an additional user story or stories, with their own acceptance criteria.

Security and Compliance

Since we are logging application activity, we should assume there is the potential of personally identifiable information (PII) and other sensitive business data being captured. Due to the security and compliance risks associated with improper handling of PII, there should be clear details in the logging requirement regarding such things as system access.

At a minimum, there are usually requirements for a centralized application logging system to implement some form of user, role, application, and environment-level authorization for viewing logs. Security and Compliance should be an additional user story or stories, with their own acceptance criteria. Any requirements around the obfuscation of sensitive data in the log entries would also be an entirely separate set of business requirements.

Continuous Integration and Delivery

We will assume that there are no Continuous Integration and Delivery requirements around logging. Any requirements to push the logs of existing Continuous Integration and Delivery systems to the new centralized application logging system would be a separate user story.

Service Level Agreements (SLAs)

Service Level Agreements are a critical aspect of most DevOps user stories, and one that is commonly overlooked. Most requirements should have at least a minimal set of performance-related metrics associated with them. Defining SLAs also helps to refine the scope of a requirement. For example, what is the anticipated average volume of log entries to be aggregated, and over what time period? From how many sources will logs be aggregated? What is the anticipated average size of each log entry, 2 KB or 2 MB? What is the anticipated average number of concurrent system users?

Given these base assumptions, how quickly are new log entries expected to be available in the new centralized application logging system? What is the expected overall response time of the system under average load?

Directly applicable SLAs should be documented as acceptance criteria in the original user story, and the remaining SLAs assigned to the other new user stories.

Configuration Management

We will also assume that there are no Configuration Management requirements around the new centralized application logging system. However, if the requirements prescribed standing up separate logging implementations, possibly to support multiple environments or applications, then Configuration Management would certainly be in scope.

Environment Management

Based on further clarification, we will assume we are only creating a single centralized application logging system instance. However, we still need to learn which application environments need to be captured by that system. Is this application only intended to support the Development and Test environments? Are the higher environments of Staging and Production precluded due to PII?

We should add an explicit acceptance criterion to the original user story around which application environments should be included and which should not.

High Availability and Fault Tolerance

The logging requirement should state whether or not this system is considered business critical. Usually, if a system is considered critical to the business, then we should assume the system requires monitoring, alerting, support, and backup. Additionally, the system should be highly available and fault tolerant.

Making the system highly available would surely have an impact on facets like Infrastructure, Monitoring, and Alerting. High Availability and Fault Tolerance could be a separate user story or stories, with its own acceptance criteria, as long as the original requirement doesn’t dictate those features.

Other Aspects

We skipped half the facets. They would certainly expand our story list and acceptance criteria. We didn’t address how the end user would view the logs. Should we assume a web browser? Which web browsers? We didn’t determine if the system should be accessible outside the internal network. We didn’t determine how the system would be configured and deployed. We didn’t investigate if there are any vendor or licensing restrictions, or if there are any required training and documentation needs. Possibly, we will need Architectural Approval before we start to develop.

Analysis Results

Based on our limited analysis, we uncovered some further needs:

Further Clarification

We need additional information and clarification regarding Logging, Monitoring, Alerting, and Security and Compliance. We should also determine ownership of the resulting centralized application logging system, before it is built, to ensure Alerting, Maintenance, and Support are covered.

Additional Acceptance Criteria

We need to add additional acceptance criteria to the original user story around SLAs, Security and Compliance, Logging, Monitoring, and Environment Management.

I prefer to write acceptance criteria in the Given-When-Then format. I find the ‘Given-When-Then’ format tends to drive a greater level of story refinement and detail than the ‘I’ format. For example, compare:

I expect to see the most current log entries with no more than a two-minute time delay.

With:

Given the centralized application logging system is operating normally and under average load,
When I view log entries for running applications, from my web browser, 
Then I should see the most current log entries with no more than a two-minute time delay.

New User Stories

Following Agile best practices, best illustrated by the INVEST mnemonic, we need to add new, quality user stories, with their own acceptance criteria, around Monitoring, Alerting, Backup and Recovery, Security and Compliance, and High Availability and Fault Tolerance.

Conclusion

Examining all possible facets of each DevOps requirement will improve the overall quality of user stories and acceptance criteria, maximizing business value and minimizing resource utilization.

All opinions in this post are my own and not necessarily the views of my current employer or their clients.


Tips for Better Software Releases


Introduction

Although the process of releasing software varies considerably from application to application, we would probably all agree there are common reasons releases tend to fail. Lack of planning and preparation, defective code, infrastructure and release tooling failures, and poor communication all transcend technology choices and industries.

Indeed, no one can guarantee a release's success each and every time. However, there are steps we can take to improve the odds of executing consistently successful software releases. Based on my experience with a diverse set of enterprise software organizations, I've collected the following suggestions for improving release success.

Manage the Release, Don’t Let the Release Manage You

Engage an experienced Release Manager, someone who schedules the release, coordinates the resources, runs the release call, leads the release team through the plan, and communicates status to stakeholders. The Release Manager serves as a single point of contact (aka SPOC) for the release. Having a Release Manager allows other release team members to focus on their specific tasks without needless distraction.

A Goal Without a Plan is Just a Wish

Always have an implementation plan (aka IPlan). Whether a single page of paper, or a hundred-page Microsoft Word document, script the entire release in advance. What are the required tasks? What is their order of execution? What resources are required to complete each task? What is the test plan? What is the contingency plan, if or when something goes awry?

Don’t Boil the Ocean

Releases should have specific goals, the fewer, the better. Release goals should be clearly stated at the beginning of the implementation plan. If you feel you need to compress several goals into a single release, you are probably not releasing frequently enough.

For simplicity, separate software-centric releases from infrastructure-centric changes. Don't tie the success of either to the success or failure of the other.

Expect Success, Plan for Failure

Channel your inner Boy Scout and always be prepared. Have a contingency plan. Plan for multiple failure scenarios. Individual task failures only lead to release failures when we don’t have a plan to correct for the individual failures quickly. Are additional resources required in the event of a failure? If so, they should be available, and aware of the release plan and the contingency plan, in advance.

Seek the Understanding and Approval of Others

This is one of those times when seeking the understanding and the approval of others is a good thing. Review the plan in advance with all required resources and stakeholders. Ensure there is a complete comprehension of the plan, goals, resource requirements, and timing. Discuss the plan’s level of risk to the organization and their customers. Seek the support and approval of stakeholders when required.

The More I Practice the Luckier I Get

Do a dry run of the plan, even if it is just a verbal walkthrough with the release team. Better yet, do a live run in a production-like staging environment. Adjust the plan if necessary. Also, don’t forget to practice your contingency plan.

Should I Pack a Lunch?

Part of the plan's review should be a discussion of timing. Release resources should know the estimated total time for the release (aka the release window), both best case and not-so-best case. Also, if individual stages are expected to take more than a few minutes to complete, those times should be understood in advance. There is nothing worse than staring endlessly at the third stage of a ten-stage continuous integration pipeline for 15 minutes, with no idea of how much longer the task is expected to take.

Don’t Start the Game without the Whole Team on the Field

Never start a release without all the required resources present and prepared. A quick release can turn into a long and painful experience if you are waiting on resources. In my experience, the longer a release takes to complete, the greater the risk of failure.

Make a Plan and Stick to It

You wrote the plan, you reviewed the plan, you practiced the plan, and you received the approval of your stakeholders for the plan. So, follow the plan.

If you absolutely must deviate from the plan, take the time to consider the impact and potential risks. Document the deviation for the post-mortem and for planning the next release.

Test Early, Test Often

Testing should not be the final step to confirm the release's success. Testing should be done continuously and automatically, throughout the release process. Test after each significant stage of the plan. Issues are easier to find and correct the earlier they are discovered.

Mute is Your Friend, Always be on Mute

Emotions will flare, words will uncontrollably leap from your lips on occasion, background noise makes the release call difficult to follow, and diagnosing release issues isn't best done in front of an audience. Keep the release call focused on the plan and free of emotion.

Are We There Yet?

Consistently communicate before, during, and after the release. Let stakeholders know in advance when the release is starting and how long it is expected to take. Keep stakeholders aware of significant deviations to the plan, especially with customer-facing impact. Let everyone know when the release finishes, and a final release status. Keep communications germane.

Ensure everyone understands how the release status will be communicated, be it email, IM, persistent chat, or a web-based status page.

Mistakes are Meant for Learning, not Repeating

Successful or otherwise, follow each release with a blameless post-mortem. A post-mortem might only require a quick five-minute chat. Or, the whole release team might need an hour with a therapist (that’s a joke). Discuss what went right, and what did not go as well as expected. Be keen to focus on repeat problems and problems that were not caught during the release. Most importantly, consider how to continuously improve the release process.

Releasing is a Game, Keep Score

Know your release’s Key Performance Indicators (KPIs) and your Service Level Agreements (SLAs). Understand what matters most to your stakeholders and to your customers. Always know how you are performing against your KPIs and SLAs, and what you need to do to improve them. Dashboards are great tools to display KPI and SLA performance transparently.

Is your failure rate increasing or decreasing? Is one type of task responsible for a majority of your release failures? Is each release taking more or less time to complete? Did you experience an unexpected outage during the release? For how long? What is the volume of post-release issues caused by, but not discovered during, the release?

All opinions in this post are my own and not necessarily the views of my current employer or their clients.


First Impressions of Code First Development with Entity Framework 5 in Visual Studio 2012

Build a Data Access Layer (DAL) in Visual Studio 2012 using the recently released Entity Framework 5 with Code First Development. Leverage Entity Framework 5's eagerly anticipated enum support, as well as its Code First Migrations and Lazy Loading functionality. An updated version of this project's source code, using EF6, is now available on GitHub. The GitHub repository contains all three Entity Framework blog posts.

Introduction

In August of this year (2012), along with the release of Visual Studio 2012, Microsoft announced the release of Entity Framework 5 (EF5). According to Microsoft, "Entity Framework (EF) is an object-relational mapper that enables .NET developers to work with relational data using domain-specific objects. It eliminates the need for most of the data-access code that developers usually need to write."

EF5 offers multiple options for development, including Model First, Database First, and Code First. Code First, first introduced in EF4.1, allows you to define your model using C# or VB.Net POCO classes. Additional configuration can optionally be performed using attributes on your classes and properties, or by using a fluent API. Code First will then create a database if it doesn't exist, or use an existing empty database, adding new tables, according to Microsoft.

Exploring Code First

I started by looking at EF5’s Code First development to create a new database. I chose to create a Data Access Layer (DAL) for a health tracker application. This was loosely based on a mobile application I developed some time ago. You can track food intake, physical activities, and how much water you drink per day.

To further explore EF5's new enum support feature, I substituted enumeration classes for two of the database tables containing static 'system' data. Added in EF5, Enumeration (enum) support was the number one user-requested EF feature. Also added to EF5 were spatial data types and my personal favorite, table-valued functions (TVF).

Once the new database was created, I tested another newer EF feature, Code First Migrations. Code First Migrations, introduced in EF4.3, is a feature that allows a database created by Code First to be incrementally changed as your Code First model evolves.

Creating the HealthTracker DAL

Final View of Both Projects in Solution

Here are the steps I followed to create my EF5 Entity Framework DAL project:

1. Create a new C# Class Library, ‘HealthTracker.DataAccess’, in a Solution, ‘HealthTracker’. Target the .NET Framework 4.5.

New Class Library

2. Create (4) POCO (Plain Old CLR Object) classes: Person, Activity, Meal, and Hydration, shown below. Note the virtual properties. According to Microsoft, this enables the Lazy Loading feature of Entity Framework. Lazy Loading means that the contents of these properties will be automatically loaded from the database when you try to access them.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
        public virtual List<Meal> Meals { get; set; }
        public virtual List<Activity> Activities { get; set; }
        public virtual List<Hydration> Hydrations { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Activity
    {
        public int ActivityId { get; set; }
        public DateTime Date { get; set; }
        public ActivityType Type { get; set; }
        public string Notes { get; set; }
        public int PersonId { get; set; }
        public virtual Person Person { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Hydration
    {
        public int HydrationId { get; set; }
        public DateTime Date { get; set; }
        public int Count { get; set; }
        public int PersonId { get; set; }
        public virtual Person Person { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Meal
    {
        public int MealId { get; set; }
        public DateTime Date { get; set; }
        public MealType Type { get; set; }
        public string Description { get; set; }
        public int PersonId { get; set; }
        public virtual Person Person { get; set; }
    }
}

3. Add data annotations to the POCO classes. Annotations are used to specify validation for fields in the data model. Data annotations can include required, min, max, string length, and so forth. Annotations are part of the System.ComponentModel.DataAnnotations namespace. As an alternative to annotations, you can use the Code First Fluent API.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Person
    {
        public int PersonId { get; set; }

        [Required]
        [StringLength(100,
        ErrorMessage = "Name must be less than 100 characters."),
        MinLength(2,
        ErrorMessage = "Name must be less than 2 characters.")]
        public string Name { get; set; }

        public virtual List<Meal> Meals { get; set; }
        public virtual List<Activity> Activities { get; set; }
        public virtual List<Hydration> Hydrations { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Activity
    {
        public int ActivityId { get; set; }

        [Required]
        public DateTime Date { get; set; }

        [Required]
        [EnumDataType(typeof(ActivityType))]
        public ActivityType Type { get; set; }

        [StringLength(100,
        ErrorMessage = "Note must be less than 100 characters.")]
        public string Notes { get; set; }

        [Required]
        public int PersonId { get; set; }
        public virtual Person Person { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Hydration
    {
        public int HydrationId { get; set; }

        [Required]
        public DateTime Date { get; set; }

        [Required]
        [Range(0, 20,
        ErrorMessage = "Hydration amount must be between 0 - 20.")]
        public int Count { get; set; }

        [Required]
        public int PersonId { get; set; }
        public virtual Person Person { get; set; }
    }
}
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public class Meal
    {
        public int MealId { get; set; }

        [Required]
        public DateTime Date { get; set; }

        [Required]
        [EnumDataType(typeof(MealType))]
        public MealType Type { get; set; }

        [StringLength(100,
        ErrorMessage = "Description must be less than 100 characters.")]
        public string Description { get; set; }

        [Required]
        public int PersonId { get; set; }
        public virtual Person Person { get; set; }
    }
}

4. Create (2) enum classes: MealType and ActivityType, shown below.

using System;
using System.Collections.Generic;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public enum ActivityType
    {
        Treadmill,
        Jogging,
        WeightTraining,
        Biking,
        Aerobics,
        Other
    }
}
using System;
using System.Collections.Generic;
using System.Linq;

namespace HealthTracker.DataAccess.Classes
{
    public enum MealType
    {
        Breakfast,
        MidMorning,
        Lunch,
        MidAfternoon,
        Dinner,
        Snack,
        Brunch,
        Other
    }
}

5. Add Entity Framework (System.Data.Entity namespace classes) to the Solution by right-clicking on the Solution and selecting ‘Manage NuGet Packages for Solution…’ Install the ‘EntityFramework’ package. If you haven’t discovered the power of NuGet with Visual Studio, check out their site.

Manage NuGet Packages – Add Entity Framework
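
If you prefer the console to the GUI dialog, the equivalent Package Manager Console command is shown below. Pinning the version is an assumption on my part, added here only to match the EF5 release this post covers.

Install-Package EntityFramework -Version 5.0.0 -ProjectName HealthTracker.DataAccess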

6. Add a DbContext class, 'HealthTrackerContext', shown below. According to Microsoft, "DbContext represents a combination of the Unit-Of-Work and Repository patterns and enables you to query a database and group together changes that will then be written back to the store as a unit."

using System;
using System.Collections.Generic;
using System.Linq;
using System.Data.Entity;
using HealthTracker.DataAccess.Classes;

namespace HealthTracker.DataAccess
{
    public class HealthTrackerContext : DbContext
    {
        public DbSet<Activity> Activities { get; set; }
        public DbSet<Hydration> Hydrations { get; set; }
        public DbSet<Meal> Meals { get; set; }
        public DbSet<Person> Persons { get; set; }
    }
}

7. Add 'Code First Migrations' by running the Enable-Migrations command in the NuGet Package Manager Console. This allowed me to make changes to POCO classes while keeping the database schema in sync. This only needs to be run once.

Enable-Migrations -ProjectName HealthTracker.DataAccess -Force -EnableAutomaticMigrations -Verbose
Package Manager Console – Code First Migrations

This creates the following class in a new Migrations folder, called Configuration.cs

namespace HealthTracker.DataAccess.Migrations
{
    using System;
    using System.Data.Entity;
    using System.Data.Entity.Migrations;
    using System.Linq;

    internal sealed class Configuration : DbMigrationsConfiguration<HealthTracker.DataAccess.HealthTrackerContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
        }

        protected override void Seed(HealthTracker.DataAccess.HealthTrackerContext context)
        {
            //  This method will be called after migrating to the latest version.

            //  You can use the DbSet<T>.AddOrUpdate() helper extension method
            //  to avoid creating duplicate seed data. E.g.
            //
            //    context.People.AddOrUpdate(
            //      p => p.FullName,
            //      new Person { FullName = "Andrew Peters" },
            //      new Person { FullName = "Brice Lambson" },
            //      new Person { FullName = "Rowan Miller" }
            //    );
            //
        }
    }
}

Running the Update-Database command in the Package Manager Console updates the database when you have made changes.

Update-Database -ProjectName HealthTracker.DataAccess -Verbose -Force

I found the Code First Migrations feature a bit confusing at first. It took several tries to work out the correct command to use to add Code First Migrations to my data access project, and the command to call later to update the database based on my project.
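
To make the round-trip concrete, here is a minimal sketch of a model change followed by a schema update. The Weight property is purely hypothetical, added only to illustrate a migration:

using System.Collections.Generic;

namespace HealthTracker.DataAccess.Classes
{
    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }

        // Hypothetical new column, added only to demonstrate a migration.
        public int Weight { get; set; }

        public virtual List<Meal> Meals { get; set; }
        public virtual List<Activity> Activities { get; set; }
        public virtual List<Hydration> Hydrations { get; set; }
    }
}

Rerunning the Update-Database command shown above then adds the new Weight column to the Person table, without recreating the database, because AutomaticMigrationsEnabled is set to true in the Configuration class.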

Dependency Graph

Here is the way the class relationships should look using Visual Studio 2012’s powerful new Dependency Graph feature:

Visual Studio 2012 Dependency Graph of HealthTracker

Testing the Project

To test the entities, I created a simple console application, which references the above EF5 class library. The application takes a person’s name as input. It then adds and/or updates Activities, Meals, and Hydrations, for that Person, depending on whether or not that Person already exists in the database.

1. Create a new C# Console Application, ‘HealthTracker.Console’.

New Console Application

2. Add a reference to the ‘HealthTracker.DataAccess’ project.

Add Reference to HealthTracker.DataAccess Project

3. Create the Examples class, and add the code below to the Program class's Main method. As seen in the Examples class, using Microsoft's LINQ to Entities with the new .NET Framework 4.5 makes querying the entities very easy.

using System;

namespace HealthTracker.ConsoleApp
{
    class Program
    {
        private static string _newName = String.Empty;

        static void Main()
        {
            GetNameInput();
            var personId = Examples.FindPerson(_newName);

            if (personId == 0)
            {
                Examples.CreatePerson(_newName);
                personId = Examples.FindPerson(_newName);
            }

            Examples.CreateActivity(personId);
            Examples.CreateMeals(personId);
            Examples.UpdateOrAddHydration(personId);

            Console.WriteLine("Click any key to quit.");
            Console.ReadKey();
        }

        private static void GetNameInput()
        {
            Console.Write("Input Person's Name: ");
            var readLine = Console.ReadLine();
            if (readLine != null) _newName = readLine.Trim();
            if (_newName.Length > 2) return;
            Console.WriteLine("Name to short. Exiting program.");
            GetNameInput();
        }
    }
}
using System;
using System.Globalization;
using System.Linq;
using HealthTracker.DataAccess;
using HealthTracker.DataAccess.Classes;

namespace HealthTracker.ConsoleApp
{
    public class Examples
    {
        private static readonly DateTime Today = DateTime.Now.Date;

        /// <summary>
        /// Example of finding a Person in database.
        /// </summary>
        /// <param name="name">Name of the Person</param>
        /// <returns>Person's unique PersonId</returns>
        public static int FindPerson(string name)
        {
            using (var db = new HealthTrackerContext())
            {
                var personId = db.Persons.Where(person => person.Name == name)
                    .Select(person => person.PersonId).FirstOrDefault();

                if (personId == 0)
                    Console.WriteLine("Person, {0}, could not be found...", name);
                else
                    Console.WriteLine("PersonId {0} retrieved...", personId);

                return personId;
            }
        }

        /// <summary>
        /// Example of Adding a new Person
        /// </summary>
        /// <param name="name">Name of the Person</param>
        public static void CreatePerson(string name)
        {
            using (var db = new HealthTrackerContext())
            {
                // Add a new Person
                db.Persons.Add(new Person { Name = name });
                db.SaveChanges();
                Console.WriteLine("New Person, {0}, added...", name);
            }
        }

        /// <summary>
        /// Example of updating existing Hydration count.
        /// Else adding new Hydration if it doesn't exist.
        /// </summary>
        /// <param name="personId">Person's unique PersonId</param>
        public static void UpdateOrAddHydration(int personId)
        {
            using (var db = new HealthTrackerContext())
            {
                var existingHydration = db.Hydrations.FirstOrDefault(
                    hydration => hydration.PersonId == personId
                        && hydration.Date == Today);

                if (existingHydration != null && existingHydration.HydrationId > 0)
                {
                    existingHydration.Count++;
                    db.SaveChanges();
                    Console.WriteLine("Existing Hydration count increased to {0}...",
                        existingHydration.Count.ToString(CultureInfo.InvariantCulture));
                    return;
                }

                db.Hydrations.Add(new Hydration
                {
                    PersonId = personId,
                    Date = Today,
                    Count = 1
                });
                db.SaveChanges();
                Console.WriteLine("New Hydration added...");
            }
        }

        /// <summary>
        /// Example of adding two Meals.
        /// </summary>
        /// <param name="personId">Person's unique PersonId</param>
        public static void CreateMeals(int personId)
        {
            using (var db = new HealthTrackerContext())
            {
                db.Meals.Add(new Meal
                {
                    PersonId = personId,
                    Date = Today,
                    Type = MealType.Breakfast,
                    Description = "(2) slices toast, (1) glass orange juice"
                });

                db.Meals.Add(new Meal
                {
                    PersonId = personId,
                    Date = Today,
                    Type = MealType.Lunch,
                    Description = "(1) protein shake, (1) apple"
                });

                db.SaveChanges();
                Console.WriteLine("Two new Meals added...");
            }
        }

        /// <summary>
        /// Example of adding an Activity
        /// </summary>
        /// <param name="personId">Person's unique PersonId</param>
        public static void CreateActivity(int personId)
        {
            using (var db = new HealthTrackerContext())
            {
                db.Activities.Add(new Activity
                {
                    PersonId = personId,
                    Date = Today,
                    Type = ActivityType.Treadmill,
                    Notes = "30 minutes, 500 calories"
                });

                db.SaveChanges();
                Console.WriteLine("New Activity added...");
            }
        }
    }
}

The console output below demonstrates the addition of a new Person, Susan Jones, with an associated Activity, two Meals, and a Hydration.

Console Output – Adding New Person

The console output below demonstrates inputting Susan Jones a second time. Observe what happened to the output.

Console Output – Updating Existing Person

Below is a view of the newly created SQL Express 'HealthTracker.DataAccess.HealthTrackerContext' database. Note the four database tables, which correspond to the four POCO classes in the project. Why did the database show up in SQL Express or LocalDB? Read the 'Where's My Data?' section of this tutorial by Microsoft. Code First uses SQL Express or LocalDB by default, but you can add a connection to your project to target SQL Server.

View of HealthTracker SQL Express Database

You can switch your connection to SQL Server at any time, even after the SQL Express or LocalDB database has been created and updated using Code First Migrations. That's what I chose to do after completing this post's example. I prefer doing my initial development with a temporary SQL Express or LocalDB database while the entities and database schema are still evolving. Once I have the model at a fairly stable point, I create the SQL Server database and continue evolving the schema from there.
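
As a minimal sketch of that switch, you can point the context at a named connection string via the DbContext base constructor; the name 'HealthTrackerDb' below is hypothetical.

using System.Data.Entity;
using HealthTracker.DataAccess.Classes;

namespace HealthTracker.DataAccess
{
    public class HealthTrackerContext : DbContext
    {
        // "name=..." tells EF to look up a connection string named
        // "HealthTrackerDb" (hypothetical) in App.config/Web.config,
        // instead of defaulting to SQL Express or LocalDB.
        public HealthTrackerContext()
            : base("name=HealthTrackerDb")
        {
        }

        public DbSet<Activity> Activities { get; set; }
        public DbSet<Hydration> Hydrations { get; set; }
        public DbSet<Meal> Meals { get; set; }
        public DbSet<Person> Persons { get; set; }
    }
}

The matching App.config entry is a standard connectionStrings element, with providerName="System.Data.SqlClient", pointing at your SQL Server instance.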

Conclusion

Having worked with earlier editions of Entity Framework’s Database First and Model First development, in addition to other ORM packages, I was impressed with EF5’s new features as well as the Code First development methodology. Overall, I feel EF5 is another big step toward making Entity Framework a top-notch ORM, on par with other established products on the market.


Easy SMTP Server for Developers

Oftentimes in development, we work on projects that require email messaging. Many enterprise applications rely on email as a primary means of communication. Messages are dynamically generated by the application based on user interactions or system events. Informational email messages are sent to the end-user. User requests and system alerts are sent to support teams. Most corporations and software development firms have mail servers in-house or use the services of a web or email hosting provider. The protocol used by mail servers is SMTP (Simple Mail Transfer Protocol).

Developers are often restricted from using corporate mail servers to test emails during the development phases of a project. The mail server's security settings, network configuration, or corporate security policies prevent development access. As an alternative, many developers install a mail server (SMTP server) on their local development computer. There are many options available for serving email at low or no cost. Configuring SMTP on the Windows platform with IIS is easy. Microsoft provides a fully functional SMTP service capable of managing email communications. However, if you develop on Windows, but not in .NET, Microsoft's solution may not be your best choice.

If all you are interested in is examining the email your application generates, there is an easier option: smtp4dev. This application, hosted on CodePlex, was developed by Robert Wood. I've used it for the past two years for several projects, both .NET and Java, that required developing email notifications. Just download smtp4dev, unzip and place it in your programs folder, configure the port you want to use, and start it. The application is considered a 'dummy server'; it will not actually send or receive email. It intercepts outbound messages, allowing you to view the results. It sits in the system tray. I've set up smtp4dev as a start-up item, so it fires up each time I start my computer.
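
To verify the setup, a short .NET console sketch like the one below will do. It assumes smtp4dev is listening on port 25, its default; the addresses are deliberately fake, which is exactly the point.

using System.Net.Mail;

class Smtp4DevTest
{
    static void Main()
    {
        // Point SmtpClient at the local smtp4dev instance; the message
        // is intercepted for viewing and never actually delivered.
        using (var client = new SmtpClient("localhost", 25))
        {
            client.Send(new MailMessage(
                "noreply@example.com",   // from (a fake address is safe here)
                "support@example.com",   // to
                "Test Alert",
                "This message never leaves the development machine."));
        }
    }
}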

Below is a screen grab of smtp4dev running on my development computer. It contains emails I generated during the configuration and testing of Jenkins, while preparing my last post, Convert VS 2010 Database Project to SSDT and Automate Publishing with Jenkins – Part 3/3. I avoided all the issues of using fake email addresses and the potential risk of test emails actually being sent by a mail server. Raise your hand if this has happened to you!


smtp4dev with Test Emails
