Posts Tagged Node

Building Asynchronous, Serverless Alexa Skills with AWS Lambda, DynamoDB, S3, and Node.js

Introduction

In the following post, we will use the new version 2 of the Alexa Skills Kit, AWS Lambda, Amazon DynamoDB, Amazon S3, and the latest LTS version of Node.js to create an Alexa Custom Skill. According to Amazon, a custom skill allows you to define the requests the skill can handle (intents) and the words users say to invoke those requests (utterances).

If you want to compare the development of an Alexa Custom Skill with those of Google and Azure, in addition to this post, please read my previous two posts in this series, Building and Integrating LUIS-enabled Chatbots with Slack, using Azure Bot Service, Bot Builder SDK, and Cosmos DB and Building Serverless Actions for Google Assistant with Google Cloud Functions, Cloud Datastore, and Cloud Storage. All three demonstrations are written in Node.js, all three leverage their cloud platform’s machine learning-based Natural Language Understanding services, and all three take advantage of the NoSQL database and storage services available on their respective cloud platforms.

AWS Technologies

The final high-level architecture of our Alexa Custom Skill will look as follows.

Alexa Skill Final Architecture v2.png

Here is a brief overview of the key AWS technologies we will incorporate into our Skill’s architecture.

Alexa Skills Kit

According to Amazon, the Alexa Skills Kit (ASK) is a collection of self-service APIs, tools, documentation, and code samples that makes it possible to add skills to Alexa. The Alexa Skills Kit supports building different types of skills. Currently, Alexa skill types include Custom, Smart Home, Video, Flash Briefing, and List Skills. Each skill type makes use of a different Alexa Skill API.

AWS Serverless Platform

To create a custom skill for Alexa, you currently have the choice of using an AWS Lambda function or a web service. AWS Lambda is part of an ecosystem of cloud services and developer tools that Amazon refers to as the AWS Serverless Platform. The platform’s services are designed to support the development and hosting of highly performant, enterprise-grade serverless applications.

In this post, we will leverage three of the AWS Serverless Platform’s services, including Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), and AWS Lambda.

Node.js

AWS Lambda supports multiple programming languages, including Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), and Go. All are excellent choices for writing modern serverless functions. For this post, we will use Node.js. According to the Node.js Foundation, Node.js is an asynchronous, event-driven JavaScript runtime built on Chrome’s V8 JavaScript engine.

In April 2018, AWS Lambda announced support for the Node.js 8.10 runtime, the current Long Term Support (LTS) version of Node.js. Node 8, also known as Project Carbon, was the first LTS version of Node to support async/await with Promises. Async/await is the new way of handling asynchronous operations in Node.js. We will make use of async/await and Promises in the custom skill.
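As a quick illustration of the pattern (not taken from the skill’s code), a callback-style operation can be wrapped in a Promise, then consumed with async/await instead of nested callbacks:

// Wrap a callback-style operation in a Promise...
function getGreeting(name) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (name) {
        resolve(`Hello, ${name}!`);
      } else {
        reject(new Error('name is required'));
      }
    }, 100);
  });
}

// ...then consume it with async/await
async function main() {
  try {
    const greeting = await getGreeting('Alexa');
    console.log(greeting); // Hello, Alexa!
  } catch (err) {
    console.error(err.message);
  }
}

main();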

Demonstration

To demonstrate Alexa Custom Skills, we will build an informational skill that responds to the user with interesting facts about Azure¹, Microsoft’s Cloud computing platform (Alexa talking about Azure; ironic, I know). This is not an official Microsoft skill; it was built only for this demonstration and has not been published.

Source Code

All open-source code for this post can be found on GitHub. Code samples in this post are displayed as GitHub Gists, which may not display correctly on some mobile and social media browsers. Links to gists are also provided.

Important: this post and the associated source code were updated from v1.0 to v2.0 on 13 August 2018. If you originally cloned the GitHub project before 14 August 2018, you should clone it again to correspond with this revised post. The code changes were significant.

Objectives

The objective of the fact-based skill will be to demonstrate the following.

  • Build, deploy, and test an Alexa Custom Skill using AWS Lambda and Node.js;
  • Use DynamoDB to store and retrieve Alexa voice responses;
  • Maintain a count of user’s questions in DynamoDB using atomic counters;
  • Use Amazon S3 to store and retrieve images, used in Display Cards;
  • Log Alexa Skill activities using Amazon CloudWatch Logs.

Steps to Build

Building the Azure fact skill will involve the following steps.

  • Design the Alexa skill’s voice interaction model;
  • Design the skill’s Display Cards for Alexa-enabled products, to enhance the voice experience;
  • Create the skill’s DynamoDB table and import the responses the skill will return;
  • Create an S3 bucket and upload the images used for the Display Cards;
  • Write the Alexa Skill, which involves mapping the user’s spoken input to the intents your cloud-based service can handle;
  • Write the Lambda function, which involves responding to the user’s utterances, by building and returning appropriate voice and display card responses, from DynamoDB and S3;
  • Extend the default ASK-generated AWS IAM Role, to allow the Lambda to update DynamoDB;
  • Deploy the skill;
  • Test the skill;

Let’s explore each step in detail.

Voice Interaction Model

First, we must design the fact skill’s voice interaction model. We need to consider the way we want the user to interact with the skill. What is the user’s conversational journey? How do they invoke your skill? How will the user provide intent?

This skill will require two intent slot values: the fact the user is interested in (e.g., ‘global infrastructure’) and the user’s first name (e.g., ‘Susan’). We will train the skill to allow Alexa to query the user for each slot value, but also allow the user to provide either or both values in the initial intent invocation. We will also allow the user to request a random fact.

Shown below in the Alexa Skills Kit Development Console Test tab are three examples of interactions the skill is trained to understand and handle:

  1. The first example on the left invokes the skill with no intent (‘Alexa, load Azure Tech Facts’). The user is led through a series of three questions to obtain the full intent.
  2. The center example is similar, however, the initial invocation contains a partial intent (‘Alexa, ask Azure Tech Facts for a fact about certifications’). Alexa must still ask for the user’s name.
  3. Lastly, the example on the right is a so-called ‘one-shot’ invocation (‘Alexa, ask Azure Tech Facts about Azure’s platforms for Gary’). The user’s invocation of the skill contains a complete intent, allowing Alexa to respond immediately with a fact about Azure platforms.

alexa-skill-post-020

In all cases, our skill can continue to provide the user with additional facts if they choose, or they may cancel at any time.

We also need to design how Alexa will respond. What persona will Alexa assume through her words, phrases, and use of Speech Synthesis Markup Language (SSML)?

User Interaction Previews

Here are a few examples of interactions with the final Alexa skill using an iPhone 8 and the Alexa App. They are intended to show the rich conversational capabilities of custom skills, more so than the display, which is rather poor in the Alexa App compared to the Echo Show or even the Echo Spot.

Example 1: Indirect Invocation

The first example shows a basic interaction with our Alexa skill. It demonstrates an indirect invocation, a user utterance without initial intent. It also illustrates several variations of user utterances (YouTube).

Example 2: Direct Invocation

The second example of an interaction with our skill demonstrates a direct invocation, in which the initial user utterance contains intent. It also demonstrates the user following up with additional requests (YouTube).

Example 3: Direct Invocation, Help, Problem

Lastly, another direct invocation demonstrates the use of the Help Intent. You also see an example of when Alexa does not understand the user’s utterance. The user is able to repeat their request more clearly (YouTube).

Visual Interaction Model

Many Alexa-enabled devices are capable of both vocal and visual responses. Designing for a multimodal user experience is important. The fact skill will provide vocal responses, as well as Display Cards optimized for the Amazon Echo Show. The skill contains a basic design for the Display Card shown during the initial invocation, when the user utters no intent.

alexa-skill-post-021

The fact skill also contains a Display Card, designed to present the final Alexa response to the user’s intent. The content of the vocal and visual response is returned from DynamoDB via the Lambda function. The random Azure icons, available from Microsoft, are hosted in an S3 bucket. Each fact response is unique, as is the icon associated with the fact.

alexa-skill-post-022

The Display Cards will also work on other Alexa-enabled screen-based products. Shown below is the same card on an iPhone 8 using the Amazon Alexa app. This is the same app shown in the videos, above.

alexa-skill-post-027

DynamoDB

Next, we create the DynamoDB table used to store the facts the Alexa skill will respond with when invoked by the user. DynamoDB is Amazon’s non-relational database that delivers reliable performance at any scale. DynamoDB consists of three basic components: tables, items, and attributes.

There are numerous ways to create a DynamoDB table. For simplicity, I created the AzureFacts DynamoDB table using the AWS CLI (gist). You could also choose CloudFormation, or create the table using any of nine or more programming languages with an AWS SDK.

aws dynamodb create-table \
--table-name AzureFacts \
--attribute-definitions \
AttributeName=Fact,AttributeType=S \
--key-schema AttributeName=Fact,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=3,WriteCapacityUnits=3

The AzureFacts table’s schema has four key/value pair attributes per item: Fact, Response, Image, and Hits. The Fact attribute, a string, contains the name of the fact the user is seeking. The Fact attribute also serves as the table’s unique partition key. The Response attribute, a string, contains the conversational response Alexa will return. The Image attribute, a string, contains the name of the image in the S3 bucket displayed by Alexa. Lastly, the Hits attribute, a number, stores the number of user requests for a particular fact.
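For illustration, a single table item might look as follows, shown in simplified JSON (the values mirror the ‘global’ fact returned later in this post):

{
  "Fact": "global",
  "Response": "according to Microsoft, with 54 Azure regions, Azure has more global regions than any other cloud provider. Azure is currently available in 140 countries.",
  "Image": "image-02.png",
  "Hits": 4
}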

Importing Table Items

After the DynamoDB table is created, the pre-defined facts are imported into the empty table using AWS CLI (gist). The JSON-formatted data file, AzureFacts.json, is included with the source code on GitHub.

aws dynamodb batch-write-item \
--request-items file://data/AzureFacts.json
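For reference, files consumed by batch-write-item follow DynamoDB’s standard request format: an array of PutRequest objects keyed by table name, with each attribute value wrapped in a type descriptor (S for string, N for number). A simplified, illustrative excerpt is shown below; the actual AzureFacts.json file is in the GitHub project.

{
  "AzureFacts": [
    {
      "PutRequest": {
        "Item": {
          "Fact": {"S": "global"},
          "Response": {"S": "according to Microsoft, with 54 Azure regions, ..."},
          "Image": {"S": "image-02.png"},
          "Hits": {"N": "0"}
        }
      }
    }
  ]
}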

The resulting table should appear as follows in the AWS Management Console.

alexa-skill-post-004

Note the imported items shown below. The Hits counts reflect the number of times each fact has been requested.

alexa-skill-post-005

Shown below is a detailed view of a single item that was imported into the DynamoDB table.

alexa-skill-post-006

Amazon S3 Image Bucket

Next, we create the Amazon S3 bucket, which will house the images (Azure icons as PNGs) returned by Alexa with each fact. Again, I used the AWS CLI for simplicity (gist).

aws s3api create-bucket \
--bucket <my_bucket_name> \
--region us-east-1

The images can be uploaded manually to the bucket through a web browser, or programmatically, using the AWS CLI or SDKs. You will need to ensure the images are made public so they can be displayed by Alexa.
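If you choose the programmatic route, a minimal sketch using the AWS SDK for JavaScript in Node.js might look as follows. The bucket name is a placeholder, matching the CLI example above, and the public-read ACL makes each uploaded icon publicly readable.

// Hypothetical upload script: push a PNG icon to the skill's S3 bucket
const fs = require('fs');
const path = require('path');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({region: 'us-east-1'});
const bucketName = '<my_bucket_name>'; // placeholder, same as the CLI example

// Upload a single PNG icon and mark it publicly readable
function uploadImage(fileName) {
  const params = {
    Bucket: bucketName,
    Key: fileName,
    Body: fs.readFileSync(path.join('images', fileName)),
    ContentType: 'image/png',
    ACL: 'public-read'
  };
  return s3.putObject(params).promise();
}

uploadImage('image-02.png')
  .then(() => console.log('upload succeeded'))
  .catch(err => console.error('upload failed:', err));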

alexa-skill-post-007

Alexa Skill

Next, we create the actual Alexa custom skill. I have used version 2 of the Alexa Skills Kit (ASK) Software Development Kit (SDK) for Node.js and the new ASK Command Line Interface (ASK CLI) to create the skill. The ASK SDK v2 for Node.js was recently released in April 2018. If you have previously written Alexa skills using version 1 of the Node.js SDK, the creation of a new project and the format of the Lambda Node.js code is somewhat different. I strongly suggest reviewing the example skills provided by Amazon on GitHub.

With version 1, I would have likely used the Alexa Skills Kit Development Console to develop and deploy the skill, and a separate IDE, like JetBrains WebStorm, to write the Lambda. The JSON-formatted skill would live in the Alexa Skills Kit Development Console, and my Lambda in source control. I would have used the AWS Serverless Application Model (AWS SAM) or Claudia.js to handle the deployment of Lambda functions.

With version 2 of ASK, you can easily create and manage the Alexa skill’s JSON-formatted code, as well as the Lambda, all from the command line and a single IDE or text editor. All components that comprise the skill can be kept together in source control. I now only use the Alexa Skills Kit Development Console to preview my deployed skill and for testing. I am not going to go into detail about creating a new project using the ASK CLI; I suggest reviewing Amazon’s instructional guides.

Below, I have initiated a new AWS profile for the Alexa skill using the ask init command.

alexa-skill-post-008

There are three main parts to the new skill project created by the ASK CLI: the skill’s manifest (skill.json), model(s) (en-US.json), and API endpoint, the Lambda (index.js). The skill’s manifest, skill.json, contains information (metadata) about the skill. This is the same information you find in the Distribution tab of the Alexa Skills Kit Development Console. The manifest includes publishing information, example phrases to invoke the skill, the skill’s category, distribution locales, privacy information, and the location of the skill’s API endpoint, the Lambda. An end-user would most commonly see this information in the Amazon Alexa app when adding skills to their Alexa-enabled devices.

alexa-skill-post-026

Next, the skill’s model, en-US.json, is located in the models sub-directory. This file contains the skill’s custom interaction model, written in JSON, which includes the invocation name, intents, standard and custom slots, sample utterances, slot values, and synonyms of those values. This is the same information you would find in the Build tab of the Alexa Skills Kit Development Console. Amazon has an excellent guide to creating your custom skill’s interaction model.

Intents and Intent Slots

The skill’s custom interaction model contains the AzureFactsIntent intent, along with the boilerplate Cancel, Help, and Stop intents. The AzureFactsIntent intent contains two intent slots, myName and myQuestion. The myName intent slot is a standard AMAZON.US_FIRST_NAME slot type. According to Amazon, this slot type understands thousands of popular first names commonly used by speakers in the United States. Shown below, I have included a short list of sample utterances in the intent model, which helps improve voice recognition for Alexa (gist).

{
  "name": "AzureFactsIntent",
  "slots": [{
    "name": "myName",
    "type": "AMAZON.US_FIRST_NAME",
    "samples": [
      "{myName}",
      "my name is {myName}",
      "my name's {myName}",
      "name is {myName}",
      "the name is {myName}",
      "name's {myName}",
      "they call me {myName}"
    ]
  }]
}

Custom Slot Types and Entities

The myQuestion intent slot is a custom slot type. According to Amazon, a custom slot type defines a list of representative values for the slot. The myQuestion slot contains all the available facts the custom skill understands and can retrieve from DynamoDB. As with myName, the user can provide the myQuestion slot value in various ways (gist).

{
  "name": "myQuestion",
  "type": "list_of_facts",
  "samples": [
    "{myQuestion}",
    "give me a fact about {myQuestion}",
    "give me a {myQuestion} fact",
    "how many {myQuestion} does Azure have",
    "I'd like to hear about {myQuestion}",
    "I'd like to hear more about {myQuestion}",
    "tell me about {myQuestion}",
    "tell me about Azure's {myQuestion}",
    "tell me about Azure {myQuestion}",
    "tell me a {myQuestion} fact",
    "tell me another {myQuestion} fact",
    "when was Azure {myQuestion}"
  ]
}

This slot also contains synonyms for each fact. The slot value, its synonyms, and an optional ID are collectively referred to as an Entity. According to Amazon, entity resolution improves the way Alexa matches possible slot values in a user’s utterance with the slots defined in the skill’s interaction model.

An example of an entity in the myQuestion custom slot type is ‘competition’. A user can ask Alexa to tell them about Azure’s competition. The slot value ‘competition’ returns a fact about Azure’s leading competitors, as reported on the G2 Crowd website’s Microsoft Azure Alternatives & Competitors page. However, the user might also substitute the words ‘competitor’ or ‘competitors’ for ‘competition’. Using synonyms, if the user utters any of these three words in their intent, they will receive the same response from Alexa (gist).

"types": [{
"name": "list_of_facts",
"values": [{
"name": {
"value": "competition",
"synonyms": [
"competitors",
"competitor"
]
}
},
{
"name": {
"value": "certifications",
"synonyms": [
"certification",
"certification exam",
"certification exams"
]
}
}
]
}]

Lambda

Initializing a skill with the ASK CLI also creates the default API endpoint, a Lambda (index.js). The serverless Lambda function is written in Node.js 8.10. As mentioned in the Introduction, AWS announced support for the Node.js 8.10 runtime in April 2018. This is the first LTS version of Node to support async/await with Promises.

The layout of the custom skill’s Lambda code closely follows the custom Alexa Fact Skill example, which I suggest reviewing closely. The Lambda has four main sections: constants, setup code, intent handlers, and helper functions.

In addition to the boilerplate Help, Stop, Error, and Session intent handlers, there are the LaunchRequestHandler and the AzureFactsIntent handlers. According to Amazon, a LaunchRequestHandler fires when the Lambda receives a LaunchRequest from Alexa, in which the user invokes the skill with the invocation name, but does not provide any command mapping to an intent.

The AzureFactsIntent handler aligns with the custom intent of the same name we defined in the skill’s model (en-US.json). This handler responds to an IntentRequest from Alexa. The handler, along with the buildFactResponse function it calls, translates a user’s request for a fact into a DynamoDB request for a response.

The AzureFactsIntent handler checks the IntentRequest for both the myName and myQuestion slot values. If either value is unfulfilled, the AzureFactsIntent handler delegates responsibility back to Alexa, using a Dialog delegate directive (addDelegateDirective). Alexa then requests the missing slot values from the user in a conversational interaction, and calls the AzureFactsIntent handler again (gist).

const request = handlerInput.requestEnvelope.request;
let currentIntent = request.intent;

if (myNameValue === undefined) {
  myNameValue = slotValue(request.intent.slots.myName);
}
if (!myNameValue) {
  return handlerInput.responseBuilder
    .addDelegateDirective(currentIntent)
    .getResponse();
}

let myQuestionValue = slotValue(request.intent.slots.myQuestion);
if (!myQuestionValue) {
  return handlerInput.responseBuilder
    .addDelegateDirective(currentIntent)
    .getResponse();
}

Once both slot values are received by the AzureFactsIntent handler, it calls the buildFactResponse function, passing in the myName and myQuestion slot values. In turn, the buildFactResponse function calls AWS.DynamoDB.DocumentClient.update. The update method accepts a callback; the buildFactResponse function wraps that callback in a Promise, a standard built-in object type, part of the JavaScript ES2015 spec (gist).

function buildFactResponse(myName, myQuestion) {
  return new Promise((resolve, reject) => {
    if (myQuestion !== undefined) {
      let params = {};
      params.TableName = "AzureFacts";
      params.Key = {"Fact": myQuestion};
      params.UpdateExpression = "set Hits = Hits + :val";
      params.ExpressionAttributeValues = {":val": 1};
      params.ReturnValues = "ALL_NEW";

      docClient.update(params, function (err, data) {
        if (err) {
          console.log("UpdateItem threw an error:", JSON.stringify(err, null, 2));
          reject(err);
        } else {
          console.log("UpdateItem succeeded:", JSON.stringify(data, null, 2));
          resolve(data);
        }
      });
    }
  });
}

What is unique about the DynamoDB update call in this case is that it performs two functions. First, it implements an atomic counter. According to AWS, an atomic counter is a numeric DynamoDB attribute that is incremented, unconditionally, without interfering with other write requests. The update increments the numeric Hits attribute of the requested fact by exactly one. Second, the update returns the DynamoDB item. We can increment the count and get the response in a single call.

The buildFactResponse function’s Promise resolves with the DynamoDB item, a JSON object, from the callback. An example of a JSON response payload is shown below (gist).

"Attributes": {
"Hits": 4,
"Fact": "global",
"Image": "image-02.png",
"Response": "according to Microsoft, with 54 Azure regions, Azure has more global regions than any other cloud provider. Azure is currently available in 140 countries."
}

The AzureFactsIntent handler uses async/await to call the buildFactResponse function. Note below where the async keyword is applied directly to the handler’s handle method, and where the await keyword is used with the call to the buildFactResponse function (gist).

const AzureFactsIntent = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === "IntentRequest"
      && request.intent.name === "AzureFactsIntent";
  },
  async handle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    let currentIntent = request.intent;

    if (myNameValue === undefined) {
      myNameValue = slotValue(request.intent.slots.myName);
    }
    if (!myNameValue) {
      return handlerInput.responseBuilder
        .addDelegateDirective(currentIntent)
        .getResponse();
    }

    let myQuestionValue = slotValue(request.intent.slots.myQuestion);
    if (!myQuestionValue) {
      return handlerInput.responseBuilder
        .addDelegateDirective(currentIntent)
        .getResponse();
    }

    if (myQuestionValue.toString().trim() === 'random') {
      myQuestionValue = selectRandomFact();
    }

    let fact = await buildFactResponse(myNameValue, myQuestionValue);

    myNameValue = Object.is(myNameValue, undefined) ? undefined : capitalizeFirstLetter(myNameValue);
    let factToSpeak = `${myNameValue}, ${fact.Attributes.Response}`;
    cardContent = factToSpeak;

    // optional: logged to CloudWatch Logs
    console.log(`myName: ${myNameValue}`);
    console.log(`myQuestion: ${myQuestionValue}`);
    console.log(`factToSpeak: ${factToSpeak}`);

    return handlerInput
      .responseBuilder
      .speak(factToSpeak)
      .reprompt("You can request another fact")
      .withStandardCard(CARD_TITLE, cardContent,
        IMAGES.smallImageUrl, `${BUCKET_URL}/${fact.Attributes.Image}`)
      .getResponse();
  }
};

The AzureFactsIntent handler awaits the Promise from the buildFactResponse function. In an async function, you can await any Promise or catch its rejection cause. If the update callback and the ensuing Promise both return successfully, the AzureFactsIntent handler returns both a vocal and a visual response to Alexa.
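Although the handler above lets a rejection bubble up, the await call could be wrapped in a try/catch block to fail gracefully. A hypothetical variation, not part of the demonstration code:

// Hypothetical variation: catch a rejected Promise from buildFactResponse
// and return a graceful spoken response instead of letting the Lambda fail
let factToSpeak;
try {
  const fact = await buildFactResponse(myNameValue, myQuestionValue);
  factToSpeak = `${myNameValue}, ${fact.Attributes.Response}`;
} catch (err) {
  console.log(`buildFactResponse failed: ${err}`);
  factToSpeak = 'Sorry, I could not retrieve that fact right now. Please try again.';
}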

AWS IAM Role

By default, an AWS IAM Role was created by ASK when the project was initialized: the ask-lambda-alexa-skill-azure-facts role. This role is automatically associated with the AWS managed policy, AWSLambdaBasicExecutionRole. This managed policy simply allows the skill’s Lambda function to write to Amazon CloudWatch Logs (gist).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}

For the skill’s Lambda to read and write to DynamoDB, we must extend the default role’s permissions by adding an additional policy. I have created a new AzureFacts_Alexa_Skill IAM Policy, which allows the associated role to get and update items in the AzureFacts DynamoDB table, and nothing more. The role has access to only two of forty possible DynamoDB actions, and only for the AzureFacts table. Following the principle of Least Privilege is a cornerstone of AWS security (gist).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:931066906971:table/AzureFacts"
    }
  ]
}

Below, we see the new IAM Policy in the AWS Management Console.

alexa-skill-post-011

Below, we see the policy being applied to the skill’s IAM Role, along with the original AWS managed policy.

alexa-skill-post-012

Deploying the Skill

Version 2 of the ASK CLI makes deploying the Alexa custom skill very easy. Using the ASK CLI’s deploy command, we can validate and deploy the skill (manifest), model, and Lambda, all at once, as shown below. This makes DevOps automation of skill deployments with tools like Jenkins or AWS CodeDeploy straightforward.

alexa-skill-post-009

You can verify the skill has been deployed from the Alexa Skills Kit Development Console. You should observe the skill’s model (intents, slots, entities, and endpoints) in the Build tab, and the skill’s publishing details in the Distribution tab. Note that deploying the skill does not submit it to Amazon for review and publishing; you must still submit the skill separately.

alexa-skill-post-013

From the AWS Lambda Management Console, you should observe that the skill’s Lambda was deployed, that only the skill can trigger the Lambda, and that the correct IAM Role was applied to the Lambda, giving it access to Amazon CloudWatch Logs and Amazon DynamoDB.

alexa-skill-post-010

Testing the Skill

The ASK CLI comes with the simulate command. According to Amazon, the simulate command simulates an invocation of the skill with text-based input. Again, the ASK CLI makes DevOps test automation with tools like Jenkins or AWS CodeDeploy pretty easy (gist).

ask simulate \
--text "Load Azure Tech Facts" \
--locale "en-US" \
--skill-id "<your_skill_id>" \
--profile "default"

Below are the results of simulating the invocation. The simulate command returns the expected verbal response, including any SSML, and the visual responses (the Display Card). You could easily write an automation script to run a battery of these tests on every code commit, prior to deployment; a sketch of one such script follows the screen capture below.

alexa-skill-post-024
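As a sketch of such an automation script, the following Node.js snippet shells out to the ASK CLI for a short battery of utterances taken from earlier in this post. The skill ID remains a placeholder, just as in the CLI example above.

// Hypothetical smoke-test script: run `ask simulate` for a battery of utterances
const { execFileSync } = require('child_process');

const skillId = '<your_skill_id>'; // placeholder
const utterances = [
  'Load Azure Tech Facts',
  'Ask Azure Tech Facts for a fact about certifications',
  "Ask Azure Tech Facts about Azure's platforms for Gary"
];

for (const text of utterances) {
  // each invocation returns the simulation result as JSON on stdout
  const output = execFileSync('ask', [
    'simulate',
    '--text', text,
    '--locale', 'en-US',
    '--skill-id', skillId,
    '--profile', 'default'
  ], { encoding: 'utf8' });
  console.log(`utterance: ${text}\n${output}`);
}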

I also like to manually test my skills from the Alexa Skills Kit Development Console Test tab. You may invoke the skill using your voice or by typing the skill invocation.

alexa-skill-post-014

The Alexa Skills Kit Development Console Test tab both shows and speaks Alexa’s response. The console also displays the request and response body (JSON input/output), as well as the Display Card for an Echo Show and Echo Spot.

alexa-skill-post-015

Lastly, the Alexa Skills Kit Development Console Test tab displays the Device Log. The log captures Alexa Directives and Events. I have found the Device Log to be very helpful in troubleshooting problems with deployed skills.

alexa-skill-post-025.png

CloudWatch Logs

By default, the custom skill outputs events to CloudWatch Logs. I have added the DynamoDB callback payload, as well as the slot values of myName and myQuestion, to the logs for each successful Alexa response. CloudWatch Logs, like the Device Log above, are very helpful in troubleshooting problems with deployed skills.

alexa-skill-post-016

Conclusion

In this brief post, we have seen how to use the new ASK SDK/CLI version 2, services from the AWS Serverless Platform, and the LTS version of Node.js, to create an Alexa Custom Skill. Using the AWS Serverless Platform, we could easily extend the example to take advantage of additional serverless services, such as the use of Amazon SNS and SQS for notifications and messaging and Amazon Kinesis for analytics.

In a future post, we will extend this example, adding the capability to securely add and update our DynamoDB table’s items. We will use additional AWS services, including Amazon Cognito to authorize access to our API. We will also use Amazon API Gateway to integrate with our Lambdas, producing a completely serverless API.

¹Azure is a trademark of Microsoft

All opinions expressed in this post are my own and not necessarily the views of my current or past employers or their clients.


Scaffold a RESTful API with Yeoman, Node, Restify, and MongoDB

Using Yeoman, scaffold a basic RESTful CRUD API service, based on Node, Restify, and MongoDB.

preview

Introduction

NOTE: Generator updated on 11-13-2016 to v0.2.1.

Yeoman generators reduce the repetitive coding of boilerplate functionality and ensure consistency between full-stack JavaScript projects. For several recent Node.js projects, I created the generator-node-restify-mongodb Yeoman generator. This Yeoman generator scaffolds a basic RESTful CRUD API service, a Node application, based on Node.js, Restify, and MongoDB.

According to their website, Restify, used most notably by Netflix, borrows heavily from Express. However, while Express is targeted at browser applications, with templating and rendering, Restify is keenly focused on building API services that are maintainable and observable.

Along with Node, Restify, and MongoDB, the Node applications scaffolded by the Node-Restify-MongoDB Generator also implement Bunyan (including DTrace support), Jasmine (using jasmine-node), Mongoose, and Grunt.

Portions of the scaffolded Node application’s file structure and code are derived from what I consider the best parts of several different projects, including generator-express, generator-restify-mongo, and generator-restify.

Installation

To begin, install Yeoman and the generator-node-restify-mongodb using npm. The generator assumes you have pre-installed Node and MongoDB.

npm install -g yo
npm install -g generator-node-restify-mongodb

Then, generate the new project.

mkdir node-restify-mongodb
cd $_
yo node-restify-mongodb

Yeoman scaffolds the application, creating the directory structure, copying required files, and running ‘npm install’ to load the npm package dependencies.

preview

 

Using the Generated Application

Next, import the supplied set of sample widget documents into the local development instance of MongoDB from the supplied ‘data/widgets.json’ file.

NODE_ENV=development grunt mongoimport --verbose

Similar to Yeoman’s Express Generator, this application contains configuration for three typical environments: ‘Development’ (default), ‘Test’, and ‘Production’. If you want to import the sample widget documents into your Test or Production instances of MongoDB, first change the ‘NODE_ENV’ environment variable value.

NODE_ENV=production grunt mongoimport --verbose

To start the application in a new terminal window, use the following command.

npm start

The output should be similar to the example, below.

npm_start_output

Tests are written using JSHint and the jasmine-node module. Before testing, the sample documents must be imported into MongoDB and the application must be running (see above). To test the application, open a separate terminal window, and use the following command.

npm test

The project contains a set of jasmine-node tests, split between the ‘/widgets’ and the ‘/utils’ endpoints. If the application is running correctly, you should see the following output from the tests.

npm_test_output

Similarly, the following command displays a code coverage report, using the grunt, mocha, istanbul, and grunt-mocha-istanbul node modules.

grunt coverage

Grunt uses the grunt-mocha-istanbul module to execute the same set of jasmine-node tests as shown above. Based on those tests, the application’s code coverage (statement, line, function, and branch coverage) is displayed.

npm_coverage_output.png

You may test the running application, directly, by cURLing the ‘/widgets’ endpoints.

curl -X GET -H "Accept: application/json" "http://localhost:3000/widgets"

For more legible output, try prettyjson.

npm install -g prettyjson
curl -X GET -H "Accept: application/json" "http://localhost:3000/widgets" --silent | prettyjson
curl -X GET -H "Accept: application/json" "http://localhost:3000/widgets/SVHXPAWEOD" --silent | prettyjson

The JSON-formatted response body from the HTTP GET requests should look similar to the output, below.

curl_test_output

A much better RESTful API testing solution is Postman. Postman provides the ability to individually configure each environment and abstract that environment-specific configuration, such as host and port, from the actual HTTP requests.

Postman_Widget_example

Continuous Integration

As part of being published to both the npmjs and Yeoman registries, the generator-node-restify-mongodb generator is continuously integrated on Travis CI. This should provide an additional level of confidence to the generator’s end-users. Currently, Travis CI tests the generator against Node.js v4, v5, and v6, as well as io.js. Older versions of Node.js may have compatibility issues with the application.

travisci_test_output

Additionally, Travis CI feeds test results to Coveralls, which displays the generator’s code coverage. Note the code coverage, shown below, is reported for the Yeoman generator, not the generator’s scaffolded application. The scaffolded application’s coverage is shown above.

coveralls_results

Application Details

API Endpoints

The scaffolded application includes the following endpoints.

// widget resources
var PATH = '/widgets';
server.get({path: PATH, version: VERSION}, findDocuments);
server.get({path: PATH + '/:product_id', version: VERSION}, findOneDocument);
server.post({path: PATH, version: VERSION}, createDocument);
server.put({path: PATH, version: VERSION}, updateDocument);
server.del({path: PATH + '/:product_id', version: VERSION}, deleteDocument);

// utility resources
var PATH = '/utils';
server.get({path: PATH + '/ping', version: VERSION}, ping);
server.get({path: PATH + '/health', version: VERSION}, health);
server.get({path: PATH + '/info', version: VERSION}, information);
server.get({path: PATH + '/config', version: VERSION}, configuraton);
server.get({path: PATH + '/env', version: VERSION}, environment);

The Widget

The Widget is the basic document object used throughout the application. It is used, primarily, to demonstrate Mongoose’s Model and Schema. The Widget object contains the following fields, as shown in the sample widget, below.

{
  "product_id": "4OZNPBMIDR",
  "name": "Fapster",
  "color": "Orange",
  "size": "Medium",
  "price": "29.99",
  "inventory": 5
}
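For illustration, the Mongoose schema implied by the sample widget might resemble the following sketch. The field names come from the sample document above; the actual schema lives in the scaffolded application’s source.

// Illustrative Mongoose schema and model for the Widget document
var mongoose = require('mongoose');

var widgetSchema = new mongoose.Schema({
  product_id: { type: String, required: true, unique: true },
  name: String,
  color: String,
  size: String,
  price: String, // stored as a string in the sample data
  inventory: Number
});

module.exports = mongoose.model('Widget', widgetSchema);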

MongoDB

Use the mongo shell to access the application’s MongoDB instance and display the imported sample documents.

mongo
 > show dbs
 > use node-restify-mongodb-development
 > show tables
 > db.widgets.find()

The imported sample documents should be displayed, as shown below.

mongo_terminal_output

Environmental Variables

The scaffolded application relies on several environment variables to determine its environment-specific runtime configuration. If these environment variables are not set, the application defaults to the Development environment values, as shown below, in the application’s ‘config/config.js’ file.

var NODE_ENV   = process.env.NODE_ENV   || 'development';
var NODE_HOST  = process.env.NODE_HOST  || '127.0.0.1';
var NODE_PORT  = process.env.NODE_PORT  || 3000;
var MONGO_HOST = process.env.MONGO_HOST || '127.0.0.1';
var MONGO_PORT = process.env.MONGO_PORT || 27017;
var LOG_LEVEL  = process.env.LOG_LEVEL  || 'info';
var APP_NAME   = 'node-restify-mongodb-';

Future Project TODOs

Future project enhancements include the following:

  • Add filtering, sorting, field selection and paging
  • Add basic HATEOAS-based response features
  • Add authentication and authorization to production MongoDB instance
  • Convert from outdated jasmine-node to Jasmine?


Scaffolding Modern Web Applications

Scaffold three full-stack modern web application examples using Yeoman with npm, Bower, and Grunt.

Introduction

The capabilities of modern web applications have quickly matched and surpassed those of most traditional desktop applications. However, with the increase in capabilities comes an increase in the architectural complexity of web applications. To help deal with that complexity, developers have benefitted from a plethora of popular support libraries, frameworks, APIs, and similar tooling. Examples include AngularJS, React.js, Play!, Node.js, Express, npm, Yeoman, Bower, Grunt, and Gulp.

With so many choices, selecting an optimal toolset to construct a modern web application can be overwhelming. In this post, we will examine three distinct modern web application architectures, predominantly JavaScript-based. The post will discuss how to select and install the required tools, and how to use those tools to scaffold the applications. By the end of the post, we will have three working web applications, ready for further development.

Generators

The post’s examples use Yeoman generators. What is a generator? According to Wikipedia, Yeoman’s generator concept was inspired by Ruby on Rails. Using a generator’s blueprint, Yeoman scaffolds an application’s project directory and file structure, and installs required vendor libraries and dependencies. Yeoman generators usually run interactively, guiding the developer through a series of configuration questions. The configuration choices determine the project’s physical structure and components installed by Yeoman.

Generators are inherently opinionated. They dictate a particular application architecture they feel represents current best practices. However, good generators also allow developers to select from a range of architectural choices to meet the requirements of a developer’s environment. For example, a generator might allow Grunt or Gulp for task automation, or allow either Less or Sass for styling the UI. Similar to npm, RubyGems, Bower, Docker Hub, and Puppet Forge, Yeoman provides a searchable public repository. This allows developers to choose from a variety of generators to meet their specific needs.

Preparing the Development Environment

The examples in this post were built on Mac OS X. However, all the tools discussed in the post are available on the three major platforms, Linux, Mac, and Windows. Installation and configuration will vary.

An IDE is not required to scaffold the post’s application examples. However, for further development of the applications, I strongly recommend JetBrains’ WebStorm. According to Slant, WebStorm is a popular, highly rated IDE for building modern web applications. WebStorm is available on all three major platforms. A paid license is required, but it is well worth the reasonable investment based on the IDE’s rich feature set. WebStorm is integrated with many popular JavaScript frameworks. Additionally, there are hundreds of plug-ins available to extend WebStorm’s functionality.

Each of the post’s examples varies, architecturally. However, each also shares several common components, which we will install. They include:

npm
We will use npm (aka Node Package Manager), a leading server-side package manager, to manage the application’s server-side JavaScript dependencies.

Node.js
We will use Node.js, a JavaScript runtime, to power the JavaScript-based web application examples.

Bower
Similar to npm, Bower, is a popular client-side package manager. We will use Bower to manage the application’s client-side JavaScript dependencies.

Yeoman
We will use Yeoman, a leading web application scaffolding tool, to quickly build the frameworks for the example applications based on best practices and tooling.

Grunt
We will use Grunt, a leading JavaScript task runner, to automate common tasks such as minification, compilation, unit-testing, linting, and packaging of applications for deployment. At least two of the three examples also offer Gulp as an alternative.

Express
We will use Express, a Node.js web application framework that provides a robust set of features for web applications development, to support our example applications.

MongoDB
We will use MongoDB, a leading open-source NoSQL document database, for all three examples. For two of the examples, you can easily substitute alternate databases, such as MySQL, when configuring the application with Yeoman. The choice of database is of secondary importance in this post.

First install Node.js, which comes packaged with npm. Then, use npm to install Bower, Yeoman, and Grunt. Make sure you run the command to fix the permissions for npm. If permissions are set correctly, you should not have to use sudo with your npm commands.

The global mode option (-g) installs packages globally. Packages are usually installed globally, only if they are used as a command line tool, such as with Bower, Yeoman, and Grunt.

The easiest way to install Node.js and npm on OS X, is with Homebrew:

# install homebrew and confirm
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew update
brew doctor
export PATH="/usr/local/bin:$PATH"
brew --version
# install node and npm with brew and confirm
brew install node
node --version; npm --version

Alternatively, install Node and npm by downloading the package installer for Mac OS X (x64) from nodejs.org. If you are not currently a Homebrew user, I suggest this route. At the time of this post, you have the choice of Node.js version v4.2.3 LTS or v5.1.1 Stable.

Fix the potential permission problem with npm, so sudo is not required:

sudo chown -R `whoami` $(npm config get prefix)

Then, install Yeoman, Bower, and Grunt, globally:

npm install -g bower yo grunt-cli
bower --version; yo --version; grunt --version

Lastly, install MongoDB. Similar to Node, we can use Homebrew, or download and install MongoDB from the MongoDB website. Review this page for detailed installation and configuration instructions. To install MongoDB with Homebrew, we would issue the following commands:

brew update
brew install mongodb
mongod --version
sudo mkdir -p /data/db
sudo chown -R `whoami` /data/db/
mongod

Example #1: Server-Centric Express Application

For our first example, we will scaffold a server-centric JavaScript web application using Pete Cooper’s Express Generator (v2.9.2). According to the project’s GitHub site, the Express Generator is ‘An Expressjs generator for Yeoman, based on the express command line tool.’ I suggest reading the project’s documentation before continuing; it describes the generator’s functionality in greater detail.

To install, download the Express Generator with npm, and install with Yeoman, as follows:

npm install -g generator-express
yo express

As part of the Express Generator’s configuration process, Yeoman will ask a series of configuration questions. The Express generator offers several choices for scaffolding the application. For this example, we will choose the following options: MVC, Marko, Sass, MongoDB, and Grunt.

Using Express-Generator with Yeoman

For those less familiar with developing full-stack JavaScript applications, some of the generator’s choices may be unfamiliar, such as view engines, CSS preprocessors, and build tools. For this example, we will select Marko, a highly regarded JavaScript templating engine (aka view engine), for the first application. You can compare different engines on Slant. For CSS preprocessors, you can also refer to Slant for a comparison of leading candidates. We will choose Sass.

Lastly, for a build tool (aka task runner) we will choose Grunt. Grunt and Gulp are the two most popular choices. Either is a proven tool for automation tasks such as minification, compilation, unit-testing, linting, and packaging applications for deployment.

As shown below, Yeoman creates a series of files and directories and installs JavaScript libraries with npm and Bower. Choices are based on best practices, as prescribed by the generator.

Default Express-Generator File Structure

npm

Yeoman uses npm and Bower to install the generator’s required packages. Based on our five configuration choices for the Express Generator, npm installed over 225 packages in the project’s local node_modules directory. This includes primary and secondary npm package dependencies. For example, Marko, one of our chosen options, has 24 dependencies of its own. In turn, each of those packages may have more dependencies. You quickly see why npm, and other similar package managers, are invaluable to building and managing a modern web application. The npm dependencies are declared in the package.json file, in the project’s root directory.

Partial List of npm Packages Installed
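For illustration only, the dependencies section of the generated package.json might contain entries along these lines; the exact packages and version ranges depend on the generator release and the options chosen.

{
  "dependencies": {
    "express": "4.x",
    "marko": "2.x",
    "mongoose": "4.x",
    "body-parser": "1.x"
  },
  "devDependencies": {
    "grunt": "0.4.x"
  }
}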

We will still need to install a few more items. We chose Sass as an Express generator option. Sass requires Ruby, which comes preinstalled on Mac OS X. If you wish, you can upgrade your pre-installed version of Ruby with Homebrew, but it is not required. Sass is installed with RubyGems, a package manager for Ruby. To automate the Sass-related tasks with Grunt, we also need to install the Grunt plug-in for Sass, grunt-contrib-sass, using npm:

# optional install updated version of Ruby and confirm
brew install ruby
ruby --version
# install grunt-contrib-sass
npm install grunt-contrib-sass --save-dev
# install sass and confirm
gem install sass
gem list sass

The Express Generator’s tests are written in Mocha. Mocha is an asynchronous JavaScript test framework running on Node.js. The Mocha website suggests installing Mocha globally with npm. Mocha can then be run from the command line:

npm install -g mocha
mocha --version

Up and Running

Simply running the grunt command will start the default Generator-Express MVC application. While in development, I prefer to run Grunt with the debug option (grunt -d) and/or with the verbose option (grunt -v or grunt -dv). These options offer enhanced feedback, especially on which tasks are run by Grunt. Review the terminal output to make sure the application started properly.

Starting the Application with Grunt

To confirm the Express application started correctly, in a second terminal window, curl the application using curl -I localhost:3000. Easier yet, point your web browser to localhost:3000. You should see the following default web page.

Default Express-Generator Application

Example #2: Full-Stack MEAN Application

In our second example, we will scaffold a true full-stack JavaScript web application using Tyler Henkel’s popular AngularJS Full-Stack Generator for Yeoman (v3.0.2). According to the project’s GitHub site, the generator is a ‘Yeoman generator for creating MEAN stack applications, using MongoDB, Express, AngularJS, and Node – lets you quickly set up a project following best practices.’ As with the earlier example, I suggest you read the project’s documentation before continuing.

To install the AngularJS Full-Stack Generator, download it with npm and install with Yeoman:

npm install -g generator-angular-fullstack
mkdir demo-web-app-2; cd $_
yo angular-fullstack meanapp

Similar to the Express example, Yeoman will ask a series of configuration questions. We will choose the following options: Jade, Less, ngRoute, Bootstrap, UI Bootstrap, and MongoDB with Mongoose. AngularJS, Express, and Grunt are installed by the generator, automatically. For the sake of brevity, we will not include other available options, including Babel for ES6, OAuth authentication, or socket.io.

AngularJS Full-Stack Generator Config Options

PhantomJS

After generating the AngularJS Full-Stack project, I received errors regarding PhantomJS. According to several sources, this is not uncommon. The AngularJS Full-Stack project uses PhantomJS as the default browser for Karma, the popular test runner, designed by the AngularJS team. Although npm installed PhantomJS locally, as part of the project, Karma complained about missing the path to the PhantomJS binary. To eliminate the issue, I installed PhantomJS globally with npm. I then manually added the PhantomJS binary path to the $PATH environment variable:

npm install -g phantomjs
sudo vi ~/.bash_profile
# add PHANTOMJS_BIN environment variable (next two lines)
# your file/line may vary
export PHANTOMJS_BIN=/usr/local/lib/node_modules/phantomjs/bin
export PATH=${PHANTOMJS_BIN}:$PATH
phantomjs --version

To test Karma, with PhantomJS, run the grunt test command. This should result in error-free output, similar to the following.

Running "karma:unit" (karma) task

Running “karma:unit” (karma) task

Client/Server Architecture

Similar to the previous example, Yeoman creates a series of files and directories, and installs JavaScript packages on the server and client sides with npm and Bower.

AngularJS Full-Stack Generator File Structure

Both the Express and AngularJS examples share several common files and directories. However, one major difference between the two is the client/server oriented directory structure of the AngularJS generator. Unlike the Express example, the AngularJS example has both a client and a server directory. The server-side of the application (aka back-end) is driven primarily by Express and Node. Mongoose serves as an interface between our application’s domain model and MongoDB, on the server-side. Also, on the server-side, Jade is used for HTML templating. The client-side of the application (aka front-end) is driven primarily by AngularJS. Twitter’s Bootstrap and Bootstrap UI offer a responsive web interface for our example application.

Full-Stack Server/Client File Structure

Up and Running

Running the grunt serve command will eventually start the default AngularJS Full-Stack application, after running a series of pre-defined Grunt tasks.

AngularJS Full-Stack App Starting with Grunt

Review the terminal output to make sure the application started properly. You may see warnings suggesting the installation of several dependencies globally, as well as warnings about outdated dependency versions. Outdated versions are one of the challenges with generators that are not constantly refreshed and tested against the latest package dependencies. Warnings shouldn’t prevent the application from starting; only errors will.

AngularJS Full-Stack App Started with Grunt

To confirm the application started, in a second terminal window, curl the application using curl -I localhost:9000. Easier yet, point your web browser at localhost:9000. The default web page for the AngularJS Full-Stack web application is much more elaborate than the previous Express example. This is thanks to the Bootstrap and AngularJS client-side components.

AngularJS Full-Stack App Running in Browser

Additional Generator Features

The AngularJS Full-Stack generator is capable of generating more than just the default application project; it contains a set of sub-generators. Beyond generating the basic application framework, you may use the generator to create boilerplate code for AngularJS and Node.js components for endpoints, services, routes, models, controllers, directives, and filters. The generators also provide the ability to prepare your application for deployment to OpenShift and Heroku.

The best place to review available options for the generators is on the GitHub sites. You can display a high-level list of the generator’s features using the yo --help command. Below are the three generators used in this post.

Below is an example of generating additional application components using the AngularJS Full-Stack generator. First, scaffold a server-side Express RESTful API endpoint, called ‘user’. The single command generates a server-side directory structure and several boilerplate files, including an Express model, controller, and router, and Mocha tests.

List of Installed Yeoman Generators

Next, generate a client-side AngularJS service, which connects to the server-side Express RESTful ‘user’ endpoint, above. The command creates a boilerplate AngularJS service and a Mocha test. Lastly, create an AngularJS route. This generator command creates a boilerplate AngularJS route and controller, a Mocha test, a Jade view template, and a Less file.

AngularJS Full-Stack Component Generators

Example #3: Java Hipster Application

In the third and last example, we will scaffold another full-stack web application. However, this time, we will use a generator that relies on Java EE as the primary development platform on the server-side, as opposed to JavaScript. JavaScript will be relegated largely to the client-side.

Again, we will use a Yeoman generator, JHipster, built by Julien Dubois and team, to scaffold the application (v2.25.0). According to the project’s GitHub site, JHipster uses a robust server-side Java EE stack with Spring Boot and Maven. JHipster’s mobile-first front-end is enabled with AngularJS and Bootstrap. Being the most complex of the three examples, it’s important to review the project documentation.

JHipster offers three ways to install the application: 1) locally, 2) in a Docker container, or 3) in a Vagrant VM. We will install the application framework locally, so as not to introduce additional complexity. To install the generator, download the JHipster generator with npm, and install with Yeoman:

npm install -g generator-jhipster
mkdir demo-web-app-3; cd $_
yo jhipster

Again, Yeoman will ask a series of configuration questions on behalf of JHipster. For this example, we will choose the following options: token-based authentication, MongoDB, Maven, Grunt, LibSass (Sass), and Gatling for testing. AngularJS and Bootstrap are installed automatically. We have chosen not to include other configuration options in this example, such as Angular Translate, WebSockets, and clustered HTTP sessions.

Default JHipster Generator Options

Once Yeoman finishes scaffolding the application, you should see the following output.

JHipster Generator Install Complete

Maven Project Structure

The file and directory structure of JHipster is very different from the previous two examples. The first two examples’ project structures are typical of JavaScript projects. In contrast, the JHipster example’s structure is more typical of a Maven-based Java project. In the JHipster project, the client-side JavaScript files are in the /src/main/webapp/ directory. The presence of the webapp directory reflects the project’s reliance on the Spring MVC web framework. Additionally, npm has loaded the required JavaScript packages into the node_modules directory, in the project’s root directory.

Default JHipster File Structure

Up and Running

Running the mvn command will start the default JHipster application. The URL for the JHipster application is included in the terminal output.

JHipster Application Running with Maven

To confirm the application has started, curl the application in a second terminal window, using curl -I localhost:8080. Easier yet, point your web browser to localhost:8080. Again, thanks to Bootstrap and AngularJS, the application presents a rich client UI.

Default JHipster Application Running

Conclusion

The post’s examples represent a narrow sampling of available modern web application stacks, which can be easily scaffolded with generators. The JavaScript space continues to evolve rapidly. Even within the realm of JavaScript-based solutions, we didn’t examine several other popular frameworks, such as Meteor, Facebook’s ReactJS, Ember, Backbone, and Polymer. They are all worth exploring, along with the hundreds of popular supporting frameworks, libraries, and APIs.

Useful Links

  • Post: JavaScript Frameworks: The Best 10 for Modern Web Apps (link)
  • Post: Best Web Frameworks (link)
  • Post: State Of Web Development 2014 (link)
  • YouTube: Modern Front-end Engineering (link)
  • YouTube: WebStorm – Things You Probably Didn’t Know (link)
  • Website: MEAN Stack (link)
  • Website: Full Stack Python (link)
  • Post: Best Full Stack Web Framework (link)


Calling Third-Party HTTP-based RESTful APIs from the MEAN Stack

An example of calling Google’s Custom Search HTTP-based RESTful API, using Node.js with Express and Request, from a MEAN.io-generated MEAN stack application.

Introduction

Most MEAN stack articles and tutorials demonstrate how AngularJS, on the client-side, calls Node.js with Express on the server-side, via an HTTP-based RESTful API. In turn, on the server-side, Node.js with Express, and often an ODM like Mongoose, calls MongoDB. Below is a simple, high-level sequence diagram of a typical MEAN stack request/response data flow from the client to the server to the database, and back.

Typical MEAN Stack Request/Response Data Flow

However, in many situations, applications don’t only call into their own application stack. Applications often call third-party HTTP-based RESTful APIs, including social networks, cloud providers, e-commerce, and news aggregators. Twitter’s REST API and Facebook Graph API are two popular social network examples. Within larger enterprise environments, applications call multiple internal applications. For example, an online retailer’s storefront application accesses their own inventory control system via RESTful URIs. This is the same RESTful API the retailer’s authorized resellers use to interact with the retailer’s inventory control system.

Calling APIs from the MEAN Stack

From the Client-Side
There are two ways to call third-party HTTP-based APIs from a MEAN stack application. The first approach is calling directly from the client-side: AngularJS calls the third-party API directly. All logic is on the client-side, instead of on the server-side; Node.js and Express are not involved in the process. This approach requires fewer moving parts than the next approach, but is less secure and places more demand on the client to handle the application’s business logic. Below is a simple, high-level sequence diagram demonstrating a request/response data flow from AngularJS on the client-side to a third-party API, and back.

Example AngularJS/Third-Party API Request/Response Data Flow
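
To make the client-side approach more concrete, below is a minimal sketch of an AngularJS controller calling a third-party API directly with $http. The endpoint URL and the shape of the response are hypothetical placeholders, not a real API.

// minimal sketch of the client-side approach (hypothetical endpoint and response shape)
angular.module('demoApp', [])
  .controller('DirectCallController', function ($scope, $http) {
    $scope.results = [];
    // AngularJS calls the third-party API directly; Node.js and Express are not involved
    $http.get('https://api.example.com/v1/items?limit=10')
      .then(function (response) {
        $scope.results = response.data; // response shape depends on the third-party API
      }, function (error) {
        console.error(error);
      });
  });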

From the Server-Side
The second approach, using Node.js and Express on the server-side, is slightly more complex. However, this approach is also more architecturally sound, scalable, secure, and performant. AngularJS, on the client-side, calls Node.js with Express, on the server-side. Node.js with Express then calls the service and passes the response back to the client-side, to AngularJS. Below is a simple, high-level sequence diagram demonstrating a request/response data flow from the client-side to the server-side, to a third-party API, and back.

Example Node.js/Third-Party API Request/Response Data Flow
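
For comparison, here is a rough sketch of the server-side approach. AngularJS calls the application’s own Express route, and Express, using the Request package, calls the third-party API and passes the response back. The route and endpoint below are hypothetical.

// rough sketch of the server-side approach (hypothetical route and endpoint)
var express = require('express');
var request = require('request');
var app = express();

// AngularJS calls this route; Express calls the third-party API on the client's behalf
app.get('/api/items', function (req, res) {
  request.get('https://api.example.com/v1/items?limit=10', function (error, response, body) {
    if (error) {
      return res.status(500).send(error);
    }
    res.json(JSON.parse(body)); // pass the third-party response back to AngularJS
  });
});

app.listen(3000);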

Example

MEAN.io
Using the MEAN.io ‘FullStack JS Development’ framework, I have created a basic example of calling Google’s Custom Search HTTP-based RESTful API, from Node.js with Express and Request. MEAN.io provides a ready-made MEAN stack boilerplate framework/generator, saving a lot of coding time. Regardless of the generator or framework you choose, you would architect this example the same way.

Google Custom Search API
Google provides the Custom Search API as part of their Custom Search, one of many APIs available through the Google Developers portal. According to Google, “the JSON/Atom Custom Search API lets you develop websites and applications to retrieve and display search results from Google Custom Search programmatically. With this API, you can use RESTful requests to get either web search or image search results in JSON or Atom format.”

Google APIs Explorer – Exploring Custom Search API

In order to use the Custom Search API, you will need to first create a Google account, API project, API key, Custom Search Engine (CSE), and CSE ID, through Google’s Developers Console. If you have previously worked with Google, Facebook, or Twitter APIs, creating an API project, CSE, API key, and CSE ID is very similar.

Google Custom Search – Your Search Engine ID

Like most of Google’s APIs, the Custom Search API pricing and quotas depend on the engine’s edition. You have a choice of two engines. According to Google, the free Custom Search Engine provides 100 search queries per day for free. If you need more, you may sign up for billing in the Developers Console. Additional requests cost $5 per 1000 queries, up to 10k queries per day. The limit of 100 is more than enough for this demonstration.

Installing and Configuring the Project

All the code for this project is available on GitHub at /meanio-custom-search. Before continuing, make sure you have the prerequisite software installed – Git, Node.js with npm, and MongoDB. To install the GitHub project, follow these commands:

git clone https://github.com/garystafford/meanio-custom-search.git
cd meanio-custom-search
npm install

Alternatively, if you want to code the project yourself, these are the commands I used to set up the base MEAN.io framework and create the ‘search‘ package:

sudo npm install -g mean-cli
mean init meanio-custom-search
cd meanio-custom-search
npm install
mean package search

After creating your own CSE ID and API key, create two environment variables, GOOGLE_CSE_ID and GOOGLE_API_KEY, to hold the values.

echo "export GOOGLE_API_KEY=<YOUR_API_KEY_HERE>" >> ~/.bashrc
echo "export GOOGLE_CSE_ID=<YOUR_CSE_ID_HERE>"   >> ~/.bashrc

The code is run from a terminal prompt with the grunt command. Then, in the browser, go to http://localhost:3000. Once on the main home page, you can navigate to the ‘Search Example’ page and input a search term, such as ‘MEAN Stack’. All the instructions on the MEAN.io GitHub site apply to this project.

The Project’s Architecture

According to MEAN.io, everything in mean.io is a ‘package’. When extending mean with custom functionality, you create a new ‘package’. In this case, I have created a ‘search’ package with the command above, ‘mean package search‘. Below is the basic file structure of the ‘search‘ package, within the overall MEAN.io project framework. The ‘public‘ folder contains all the client-side AngularJS code. The ‘server‘ folder contains all the server-side Node.js/Express/Request code. Note that each ‘package’ also has its own ‘package.json‘ npm file and ‘bower.json‘ Bower file.

Folder Structure of Search Package with Callouts

The simple, high-level sequence diagram below shows the flow of the custom search request from the ‘Search Example’ view to the Google Custom Search API. The diagram also shows the response from the Google Custom Search API all the way back up the MEAN stack to the client-side view.

High-Level Custom Search API Request/Response Data Flow

Client-Side Request/Response
If you view the network traffic in your web browser, you will see a RESTful URI call made between AngularJS’ service factory, on the client-side, and Node.js with Express, on the server-side. The RESTful endpoint, called with $http.jsonp(), will be similar to: http://localhost:3000/customsearch/MEAN.io/10?callback=angular.callbacks._0. In actuality, the callback parameter name, in the AngularJS service factory, is ‘JSON_CALLBACK‘. This is replaced by AngularJS with an incremented ‘angular.callbacks._X‘ parameter name, making the response callback name incremental and unique.
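
Below is an illustrative sketch of what such a service factory might look like, using $http.jsonp() with the ‘JSON_CALLBACK‘ placeholder. The module and factory names are assumptions for illustration; see the GitHub project for the actual code.

// illustrative sketch of the client-side factory (names are assumptions; see the GitHub project)
angular.module('mean.search')
  .factory('searchFactory', function ($q, $http) {
    return {
      getSearchResults: function (searchTerm, size) {
        var deferred = $q.defer();
        // AngularJS replaces 'JSON_CALLBACK' with an incremented 'angular.callbacks._X'
        $http.jsonp('/customsearch/' + encodeURIComponent(searchTerm) +
            '/' + size + '?callback=JSON_CALLBACK')
          .then(function (response) {
            deferred.resolve(response);
          }, function (error) {
            deferred.reject(error);
          });
        return deferred.promise;
      }
    };
  });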

The response returned to AngularJS from Node.js is a subset of the full response from Google’s Custom Search API. Only the search result items and a ‘200’ status code are returned to AngularJS as JSONP, wrapped in a callback. Below is a sample response, truncated to just a single search result. I have highlighted the four fields that are displayed in the ‘Search Example’ view, using AngularJS’ ng-repeat directive.

/**/
typeof angular.callbacks._0 === 'function' && angular.callbacks._0({
    "statusCode": 200,
    "items"     : [{
        "kind"            : "customsearch#result",
        "title"           : "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack ...",
        "htmlTitle"       : "<b>MEAN</b>.<b>IO</b> - MongoDB, Express, Angularjs Node.js powered fullstack <b>...</b>",
        "link"            : "http://mean.io/",
        "displayLink"     : "mean.io",
        "snippet"         : "MEAN - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.",
        "htmlSnippet"     : "<b>MEAN</b> - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.",
        "cacheId"         : "_CZQNNP6VMEJ",
        "formattedUrl"    : "mean.io/",
        "htmlFormattedUrl": "<b>mean</b>.<b>io</b>/",
        "pagemap"         : {
            "cse_image"    : [{"src": "http://i.ytimg.com/vi/oUtWtSF_VNY/hqdefault.jpg"}],
            "cse_thumbnail": [{
                "width" : "259",
                "height": "194",
                "src"   : "https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSIVwPo7OcW9u_b3P3DGxv8M7rKifGZITi1Bhmpy10_I2tlUqjRUVVUBKNG"
            }],
            "metatags"     : [{
                "viewport"      : "width=1024",
                "fb:app_id"     : "APP_ID",
                "og:title"      : "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework - MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework",
                "og:description": "MEAN  MongoDB, ExpressJS, AngularJS, NodeJS.",
                "og:type"       : "website",
                "og:url"        : "APP_URL",
                "og:image"      : "APP_LOGO",
                "og:site_name"  : "MEAN.IO",
                "fb:admins"     : "APP_ADMIN"
            }]
        }
    }]
});

Server-Side Request/Response
On the server-side, Node.js with Express and Request calls the Google Custom Search API via a RESTful URI. The RESTful URI, called with request.get(), will be similar to: https://www.googleapis.com/customsearch/v1?cx=ed026i714398r53510g2ja1ru6741h:73&q=MEAN.io&num=10&key=jtHeNjIAtSa1NaWJzmVvBC7qoubrRSyIAmVJjpQu. Note the URI contains both your CSE ID and API key (not my real ones, of course). The JSON response from Google’s Custom Search API contains additional data, which is not necessary to display the results.
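
As a rough sketch, the server-side piece might resemble the Express route below, which builds the Custom Search URI from the environment variables and returns only the fields the client needs. The actual project code may differ.

// rough sketch of the server-side route (the project's actual code may differ)
var request = require('request');

module.exports = function (app) {
  app.get('/customsearch/:searchTerm/:size', function (req, res) {
    var uri = 'https://www.googleapis.com/customsearch/v1' +
      '?cx=' + process.env.GOOGLE_CSE_ID +
      '&q=' + encodeURIComponent(req.params.searchTerm) +
      '&num=' + req.params.size +
      '&key=' + process.env.GOOGLE_API_KEY;

    request.get(uri, function (error, response, body) {
      if (error) {
        return res.status(500).send(error);
      }
      // return only the search result items and a status code, JSONP-wrapped for the client
      res.jsonp({statusCode: 200, items: JSON.parse(body).items});
    });
  });
};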

Shown below is a sample response with a single search result. Like the URI above, the response from Google has your Custom Search Engine ID. Your CSE ID and API key should both be considered confidential and not visible to the client. The CSE ID could be easily intercepted in both the URI and the response object, and used without your authorization. Google has a page that suggests methods to keep your keys secure.

{
  kind: "customsearch#search",
  url: {
    type: "application/json",
    template: "https://www.googleapis.com/customsearch/v1?q={searchTerms}&num={count?}&start={startIndex?}&lr={language?}&safe={safe?}&cx={cx?}&cref={cref?}&sort={sort?}&filter={filter?}&gl={gl?}&cr={cr?}&googlehost={googleHost?}&c2coff={disableCnTwTranslation?}&hq={hq?}&hl={hl?}&siteSearch={siteSearch?}&siteSearchFilter={siteSearchFilter?}&exactTerms={exactTerms?}&excludeTerms={excludeTerms?}&linkSite={linkSite?}&orTerms={orTerms?}&relatedSite={relatedSite?}&dateRestrict={dateRestrict?}&lowRange={lowRange?}&highRange={highRange?}&searchType={searchType}&fileType={fileType?}&rights={rights?}&imgSize={imgSize?}&imgType={imgType?}&imgColorType={imgColorType?}&imgDominantColor={imgDominantColor?}&alt=json"
  },
  queries: {
    nextPage: [
      {
        title: "Google Custom Search - MEAN.io",
        totalResults: "12100000",
        searchTerms: "MEAN.io",
        count: 10,
        startIndex: 11,
        inputEncoding: "utf8",
        outputEncoding: "utf8",
        safe: "off",
        cx: "ed026i714398r53510g2ja1ru6741h:73"
      }
    ],
    request: [
      {
        title: "Google Custom Search - MEAN.io",
        totalResults: "12100000",
        searchTerms: "MEAN.io",
        count: 10,
        startIndex: 1,
        inputEncoding: "utf8",
        outputEncoding: "utf8",
        safe: "off",
        cx: "ed026i714398r53510g2ja1ru6741h:73"
      }
    ]
  },
  context: {
    title: "my_search_engine"
  },
  searchInformation: {
    searchTime: 0.237431,
    formattedSearchTime: "0.24",
    totalResults: "12100000",
    formattedTotalResults: "12,100,000"
  },
  items: [
    {
      kind: "customsearch#result",
      title: "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack ...",
      htmlTitle: "<b>MEAN</b>.<b>IO</b> - MongoDB, Express, Angularjs Node.js powered fullstack <b>...</b>",
      link: "http://mean.io/",
      displayLink: "mean.io",
      snippet: "MEAN - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.",
      htmlSnippet: "<b>MEAN</b> - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.",
      cacheId: "_CZQNNP6VMEJ",
      formattedUrl: "mean.io/",
      htmlFormattedUrl: "<b>mean</b>.<b>io</b>/",
      pagemap: {
        cse_image: [
          {
            src: "http://i.ytimg.com/vi/oUtWtSF_VNY/mqdefault.jpg"
          }
        ],
        cse_thumbnail: [
          {
            width: "256",
            height: "144",
            src: "https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTXm3rYwGdWs9Cx3s5VvooATKlgtrVZoP83hxfAOjGvsRMqLpMKuycVl_sF"
          }
        ],
        metatags: [
          {
            viewport: "width=1024",
            "fb:app_id": "APP_ID",
            "og:title": "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework - MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework",
            "og:description": "MEAN  MongoDB, ExpressJS, AngularJS, NodeJS.",
            "og:type": "website",
            "og:url": "APP_URL",
            "og:image": "APP_LOGO",
            "og:site_name": "MEAN.IO",
            "fb:admins": "APP_ADMIN"
          }
        ]
      }
    }
  ]
}

The best way to understand the project’s sample code is to clone the GitHub repo, and explore the files directly associated with the search, starting in the ‘packages/custom/search‘ subdirectory.

Helpful Links

Learn REST: A RESTful Tutorial
Using an AngularJS Factory to Interact with a RESTful Service
Google APIs Client Library for JavaScript (Beta)
REST-ful URI design
Creating a CRUD App in Minutes with Angular’s $resource


Install Latest Node.js and npm in a Docker Container

Install the latest versions of Node.js and npm, into a Docker container, with or without the need for root access. Easily update both applications to the latest versions.

Install and Confirm Node and npm

Ubuntu and Node

Recently, I was setting up a new development laptop with Ubuntu 14.10 (Utopic Unicorn). As part of the setup, I needed to install several development tools, including Node.js and npm. Researching the current recommendations for installing Node.js and npm on Ubuntu, I found that using the traditional ‘apt-get‘ command does not always install the latest versions of either application. Additionally, ‘apt-get’ makes updating those versions difficult.

After a lot of investigation, I created three different snippets of code to install the latest copies of Node.js and npm. Some of my code came from Isaac Z. Schlueter‘s series of installation Gists, and a post on StackOverflow by Pascal Hartig. Joyent and others recommended Isaac’s Gists for installing earlier versions of Node.js and npm. Other code was found in posts by DigitalOcean. Versions are as follows:

  • Version 1: using ‘apt-get install’
  • Version 2: using curl, make, and npmjs.org’s install script
  • Version 3: version 2 without requiring ‘sudo’ to use npm*

*There is some debate on the use of ‘sudo’ with some earlier versions of npm. It appears not to be recommended with the latest versions of npm.

Docker

Docker containers and virtual machines (VMs) are ideal platforms for developing and testing applications locally. I often create a Docker container or VirtualBox VM to install and test new scripts, before running them within our software environments. To test this code, I created three separate Docker containers, based on the official 14.04 Ubuntu base image, located on Docker Hub. I then executed each version of the code within a container. After installation testing, I chose version 2 for my laptop.

Docker Ubuntu Image and Containers

GitHub Gists

The three versions of install scripts on gist.github.com perform the following tasks:

  • Creates Docker container
  • Updates Ubuntu system packages within container
  • Creates new ‘testuser’ account within container
  • Installs required software to install Node.js, if necessary (curl, make, etc.)
  • Installs Node.js and npm
  • Installs some common full-stack JavaScript npm packages
  • Verifies installation locations and contents are correct

###############################################################################
# Version 1: using ‘apt-get install’
# Installs using apt-get
# Requires update to npm afterwards
# Harder to get latest copy of node
# Requires sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq curl git nano
# install from nodesource using apt-get
# https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-an-ubuntu-14-04-server
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get install -yq nodejs build-essential
# fix npm - not the latest version installed by apt-get
npm install -g npm
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# install common full-stack JavaScript packages globally
# http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation
sudo npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0
###############################################################################
# Version 2: using curl, make, and npmjs.org’s install script
# Installs using make and shell script
# Easier to update node and npm versions in future
# Requires sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# install latest Node.js and npm (the build takes a few minutes)
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1 && \
./configure && \
make install && \
curl https://www.npmjs.org/install.sh | sh
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# install common full-stack JavaScript packages globally
# http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation
sudo npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0
###############################################################################
# Version 3: version 2 without requiring ‘sudo’ to use npm
# Does NOT require sudo to use npm
###############################################################################
# create new docker ubuntu container
sudo docker run -i -t ubuntu /bin/bash # drops you into container as root
# update and install all required packages (no sudo required as root)
# https://gist.github.com/isaacs/579814#file-only-git-all-the-way-sh
apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# add user with sudo privileges within Docker container
# without adduser input questions
# http://askubuntu.com/questions/94060/run-adduser-non-interactively/94067#94067
USER="testuser" && \
adduser --disabled-password --gecos "" $USER && \
sudo usermod -a -G sudo $USER && \
echo "$USER:abc123" | chpasswd && \
su - $USER # switch to testuser
# setting up npm for global installation without sudo
# http://stackoverflow.com/a/19379795/580268
MODULES="local" && \
echo prefix = ~/$MODULES >> ~/.npmrc && \
echo "export PATH=\$HOME/$MODULES/bin:\$PATH" >> ~/.bashrc && \
. ~/.bashrc && \
mkdir ~/$MODULES
# install Node.js and npm (the build takes a few minutes)
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1 && \
./configure --prefix=~/$MODULES && \
make install && \
curl https://www.npmjs.org/install.sh | sh
# install common fullstack JavaScript packages globally
npm install -g yo grunt-cli bower express
# optional, check locations and packages are correct
which node; node -v; which npm; npm -v; \
npm ls -g --depth=0

Running Code

Installing Node, npm, and New User Account

Installing and Verifying npm Packages


Retrieving and Displaying Data with AngularJS and the MEAN Stack: Part II

Explore various methods of retrieving and displaying data using AngularJS and the MEAN Stack.

Mobile View on Android Smartphone

Introduction

In this two-part post, we are exploring methods of retrieving and displaying data using AngularJS and the MEAN Stack. The post’s corresponding GitHub project, ‘meanstack-data-samples‘, is based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. As a bonus, since both projects are based on ‘generator-angular’, all the code generators work. Generators can save a lot of time and aggravation when building AngularJS components.

In part one of this post, we installed and configured the ‘meanstack-data-samples’ project from GitHub. In part two, we will look at five examples of retrieving and displaying data using AngularJS:

  • Function within AngularJS Controller returns array of strings.
  • AngularJS Service returns an array of simple object literals to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller using a resource object
    (In GitHub project, but not discussed in this post).
  • AngularJS Factory returns a collection of documents from MongoDB Database to the controller.
  • AngularJS Factory returns results from Google’s RESTful Web Search API to the controller.

Project Structure

For brevity, I have tried to limit the number of files in the project. There are two main views, both driven by a single controller. The primary files, specific to data retrieval and display, are as follows:

  • Default site file (./)
    • index.html – loads all CSS and JavaScript files, and views
  • App and Routes (./app/scripts/)
    • app.js – instantiates app and defines routes (route/view/controller relationship)
  • Views (./app/views/)
    • data-bootstrap.html – uses Twitter Bootstrap
    • data-no-bootstrap.html – basically the same page, without Twitter Bootstrap
  • Controllers (./app/scripts/controllers/)
    • DataController.js (DataController) – single controller used by both views
  • Services and Factories (./app/scripts/services/)
    • meanService.js (meanService) – service returns array of object literals to DataController
    • jsonFactory.js (jsonFactory) – factory returns contents of JSON file
    • jsonFactoryResource.js (jsonFactoryResource) – factory returns contents of JSON file using resource object (new)
    • mongoFactory.js (mongoFactory) – factory returns MongoDB collection of documents
    • googleFactory.js (googleFactory) – factory calls Google Web Search API
  • Models (./app/models/)
    • Components.js – mongoose constructor for the Component schema definition
  • Routes (./app/)
    • routes.js – mongoose RESTful routes
  • Data (./app/data/)
    • otherStuff.json – static JSON file loaded by jsonFactory
  • Environment Configuration (./config/environments/)
    • index.js – defines all environment configurations
    • test.js – Configuration specific to the current ‘test’ environment
  • Unit Tests (./test/spec/…)
    • Various files – all controller and services/factories unit test files are in here…

Project in JetBrains WebStorm 8.0

There are many more files critical to the project’s functionality, including app.js, Gruntfile.js, bower.json, package.json, server.js, karma.conf.js, and so forth. You should understand each of these files’ purposes.

Function Returns Array

In the first example, we have the yeomanStuff() method, a member of the $scope object, within the DataController. The yeomanStuff() method returns an array object containing three strings. In JavaScript, a method is a function associated with an object.

$scope.yeomanStuff = function () {
  return [
    'yo',
    'Grunt',
    'Bower'
  ];
};

‘yeomanStuff’ Method of the ‘$scope’ Object

The yeomanStuff() method is called from within the view by Angular’s ng-repeat directive. The directive, ng-repeat, allows us to loop through the array of strings and add them to an unordered list. We will use ng-repeat for all the examples in this post.

<ul class="list-group">
  <li class="list-group-item"
	  ng-repeat="stuff in yeomanStuff()">
	{{stuff}}
  </li>
</ul>

Method1

Although this first example is easy to implement, it is somewhat impractical. Generally, you would not embed static data in your code; doing so limits your ability to change the data independent of the application’s code. In addition, the function is tightly coupled to the controller, limiting its reuse.

Service Returns Array

In the second example, we also use data embedded in our code. However, this time we have improved the architecture slightly by moving the data to an Angular Service. The meanService contains the getMeanStuff() function, which returns an array containing four object literals. Using a service, we can call the getMeanStuff() function from anywhere in our project.

angular.module('generatorMeanstackApp')
  .service('meanService', function () {
    this.getMeanStuff = function () {
      return ([
        {
          component: 'MongoDB',
          url: 'http://www.mongodb.org'
        },
        {
          component: 'Express',
          url: 'http://expressjs.com'
        },
        {
          component: 'AngularJS',
          url: 'http://angularjs.org'
        },
        {
          component: 'Node.js',
          url: 'http://nodejs.org'
        }
      ])
    };
  });

Within the DataController, we assign the array object, returned from the meanService.getMeanStuff() function, to the meanStuff object property of the $scope object.

$scope.meanStuff = {};
try {
  $scope.meanStuff = meanService.getMeanStuff();
} catch (error) {
  console.error(error);
}

‘meanStuff’ Property of the ‘$scope’ Object

The meanStuff object property is accessed from within the view, using ng-repeat. Each object in the array contains two properties, component and url. We display the property values on the page using Angular’s double curly brace expression notation (i.e. ‘{{stuff.component}}‘).

<ul class="nav nav-pills nav-stacked">
  <li ng-repeat="stuff in meanStuff">
    url}}"
       target="_blank">{{stuff.component}}
  </li>
<ul>

Method2

Promises, Promises…

The remaining methods implement an asynchronous (non-blocking) programming model, using the $http and $q services of Angular’s ng module. These services implement the asynchronous Promise and Deferred APIs. According to Chris Webb, in his excellent two-part post, Promise & Deferred objects in JavaScript: Theory and Semantics, a promise represents a value that is not yet known, and a deferred represents work that is not yet finished. I strongly recommend reading Chris’ post before continuing. I also highly recommend watching RED Ape EDU’s YouTube video, Deferred and Promise objects in Angular js. This video really clarified the promise and deferred concepts for me.
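
As a minimal conceptual sketch, the snippet below shows how a deferred and its promise relate in AngularJS, using the $q and $timeout services. It is illustrative only and not part of the project.

// conceptual sketch: a deferred represents work not yet finished;
// its promise represents a value not yet known (illustrative only)
angular.module('demoApp', [])
  .factory('greetingFactory', function ($q, $timeout) {
    return {
      getGreeting: function () {
        var deferred = $q.defer();          // create the deferred (the pending work)
        $timeout(function () {
          deferred.resolve('Hello, MEAN!'); // finish the work, fulfilling the promise
        }, 1000);
        return deferred.promise;            // hand back the promise (the future value)
      }
    };
  });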

Factory Loads JSON File

In the third example, we will read data from a JSON file (‘./app/data/otherStuff.json‘) using an AngularJS Factory. The differences between a service and a factory can be confusing, and are beyond the scope of this post. Here are two great links on the differences: one on Angular’s site and one on StackOverflow.

{
  "components": [
    {
      "component": "jQuery",
      "url": "http://jquery.com"
    },
    {
      "component": "Jade",
      "url": "http://jade-lang.com"
    },
    {
      "component": "JSHint",
      "url": "http://www.jshint.com"
    },
    {
      "component": "Karma",
      "url": "http://karma-runner.github.io"
    },
    ...
  ]
}

The jsonFactory contains the getOtherStuff() function. This function uses $http.get() to read the JSON file and returns a promise of the response object. According to Angular’s site, “since the returned value of calling the $http function is a promise, you can also use the then method to register callbacks, and these callbacks will receive a single argument – an object representing the response. A response status code between 200 and 299 is considered a success status and will result in the success callback being called.” As I mentioned, a complete explanation of deferreds and promises is too complex for this short post.

angular.module('generatorMeanstackApp')
  .factory('jsonFactory', function ($q, $http) {
    return {
      getOtherStuff: function () {
        var deferred = $q.defer(),
          httpPromise = $http.get('data/otherStuff.json');

        httpPromise.then(function (response) {
          deferred.resolve(response);
        }, function (error) {
          console.error(error);
          deferred.reject(error); // reject the deferred, so the caller's error callback fires
        });

        return deferred.promise;
      }
    };
  });

The response object contains the data property. Angular defines the response object’s data property as a string or object, containing the response body transformed with the transform functions. One of the properties of the data property is the components array, containing seven objects. Within the DataController, if the promise is resolved successfully, the callback function assigns the contents of the components array to the otherStuff property of the $scope object.

$scope.otherStuff = {};
jsonFactory.getOtherStuff()
  .then(function (response) {
    $scope.otherStuff = response.data.components;
  }, function (error) {
    console.error(error);
  });

‘otherStuff’ Property of the ‘$scope’ Object

The otherStuff property is accessed from the view, using ng-repeat, which displays individual values, exactly like the previous methods.

<ul class="nav nav-pills nav-stacked">
  <li ng-repeat="stuff in otherStuff">
    <a href="{{stuff.url}}"
       target="_blank">{{stuff.component}}</a>
  </li>
</ul>

Method3

This method of reading a JSON file is often used for configuration files. Static configuration data is stored in a JSON file, external to the actual code. This way, the configuration can be modified without requiring the main code to be recompiled and deployed. It is a technique used by the components within this very project. Take for example the bower.json files and the package.json files. Both contain configuration data, stored as JSON, used by Bower and npm to perform package management.
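
For instance, in Node.js, a static JSON configuration file can be loaded with a single require() call. The file name and property below are hypothetical.

// minimal sketch: loading a static JSON configuration file in Node.js (hypothetical file and property)
var config = require('./config/settings.json');
console.log('Configured port: ' + config.port);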

Factory Retrieves Data from MongoDB

In the fourth example, we will read data from a MongoDB database. There are a few more moving parts in this example than in the previous examples. Below are the documents in the components collection of the meanstack-test MongoDB database, which we will retrieve and display with this method. The meanstack-test database is defined in the test.js environments file (discussed in part one).

‘meanstack-test’ Database’s ‘components’ Collection Documents

To connect to MongoDB, we will use Mongoose. According to their website, “Mongoose provides a straight-forward, schema-based solution to modeling your application data and includes built-in type casting, validation, query building, business logic hooks and more, out of the box.” But wait, MongoDB is schemaless? It is. However, Mongoose provides a schema-based API for us to work within. Again, according to Mongoose’s website, “Everything in Mongoose starts with a Schema. Each schema maps to a MongoDB collection and defines the shape of the documents within that collection.”

In our example, we create the componentSchema schema, and pass it to the Component model (the ‘M’ in MVC). The componentSchema maps to the database’s components collection.

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var componentSchema = new Schema({
  component: String,
  url: String
});

module.exports = mongoose.model('Component', componentSchema);

The routes.js file associates routes (Request URIs) and HTTP methods to Mongoose actions. These actions are usually CRUD operations. In our simple example, we have a single route, ‘/api/components‘, associated with an HTTP GET method. When an HTTP GET request is made to the ‘/api/components‘ request URI, Mongoose calls the Model.find() function, ‘Component.find()‘, with a callback function parameter. The Component.find() function returns all documents in the components collection.

var Component = require('./models/component');

module.exports = function (app) {
  app.get('/api/components', function (req, res) {
    Component.find(function (err, components) {
      if (err)
        return res.send(err); // return, so we don't also call res.json() on error


      res.json(components);
    });
  });
};

You can test these routes directly. Below are the results of calling the ‘/api/components‘ route in Chrome.

Response from MongoDB Using Mongoose

The mongoFactory contains the getMongoStuff() function. This function uses $http.get() to call the ‘/api/components‘ route. The route is resolved by the routes.js file, which in turn executes the Component.find() command. The promise of an array of objects is returned by the getMongoStuff() function. Each object represents a document in the components collection.

angular.module('generatorMeanstackApp')
  .factory('mongoFactory', function ($q, $http) {
    return {
      getMongoStuff: function () {
        var deferred = $q.defer(),
          httpPromise = $http.get('/api/components');

        httpPromise.success(function (components) {
          deferred.resolve(components);
        })
          .error(function (error) {
            console.error('Error: ' + error);
            deferred.reject(error); // reject the deferred, so the caller's error callback fires
          });

        return deferred.promise;
      }
    };
  });

Within the DataController, if the promise is resolved successfully, the callback function assigns the array of objects, representing the documents in the collection, to the mongoStuff property of the $scope object.

$scope.mongoStuff = {};
mongoFactory.getMongoStuff()
  .then(function (components) {
    $scope.mongoStuff = components;
  }, function (error) {
    console.error(error);
  });

‘mongoStuff’ Property of the ‘$scope’ Object

The mongoStuff property is accessed from the view, using ng-repeat, which displays individual values using Angular expressions, exactly like the previous methods.

<ul class="list-group">
  <li class="list-group-item" ng-repeat="stuff in mongoStuff">
    <b>{{stuff.component}}</b>
    <div class="text-muted">{{stuff.description}}</div>
  </li>
</ul>

Method4

Factory Calls Google Search

Post Update: the Google Web Search API is no longer available as of September 29, 2014. The post’s example will no longer return a result set. Please migrate to the Google Custom Search API (https://developers.google.com/custom-search/). Please read the ‘Calling Third-Party HTTP-based RESTful APIs from the MEAN Stack‘ post for more information on using Google’s Custom Search API.

In the last example, we will call the Google Web Search API from an AngularJS Factory. The Google Web Search API exposes a simple RESTful interface. According to Google, “in all cases, the method supported is GET and the response format is a JSON encoded result set with embedded status codes.” Google describes this method of using RESTful access to the API, as “for Flash developers, and those developers that have a need to access the Web Search API from other Non-JavaScript environment.” However, we will access it in our JavaScript-based MEAN stack application, due to the API’s ease of implementation.

Note, according to Google’s site, “the Google Web Search API has been officially deprecated…it will continue to work…but the number of requests…will be limited. Therefore, we encourage you to move to Custom Search, which provides an alternative solution.” Google Search, or more specifically, the Custom Search JSON/Atom API, is a newer API, but the Web Search API is easier to demonstrate in this brief post than the Custom Search JSON/Atom API, which requires the use of an API key.

The googleFactory contains the getSearchResults() function. This function uses $http.jsonp() to call the Google Web Search API RESTful interface and return the promise of the JSONP-formatted (‘JSON with padding’) response. JSONP provides cross-domain access to a JSON payload, by wrapping the payload in a JavaScript function call (callback).

angular.module('generatorMeanstackApp')
  .factory('googleFactory', function ($q, $http) {
    return {
      getSearchResults: function () {
        var deferred = $q.defer(),
          host = 'https://ajax.googleapis.com/ajax/services/search/web',
          args = {
            'version': '1.0',
            'searchTerm': 'mean%20stack',
            'results': '8',
            'callback': 'JSON_CALLBACK'
          },
          params = ('?v=' + args.version + '&q=' + args.searchTerm + '&rsz=' +
            args.results + '&callback=' + args.callback),
          httpPromise = $http.jsonp(host + params);

        httpPromise.then(function (response) {
          deferred.resolve(response);
        }, function (error) {
          console.error(error);
          deferred.reject(error); // reject the deferred, so the caller's error callback fires
        });

        return deferred.promise;
      }
    };
  });

The getSearchResults() function uses the HTTP GET method to make an HTTP request to the following RESTful URI:
https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=mean%20stack&rsz=8&callback=angular.callbacks._0

Using Google Chrome’s Developer tools, we can preview the Google Web Search JSONP-format HTTP response (abridged). Note the callback function that wraps the JSON payload.

Google Web Search Results in Chrome Browser

Within the DataController, if the promise is resolved successfully, our callback function receives the response object. The response object contains a lot of information. We are able to limit the amount of information sent to the view by assigning only the actual search results, an array of eight objects contained in the response object, to the googleStuff property of the $scope object.

$scope.googleStuff = {};
googleFactory.getSearchResults()
  .then(function (response) {
    $scope.googleStuff = response.data.responseData.results;
  }, function (error) {
    console.error(error);
  });

Below is the full response returned by the googleFactory. Note the path to the data we are interested in: ‘response.data.responseData.results‘.

Google Search Response Object

Below are the filtered results assigned to the googleStuff property:

‘googleStuff’ Property of the ‘$scope’ Object

The googleStuff property is accessed from the view, using ng-repeat, which displays individual values using Angular expressions, exactly like the previous methods.

<ul class="list-group">
  <li class="list-group-item"
      ng-repeat="stuff in googleStuff">
    <a href="{{unescapedUrl.url}}"
       target="_blank"><b>{{stuff.visibleUrl}}</b></a>

    <div class="text-muted">{{stuff.titleNoFormatting}}</div>
  </li>
</ul>

Method5

Links


Retrieving and Displaying Data with AngularJS and the MEAN Stack: Part I

Explore various methods of retrieving and displaying data using AngularJS and the MEAN Stack.

Mobile View of Application on Android Smartphone

Introduction

In the following two-part post, we will explore several methods of retrieving and displaying data using AngularJS and the MEAN Stack. The post’s corresponding GitHub project, ‘meanstack-data-samples‘, is based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. As a bonus, since both projects are based on ‘generator-angular’, all the code generators work. Generators can save a lot of time and aggravation when building AngularJS components.

In part one of this post, we will install and configure the ‘meanstack-data-samples’ project from GitHub. In part two, we will look at several methods for retrieving and displaying data using AngularJS:

  • Function within AngularJS Controller returns array of strings.
  • AngularJS Service returns an array of simple object literals to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller.
  • AngularJS Factory returns the contents of JSON file to the controller using a resource object
    (In GitHub project, but not discussed in this post).
  • AngularJS Factory returns a collection of documents from MongoDB Database to the controller.
  • AngularJS Factory returns results from Google’s RESTful Web Search API to the controller.

Preparation

If you need help setting up your development machine to work with the MEAN stack, refer to my last post, Installing and Configuring the MEAN Stack, Yeoman, and Associated Tooling on Windows. You will need to install all the MEAN and Yeoman components.

For this post, I am using JetBrains’ new WebStorm 8RC to build and demonstrate the project. There are several good IDEs for building modern web applications; WebStorm is one of the current favorites of developers.

Complexity of Modern Web Applications

Building modern web applications using the MEAN stack or comparable technologies is complex. The ‘meanstack-data-samples’ project, and the projects it is based on, ‘generator-meanstack’ and ‘generator-angular’, have dozens of moving parts. In this simple project, we have MongoDB, ExpressJS, AngularJS, Node.js, yo, Grunt, Bower, Git, jQuery, Twitter Bootstrap, Karma, JSHint, Mongoose, and hundreds of other components, all working together. There are almost fifty Node packages and hundreds of their dependencies loaded by npm, in addition to another dozen loaded by Bower.

Installing, configuring, and managing all the parts of a modern web application requires a basic working knowledge of these technologies. Understanding how Bower and npm install and manage packages, how Grunt builds, tests, and serves the application with ExpressJS, how Yo scaffolds applications, how Karma and Jasmine run unit tests, or how Mongoose and MongoDB work together, are all essential. This brief post will primarily focus on retrieving and displaying data, not necessarily how the components all work, or work together.

Installing and Configuring the Project

Environment Variables

To start, we need to create (3) environment variables. The NODE_ENV environment variable is used to determine the environment our application is operating within. The NODE_ENV variable determines which configuration file in the project is read by the application when it starts. The configuration files contain variables, specific to that environment. There are (4) configuration files included in the project. They are ‘development’, ‘test’, ‘production’, and ‘travis’ (travis-ci.org). The NODE_ENV variable is referenced extensively throughout the project. If the NODE_ENV variable is not set, the application will default to ‘development‘.

For this post, set the NODE_ENV variable to ‘test‘. The value, ‘test‘, corresponds to the ‘test‘ configuration file (‘meanstack-data-samples\config\environments\test.js‘), shown below.

// set up =====================================
var express          = require('express');
var bodyParser       = require('body-parser');
var errorHandler     = require('errorhandler');
var favicon          = require('serve-favicon');
var logger           = require('morgan');
var cookieParser     = require('cookie-parser');
var methodOverride   = require('method-override');
var session          = require('express-session');
var path             = require('path');
var env              = process.env.NODE_ENV || 'development';

module.exports = function (app) {
    if ('test' == env) {
        console.log('environment = test');
        app.use(function staticsPlaceholder(req, res, next) {
            return next();
        });
        app.set('db', 'mongodb://localhost/meanstack-test');
        app.set('port', process.env.PORT || 3000);
        app.set('views', path.join(app.directory, '/app'));
        app.engine('html', require('ejs').renderFile);
        app.set('view engine', 'html');
        app.use(favicon('./app/favicon.ico'));
        app.use(logger('dev'));
        app.use(bodyParser());
        app.use(methodOverride());
        app.use(cookieParser('your secret here'));
        app.use(session());

        app.use(function middlewarePlaceholder(req, res, next) {
            return next();
        });

        app.use(errorHandler());
    }
};

The second environment variable is PORT. The application starts on the port indicated by the PORT variable, for example, ‘localhost:3000’. If the PORT variable is not set, the application will default to port ‘3000‘, as specified in each of the environment configuration files and the ‘Gruntfile.js’ Grunt configuration file.

Lastly, the CHROME_BIN environment variable is used by Karma, the test runner for JavaScript, to determine the correct path to the browser’s binary file. This variable is discussed in detail on Karma’s site. In my case, the value for CHROME_BIN is ‘C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'. This variable is only necessary if you will be configuring Karma to use Chrome to run the tests. The browser can be changed to any browser, including PhantomJS. See the discussion at the end of this post regarding browser choice for Karma.

You can easily set all the environment variables on Windows from a command prompt, with the following commands. Remember to exit and re-open your interactive shell or command prompt window after adding the variables so they can be used.

REM confirm the path to Chrome, change value if necessary
setx /m NODE_ENV "test"
setx /m PORT "3000"
setx /m CHROME_BIN "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

Install and Configure the Project

To install and configure the project, we start by cloning the ‘meanstack-data-samples‘ project from GitHub. We then use npm and bower to install the project’s dependencies. Once installed, we create and populate the Mongo database. We then use Grunt and Karma to unit test the project. Finally, we will use Grunt to start the Express Server and run the application. This is all accomplished with only a few individual commands. Please note, the ‘npm install’ command could take several minutes to complete, depending on your network speed; the project has many direct and indirect Node.js dependencies.

# install meanstack-data-samples project
git clone https://github.com/garystafford/meanstack-data-samples.git
cd meanstack-data-samples
npm install
bower install
mongoimport --db meanstack-$NODE_ENV --collection components components.json --drop # Unix
# mongoimport --db meanstack-%NODE_ENV% --collection components components.json --drop # Windows
grunt test
grunt server

If everything was installed correctly, running the ‘grunt test’ command should result in output similar to below:

Results of Running ‘grunt test’ with Chrome

If everything was installed correctly, running the ‘grunt server’ command should result in output similar to below:

Results of Running ‘grunt server’ to Start Application

Running the ‘grunt server’ command should start the application and open your browser to the default view, as shown below:

Displaying the Application’s Google Search Results on Desktop Browser

Karma’s Browser Choice for Unit Tests

The GitHub project is currently configured to use Chrome for running Karma’s unit tests in the ‘development’ and ‘test’ environments. For the ‘travis’ environment, it uses PhantomJS. If you do not have Chrome installed on your machine, the ‘grunt test’ task will fail during the ‘karma:unit’ task portion. To change Karma’s browser preference, simply change the ‘testBrowser’ variable in the ‘./karma.conf.js’ file, as shown below.

// Karma configuration
module.exports = function (config) {
  // Determines Karma's browser choice based on environment
  var testBrowser = 'Chrome'; // Default browser
  if (process.env.NODE_ENV === 'travis') {
    testBrowser = 'PhantomJS'; // Must use for headless CI (Travis-CI)
  }
  console.log("Karma browser: " + testBrowser);
  ...
  // Start these browsers, currently available:
  // Chrome, ChromeCanary, Firefox, Opera,
  // Safari (only Mac), PhantomJS, IE (only Windows)
  browsers: [testBrowser],

I recommend installing and using the PhantomJS headless WebKit browser, locally. Since PhantomJS is headless, Karma runs the unit tests without having to open and close browser windows. To run this project on continuous integration servers, like Jenkins or Travis-CI, you must use PhantomJS. If you decide to use PhantomJS on Windows, don’t forget to add the PhantomJS executable directory path to your ‘PATH’ environment variable, after downloading and installing the application.

 

Code Generator

As I mentioned at the start of this post, this project was based on William Lepinski’s ‘generator-meanstack‘, which in turn is based on Yeoman’s ‘generator-angular‘. Optionally, to install the ‘generator-meanstack’ npm package globally on your system, use the following command. The ‘generator-meanstack’ code generator will allow us to generate additional AngularJS components automatically, within the project, if we choose. The ‘generator-meanstack’ is not required for this post.

npm install -g generator-meanstack

 

Part II

In part two of this post, we will explore each of these methods of retrieving and displaying data using AngularJS, in detail.

Links


Installing and Configuring the MEAN Stack, Yeoman, and Associated Tooling on Windows

Configure your Windows environment for developing modern web applications using the popular MEAN Stack and Yeoman suite of utilities.

MEAN Stack Using the MEAN.io Project, Running on Windows

Introduction

It’s an exciting time to be involved in web development. There are dozens of popular open-source JavaScript frameworks, libraries, code-generators, and associated tools, exploding on to the development scene. It is now possible to use a variety of popular technology mashups to provide a complete, full-stack JavaScript application platform.

MEAN Stack

One of the JavaScript mashups gaining a lot of traction recently is the MEAN Stack. If you’re reading this post, you probably already know MEAN is an acronym for four leading technologies: MongoDB, ExpressJS, AngularJS, and Node.js. The MEAN stack provides an end-to-end JavaScript application solution. MEAN provides a NoSQL document database (Mongo), a server-side solution for JavaScript (Node/Express), and a magic client-side MV* framework (Angular).

Depending on which MEAN stack generator or pre-built project you start with, in addition to the main four technologies, you will pull down several other smaller libraries and frameworks. These most commonly include jQuery, Twitter Bootstrap, Karma (test runner), Jade (template engine), JSHint, Underscore.js (utility-belt library), Mongoose (MongoDB object modeling tool), Passport (authentication), RequireJS (file and module loader), BreezeJS (data entity management), and so forth.

Common Tooling

If you are involved in these modern web development trends, then you are aware there is also a fairly common set of tools used by a majority of these developers, including source control, IDE, OS, and other helper-utilities. For SCM/VCS, Git is the clear winner. For an IDE, WebStorm, Sublime Text, and Kompozer are heavy favorites. The platform of choice most often appears to be either Mac or Linux. It’s far less common to see a demonstration of these technologies, or tutorials, built on the Microsoft Windows platform.

Yeoman

Another area of commonality is helper-utilities, used to make the development, building, dependency management, and deployment of modern JavaScript applications easier. Two popular ones are Brunch and Yeoman. Yeoman is also an acronym for a set of popular tools: yo, Grunt, and Bower. The first, yo, is best described as a scaffolding tool. Grunt is the build tool. Bower is a tool for client-side package and dependency management. We will install Yeoman, along with the MEAN Stack, in this post.

Windows

There is no reason Windows cannot serve as your development and hosting platform for modern web development, without specifically using Microsoft’s .NET stack. In fact, with minimal set-up, you would barely know you were using Windows as opposed to Linux or Mac. In this post, I will demonstrate how to configure your Windows machine for developing these modern web applications using the MEAN Stack and Yeoman.

Here is a list of the components we will discuss:

Installations

Git

The use of Git for source control is obvious. Git is the overwhelming choice of modern developers. Git has been integrated into most major IDEs and hosting platforms. There are hooks into Git available for most leading development tools. However, there are more benefits to using Git than just SCM. Being a Linux/Mac user, I prefer to use a Unix-like shell on Windows, versus the native Windows Command Prompt. For this reason, I use Git for Windows, available from msysGit. This package includes Git SCM, Git Bash, and Git GUI. I use the Git Bash interactive shell almost exclusively for my daily interactions requiring a command prompt. I will be using the Git Bash interactive shell for this post. OpenHatch has a great post and training materials available on using Git Bash with Windows.

Using Git Bash on Windows for a Unix-like Experience

Git for Windows provides a downloadable Windows executable file for installation. Follow the installation file’s instructions.

Git Installation Process

To test your installation of Git for Windows, call the Git binary with the ‘--version’ flag. This flag can be used to test all the components we are installing in this post. If the command returns a value, then it’s a good indication that the component is installed properly and can be called from the command prompt:

gstafford: ~/Documents/git_repos $ git --version
git version 1.9.0.msysgit.0

You can also verify Git using the ‘where’ and ‘which’ commands. The ‘where’ command will display the location of files that match the search pattern and are in the paths specified by the PATH environment variable. The ‘which’ command tells you which file gets executed when you run a command. These commands will work for most components we will install:

gstafford: ~/Documents/git_repos $ where git
C:\Program Files (x86)\Git\bin\git.exe
C:\Program Files (x86)\Git\cmd\git.cmd
C:\Program Files (x86)\Git\cmd\git.exe

gstafford: ~/Documents/git_repos $ which git
/bin/git
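Before your first commit, you will likely also want to set your global Git identity. A quick sketch; substitute your own name and email, as the values below are only placeholders:

git config --global user.name "Your Name"
git config --global user.email "you@example.com"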

Ruby

The reasons for Git are obvious, but why Ruby? Yeoman, specifically yo, requires Ruby. Ruby recommends using RubyInstaller for Windows, which provides a downloadable executable, making installation simple. I am using Ruby 1.9.3. I had previously installed the latest 2.0.0, but had to roll back after some 64-bit compatibility issues with other applications.

Ruby Installation Process

To test the Ruby installation, use the ‘--version’ flag again:

gstafford: ~/Documents/git_repos $ ruby --version
ruby 1.9.3p484 (2013-11-22) [i386-mingw32]

RubyGems

Optionally, you might also want to install RubyGems. RubyGems allows you to add functionality to the Ruby platform, in the form of ‘Gems’. A common Gem used with the MEAN stack is Compass, the Sass-based stylesheet framework for the creation and maintenance of CSS. According to the RubyGems website, Ruby 1.9 and newer ships with RubyGems built-in, but you may need to upgrade it for bug fixes or new features.

On Windows, installation of RubyGems is as simple as downloading the .zip file from the RubyGems download site. To install RubyGems, unzip the downloaded file. From the root of the unzipped directory, run the following Ruby command:

ruby setup.rb

To confirm your installation:

gstafford: ~/Documents/git_repos $ gem --version
2.2.2

If you already have RubyGems installed, it’s recommended you update RubyGems before continuing. The first command below, with the ‘--system’ flag, updates RubyGems itself to the latest version. The second command, without the flag, updates each of your individually installed Ruby Gems:

gem update --system
gem update
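With RubyGems updated, installing the Compass gem mentioned above takes a single command. A sketch; the Sass gem, which Compass depends on, should be pulled in automatically:

gem install compass
gem list compass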

MongoDB

MongoDB provides a great set of installation and configuration instructions for Windows users. To install MongoDB, download the MongoDB package. Create a ‘mongodb’ folder; Mongo recommends placing it at the root of your system drive. Unzip the MongoDB package into the ‘c:\mongodb’ folder. That’s really it; there is no installer file.

Next, make the default data directory location, using the following command, which creates both directories:

mkdir c://data && mkdir c://data/db

Unlike most other components, to call Mongo from the command prompt, I had to manually add the path to the Mongo binaries to my PATH environment variable. You can get access to your Windows environment variables using the Windows and Pause keys. Add the path ‘c:\mongodb\bin’ to end of the PATH environment variable value.

Adding Mongo to the PATH Environment Variable
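Alternatively, if you prefer to stay in Git Bash, you can append the Mongo binaries to your PATH in your ‘~/.bashrc’ file. A sketch, assuming you unzipped Mongo to ‘c:\mongodb’; open a new shell afterward for the change to take effect:

echo 'export PATH=$PATH:/c/mongodb/bin' >> ~/.bashrc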

To test the MongoDB installation, and that the PATH variable is set correctly, close any current interactive shells or command prompt windows. Open a new shell and use the same ‘--version’ flag for Mongo’s three core components:

gstafford: ~/Documents/git_repos
$ mongo --version; mongod --version; mongos --version
MongoDB shell version: 2.4.9

db version v2.4.9
Sun Mar 09 16:26:48.730 git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c

MongoS version 2.4.9 starting: pid=15436 port=27017 64-bit 
host=localhost (--help for usage)
git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c
build sys info: windows sys.getwindowsversion(major=6, minor=1, build=7601, 
platform=2, service_pack='Service Pack 1') 
BOOST_LIB_VERSION=1_49

To start MongoDB, use the ‘mongod’ or ‘start mongod’ commands. Adding ‘start’ opens a new command prompt window instead of tying up your current shell. If you are not using the default MongoDB data directory (‘c://data/db’) you created in the previous step, use the ‘--dbpath’ flag, for example, ‘start mongod --dbpath c://alternate/path’.

MongoDB Running on Windows

MongoDB Default Data Directory Containing New MEAN Database
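Once mongod is running, a quick smoke test from a second shell confirms you can connect and write data. A sketch; the ‘meantest’ database name and the sample document are arbitrary:

mongo
> use meantest
> db.users.insert({ name: "test" })
> show dbs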

Node.js

To install Node.js, download and run Node’s .msi installer for Windows. Along with Node.js, you will get npm (Node Package Manager). You will use npm to install all your server-side components, such as Express, yo, Grunt, and Bower.

Node Installation Process

gstafford: ~/Documents/git_repos 
$ node --version && npm --version
v0.10.26
1.4.3
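As an additional quick sanity check, you can have Node evaluate a JavaScript one-liner directly from the shell; nothing project-specific is assumed here:

node -e "console.log('Hello from Node ' + process.version)"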

Express

To install Express, the web application framework for Node, use npm:

npm install -g express

The ‘-g’ (or ‘--global’) flag should be used. According to Stack Overflow, ‘if you’re installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable’:

gstafford: ~/Documents/git_repos $ express --version
3.5.0
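With the Express CLI installed globally, you can scaffold and run a bare-bones Express 3 application to verify everything works end-to-end. A minimal sketch; ‘myapp’ is an arbitrary project name, and the generated app listens on port 3000 by default:

express myapp
cd myapp && npm install
node app

Then browse to http://localhost:3000 to see the default Express welcome page.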

Yeoman – yo, Grunt, and Bower

You will also use npm to install yo, Grunt, and Bower. Actually, we will use npm to install the Grunt Command Line Interface (CLI); the Grunt task runner itself will be installed locally in your MEAN stack project, later. Read these instructions on the Grunt website for a better explanation. Use the same basic command as with Express:

npm install -g yo grunt-cli bower

To test the installs, run the same command as before:

gstafford: ~/Documents/git_repos 
$ yo --version; grunt --version; bower --version
1.1.2
grunt-cli v0.1.13
1.2.8

If you already had Yeoman installed, confirm you have the latest versions with the ‘npm update’ command:

gstafford: ~/Documents/git_repos/gen-angular-sample 
$ npm update -g yo grunt-cli bower
npm http GET https://registry.npmjs.org/grunt-cli
npm http GET https://registry.npmjs.org/bower
npm http GET https://registry.npmjs.org/yo
npm http 304 https://registry.npmjs.org/bower
npm http 304 https://registry.npmjs.org/yo
npm http 200 https://registry.npmjs.org/grunt-cli

All of the npm installs, including Express, are installed and called from a common location on Windows:

gstafford: ~/Documents/git_repos/gen-angular-sample 
$ where express yo grunt bower
c:\Users\gstaffor\AppData\Roaming\npm\yo
c:\Users\gstaffor\AppData\Roaming\npm\yo.cmd
c:\Users\gstaffor\AppData\Roaming\npm\grunt
c:\Users\gstaffor\AppData\Roaming\npm\grunt.cmd
c:\Users\gstaffor\AppData\Roaming\npm\bower
c:\Users\gstaffor\AppData\Roaming\npm\bower.cmd

gstafford: ~/Documents/git_repos 
$ which express; which yo; which grunt; which bower
~/AppData/Roaming/npm/express
~/AppData/Roaming/npm/yo
~/AppData/Roaming/npm/grunt
~/AppData/Roaming/npm/bower

Use the command ‘npm list --global | less’ (or ‘npm ls -g | less’) to view all npm packages installed globally, in a tree-view. After you have generated your project (see below), check the project-specific server-side packages with the ‘npm ls’ command from within the project’s root directory. For the client-side packages, use the ‘bower ls’ command from within the project’s root directory.

If you’re in a hurry, or have more Windows boxes to configure, you can install all four components, above, with a single npm command:

npm install -g express yo grunt-cli bower

MEAN Boilerplate Generators and Projects

That’s it; you’ve installed most of the core components you need to get started with the MEAN stack on Windows. Next, you will want to download one of the many MEAN boilerplate projects, or use a MEAN code generator with npm and yo. The available projects are each slightly different architecturally, but fairly stable; one example, James Cryer’s ‘generator-mean’, is shown below.
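For example, the basic flow with a Yeoman-based MEAN generator looks like the following. A sketch, assuming James Cryer’s generator is published to npm as ‘generator-mean’ and that you run yo from a new, empty project directory:

npm install -g generator-mean
mkdir mean-app && cd mean-app
yo mean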

Yeoman Running MEAN Generator

MEAN Stack Using James Cryer’s ‘generator-mean’
