Posts Tagged HTTP
Decoupling Microservices using Message-based RPC IPC, with Spring, RabbitMQ, and AMQP
Posted by Gary A. Stafford in DevOps, Enterprise Software Development, Java Development, Software Development on May 8, 2017
Introduction
There has been considerable growth in modern, highly scalable, distributed application platforms built around fine-grained RESTful microservices. Microservices generally use lightweight protocols to communicate with each other, such as HTTP, TCP, UDP, WebSockets, MQTT, and AMQP. Microservices commonly communicate with each other directly using REST-based HTTP, or indirectly, using messaging brokers.
There are several well-known, production-tested message brokers, such as Apache Kafka, Apache ActiveMQ, Amazon Simple Queue Service (SQS), and Pivotal’s RabbitMQ. According to Pivotal, RabbitMQ is the most widely deployed open source message broker among them.
RabbitMQ supports multiple messaging protocols. RabbitMQ’s primary protocol, the Advanced Message Queuing Protocol (AMQP), is an open standard wire-level protocol and semantic framework for high-performance enterprise messaging. According to Spring, ‘AMQP has exchanges, routes, and queues. Messages are first published to exchanges. Routes define on which queue(s) to pipe the message. Consumers subscribing to that queue then receive a copy of the message.’
Pivotal’s Spring AMQP project applies core Spring concepts to the development of AMQP-based messaging solutions. The project’s libraries facilitate management of AMQP resources while promoting the use of dependency injection and declarative configuration. The project provides a ‘template’ (RabbitTemplate) as a high-level abstraction for sending and receiving messages.
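As a quick, minimal sketch of what the RabbitTemplate abstraction looks like in practice (this is not code from the Voter API project; the queue name and the localhost broker address are assumptions for illustration), sending and receiving a message can be as simple as:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RabbitTemplateSketch {
    public static void main(String[] args) {
        // Connection factory pointing at a local broker, using the default guest/guest credentials
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitTemplate template = new RabbitTemplate(connectionFactory);

        // Send a String payload to a queue, via the default exchange;
        // assumes a queue named 'example.queue' already exists on the broker
        template.convertAndSend("example.queue", "hello");

        // Receive and convert a message from the same queue (returns null if the queue is empty)
        Object message = template.receiveAndConvert("example.queue");
        System.out.println(message);

        connectionFactory.destroy();
    }
}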
In this post, we will explore how to start moving Spring Boot Java services away from using synchronous REST HTTP for inter-process communications (IPC), and toward message-based IPC. Moving from synchronous IPC to messaging queues and asynchronous IPC decouples services from one another, allowing us to more easily build, test, and release individual microservices.
Message-Based RPC IPC
Decoupling services using asynchronous IPC is considered optimal by many enterprise software architects when developing modern distributed platforms. However, sometimes it is not easy or possible to get away from synchronous communications. Rightly or wrongly, services are often architected such that one service needs to retrieve data from another service, or services, in order to process its own requests. That service can be said to have a direct dependency on the other services. Many would argue that services, especially RESTful microservices, should not be coupled in this way.
There are several ways to break direct service-to-service dependencies using asynchronous IPC. We might implement request/async response REST HTTP-based IPC. We could also use publish/subscribe or publish/async response messaging queue-based IPC. These are all described by NGINX, in their article, Building Microservices: Inter-Process Communication in a Microservices Architecture; a must-read for anyone working with microservices. We might also implement an architecture which supports eventual consistency, eliminating the need for one service to obtain data from another service.
So what if we cannot implement asynchronous methods to break direct service dependencies, but we want to move toward message-based IPC? One answer is message-based Remote Procedure Call (RPC) IPC. I realize the mention of RPC might send cold shivers down the spine of many seasoned architects. Traditional RPC has several challenges, many of which have been overcome with more modern architectural patterns.
According to Wikipedia, ‘in distributed computing, a remote procedure call (RPC) is when a computer program causes a procedure (subroutine) to execute in another address space (commonly on another computer on a shared network), which is coded as if it were a normal (local) procedure call, without the programmer explicitly coding the details for the remote interaction.’
Although still a form of RPC, and not asynchronous, it is possible to replace REST HTTP IPC with message-based RPC IPC. Using message-based RPC, services have no direct dependencies on other services. A service only depends on a response to a message request it makes to a message queue. The services are now decoupled from one another. The requestor service (the client) has no direct knowledge of the respondent service (the server).
RPC with RabbitMQ and AMQP
RabbitMQ has an excellent set of six tutorials, which cover the basics of creating messaging applications with RabbitMQ, applying different architectural patterns, in several different programming languages. The sixth and final tutorial covers using RabbitMQ for RPC-based IPC, with the request/reply architectural pattern.
Pivotal recently added Spring AMQP implementations to each RabbitMQ tutorial, based on their Spring AMQP project. If you recall, the Spring AMQP project applies core Spring concepts to the development of AMQP-based messaging solutions.
This post’s RPC IPC example is closely based on the architectural pattern found in the Spring AMQP RabbitMQ tutorial.
Sample Code
To demonstrate Spring AMQP-based RPC IPC messaging with RabbitMQ, we will use a pair of simple Spring Boot microservices. These services, the Voter and Candidate services, have been used in several previous posts, and for training and testing DevOps engineers. Both services are backed by MongoDB. The services and MongoDB, along with RabbitMQ, are all part of the Voter API project. The Voter API project also contains an HAProxy-based API Gateway, which provides indirect, load-balanced access to the two services.
All code necessary to build this post’s example is available on GitHub, within three projects. The Voter Service project repository contains the Voter service source code, along with the scripts and Docker Compose files required to deploy the project. The Candidate Service project repository and the Voter API Gateway project repository are also available on GitHub. For this post, you need only clone the Voter Service project repository.
Deploying Voter API
All components, including the two Spring services, MongoDB, RabbitMQ, and the API Gateway, are individually deployed using Docker. Each component is publicly available as a Docker Image, on Docker Hub.
The Voter Service repository contains scripts to deploy the entire set of Dockerized components, locally. The repository also contains optional scripts to provision a Docker Swarm, using Docker’s newer swarm mode, and deploy the components. We will only deploy the services locally for this post.
To clone and deploy the components locally, including the two Spring services, MongoDB, RabbitMQ, and the API Gateway, execute the following commands. If this is your first time running the commands, it may take a few minutes for your system to download all the required Docker Images from Docker Hub.
git clone --depth 1 --branch rabbitmq \
  https://github.com/garystafford/voter-service.git
cd voter-service/scripts-services
sh ./stack_deploy_local.sh
If everything was deployed successfully, you should see the following output. You should observe five running Docker containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ef4866984c3 garystafford/voter-api-gateway:rabbitmq "/docker-entrypoin…" 25 hours ago Up 25 hours 0.0.0.0:8080->8080/tcp voterstack_voter-api-gateway_1
cc28d084ab17 garystafford/candidate-service:rabbitmq "java -Dspring.pro…" 25 hours ago Up 25 hours 0.0.0.0:8097->8080/tcp voterstack_candidate_1
e4c22258b77b garystafford/voter-service:rabbitmq "java -Dspring.pro…" 25 hours ago Up 25 hours 0.0.0.0:8099->8080/tcp voterstack_voter_1
fdb4b9f58a53 rabbitmq:management-alpine "docker-entrypoint…" 25 hours ago Up 25 hours 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp voterstack_rabbitmq_1
1678227b143c mongo:latest "docker-entrypoint…" 25 hours ago Up 25 hours 0.0.0.0:27017->27017/tcp voterstack_mongodb_1
Using Voter API
The Voter Service and Candidate Service GitHub repositories both contain README files, which detail all the API endpoints each service exposes, and how to call them.
In addition to casting votes for candidates, the Voter service has the ability to simulate election results. By calling a /simulation
endpoint, and indicating the desired election, the Voter service will randomly generate a number of votes for each candidate in that election. This will save us the burden of casting votes for this demonstration. However, the Voter service has no knowledge of elections or candidates. To obtain a list of candidates, the Voter service depends on the Candidate service.
The Candidate service manages electoral candidates, their political affiliation, and the election in which they are running. Like the Voter service, the Candidate service also has a /simulation
endpoint. The service will create a list of candidates based on the 2012 and 2016 US Presidential Elections. The simulation capability of the service saves us the burden of inputting candidates for this demonstration.
REST HTTP Endpoint
The Voter service exposes two almost identical endpoints. Both endpoints generate random votes. However, under the covers, the two endpoints are very different. Calling the /voter/simulation/http/{election}
endpoint prompts the Voter service to request a list of candidates from the Candidate service, based on the election parameter you input. This request is done using synchronous REST HTTP. The Voter service uses the HTTP GET method to request the data from the Candidate service. The Voter service then waits for a response.
The HTTP request is received by the Candidate service. The Candidate service responds to the Voter service with a list of candidates, in JSON format. The Voter service receives the response containing the list of candidates. The Voter service then proceeds to generate a random number of votes for each candidate. Finally, each new vote object (MongoDB document) is written back to the vote
collection in the Voter service’s voters
database.
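The Voter service’s REST HTTP client code is not examined in this post. A hypothetical sketch of the call described above, using Spring’s RestTemplate, might look like the following; the host name, port, and path are assumptions for illustration, not the project’s actual configuration.

import org.springframework.web.client.RestTemplate;

public class CandidateHttpClientSketch {

    private final RestTemplate restTemplate = new RestTemplate();

    public String getCandidatesHttp(String election) {
        // Synchronous, blocking GET to the Candidate service; the Voter service
        // waits here until the Candidate service responds (or the call fails)
        return restTemplate.getForObject(
                "http://candidate:8080/candidates/summary/{election}",
                String.class, election);
    }
}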
Message-based RPC Endpoint
Similarly, calling the /voter/simulation/rpc/{election}
endpoint with a specific election prompts the Voter service to request the same list of candidates. However, this time, the Voter service (the client) produces a request message and places it in RabbitMQ’s voter.rpc.requests
queue. The Voter service then waits for a response. The Voter service has no direct dependency on the Candidate service. It only depends on a response to its message request. In this way, it is still a form of synchronous IPC, but the Voter service is now decoupled from the Candidate service.
The request message is consumed by the Candidate service (the server), which is listening to that queue. In response, the Candidate service produces a message containing the list of candidates, serialized to JSON. The Candidate service (the server) sends a response back to the Voter service (the client), through RabbitMQ. This is done using the Direct reply-to feature of RabbitMQ, or using a unique response queue specified in the reply-to header of the request message sent by the Voter service.
According to RabbitMQ, ‘the direct reply-to feature allows RPC clients to receive replies directly from their RPC server, without going through a reply queue. (“Directly” here still means going through AMQP and the RabbitMQ server; there is no separate network connection between RPC client and RPC server.)’
According to Spring, ‘starting with version 3.4.0, the RabbitMQ server now supports Direct reply-to; this eliminates the main reason for a fixed reply queue (to avoid the need to create a temporary queue for each request). Starting with Spring AMQP version 1.4.1 Direct reply-to will be used by default (if supported by the server) instead of creating temporary reply queues. When no replyQueue is provided (or it is set with the name amq.rabbitmq.reply-to), the RabbitTemplate will automatically detect whether Direct reply-to is supported and use it, or fall back to using a temporary reply queue. When using Direct reply-to, a reply-listener is not required and should not be configured.’ We are using the latest versions of both RabbitMQ and Spring AMQP, which should support Direct reply-to.
The Voter service receives the message containing the list of candidates. The Voter service deserializes the JSON payload to Candidate objects and proceeds to generate a random number of votes for each candidate in the list. Finally, each new vote object (MongoDB document) is written back to the vote
collection in the Voter service’s voters
database.
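The controller method that fronts the /voter/simulation/rpc/{election} endpoint is not reproduced in this post. Based on the service logs shown later, a hypothetical sketch of it might look like the following; the class name is an assumption, CandidateListService and CandidateVoterView are types from the Voter service project, and the random vote generation is elided.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VoterControllerSketch {

    private final CandidateListService candidateListService;

    public VoterControllerSketch(CandidateListService candidateListService) {
        this.candidateListService = candidateListService;
    }

    @GetMapping("/simulation/rpc/{election}")
    public ResponseEntity<Map<String, String>> getSimulationRpc(@PathVariable String election) {
        // Retrieve the candidate list using message-based RPC IPC
        List<CandidateVoterView> candidates =
                candidateListService.getCandidatesMessageRpc(election);

        // ...random vote generation and MongoDB writes elided...

        return new ResponseEntity<>(
                Collections.singletonMap("message", "Simulation data created using RPC!"),
                HttpStatus.OK);
    }
}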
Exploring the RPC Code
We will not examine the REST HTTP IPC code in this post. Instead, we will explore the RPC code. You are welcome to download the source code and explore the REST HTTP code pattern; it uses some advanced features of Spring Boot and Spring Data.
Spring Dependencies
In order to use RabbitMQ, we need to add a project dependency on org.springframework.boot:spring-boot-starter-amqp
. Below is a snippet from the Candidate service’s build.gradle
file, showing project dependencies. The Voter service’s dependencies are identical.
dependencies {
    compile group: 'org.springframework.boot', name: 'spring-boot-actuator-docs'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-actuator'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-amqp'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-data-mongodb'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-data-rest'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-hateoas'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-logging'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-web'
    compile group: 'org.webjars', name: 'hal-browser'
    testCompile group: 'org.springframework.boot', name: 'spring-boot-starter-test'
}
AMQP Configuration
Next, we need to add a small amount of RabbitMQ AMQP configuration to both services. We accomplish this by using Spring’s @Configuration
annotation on our configuration classes. Below is the configuration class for the Voter service.
package com.voterapi.voter.configuration;

import org.springframework.amqp.core.DirectExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VoterConfig {

    @Bean
    public DirectExchange directExchange() {
        return new DirectExchange("voter.rpc");
    }
}
And here is the configuration class for the Candidate service.
package com.voterapi.candidate.configuration;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CandidateConfig {

    @Bean
    public Queue queue() {
        return new Queue("voter.rpc.requests");
    }

    @Bean
    public DirectExchange exchange() {
        return new DirectExchange("voter.rpc");
    }

    @Bean
    public Binding binding(DirectExchange exchange, Queue queue) {
        return BindingBuilder.bind(queue).to(exchange).with("rpc");
    }
}
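Note that neither configuration class opens a connection to the broker. With spring-boot-starter-amqp on the classpath, Spring Boot auto-configures a ConnectionFactory and RabbitTemplate from properties such as spring.rabbitmq.host. If you preferred to configure the connection explicitly, a sketch might look like the following; the host name (assumed to be the RabbitMQ service name on the Docker network) and the default guest/guest credentials are assumptions, not the project’s actual configuration.

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConnectionConfigSketch {

    @Bean
    public ConnectionFactory connectionFactory() {
        // 'rabbitmq' is assumed to be the broker's host name on the Docker network
        return new CachingConnectionFactory("rabbitmq");
    }

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}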
Voter Service Code
With the dependencies and configuration in place, we define the method in the Voter service, which will request the candidates from the Candidate service, using RabbitMQ. Below is an abridged version of the Voter service’s CandidateListService
class, containing the getCandidatesMessageRpc
method. This method calls the rabbitTemplate.convertSendAndReceive
method (see below).
public List<CandidateVoterView> getCandidatesMessageRpc(String election) {
    logger.debug("Sending RPC request message for list of candidates…");

    String requestMessage = election;
    String candidates = (String) rabbitTemplate.convertSendAndReceive(
            directExchange.getName(), "rpc", requestMessage);

    TypeReference<Map<String, List<CandidateVoterView>>> mapType =
            new TypeReference<Map<String, List<CandidateVoterView>>>() {};
    ObjectMapper objectMapper = new ObjectMapper();
    Map<String, List<CandidateVoterView>> candidatesMap = null;
    try {
        candidatesMap = objectMapper.readValue(candidates, mapType);
    } catch (IOException e) {
        logger.info(String.valueOf(e));
    }
    List<CandidateVoterView> candidatesList = candidatesMap.get("candidates");

    logger.debug("List of {} candidates received…", candidatesList.size());
    return candidatesList;
}
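The abridged listing above uses rabbitTemplate and directExchange fields without showing how they are obtained. In a Spring-managed bean, they would typically be injected, for example through the constructor. Below is a sketch assuming constructor injection; the project may wire them differently.

import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class CandidateListService {

    private final RabbitTemplate rabbitTemplate;
    private final DirectExchange directExchange;

    // Spring injects the auto-configured RabbitTemplate and the DirectExchange
    // bean declared in the VoterConfig class shown earlier
    public CandidateListService(RabbitTemplate rabbitTemplate, DirectExchange directExchange) {
        this.rabbitTemplate = rabbitTemplate;
        this.directExchange = directExchange;
    }

    // getCandidatesMessageRpc(...), as shown above
}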
Candidate Service Code
Next, we define a method in the Candidate service, which will process the Voter service’s request. Below is an abridged version of the CandidateController
class, containing the getCandidatesMessageRpc
method. This method is decorated with Spring’s @RabbitListener
annotation (see line 1, below). This annotation marks the method as the target of a Rabbit message listener on the voter.rpc.requests
queue.
Also shown are the getCandidatesMessageRpc
method’s two helper methods, getByElection
and serializeToJson
. These methods query MongoDB for the list of candidates and serialize the list to JSON.
@RabbitListener(queues = "voter.rpc.requests")
private String getCandidatesMessageRpc(String requestMessage) {
    logger.debug("Request message: {}", requestMessage);
    logger.debug("Sending RPC response message with list of candidates…");

    List<CandidateVoterView> candidates = getByElection(requestMessage);
    return serializeToJson(candidates);
}

private List<CandidateVoterView> getByElection(String election) {
    Aggregation aggregation = Aggregation.newAggregation(
            Aggregation.match(Criteria.where("election").is(election)),
            project("firstName", "lastName", "politicalParty", "election")
                    .andExpression("concat(firstName,' ', lastName)")
                    .as("fullName"),
            sort(Sort.Direction.ASC, "lastName")
    );

    AggregationResults<CandidateVoterView> groupResults
            = mongoTemplate.aggregate(aggregation, Candidate.class, CandidateVoterView.class);

    return groupResults.getMappedResults();
}

private String serializeToJson(List<CandidateVoterView> candidates) {
    ObjectMapper mapper = new ObjectMapper();
    String jsonInString = "";

    final Map<String, List<CandidateVoterView>> dataMap = new HashMap<>();
    dataMap.put("candidates", candidates);

    try {
        jsonInString = mapper.writeValueAsString(dataMap);
    } catch (JsonProcessingException e) {
        logger.info(String.valueOf(e));
    }

    logger.debug(jsonInString);
    return jsonInString;
}
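As a design note, the hand-rolled JSON handling in both services (ObjectMapper, TypeReference, and serializeToJson) could likely be replaced by registering a Jackson message converter with Spring AMQP, which converts POJO payloads to and from JSON automatically. This is not what the Voter API project does; the following is only a sketch of the alternative.

import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JsonConverterConfigSketch {

    @Bean
    public MessageConverter jsonMessageConverter() {
        // With Spring Boot's RabbitMQ auto-configuration, a single MessageConverter
        // bean is typically applied to the RabbitTemplate and to @RabbitListener
        // containers, so request and reply payloads can be plain objects
        // rather than hand-built JSON strings
        return new Jackson2JsonMessageConverter();
    }
}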
Demonstration
To demonstrate both the synchronous REST HTTP IPC code and the Spring AMQP-based RPC IPC code, we will make a few REST HTTP calls to the Voter API Gateway. For convenience, I have provided a shell script, demostrate_ipc.sh
, which executes all the API calls necessary. I have added sleep commands to slow the terminal output a bit, for easier analysis. The script requires HTTPie, a great time saver when working with RESTful services.
The demostrate_ipc.sh
script does three things. First, it calls the Candidate service to generate a group of sample candidates. Next, the script calls the Voter service to simulate votes, using synchronous REST HTTP. Lastly, the script repeats the voter simulation, this time using message-based RPC IPC. All API calls are done through the Voter API Gateway on port 8080
. To understand the API calls, examine the script, below.
#!/bin/sh

# Demostrate API calls for REST HTTP IPC and RPC IPC via API Gateway
# Requires HTTPie
# Requires all services are running

set -e

HOST=${1:-localhost:8080}
API_GATEWAY="http://${HOST}"
ELECTION="2016%20Presidential%20Election"

echo "Simulating candidates…"
http ${API_GATEWAY}/candidate/simulation && sleep 2
http ${API_GATEWAY}/candidate/candidates/summary/${ELECTION} && sleep 2

echo "Simulating voting using REST HTTP IPC…"
http ${API_GATEWAY}/voter/simulation/http/${ELECTION} && sleep 2
http ${API_GATEWAY}/voter/results && sleep 4
http ${API_GATEWAY}/voter/winners && sleep 2

echo "Simulating voting using message-based RPC IPC…"
http ${API_GATEWAY}/voter/simulation/rpc/${ELECTION} && sleep 2
http ${API_GATEWAY}/voter/results && sleep 4
http ${API_GATEWAY}/voter/winners && sleep 2

echo "Script completed…"
Below is the list of candidates for the 2016 Presidential Election, generated by the Candidate service. The JSON payload was retrieved using the Voter service’s /voter/candidates/rpc/{election}
endpoint. This endpoint uses the same RPC IPC method as the Voter service’s /voter/simulation/rpc/{election}
endpoint.
HTTP/1.1 200
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type, Accept, X-Requested-With, remember-me
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE
Access-Control-Max-Age: 3600
Content-Type: application/json;charset=UTF-8
Date: Sun, 07 May 2017 19:10:22 GMT
Transfer-Encoding: chunked
X-Application-Context: Voter Service:docker-local:8099

{
  "candidates": [
    {
      "election": "2016 Presidential Election",
      "fullName": "Darrell Castle",
      "politicalParty": "Constitution Party"
    },
    {
      "election": "2016 Presidential Election",
      "fullName": "Hillary Clinton",
      "politicalParty": "Democratic Party"
    },
    {
      "election": "2016 Presidential Election",
      "fullName": "Gary Johnson",
      "politicalParty": "Libertarian Party"
    },
    {
      "election": "2016 Presidential Election",
      "fullName": "Chris Keniston",
      "politicalParty": "Veterans Party"
    },
    {
      "election": "2016 Presidential Election",
      "fullName": "Jill Stein",
      "politicalParty": "Green Party"
    },
    {
      "election": "2016 Presidential Election",
      "fullName": "Donald Trump",
      "politicalParty": "Republican Party"
    }
  ]
}
Based on the list of candidates, below are the simulated election results. This JSON payload was retrieved using the Voter service’s /voter/results
endpoint.
HTTP/1.1 200
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type, Accept, X-Requested-With, remember-me
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE
Access-Control-Max-Age: 3600
Content-Type: application/json;charset=UTF-8
Date: Sun, 07 May 2017 19:42:42 GMT
Transfer-Encoding: chunked
X-Application-Context: Voter Service:docker-local:8099

{
  "results": [
    {
      "candidate": "Jill Stein",
      "votes": 19
    },
    {
      "candidate": "Gary Johnson",
      "votes": 16
    },
    {
      "candidate": "Hillary Clinton",
      "votes": 12
    },
    {
      "candidate": "Donald Trump",
      "votes": 12
    },
    {
      "candidate": "Chris Keniston",
      "votes": 10
    },
    {
      "candidate": "Darrell Castle",
      "votes": 9
    }
  ]
}
RabbitMQ Management Console
The easiest way to observe what is happening with our messages is using the RabbitMQ Management Console. To access the console, point your web browser to localhost
, on port 15672
. The default login credentials for the console are guest/guest.
As you successfully send and receive messages between the services through RabbitMQ, you should see activity on the Overview tab. In addition, you should see a number of Connections, Channels, Exchanges, Queues, and Consumers.
In the Queues tab, you should find a single queue, the voter.rpc.requests
queue. This queue was configured in the Candidate service’s configuration class, shown previously.
In the Exchanges tab, you should see one exchange, voter.rpc
, which we configured in both the Voter and the Candidate service’s configuration classes (aka DirectExchange
). Also, visible in the Exchanges tab, should be the routing key rpc
, which we configured in the Candidate service’s configuration class (aka Binding
).
The route binds the exchange to the voter.rpc.requests
queue. If you recall Spring’s description, AMQP has exchanges (DirectExchange
), routes (Binding
), and queues (Queue
). Messages are first published to exchanges. Routes define on which queue(s) to pipe the message. Consumers subscribing to that queue then receive a copy of the message.
In the Channels tab, you should note two connections, the single instances of the Voter and Candidate services. Likewise, there are two channels, one for each service. You can differentiate the channels by the presence of the consumer tag. The consumer tag, in this example, amq.ctag-Anv7GXs7ZWVoznO64euyjQ
, uniquely identifies the consumer. In this example, the Voter service is the consumer. For a more complete explanation of the consumer tag, check out RabbitMQ’s AMQP documentation.
Message Structure
Messages cannot be viewed directly in the RabbitMQ Management Console. One way I have found to view messages is with your IDE’s debugger. Below, I have added a breakpoint on the Candidate service’s getCandidatesMessageRpc
method, using IntelliJ IDEA. You can view the Voter service’s request message, as it is received by the Candidate service.
Note the message payload, the requested election. Note the twelve message header elements. The headers include the AMQP exchange, queue, and binding. The message headers also include the consumer tag. The message also uniquely identifies the reply-to queue to use, if the server does not support Direct reply-to (see earlier explanation).
Service Logs
In addition to the RabbitMQ Management Console, we may observe communications between the two services by looking at the Voter and Candidate services’ logs. I have grabbed a snippet of both services’ logs and added a few comments to show where different processes are being executed. First, the Voter service logs.
# API request is made
2017-05-03 21:10:32.947 DEBUG 1 --- [nio-8099-exec-3] s.w.s.m.m.a.RequestMappingHandlerMapping : Looking up handler method for path /simulation/rpc/2016 Presidential Election
2017-05-03 21:10:32.962 DEBUG 1 --- [nio-8099-exec-3] s.w.s.m.m.a.RequestMappingHandlerMapping : Returning handler method [public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.String>> com.voter_api.voter.controller.VoterController.getSimulationRpc(java.lang.String)]
2017-05-03 21:10:32.967 DEBUG 1 --- [nio-8099-exec-3] o.s.b.f.s.DefaultListableBeanFactory : Returning cached instance of singleton bean 'voterController'
2017-05-03 21:10:32.969 DEBUG 1 --- [nio-8099-exec-3] o.s.web.servlet.DispatcherServlet : Last-Modified value for [/voter/simulation/rpc/2016%20Presidential%20Election] is: -1
# Clearing out previous MongoDB data
2017-05-03 21:10:32.977 DEBUG 1 --- [nio-8099-exec-3] o.s.data.mongodb.core.MongoDbUtils : Getting Mongo Database name=[voter]
2017-05-03 21:10:32.980 DEBUG 1 --- [nio-8099-exec-3] o.s.data.mongodb.core.MongoTemplate : Remove using query: { } in collection: vote.
2017-05-03 21:10:32.985 DEBUG 1 --- [nio-8099-exec-3] org.mongodb.driver.protocol.delete : Deleting documents from namespace voter.vote on connection [connectionId{localValue:2, serverValue:4}] to server mongodb:27017
2017-05-03 21:10:32.990 DEBUG 1 --- [nio-8099-exec-3] org.mongodb.driver.protocol.delete : Delete completed
# Publishing request message to queue
2017-05-03 21:10:32.999 DEBUG 1 --- [nio-8099-exec-3] o.s.amqp.rabbit.core.RabbitTemplate : Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@247be51c Shared Rabbit Connection: SimpleConnection@61797757 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 57908]
2017-05-03 21:10:33.018 DEBUG 1 --- [nio-8099-exec-3] o.s.amqp.rabbit.core.RabbitTemplate : Publishing message on exchange [voter.rpc], routingKey = [rpc]
# Receiving response
2017-05-03 21:10:33.109 DEBUG 1 --- [nio-8099-exec-3] o.s.amqp.rabbit.core.RabbitTemplate : Reply: (Body:'[{"fullName":"Darrell Castle","politicalParty":"Constitution Party","election":"2016 Presidential Election"},{"fullName":"Hillary Clinton","politicalParty":"Democratic Party","election":"2016 Presidential Election"},{"fullName":"Gary Johnson","politicalParty":"Libertarian Party","election":"2016 Presidential Election"},{"fullName":"Chris Keniston","politicalParty":"Veterans Party","election":"2016 Presidential Election"},{"fullName":"Jill Stein","politicalParty":"Green Party","election":"2016 Presidential Election"},{"fullName":"Donald Trump","politicalParty":"Republican Party","election":"2016 Presidential Election"}]' MessageProperties [headers={}, timestamp=null, messageId=null, userId=null, receivedUserId=null, appId=null, clusterId=null, type=null, correlationId=null, correlationIdString=null, replyTo=null, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, deliveryMode=null, receivedDeliveryMode=PERSISTENT, expiration=null, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=amq.rabbitmq.reply-to.g2dkAA9yYWJiaXRAcmFiYml0bXEAAAH3AAAAAAI=.GREaYm1ow+4nMWzSClXlfQ==, receivedDelay=null, deliveryTag=1, messageCount=null, consumerTag=null, consumerQueue=null])
# Inserting simulation data into MongoDB
2017-05-03 21:10:33.154 DEBUG 1 --- [nio-8099-exec-3] o.s.data.mongodb.core.MongoTemplate : Inserting list of DBObjects containing 34 items
2017-05-03 21:10:33.154 DEBUG 1 --- [nio-8099-exec-3] o.s.data.mongodb.core.MongoDbUtils : Getting Mongo Database name=[voter]
2017-05-03 21:10:33.157 DEBUG 1 --- [nio-8099-exec-3] org.mongodb.driver.protocol.insert : Inserting 34 documents into namespace voter.vote on connection [connectionId{localValue:2, serverValue:4}] to server mongodb:27017
2017-05-03 21:10:33.169 DEBUG 1 --- [nio-8099-exec-3] org.mongodb.driver.protocol.insert : Insert completed
# Sending response to API call
2017-05-03 21:10:33.180 DEBUG 1 --- [nio-8099-exec-3] o.s.b.f.s.DefaultListableBeanFactory : Returning cached instance of singleton bean 'org.springframework.boot.actuate.autoconfigure.EndpointWebMvcHypermediaManagementContextConfiguration$ActuatorEndpointLinksAdvice'
2017-05-03 21:10:33.182 DEBUG 1 --- [nio-8099-exec-3] o.s.w.s.m.m.a.HttpEntityMethodProcessor : Written [{message=Simulation data created using RPC!}] as "application/json" using [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter@387a8303]
2017-05-03 21:10:33.185 DEBUG 1 --- [nio-8099-exec-3] o.s.web.servlet.DispatcherServlet : Null ModelAndView returned to DispatcherServlet with name 'dispatcherServlet': assuming HandlerAdapter completed request handling
2017-05-03 21:10:33.186 DEBUG 1 --- [nio-8099-exec-3] o.s.web.servlet.DispatcherServlet : Successfully completed request
Next, the Candidate service logs.
# Listening for messages
2017-05-03 21:10:30.000 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [deleg
2017-05-03 21:10:31.001 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
2017-05-03 21:10:32.003 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
2017-05-03 21:10:33.005 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
# Retrieving message
2017-05-03 21:10:33.044 DEBUG 1 --- [pool-1-thread-5] o.s.a.r.listener.BlockingQueueConsumer : Storing delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
2017-05-03 21:10:33.049 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Received message: (Body:'2016 Presidential Election' MessageProperties [headers={}, timestamp=null, messageId=null, userId=null, receivedUserId=null, appId=null, clusterId=null, type=null, correlationId=null, correlationIdString=null, replyTo=amq.rabbitmq.reply-to.g2dkAA9yYWJiaXRAcmFiYml0bXEAAAH3AAAAAAI=.GREaYm1ow+4nMWzSClXlfQ==, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, deliveryMode=null, receivedDeliveryMode=PERSISTENT, expiration=null, priority=0, redelivered=false, receivedExchange=voter.rpc, receivedRoutingKey=rpc, receivedDelay=null, deliveryTag=14, messageCount=0, consumerTag=amq.ctag-Anv7GXs7ZWVoznO64euyjQ, consumerQueue=voter.rpc.requests])
2017-05-03 21:10:33.054 DEBUG 1 --- [cTaskExecutor-1] .a.r.l.a.MessagingMessageListenerAdapter : Processing [GenericMessage [payload=2016 Presidential Election, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedRoutingKey=rpc, amqp_contentEncoding=UTF-8, amqp_receivedExchange=voter.rpc, amqp_deliveryTag=14, amqp_replyTo=amq.rabbitmq.reply-to.g2dkAA9yYWJiaXRAcmFiYml0bXEAAAH3AAAAAAI=.GREaYm1ow+4nMWzSClXlfQ==, amqp_consumerQueue=voter.rpc.requests, amqp_redelivered=false, id=bbd84286-fae6-36e2-f5e8-d8d9714cde6c, amqp_consumerTag=amq.ctag-Anv7GXs7ZWVoznO64euyjQ, contentType=text/plain, timestamp=1493845833053}]]
2017-05-03 21:10:33.057 DEBUG 1 --- [cTaskExecutor-1] c.v.c.controller.CandidateController : Request message: 2016 Presidential Election
# Querying MongoDB for candidates
2017-05-03 21:10:33.063 DEBUG 1 --- [cTaskExecutor-1] o.s.data.mongodb.core.MongoTemplate : Executing aggregation: { "aggregate" : "candidate" , "pipeline" : [ { "$match" : { "election" : "2016 Presidential Election"}} , { "$project" : { "firstName" : 1 , "lastName" : 1 , "politicalParty" : 1 , "election" : 1 , "fullName" : { "$concat" : [ "$firstName" , " " , "$lastName"]}}} , { "$sort" : { "lastName" : 1}}]}
2017-05-03 21:10:33.063 DEBUG 1 --- [cTaskExecutor-1] o.s.data.mongodb.core.MongoDbUtils : Getting Mongo Database name=[candidates]
2017-05-03 21:10:33.064 DEBUG 1 --- [cTaskExecutor-1] org.mongodb.driver.protocol.command : Sending command {aggregate : BsonString{value='candidate'}} to database candidates on connection [connectionId{localValue:2, serverValue:3}] to server mongodb:27017
2017-05-03 21:10:33.067 DEBUG 1 --- [cTaskExecutor-1] org.mongodb.driver.protocol.command : Command execution completed
# Responding to queue with results
2017-05-03 21:10:33.094 DEBUG 1 --- [cTaskExecutor-1] .a.r.l.a.MessagingMessageListenerAdapter : Listener method returned result [[{"fullName":"Darrell Castle","politicalParty":"Constitution Party","election":"2016 Presidential Election"},{"fullName":"Hillary Clinton","politicalParty":"Democratic Party","election":"2016 Presidential Election"},{"fullName":"Gary Johnson","politicalParty":"Libertarian Party","election":"2016 Presidential Election"},{"fullName":"Chris Keniston","politicalParty":"Veterans Party","election":"2016 Presidential Election"},{"fullName":"Jill Stein","politicalParty":"Green Party","election":"2016 Presidential Election"},{"fullName":"Donald Trump","politicalParty":"Republican Party","election":"2016 Presidential Election"}]] - generating response message for it
2017-05-03 21:10:33.096 DEBUG 1 --- [cTaskExecutor-1] .a.r.l.a.MessagingMessageListenerAdapter : Publishing response to exchange = [], routingKey = [amq.rabbitmq.reply-to.g2dkAA9yYWJiaXRAcmFiYml0bXEAAAH3AAAAAAI=.GREaYm1ow+4nMWzSClXlfQ==]
# Returning to listening for messages
2017-05-03 21:10:33.123 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
2017-05-03 21:10:34.125 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
2017-05-03 21:10:35.126 DEBUG 1 --- [cTaskExecutor-1] o.s.a.r.listener.BlockingQueueConsumer : Retrieving delivery for Consumer@662706a7: tags=[{amq.ctag-Anv7GXs7ZWVoznO64euyjQ=voter.rpc.requests}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@172.20.0.2:5672/,1), conn: Proxy@6f92666f Shared Rabbit Connection: SimpleConnection@6badffc2 [delegate=amqp://guest@172.20.0.2:5672/, localPort= 33932], acknowledgeMode=AUTO local queue size=0
Performance
What about the performance of Spring AMQP RPC IPC versus REST HTTP IPC? RabbitMQ has proven to be very performant, having been clocked at one million messages per second on GCE. I performed a series of fairly ‘unscientific’ performance tests, completing 250, 500, and then 1,000 requests. The tests were performed on a six-node Docker Swarm cluster with three instances of each service in a round-robin load-balanced configuration, and a single instance of RabbitMQ. The scripts to create the swarm cluster can be found in the Voter service GitHub project.
Based on consistent test results, the speed of the two methods was almost identical. Both methods performed at between 3.1 and 3.2 responses per second. For example, the Spring AMQP RPC IPC method successfully completed 1,000 requests in 5 minutes and 11 seconds, while the REST HTTP IPC method successfully completed 1,000 requests in 5 minutes and 18 seconds, 7 seconds slower than the RPC method.
There are many variables to consider which could dramatically impact IPC performance. For example, RabbitMQ was not clustered. Also, we did not use any type of caching, such as Varnish, Memcached, or Redis. Both of these could dramatically increase IPC performance.
There are also several notable differences between the two methods from a code perspective. The REST HTTP method relies on Spring Data Projection combined with Spring Data MongoDB Repository, to obtain the candidate list from MongoDB. Somewhat differently, the RPC method makes use of Spring Data MongoDB Aggregation to return a list of candidates. Therefore, the test results should be taken with a grain of salt.
Production Considerations
The post demonstrated a simple example of RPC communications between two services using Spring AMQP. In an actual Production environment, there are a few things that must be considered, as Pivotal points out:
- How should either service react on startup if RabbitMQ is not available? What if RabbitMQ fails after the services have started?
- How should the Voter service (the client) react if there are no Candidate service instances (the server) running?
- Should the Voter service have a timeout for the RPC response to return (see the sketch after this list)? What should happen if the request times out?
- If the Candidate service malfunctions and raises an exception, should it be forwarded to the Voter service?
- How does the Voter service protect against invalid incoming messages (e.g., checking the bounds of the candidate list) before processing?
- In all of the above scenarios, what, if any, response is returned to the API end user?
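To take the timeout question as one concrete example, RabbitTemplate has a configurable reply timeout; when it expires, convertSendAndReceive returns null, and the client must decide what to do. Below is a minimal sketch, assuming an arbitrary five-second timeout rather than the project’s actual settings.

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RpcTimeoutConfigSketch {

    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        // If no reply arrives within five seconds, convertSendAndReceive returns null
        template.setReplyTimeout(5000);
        return template;
    }
}

The Voter service would then need to check for a null reply and translate it into a meaningful error for the API consumer.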
Conclusion
Although in this post we did not achieve asynchronous inter-process communications, we did achieve a higher level of service decoupling, using message-based RPC IPC. Adopting a message-based, loosely-coupled architecture, whether asynchronous or synchronous, wherever possible, will improve the overall functionality and deliverability of a microservices-based platform.
References
- RabbitMQ: Understanding Message Broker
- Remote Procedure Call (RPC) Using the Spring AMQP Client
- Building Microservices: Inter-Process Communication in a Microservices Architecture
- Spring: Understanding AMQP
- AMQP 0-9-1 Complete Reference Guide
All opinions in this post are my own and not necessarily the views of my current employer or their clients.
Calling Third-Party HTTP-based RESTful APIs from the MEAN Stack
Posted by Gary A. Stafford in Client-Side Development, Software Development on January 6, 2015
Example of calling Google’s Custom Search HTTP-based RESTful API, using Node.js with Express and Request, from a MEAN.io-generated MEAN stack application.
Introduction
Most MEAN stack articles and tutorials demonstrate how AngularJS, on the client-side, calls Node.js with Express on the server-side, via an HTTP-based RESTful API. In turn, on the server-side, Node.js with Express, and often an ODM like Mongoose, calls MongoDB. Below is a simple, high-level sequence diagram of a typical MEAN stack request/response data flow from the client to the server to the database, and back.
However, in many situations, applications don’t only call into their own application stack. Applications often call third-party HTTP-based RESTful APIs, including social networks, cloud providers, e-commerce, and news aggregators. Twitter’s REST API and Facebook’s Graph API are two popular social network examples. Within larger enterprise environments, applications call multiple internal applications. For example, an online retailer’s storefront application accesses their own inventory control system via RESTful URIs. This is the same RESTful API the retailer’s authorized resellers use to interact with the retailer’s inventory control system.
Calling APIs from the MEAN Stack
From the Client-Side
There are two ways to call third-party HTTP-based APIs from a MEAN stack application. The first approach is calling directly from the client-side: AngularJS calls the third-party API directly. All logic is on the client-side, instead of on the server-side. Node.js and Express are not involved in the process. This approach requires fewer moving parts than the next approach, but is less secure and places more demand on the client to handle the application’s business logic. Below is a simple, high-level sequence diagram demonstrating a request/response data flow from AngularJS on the client-side to a third-party API, and back.
From the Server-Side
The second approach, using Node.js and Express on the server-side, is slightly more complex. However, this approach is also more architecturally sound, scalable, secure, and performant. AngularJS, on the client-side, calls Node.js with Express, on the server-side. Node.js with Express then calls the service and passes the response back to the client-side, to AngularJS. Below is a simple, high-level sequence diagram demonstrating a request/response data flow from the client-side to the server-side, to a third-party API, and back.
Example
MEAN.io
Using the MEAN.io ‘FullStack JS Development’ framework, I have created a basic example of calling Google’s Custom Search HTTP-based RESTful API from Node.js with Express and Request. MEAN.io provides a ready-made MEAN stack boilerplate framework/generator, saving a lot of coding time. Regardless of the generator or framework you choose, you would architect this example the same way.
Google Custom Search API
Google provides the Custom Search API as part of Custom Search, one of many APIs available through the Google Developers portal. According to Google, “the JSON/Atom Custom Search API lets you develop websites and applications to retrieve and display search results from Google Custom Search programmatically. With this API, you can use RESTful requests to get either web search or image search results in JSON or Atom format.”
In order to use the Custom Search API, you will first need to create a Google account, API project, API key, Custom Search Engine (CSE), and CSE ID, through Google’s Developers Console. If you have previously worked with Google, Facebook, or Twitter APIs, creating an API project, CSE, API key, and CSE ID is very similar.
Like most of Google’s APIs, the Custom Search API pricing and quotas depend on the engine’s edition. You have a choice of two engines. According to Google, the free Custom Search Engine provides 100 search queries per day for free. If you need more, you may sign up for billing in the Developers Console. Additional requests cost $5 per 1000 queries, up to 10k queries per day. The limit of 100 is more than enough for this demonstration.
Installing and Configuring the Project
All the code for this project is available on GitHub at /meanio-custom-search. Before continuing, make sure you have the prerequisite software installed – Git, Node.js with npm, and MongoDB. To install the GitHub project, follow these commands:
git clone https://github.com/garystafford/meanio-custom-search.git
cd meanio-custom-search
npm install
Alternatively, if you want to code the project yourself, these are the commands I used to set up the base MEAN.io framework and create the ‘search’ package:
sudo npm install -g mean-cli
mean init meanio-custom-search
cd meanio-custom-search
npm install
mean package search
After creating your own CSE ID and API key, create two environment variables, GOOGLE_CSE_ID
and GOOGLE_API_KEY
, to hold the values.
echo "export GOOGLE_API_KEY=<YOUR_API_KEY_HERE>" >> ~/.bashrc echo "export GOOGLE_CSE_ID=<YOUR_CSE_ID_HERE>" >> ~/.bashrc
The code is run from a terminal prompt with the grunt
command. Then, in the browser, go to http://localhost:3000
. Once on the main home page, you can navigate to the ‘Search Example’ page and input a search term, such as ‘MEAN Stack’. All the instructions on the MEAN.io GitHub site apply to this project.
The Project’s Architecture
According to MEAN.io, everything in mean.io is a ‘package’. When extending mean with custom functionality, you create a new ‘package’. In this case, I have created a ‘search’ package, with the command above, ‘mean package search
‘. Below is the basic file structure of the ‘search
‘ package, within the overall MEAN.io project framework. The ‘public
‘ folder contains all the client-side, AngularJS code. The ‘server
‘ folder contains all the server-side, Node.js/Express/Request code. Note that each ‘package’ also has its own ‘package.json
‘ npm file and ‘bower.json
‘ Bower file.
The simple, high-level sequence diagram below shows the flow of the custom search request from the ‘Search Example’ view to the Google Custom Search API. The diagram also shows the response from the Google Custom Search API all the way back up the MEAN stack to the client-side view.
Client-Side Request/Response
If you view the network traffic in your web browser, you will see a RESTful URI call is made between AngularJS’ service factory, on the client-side, and Node.js with Express, on the server-side. The RESTful endpoint, called with $http.jsonp()
, will be similar to: http://localhost:3000/customsearch/MEAN.io/10?callback=angular.callbacks._0
. In actuality, the callback parameter name used by the AngularJS service factory is ‘JSON_CALLBACK
‘. This is replaced by AngularJS with an incremented ‘angular.callbacks._X
‘ parameter name, making the response callback name incremental and unique.
The response returned to AngularJS from Node.js is a subset of the full response from Google’s Custom Search API. Only the search result items and a ‘200’ status code are returned to AngularJS as JSONP (JavaScript wrapped in a callback). Below is a sample response, truncated to just a single search result. I have highlighted the four fields that are displayed in the ‘Search Example’ view, using AngularJS’ ng-repeat
directive.
/**/
typeof angular.callbacks._0 === 'function' && angular.callbacks._0({
  "statusCode": 200,
  "items": [{
    "kind": "customsearch#result",
    "title": "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack ...",
    "htmlTitle": "<b>MEAN</b>.<b>IO</b> - MongoDB, Express, Angularjs Node.js powered fullstack <b>...</b>",
    "link": "http://mean.io/",
    "displayLink": "mean.io",
    "snippet": "MEAN - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.",
    "htmlSnippet": "<b>MEAN</b> - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.",
    "cacheId": "_CZQNNP6VMEJ",
    "formattedUrl": "mean.io/",
    "htmlFormattedUrl": "<b>mean</b>.<b>io</b>/",
    "pagemap": {
      "cse_image": [{"src": "http://i.ytimg.com/vi/oUtWtSF_VNY/hqdefault.jpg"}],
      "cse_thumbnail": [{
        "width": "259",
        "height": "194",
        "src": "https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSIVwPo7OcW9u_b3P3DGxv8M7rKifGZITi1Bhmpy10_I2tlUqjRUVVUBKNG"
      }],
      "metatags": [{
        "viewport": "width=1024",
        "fb:app_id": "APP_ID",
        "og:title": "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework - MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework",
        "og:description": "MEAN MongoDB, ExpressJS, AngularJS, NodeJS.",
        "og:type": "website",
        "og:url": "APP_URL",
        "og:image": "APP_LOGO",
        "og:site_name": "MEAN.IO",
        "fb:admins": "APP_ADMIN"
      }]
    }
  }]
});
Server-Side Request/Response
On the server-side, Node.js with Express and Request, calls the Google Custom Search API via a RESTful URI. The RESTful URI, called with request.get()
, will be similar to: https://www.googleapis.com/customsearch/v1?cx=ed026i714398r53510g2ja1ru6741h:73&q=MEAN.io&num=10&key=jtHeNjIAtSa1NaWJzmVvBC7qoubrRSyIAmVJjpQu
. Note the URI contains both your CSE ID and API key (not my real ones, of course). The JSON response from Google’s Custom Search API contains other data, which is not necessary to display the results.
Shown below is a sample response with a single search result. Like the URI above, the response from Google has your Custom Search Engine ID. Your CSE ID and API key should both be considered confidential and not visible to the client. The CSE ID could be easily intercepted in both the URI and the response object, and used without your authorization. Google has a page that suggests methods to keep your keys secure.
{ kind: "customsearch#search", url: { type: "application/json", template: "https://www.googleapis.com/customsearch/v1?q={searchTerms}&num={count?}&start={startIndex?}&lr={language?}&safe={safe?}&cx={cx?}&cref={cref?}&sort={sort?}&filter={filter?}&gl={gl?}&cr={cr?}&googlehost={googleHost?}&c2coff={disableCnTwTranslation?}&hq={hq?}&hl={hl?}&siteSearch={siteSearch?}&siteSearchFilter={siteSearchFilter?}&exactTerms={exactTerms?}&excludeTerms={excludeTerms?}&linkSite={linkSite?}&orTerms={orTerms?}&relatedSite={relatedSite?}&dateRestrict={dateRestrict?}&lowRange={lowRange?}&highRange={highRange?}&searchType={searchType}&fileType={fileType?}&rights={rights?}&imgSize={imgSize?}&imgType={imgType?}&imgColorType={imgColorType?}&imgDominantColor={imgDominantColor?}&alt=json" }, queries: { nextPage: [ { title: "Google Custom Search - MEAN.io", totalResults: "12100000", searchTerms: "MEAN.io", count: 10, startIndex: 11, inputEncoding: "utf8", outputEncoding: "utf8", safe: "off", cx: "ed026i714398r53510g2ja1ru6741h:73" } ], request: [ { title: "Google Custom Search - MEAN.io", totalResults: "12100000", searchTerms: "MEAN.io", count: 10, startIndex: 1, inputEncoding: "utf8", outputEncoding: "utf8", safe: "off", cx: "ed026i714398r53510g2ja1ru6741h:73" } ] }, context: { title: "my_search_engine" }, searchInformation: { searchTime: 0.237431, formattedSearchTime: "0.24", totalResults: "12100000", formattedTotalResults: "12,100,000" }, items: [ { kind: "customsearch#result", title: "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack ...", htmlTitle: "<b>MEAN</b>.<b>IO</b> - MongoDB, Express, Angularjs Node.js powered fullstack <b>...</b>", link: "http://mean.io/", displayLink: "mean.io", snippet: "MEAN - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.", htmlSnippet: "<b>MEAN</b> - MongoDB, ExpressJS, AngularJS, NodeJS. based fullstack js framework.", cacheId: "_CZQNNP6VMEJ", formattedUrl: "mean.io/", htmlFormattedUrl: "<b>mean</b>.<b>io</b>/", pagemap: { cse_image: [ { src: "http://i.ytimg.com/vi/oUtWtSF_VNY/mqdefault.jpg" } ], cse_thumbnail: [ { width: "256", height: "144", src: "https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTXm3rYwGdWs9Cx3s5VvooATKlgtrVZoP83hxfAOjGvsRMqLpMKuycVl_sF" } ], metatags: [ { viewport: "width=1024", fb:app_id: "APP_ID", og:title: "MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework - MEAN.IO - MongoDB, Express, Angularjs Node.js powered fullstack web framework", og:description: "MEAN MongoDB, ExpressJS, AngularJS, NodeJS.", og:type: "website", og:url: "APP_URL", og:image: "APP_LOGO", og:site_name: "MEAN.IO", fb:admins: "APP_ADMIN" } ] } } ] }
The best way to understand the project's sample code is to clone the GitHub repo and explore the files directly associated with the search, starting in the 'packages/custom/search' subdirectory.
Helpful Links
Learn REST: A RESTful Tutorial
Using an AngularJS Factory to Interact with a RESTful Service
Google APIs Client Library for JavaScript (Beta)
REST-ful URI design
Creating a CRUD App in Minutes with Angular’s $resource
Using the WCF Web HTTP Programming Model with Entity Framework 5
Posted by Gary A. Stafford in .NET Development, Client-Side Development, Software Development on December 12, 2012
Build an IIS-hosted WCF Service using the WCF Web HTTP Programming Model. Use basic HTTP Methods with the WCF Service to perform CRUD operations on a SQL Server database, using a Data Access Layer built with Entity Framework 5 and the Database First Development Model.
You can download a complete copy of this Post’s source code from DropBox.
Introduction
In the two previous Posts, we used the new Entity Framework 5 to create a Data Access Layer, using both the Code First and Database First Development Models. In this Post, we will create a Windows Communication Foundation (WCF) Service. The service will sit between the client application and our previous Post’s Data Access Layer (DAL), built with an ADO.NET Entity Data Model (EDM). Using the WCF Web HTTP Programming Model, we will expose the WCF Service’s operations to a non-SOAP endpoint, and call them using HTTP Methods.
Why use the WCF Web HTTP Programming Model? WCF is a well-established, reliable, secure, enterprise technology. Many organizations, large and small, use WCF to build service-oriented applications. However, as communications become increasingly Internet-enabled and mobile, the WCF Web HTTP Programming Model allows us to add simple HTTP methods, such as POST, GET, DELETE, and PUT, to existing WCF services. Adding a web endpoint to an existing WCF service extends its reach to modern end-user platforms with minimal effort. Lastly, using the WCF Web HTTP Programming Model allows us to move toward the increasingly popular RESTful Web Service Model, which many organizations are finally starting to embrace in the enterprise.
Creating the WCF Service
The major steps involved in this example are as follows:
- Create a new WCF Service Application Project;
- Add the Entity Framework package via NuGet;
- Add a Reference to the previous Post's DAL project;
- Add a Connection String to the project’s configuration;
- Create the WCF Service Contract;
- Create the operations the service will expose via a web endpoint;
- Configure the service’s behaviors, binding, and web endpoint;
- Publish the WCF Service to IIS using VS2012’s Web Project Publishing Tool;
- Test the service’s operations with Fiddler.
The WCF Service Application Project
Start by creating a new Visual Studio 2012 WCF Service Application Project, named ‘HealthTracker.WcfService’. Add it to a new Solution, named ‘HealthTracker’. The WCF Service Application Project type is specifically designed to be hosted by Microsoft’s Internet Information Services (IIS).
Once the Project and Solution are created, install Entity Framework into the Solution by right-clicking on the Solution, selecting 'Manage NuGet Packages for Solution…', and installing the 'EntityFramework' package. If you haven't discovered the power of NuGet for Visual Studio, check out their site.
Next, add a Reference in the new Project, to the previous ‘HealthTracker.DataAccess.DbFirst’ Project. When the WCF Service Application Project is built, a copy of the ‘HealthTracker.DataAccess.DbFirst.dll’ assembly will be placed into the ‘bin’ folder of the ‘HealthTracker.WcfService’ Project.
Next, copy the connection string from the previous project's 'App.config' file and paste it into the new WCF Service Application Project's 'Web.config' file. The connection string is required by the 'HealthTracker.DataAccess.DbFirst.dll' assembly. It should look similar to the code below.
<connectionStrings>
  <add name="HealthTrackerEntities"
       connectionString="metadata=res://*/HealthTracker.csdl|res://*/HealthTracker.ssdl|res://*/HealthTracker.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=[Your_Server]\[Your_SQL_Instance];initial catalog=HealthTracker;persist security info=True;user id=DemoLogin;password=[Your_Password];MultipleActiveResultSets=True;App=EntityFramework&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
The WCF Service
Delete the default ‘Service.svc’ and ‘IService.cs’ created by the Project Template. You can also delete the default ‘App_Data’ folder. Add a new WCF Service, named ‘HealthTrackerWcfService.svc’. Adding a new service creates both the WCF Service file (.svc), as well as a WCF Service Contract file (.cs), an Interface, named ‘IHealthTrackerWcfService.cs’. The ‘HealthTrackerWcfService’ class implements the ‘IHealthTrackerWcfService’ Interface class (‘public class HealthTrackerWcfService : IHealthTrackerWcfService’).
The WCF Service file contains public methods, called service operations, which the service will expose through a web endpoint. The second file, an Interface class, is referred to as the Service Contract. The Service Contract contains the method signatures of all the operations the service's web endpoint exposes. The Service Contract also contains attributes, part of the 'System.ServiceModel' and 'System.ServiceModel.Web' Namespaces, describing how the service and its operations will be exposed. To create the Service Contract, replace the default code in the file 'IHealthTrackerWcfService.cs' with the following code.
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;
using HealthTracker.DataAccess.DbFirst;

namespace HealthTracker.WcfService
{
    [ServiceContract]
    public interface IHealthTrackerWcfService
    {
        [OperationContract]
        [WebInvoke(UriTemplate = "GetPersonId?name={personName}", Method = "GET")]
        int GetPersonId(string personName);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetPeople", Method = "GET")]
        List<Person> GetPeople();

        [OperationContract]
        [WebInvoke(UriTemplate = "GetPersonSummaryStoredProc?id={personId}", Method = "GET")]
        List<GetPersonSummary_Result> GetPersonSummaryStoredProc(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "InsertPerson", Method = "POST")]
        bool InsertPerson(Person person);

        [OperationContract]
        [WebInvoke(UriTemplate = "UpdatePerson", Method = "PUT")]
        bool UpdatePerson(Person person);

        [OperationContract]
        [WebInvoke(UriTemplate = "DeletePerson?id={personId}", Method = "DELETE")]
        bool DeletePerson(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "UpdateOrInsertHydration?id={personId}", Method = "POST")]
        bool UpdateOrInsertHydration(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "InsertActivity", Method = "POST")]
        bool InsertActivity(Activity activity);

        [OperationContract]
        [WebInvoke(UriTemplate = "DeleteActivity?id={activityId}", Method = "DELETE")]
        bool DeleteActivity(int activityId);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetActivities?id={personId}", Method = "GET")]
        List<ActivityDetail> GetActivities(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "InsertMeal", Method = "POST")]
        bool InsertMeal(Meal meal);

        [OperationContract]
        [WebInvoke(UriTemplate = "DeleteMeal?id={mealId}", Method = "DELETE")]
        bool DeleteMeal(int mealId);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetMeals?id={personId}", Method = "GET")]
        List<MealDetail> GetMeals(int personId);

        [OperationContract]
        [WebInvoke(UriTemplate = "GetPersonSummaryView?id={personId}", Method = "GET")]
        List<PersonSummaryView> GetPersonSummaryView(int personId);
    }
}
The service's operations use a variety of HTTP Methods, including GET, POST, PUT, and DELETE. The operations accept a mix of primitive data types and complex objects as arguments, and return the same variety of simple data types and complex objects. Note the 'InsertActivity' operation, for example: it takes a complex object, an 'Activity', as an argument, and returns a Boolean. All the CRUD operations that insert, update, or delete data return a Boolean indicating the success or failure of the operation's execution. This makes unit testing and error handling on the client-side easier.
Next, we will create the WCF Service. Replace the existing contents of the ‘HealthTrackerWcfService.svc’ file with the following code.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.ServiceModel;
using HealthTracker.DataAccess.DbFirst;

namespace HealthTracker.WcfService
{
    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
    public class HealthTrackerWcfService : IHealthTrackerWcfService
    {
        private readonly DateTime _today = DateTime.Now.Date;

        #region Service Operations

        /// <summary>
        /// Example of Adding a new Person.
        /// </summary>
        /// <param name="person">New Person Object</param>
        /// <returns>True if successful</returns>
        public bool InsertPerson(Person person)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    dbContext.People.Add(new DataAccess.DbFirst.Person { Name = person.Name });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of Updating a Person.
        /// </summary>
        /// <param name="person">New Person Object</param>
        /// <returns>True if successful</returns>
        public bool UpdatePerson(Person person)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    // FirstOrDefault returns null when no match exists, so the check below is meaningful.
                    var personToUpdate = dbContext.People.FirstOrDefault(p => p.PersonId == person.PersonId);
                    if (personToUpdate == null) return false;
                    personToUpdate.Name = person.Name;
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of deleting a Person.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>True if successful</returns>
        public bool DeletePerson(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personToDelete = dbContext.People.FirstOrDefault(p => p.PersonId == personId);
                    if (personToDelete == null) return false;
                    dbContext.People.Remove(personToDelete);
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of finding a Person's Id.
        /// </summary>
        /// <param name="personName">Name of the Person to find</param>
        /// <returns>Person's unique Id (PersonId)</returns>
        public int GetPersonId(string personName)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personId = dbContext.People
                        .Where(person => person.Name == personName)
                        .Select(person => person.PersonId)
                        .First();
                    return personId;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return -1;
            }
        }

        /// <summary>
        /// Returns a list of all People.
        /// </summary>
        /// <returns>List of People</returns>
        public List<Person> GetPeople()
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var people = dbContext.People.Select(p => p);
                    var peopleList = people.Select(p => new Person
                    {
                        PersonId = p.PersonId,
                        Name = p.Name
                    }).ToList();
                    return peopleList;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Example of adding a Meal.
        /// </summary>
        /// <param name="meal">New Meal Object</param>
        /// <returns>True if successful</returns>
        public bool InsertMeal(Meal meal)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    dbContext.Meals.Add(new DataAccess.DbFirst.Meal
                    {
                        PersonId = meal.PersonId,
                        Date = _today,
                        MealTypeId = meal.MealTypeId,
                        Description = meal.Description
                    });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of deleting a Meal.
        /// </summary>
        /// <param name="mealId">MealId</param>
        /// <returns>True if successful</returns>
        public bool DeleteMeal(int mealId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    // Locate the Meal to delete by its MealId.
                    var mealToDelete = dbContext.Meals.FirstOrDefault(m => m.MealId == mealId);
                    if (mealToDelete == null) return false;
                    dbContext.Meals.Remove(mealToDelete);
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Return all Meals for a Person.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>List of Meals</returns>
        public List<MealDetail> GetMeals(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var meals = dbContext.Meals.Where(m => m.PersonId == personId)
                        .Select(m => new MealDetail
                        {
                            MealId = m.MealId,
                            Date = m.Date,
                            Type = m.MealType.Description,
                            Description = m.Description
                        }).ToList();
                    return meals;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Example of adding an Activity.
        /// </summary>
        /// <param name="activity">New Activity Object</param>
        /// <returns>True if successful</returns>
        public bool InsertActivity(Activity activity)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    dbContext.Activities.Add(new DataAccess.DbFirst.Activity
                    {
                        PersonId = activity.PersonId,
                        Date = _today,
                        ActivityTypeId = activity.ActivityTypeId,
                        Notes = activity.Notes
                    });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Example of deleting an Activity.
        /// </summary>
        /// <param name="activityId">ActivityId</param>
        /// <returns>True if successful</returns>
        public bool DeleteActivity(int activityId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var activityToDelete = dbContext.Activities.FirstOrDefault(a => a.ActivityId == activityId);
                    if (activityToDelete == null) return false;
                    dbContext.Activities.Remove(activityToDelete);
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Return all Activities for a Person.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>List of Activities</returns>
        public List<ActivityDetail> GetActivities(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var activities = dbContext.Activities.Where(a => a.PersonId == personId)
                        .Select(a => new ActivityDetail
                        {
                            ActivityId = a.ActivityId,
                            Date = a.Date,
                            Type = a.ActivityType.Description,
                            Notes = a.Notes
                        }).ToList();
                    return activities;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Example of updating an existing Hydration count,
        /// or adding a new Hydration if one doesn't exist for today.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>True if successful</returns>
        public bool UpdateOrInsertHydration(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    // FirstOrDefault returns null when no Hydration exists yet for this Person today,
                    // which allows the insert path below to execute.
                    var existingHydration = dbContext.Hydrations.FirstOrDefault(
                        hydration => hydration.PersonId == personId && hydration.Date == _today);
                    if (existingHydration != null && existingHydration.HydrationId > 0)
                    {
                        existingHydration.Count++;
                        dbContext.SaveChanges();
                        return true;
                    }
                    dbContext.Hydrations.Add(new Hydration { PersonId = personId, Date = _today, Count = 1 });
                    dbContext.SaveChanges();
                    return true;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return false;
            }
        }

        /// <summary>
        /// Return a count of all Meals, Hydrations, and Activities for a Person.
        /// Based on a Database View (virtual table).
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>Summary for a Person</returns>
        public List<PersonSummaryView> GetPersonSummaryView(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personView = dbContext.PersonSummaryViews
                        .Where(p => p.PersonId == personId)
                        .ToList();
                    return personView;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        /// <summary>
        /// Return a count of all Meals, Hydrations, and Activities for a Person.
        /// Based on a Stored Procedure.
        /// </summary>
        /// <param name="personId">PersonId</param>
        /// <returns>Summary for a Person</returns>
        public List<GetPersonSummary_Result> GetPersonSummaryStoredProc(int personId)
        {
            try
            {
                using (var dbContext = new HealthTrackerEntities())
                {
                    var personView = dbContext.GetPersonSummary(personId)
                        .Where(p => p.PersonId == personId)
                        .ToList();
                    return personView;
                }
            }
            catch (Exception exception)
            {
                Debug.WriteLine(exception);
                return null;
            }
        }

        #endregion
    }

    #region POCO Classes

    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
    }

    public class Meal
    {
        public int PersonId { get; set; }
        public int MealTypeId { get; set; }
        public string Description { get; set; }
    }

    public class MealDetail
    {
        public int MealId { get; set; }
        public DateTime Date { get; set; }
        public string Type { get; set; }
        public string Description { get; set; }
    }

    public class Activity
    {
        public int PersonId { get; set; }
        public int ActivityTypeId { get; set; }
        public string Notes { get; set; }
    }

    public class ActivityDetail
    {
        public int ActivityId { get; set; }
        public DateTime Date { get; set; }
        public string Type { get; set; }
        public string Notes { get; set; }
    }

    #endregion
}
Each method instantiates an instance of 'HealthTrackerEntities', which is referenced by the project and accessible to the class via the 'using HealthTracker.DataAccess.DbFirst;' statement. 'HealthTrackerEntities' implements 'System.Data.Entity.DbContext'. Each method uses LINQ to Entities to interact with the Entity Data Model through the 'HealthTrackerEntities' object.
In addition to the methods (service operations) contained in the 'HealthTrackerWcfService' class, there are several POCO classes. Some of these POCO classes, such as 'Meal' and 'Activity', are instantiated to hold data passed in the operation's arguments by the client's Request message. Other POCO classes, such as 'MealDetail' and 'ActivityDetail', are instantiated to hold data passed back to the client by the operations, in the Response message. These POCO instances are serialized to and deserialized from JSON or XML.
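Since the WCF web HTTP stack serializes JSON with the DataContractJsonSerializer by default, a quick way to preview what a Request body for an operation like 'InsertActivity' should look like is to run the corresponding POCO through that serializer yourself. Below is a minimal, illustrative sketch, not part of the original solution; the 'Activity' class is repeated and the property values are made up so the snippet stands on its own.

using System;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

// Same shape as the 'Activity' POCO defined in the service above.
public class Activity
{
    public int PersonId { get; set; }
    public int ActivityTypeId { get; set; }
    public string Notes { get; set; }
}

class PocoSerializationDemo
{
    static void Main()
    {
        var activity = new Activity { PersonId = 1, ActivityTypeId = 2, Notes = "30 minute jog" };

        var serializer = new DataContractJsonSerializer(typeof(Activity));
        using (var stream = new MemoryStream())
        {
            // Writes: {"ActivityTypeId":2,"Notes":"30 minute jog","PersonId":1}
            serializer.WriteObject(stream, activity);
            Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
        }
    }
}

The resulting JSON is the same shape as the Request body we will construct in Fiddler, later in this Post.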
The WCF Service’s Configuration
The most complex, and potentially the most confusing, part of creating a WCF Service, at least for me, is always the service's configuration. Due in part to the flexibility of WCF Services to accommodate many types of client, server, network, and security situations, configuring a service requires an in-depth understanding of bindings, behaviors, endpoints, security, and their associated settings. The best book I've found on configuring WCF Services is Pro WCF 4: Practical Microsoft SOA Implementation, by Nishith Pathak. The book goes into great detail on all aspects of configuring WCF Services to meet your particular project's needs.
Since we are only using the WCF Web HTTP Programming Model to build and expose our service, the ‘webHttpBinding’ binding is the only binding we need to configure. I have made an effort to strip out all the unnecessary boilerplate settings from our service’s configuration.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
  </configSections>
  <appSettings>
    <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
  </system.web>
  <system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
    <behaviors>
      <endpointBehaviors>
        <behavior name="webHttpBehavior">
          <webHttp helpEnabled="true" defaultOutgoingResponseFormat="Json"
                   defaultBodyStyle="Bare" automaticFormatSelectionEnabled="true" />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="HealthTracker.WcfService.HealthTrackerWcfService">
        <endpoint address="web" binding="webHttpBinding" behaviorConfiguration="webHttpBehavior"
                  contract="HealthTracker.WcfService.IHealthTrackerWcfService" />
      </service>
    </services>
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
    <directoryBrowse enabled="false" />
  </system.webServer>
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework" />
  </entityFramework>
  <connectionStrings>
    <add name="HealthTrackerEntities" connectionString="metadata=res://*/HealthTracker.csdl|res://*/HealthTracker.ssdl|res://*/HealthTracker.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=gstafford-windows-laptop\DEVELOPMENT;initial catalog=HealthTracker;persist security info=True;user id=DemoLogin;password=DemoLogin123;MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
  </connectionStrings>
</configuration>
Some items to note in the configuration:
- Line 4: Entity Framework – The Entity Framework 5 reference you added earlier via NuGet.
- Line 18: Help – This enables an automatically generated Help page, displaying all the service’s operations for the endpoint, with details on how to call each operation.
- Lines 18-19: Request and Response Message Formats – Default settings for the message format and body style of Request and Response messages; in this case, JSON and Bare. Setting defaults saves a lot of time, since the corresponding attributes do not have to be added to each individual operation (see the sketch following this list).
- Lines 25-26: Endpoint – The service's single endpoint, with a single binding and behavior. For this Post, we are only using the 'webHttpBinding' binding type.
- Line 38: Connection String – The SQL Server connection string you copied from the previous Post's Project. It is required by the DAL Project Reference you added earlier.
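To illustrate the format and body-style defaults noted above, here is a short sketch, not part of the original project, showing how a single operation could override those endpoint-wide defaults using the optional properties of the 'WebInvoke' attribute. The interface and operation names below are hypothetical.

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

namespace HealthTracker.WcfService
{
    [ServiceContract]
    public interface IFormatOverrideExample
    {
        // With defaultOutgoingResponseFormat="Json" and defaultBodyStyle="Bare" set on the
        // 'webHttp' behavior, none of these extra properties are required on each operation.
        // They are shown here only to illustrate a per-operation override.
        [OperationContract]
        [WebInvoke(UriTemplate = "GetPeopleAsXml", Method = "GET",
            ResponseFormat = WebMessageFormat.Xml,
            BodyStyle = WebMessageBodyStyle.Bare)]
        List<string> GetPeopleAsXml();
    }
}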
Deploying the Service to IIS
Now that the service is complete, we will deploy and host it in IIS. There are many options when it comes to creating and configuring a new website – setting up domain names, choosing ports, configuring firewalls, defining bindings, setting permissions, and so forth. This Post will not go into that level of detail. I will demonstrate how I chose to set up my website and publish my WCF Service.
We need a physical location to deploy the WCF Service's contents. I recommend a location outside of the IIS root directory, such as 'C:\HealthTrackerWfcService'. Create this folder on the server where you will be running IIS, either locally or remotely. This is the folder to which we will publish the service's contents from Visual Studio, next.
Create a new website in IIS to host the service. Name the site 'HealthTracker'. You can configure either a domain name or a specific port from which to call the service. I chose to configure a domain name on my IIS Server, 'WcfService.HealthTracker.com'. If you are unsure how to set up a new domain name on your network, then a local, open port is probably easier for you; pick any available port, such as 15678.
Publish the WCF Service to the deployment location explained above, using Visual Studio 2012's Web Project Publishing Tool. Exactly how and where you set up your website, and any security considerations, will affect the configuration of the Publishing Tool's Profile. My options will not necessarily work for your specific environment.
- Publish Web – Profile
- Publish Web – Connection
- Publish Web – Settings
- Publish Web – Preview
Testing the WCF Service
Congratulations, your service is deployed. Now, let's see if it works. Before we test the individual operations, we will ensure the service is being hosted correctly. Open the service's Help page. This page automatically shows details on all operations of a particular endpoint. The address should follow the convention of http://[your_domain]:[your_port]/[your_service]/[your_endpoint_address]/help. In my case, 'http://wcfservice.healthtracker.com/HealthTrackerWcfService.svc/web/help'. If this page displays, then the service is deployed correctly and its web endpoint is responding as expected.
While on the Help page, click on any of the HTTP Methods to see a further explanation of that particular operation. This page is especially useful for copying the URL of an operation for use in Fiddler. It is even more useful for grabbing the sample JSON or XML Request messages; just substitute your test values for the default values in Fiddler. It saves a lot of typing and many potential errors.
Fiddler
The easiest way to test each of the service's operations is with Fiddler. Download and install Fiddler, if you don't already have it. Using Fiddler, construct a Request message and call an operation by executing its associated HTTP Method. Below is an example of calling the 'InsertActivity' operation. This CRUD operation accepts a new Activity object as an argument, inserts it into the database via the Entity Data Model, and returns a Boolean value indicating success.
To call the 'InsertActivity' operation, 1) select the 'POST' HTTP method, 2) input the URL for the 'InsertActivity' operation, 3) select a version of HTTP (1.1), 4) input the Content-Type (JSON or XML) in the Request Headers section, 5) input the body of the Request, a new 'Activity' as JSON, in the Request Body section, and 6) select 'Execute'. The 7) Response should appear in the Web Sessions window.
Executing the 1) Request constructed above should result in a 2) Response in the Web Sessions window. Double-clicking on the Web Session should display the 3) Response message in the lower right-hand window. The operation returns a Boolean indicating whether the operation succeeded or failed. In this case, we received a value of 'true'.
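If you prefer to script this test rather than use Fiddler, the same Request can be made in a few lines of C# using .NET 4.5's HttpClient. The sketch below is illustrative only; the host name and the property values in the JSON body are placeholders, so substitute your own.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class InsertActivityTest
{
    static void Main()
    {
        RunAsync().Wait();
    }

    static async Task RunAsync()
    {
        using (var client = new HttpClient())
        {
            // The same JSON body you would paste into Fiddler's Request Body section.
            const string newActivity = "{\"PersonId\":1,\"ActivityTypeId\":2,\"Notes\":\"30 minute jog\"}";
            var content = new StringContent(newActivity, Encoding.UTF8, "application/json");

            // POST to http://[your_domain]:[your_port]/HealthTrackerWcfService.svc/web/InsertActivity
            var response = await client.PostAsync(
                "http://wcfservice.healthtracker.com/HealthTrackerWcfService.svc/web/InsertActivity",
                content);

            // The operation returns a Boolean; expect 'true' in the response body on success.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}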
To view the Activity we just inserted, we need to call the 'GetActivities' operation, passing it the same 'PersonId' argument. In Fiddler, 1) select the 'GET' HTTP method, 2) input the URL for the 'GetActivities' operation, including a value for the 'PersonId' argument, 3) select the desired version of HTTP (1.1), 4) input a Content-Type (JSON or XML) in the Request Headers section, and 5) select 'Execute'. As before, the 6) Response should appear in the Web Sessions window. This time there is no Request body content.
As before, executing the 1) Request should result in a 2) Response in the Web Sessions window. Double-clicking on the Web Session should display the 3) Response in the lower right-hand window. This method returns a JSON payload containing each Activity associated with the PersonId argument.
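The 'GetActivities' operation can be scripted in the same way. Here is a short sketch, again with a placeholder host name and PersonId, which could be placed in the same async method as the POST example above.

// GET the Activities for a Person; with no Accept header sent, automatic format
// selection falls back to the endpoint's default format, JSON.
string activitiesJson = await client.GetStringAsync(
    "http://wcfservice.healthtracker.com/HealthTrackerWcfService.svc/web/GetActivities?id=1");
Console.WriteLine(activitiesJson);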
You can use this same process to test all the other operations at the WCF Service’s endpoint. You can also save the Request message or complete Web Sessions in Fiddler should you need to re-test.
Conclusion
We now have a WCF Service deployed to IIS, running, and tested. The service's operations can be called from any application capable of making an HTTP call. Thank you for taking the time to read this Post. I hope you found it beneficial.