
DNC Magazine - Issue 44

CONTENTS
SEPTEMBER 2019 (DNC MAGAZINE ISSUE 44 - SEP - OCT 2019)

.NET CORE
- What's New in .NET Core 3.0

ASP.NET CORE
- gRPC with ASP.NET Core 3.0

AZURE
- Diving into Azure Cosmos DB
- Automation in Microsoft Azure

C# AND .NET
- Function Parameters in C#
- Developing Mobile Applications in .NET

VISUAL STUDIO
- Streamlining your VS Project Setup

DEVOPS
- DevOps Timeline
LETTER FROM
THE EDITOR
Dear Readers,

The response to the 7th Anniversary edition was FABULOUS! With over 137K downloads, the efforts put in by the DotNetCurry team for the anniversary edition were all worthwhile! Thank you, readers!
As you are aware, some updates and recent announcements were
made around C# 8, .NET Core 3 and such. This edition aims at
keeping you up-to-date with some of these changes.
We have a bouquet of exclusive articles for you covering
.NET Core 3.0, ASP.NET Core gRPC, Visual Studio Productivity,
Azure Cosmos DB, Azure Automation, Design Patterns, DevOps
and more.
I also wanted to take this opportunity to express my gratitude to our patrons who purchased our latest C# and .NET book. Patrons, your continuous support and a few sponsorships have helped us release 44 editions for free so far, and will help us continue our efforts in the future too! Cheers to all of you!
Contributing Authors:
Damir Arh
Daniel Jimenez Garcia
Dobromir Nikolov
Gouri Sohoni
Subodh Sohoni
Tim Sommer
Yacoub Massad

Technical Reviewers:
Yacoub Massad
Tim Sommer
Subodh Sohoni
Mahesh Sabnis
Gouri Sohoni
Gerald Versluis
Dobromir Nikolov
Daniel Jimenez Garcia
Damir Arh

Next Edition: Nov 2019

How was this edition? Make sure to reach out to me directly with your comments and feedback on Twitter @dotnetcurry or email me at [email protected].

Happy Learning!
Suprotim Agarwal
Editor in Chief

Copyright @A2Z Knowledge Visuals Pvt. Ltd.

Art Director: Minal Agarwal

Editor in Chief: Suprotim Agarwal ([email protected])
Disclaimer:
Reproductions in whole or part prohibited except by written permission. Email requests to "[email protected]". The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied.
Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies.
‘DNC Magazine’ is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by
Microsoft Corporation. Microsoft is a registered trademark of Microsoft corporation in the United States and/or other countries.
AZURE COSMOS DB
Tim Sommer
DIVING INTO
AZURE COSMOS DB
Azure Cosmos DB is Microsoft's fully managed
globally distributed, multi-model database service "for
managing data at planet-scale".
But what do those terms mean?
Does "planet-scale" mean you can scale indefinitely?
Is it really a multi-model Database?
In this tutorial, I'd like to explore the major components and features that define Azure Cosmos DB, making it a truly unique database service.
Purpose of this article
This article is meant for developers who want to get a grasp of the concepts, features and advantages they
get when using Azure Cosmos DB in distributed environments.
I'm not going to go touch specific detailed aspects, introductions and code samples. These can already be
found in abundance across the web, including different tutorials on DotNetCurry.
On the other hand, at times, you need extended knowledge and gain some practical experience to be
able to follow the official Azure Cosmos DB documentation. Hence, my aim is to meet the theoretical
documentation with a more practical and pragmatic approach, offering a detailed guide into the realm of
Azure Cosmos DB, but maintaining a low learning curve.
What is NoSQL?
To start off, we need a high-level overview of the purpose and features of Azure Cosmos DB. And to do that,
we need to start at the beginning.
Azure Cosmos DB is a "NoSQL" database.
But what is "NoSQL", and why and how do "NoSQL" databases differ from traditional "SQL" databases?
As defined on Wikipedia, SQL, or Structured Query Language, is a "domain specific" language. It is designed for managing data held in relational database management systems (RDBMS). As the name suggests, it is particularly useful in handling structured data, where there are relations between different entities of that data.

So, SQL is a language, not tied to any specific framework or database. It was however standardized (much like ECMAScript became the standard for JavaScript) in 1987, but despite its standardization, most SQL code is not completely portable among different database systems. In other words, there are lots of variants, depending on the underlying RDBMS.

SQL has become an industry standard. Different flavours include T-SQL (Microsoft SQL Server), PL/SQL (Oracle) and PL/pgSQL (PostgreSQL); and the standard is supported by all sorts of database systems including MySQL, MS SQL Server, MS Access, Oracle, Sybase, Informix, Postgres and many more.

Even Azure Cosmos DB supports a SQL-like syntax to query NoSQL data, but more on that later.
So that's SQL, but what do we mean when we say “NoSQL”?
Again, according to Wikipedia, a NoSQL (originally referring to "non-SQL" or "non-relational") database provides a mechanism for storage and retrieval of data that is modelled differently from the tabular relations used in relational databases. It is designed specifically for (but is certainly not limited to) handling Big Data.

Big Data is often defined by four V's: Volume (a high count of records, with quantities of data that reach almost incomprehensible proportions), Variety (high difference between different kinds of data), Velocity (how fast data is coming in and how fast it is processed) and Veracity (the biases, noise and abnormality in data).
Figure 1: Big Data, Four V’s
Because of their history, strict schema rules and lack of (or let's call it challenges with) scaling options, relational databases are simply not optimized to handle such enormous amounts of data.
Relational databases were mostly designed to scale up, by increasing the CPU power and RAM of the
hosting server. But, for handling Big Data, this model just doesn’t suffice.
For one, there are limits to how much you can scale up a server. Scaling up also implies higher costs: the
bigger the host machine, the higher the end bill will be. And, after you hit the limits of up-scaling, the only
real solution (there are workarounds, but no real solutions) is to scale out, meaning the deployment of the
database on multiple servers.
Don't get me wrong, relational databases like MS SQL Server and Oracle are battle tested systems, perfectly
capable in solving a lot of problems in the current technical landscape. The rise of NoSQL databases does
not, in any way, take away their purpose.
But for Big Data, you need systems that embrace the “distributed model”, meaning scale-out should be
embraced as a core feature.
In essence, that is what NoSQL is all about - allowing you to distribute databases over different servers,
allowing them to handle gigabytes and even petabytes of data.
As they are designed especially for these kinds of cases, NoSQL databases can handle massive amounts of data whilst ensuring and maintaining low latency and high availability. NoSQL databases are simply better suited to handle Big Data than any kind of RDBMS.
NoSQL databases are also considered "schema free". This essentially means that you can have unlimited
different data-models (or schemas) within a database. To be clear, the data will always have a schema (or
type, or model), but because the schema can vary for each item, there is no schema management. Whilst
schema changes in relational databases can pose a challenge, in NoSQL databases, you can integrate
schema changes without downtime.
What is Azure Cosmos DB?
So, we now have a fairly good idea of what defines a system as a NoSQL database. But what is Azure Cosmos DB, and how does it differ from other frameworks and (NoSQL) databases already available?
Azure Cosmos DB, announced at the Microsoft Build 2017 conference, is an evolution of the former Azure DocumentDB, which was a scalable NoSQL document database with low latency, hosted on Microsoft's Azure platform.
Azure Cosmos DB allows virtually unlimited scale and automatically manages storage with server-side
partitioning to uphold performance. As it is hosted on Azure, a Cosmos DB can be globally distributed.
Cosmos DB supports multiple data models, including JSON, BSON, Table, Graph and Columnar, exposing
them with multiple APIs - which is probably why the name "Document DB" didn't suffice any more.
Azure Cosmos DB was born!
Let's sum up the core feature of Azure Cosmos DB:
•
•
•
•
•
•
•
Turnkey global distribution.
Single-digit millisecond latency.
Elastic and unlimited scalability.
Multi-model with wire protocol compatible API endpoints for Cassandra, MongoDB, SQL, Gremlin and
Table along with built-in support for Apache Spark and Jupyter notebooks.
SLA for guarantees on 99.99% availability, performance, latency and consistency
Local emulator
Integration with Azure Functions and Azure Search.
Let's look at these different features, starting with what makes Azure Cosmos DB a truly unique database
service: multi-model with API endpoint support.
Multi-model
SQL (Core) API
Azure Cosmos DB SQL API is the primary API, which lives on from the former Document DB. Data is stored
in JSON format. The SQL API provides a formal programming model for rich queries over JSON items. The
SQL used is essentially a subset, optimized for querying JSON documents.
Azure Cosmos DB API for Mongo DB
The MongoDB API provides support for MongoDB data (BSON). This will appeal to existing MongoDB
developers, because they can enjoy all the features of Cosmos DB, without changing their code.
Cassandra API
The Cassandra API, using columnar as data model, requires you to define the schema of your data up front.
Data is stored physically in a column-oriented fashion, so it's still okay to have sparse columns, and it has
good support for aggregations.
Gremlin API
The Gremlin API (Graph traversal language) provides you with a "graph database", which allows you to
annotate your data with meaningful relationships. You get the graph traversal language that leverages
these annotations allowing you to efficiently query across the many relationships that exist in the database.
Table API
The Table API is an evolution of Azure Table Storage. With this API, each entity consists of a key and a value. But the value itself can again contain a set of key-value pairs.
Spark
The Spark API enables real-time machine learning and AI over globally distributed data sets by using built-in support for Apache Spark and Jupyter notebooks.
The choice is yours!
The choice of which API (and underlying data model) to use, is crucial. It defines how you will be
approaching your data and how you will query it. But, regardless of which API you choose, all the key
features of Azure Cosmos DB will be available.
Internally, Cosmos DB will always store your data as "ARS", or Atom Record Sequence. This format gets projected to any supported data model. No matter which API you choose, your data is always stored as key-values, where the values can be simple values or nested key-values (think of JSON and BSON). This concept allows developers to have all the features, regardless of the model and API they choose.
In this article, we will focus on the SQL API, which is still the primary API that lives on from the original
Document DB Service.
Azure Cosmos DB Accounts, Databases and
Containers
An Azure Cosmos database is, in essence, a unit of management for a set of schema-agnostic containers.
They are horizontally partitioned across a set of machines using "Physical partitions" hosted within one or
more Azure regions. All these databases are associated with one Azure Cosmos DB Account.
Figure 2: Simplified Azure Cosmos DB top-level overview
To create an Azure Cosmos DB account, you need to input a unique name, which will form your endpoint with ".documents.azure.com" appended to it. Within this account, you can create any number of databases. An account is associated with one API. In the future, it might be possible to freely switch between APIs within the account, but at the time of writing, this is not yet the case. All the databases within one account will always use the same API.
You can create an Azure Cosmos DB Account on the Azure portal. I'll quickly walk through the creation of an
Azure Cosmos DB account, creating a database and a couple of collections which will serve as sample data
for the rest of this article.
Figure 3: Create Azure Cosmos DB Account
For this article, we'll be using a collection of volcanoes to demonstrate some features. The document looks
like this:
{
  "VolcanoName": "Acatenango",
  "Country": "Guatemala",
  "Region": "Guatemala",
  "Location": {
    "type": "Point",
    "coordinates": [
      -90.876,
      14.501
    ]
  },
  "Elevation": 3976,
  "Type": "Stratovolcano",
  "Status": "Historical",
  "Last Known Eruption": "Last known eruption in 1964 or later",
  "id": "a6297b2d-d004-8caa-bc42-a349ff046bc4"
}
For this data model, we'll create a couple of collections, each with different partition keys. We'll look into that concept in depth later on in the article, but for now, I'll leave you with this: a partition key is a pre-defined hint Azure Cosmos DB uses to store and locate your documents.

For example, we'll define three collections with different partition keys:

• By Country (/Country)
• By Type (/Type)
• By Name (/Name)
You can use the portal to create the collections:
Figure 4: Creating Collections and database using Azure portal
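If you prefer to script this instead of clicking through the portal, here is a minimal sketch of how the database and one of the collections could be created with the Microsoft.Azure.Cosmos (v3) .NET SDK. The endpoint, key and names are placeholders, not values used elsewhere in this article.

// A minimal sketch, assuming the Microsoft.Azure.Cosmos (v3) .NET SDK.
// The endpoint, key and names are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class CosmosSetup
{
    public static async Task CreateVolcanoContainersAsync()
    {
        var client = new CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>");

        // Create the database if it doesn't exist yet.
        Database database = await client.CreateDatabaseIfNotExistsAsync("VolcanoDb");

        // Create a container partitioned by /Country, provisioning 400 RU/s.
        Container byCountry = await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties(id: "VolcanoesByCountry", partitionKeyPath: "/Country"),
            throughput: 400);
    }
}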
All right! We have now set up some data containers with sample data which can be used to explain the core
concepts. Let's dive in!
Throughput, horizontal partitioning and global
distribution
Throughput
In Azure Cosmos DB, performance of a container (or a database) is configured by defining its throughput.
Throughput defines how many requests can be served within a specific period, and how many concurrent
requests can always be served within any given second. It defines how much load your database and/or
container needs to be able to handle, at any given moment in time. It is not to be mistaken for latency, which defines how fast a response for a given request is served.
Managing throughput and ensuring predictable performance is done by assigning Request Units (or RUs). Azure Cosmos DB uses the assigned RUs to ensure that the requested performance can be handled, regardless of how demanding a workload you put on the data in your database or your collection.
Request Units are a blended measurement of computational cost for any given request. Required CPU
calculations, memory allocation, storage, I/O and internal network traffic are all translated into one unit.
Simplified, you could say that RUs are the currency used by Azure Cosmos DB.
The Request Unit concept allows you to disregard any concerns about hardware. Azure Cosmos DB, being a database service, handles that for you. You only need to think about how many RUs should be provisioned for a container.
To be clear, a Request Unit is not the same as a request, as not all requests are considered equal. For example, read requests generally require fewer resources than write requests. Write requests are generally more expensive, as they consume more resources. If your query takes longer to process, you'll use more resources, resulting in a higher RU cost.
For any request, Azure Cosmos DB will tell you exactly how many RUs that request consumed. This allows
for predictable performance and costs, as identical requests will always consistently cost the same number of
RUs.
In simple terms: RUs allow you to manage and ensure predictable performance and costs for your database and/
or data container.
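As a rough illustration with the Microsoft.Azure.Cosmos (v3) SDK, every response object exposes the RU charge of the operation, so you can log or monitor it. The Volcano POCO below is a hypothetical class matching the sample document shown earlier.

// A minimal sketch, assuming the Microsoft.Azure.Cosmos (v3) SDK.
// The Volcano class is a hypothetical POCO for the sample document.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Volcano
{
    public string id { get; set; }
    public string VolcanoName { get; set; }
    public string Country { get; set; }
    public string Type { get; set; }
}

public static class RequestChargeDemo
{
    public static async Task ShowRequestChargesAsync(Container container, Volcano volcano)
    {
        // Write: the response tells you exactly how many RUs the operation consumed.
        ItemResponse<Volcano> write = await container.UpsertItemAsync(
            volcano, new PartitionKey(volcano.Country));
        Console.WriteLine($"Upsert cost: {write.RequestCharge} RUs");

        // Read (query): each page of results reports its own RU charge.
        FeedIterator<Volcano> iterator = container.GetItemQueryIterator<Volcano>(
            new QueryDefinition("SELECT * FROM c WHERE c.Country = 'Guatemala'"));
        while (iterator.HasMoreResults)
        {
            FeedResponse<Volcano> page = await iterator.ReadNextAsync();
            Console.WriteLine($"Query page cost: {page.RequestCharge} RUs");
        }
    }
}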
If we take the sample volcano data, you can see the same RU cost for inserting or updating each item.
Figure 5: Insert or update an item in a data container
On the other hand, if we read data, the RU cost will be lower. And for every identical read request, the RU
cost will always be the same.
Figure 6: Query RU cost
When you create a data container in Cosmos DB, you reserve the number of Request Units per second (RU/s) that need to be serviced for that container. This is called "provisioning throughput".

Azure Cosmos DB will guarantee that the provisioned throughput can be consumed for the container every second. You will then be billed monthly for the provisioned throughput.
Your bill will reflect the provisions you make, so this is a very important step to take into consideration. By
provisioning throughput, you tell Azure Cosmos DB to reserve the configured RU/s, and you will be billed for
what you reserve, not for what you consume.
Should the situation arise that the provisioned RU/s are exceeded, further requests are "throttled". This
basically means that the throttled request will be refused, and the consuming application will get an error.
In the error response, Azure Cosmos DB will inform you how much time to wait before retrying.
So, you could implement a fallback mechanism fairly easily. When requests are throttled, it almost always
means that the provisioned RU/s is insufficient to serve the throughput your application requires. So, while
it is a good idea to provide a fallback "exceeding RU limit" mechanism, note that you will almost always
have to provision more throughput.
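As a rough illustration (assuming the Microsoft.Azure.Cosmos v3 SDK, which surfaces throttling as a CosmosException with HTTP status code 429 and a RetryAfter hint), such a fallback could look like this:

// A minimal sketch of handling throttled (HTTP 429) requests.
// The SDK already retries a configurable number of times for you;
// this only shows where the retry-after information comes from.
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThrottlingDemo
{
    public static async Task<ItemResponse<T>> UpsertWithFallbackAsync<T>(
        Container container, T item, PartitionKey partitionKey)
    {
        try
        {
            return await container.UpsertItemAsync(item, partitionKey);
        }
        catch (CosmosException ex) when (ex.StatusCode == (HttpStatusCode)429)
        {
            // Wait the amount of time suggested by the service, then retry once.
            await Task.Delay(ex.RetryAfter ?? TimeSpan.FromSeconds(1));
            return await container.UpsertItemAsync(item, partitionKey);
        }
    }
}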
Request Units and billing
Provisioning throughput is directly related to how much you will be billed monthly. Because identical requests always result in the same RU cost, you can predict the monthly bill in an easy and accurate way.

When you create an Azure Cosmos DB container with the minimal throughput of 400 RU/s, you'll be billed roughly $24 per month. To get a grasp of how many RU/s you will need to reserve, you can use the Request Unit Calculator. This tool allows you to calculate the required RU/s after setting parameters such as item size, reads per second, writes per second, etc. If you don't know this information when you create your containers, don't worry; throughput can be provisioned on the fly, without any downtime.
The calculator can be accessed here: https://cosmos.azure.com/capacitycalculator/
Figure 7: Azure Cosmos DB Request Unit Calculator
Apart from the billing aspect, you can always use the Azure Portal to view detailed charts and reports on
every metric related to your container usage.
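Provisioning throughput on the fly is also exposed programmatically. A hedged sketch, assuming the Microsoft.Azure.Cosmos (v3) SDK (the container name is a placeholder):

// A minimal sketch: scaling a container's provisioned RU/s without downtime.
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThroughputDemo
{
    public static async Task ScaleUpAsync(Database database)
    {
        Container container = database.GetContainer("VolcanoesByCountry");

        // Read the currently provisioned throughput (may be null if set at database level).
        int? current = await container.ReadThroughputAsync();

        // Provision more RU/s on the fly.
        await container.ReplaceThroughputAsync((current ?? 400) + 400);
    }
}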
Partitioning
Azure Cosmos DB allows you to massively scale out containers, not only in terms of storage, but also in
terms of throughput. When you provision throughput, data is automatically partitioned to be able to handle
the provisioned RU/s you reserved.
Data is stored in physical partitions, which consist of a set of replicas (or clones), also referred to as replica
sets. Each replica set hosts an instance of the Azure Cosmos database engine, making data stored within
the partition durable, highly available, and consistent. Simplified, a physical partition can be thought of as a fixed-capacity data bucket, meaning that you can't control the size, placement, or number of partitions.
To ensure that Azure Cosmos DB does not need to investigate all partitions when you request data, it
requires you to select a partition key.
For each container, you'll need to configure a partition key. Azure Cosmos DB will use that key to group
items together. For example, in a container where all items contain a City property, you can use City as the
partition key for that container.
Figure 8: Logical partitions within a physical partition
Groups of items that have specific values for City, such as London, Brussels and Washington will form distinct
logical partitions.
Partition key values are hashed, and Cosmos DB uses that hash to identify the logical partition the data
should be stored in. Items with the same partition key value will never be spread across different logical
partitions.
Internally, one or more logical partitions are mapped to a physical partition. This is because, while containers and logical partitions can scale dynamically, physical partitions cannot; if every logical partition got its own physical partition, you would wind up with way more physical partitions than you would actually need.
If a physical partition gets near its limits, Cosmos DB automatically spins up a new partition, splitting and
moving data across the old and new one. This allows Cosmos DB to handle virtually unlimited storage for
your containers.
Any property in your data model can be chosen as the partition key. But only the correct property will result
in optimal scaling of your data. The right choice will produce massive scaling, while choosing poorly will impede Azure Cosmos DB's ability to scale your data (resulting in so-called hot partitions).
So even though you don't really need to know how your data eventually gets partitioned, understanding it
in combination with a clear view of how your data will be accessed, is absolutely vital.
Choosing your partition key
Firstly, and most importantly, queries will be most efficient if they can be scoped to one logical partition.
Next to that, transactions and stored procedures are always bound to the scope of a single partition. So, these form the first consideration when deciding on a partition key.
Secondly, you want to choose a partition key that will not cause throughput or storage bottlenecks.
Generally, you’ll want to choose a key that spreads the workload evenly across all partitions and evenly over
time.
Remember, when data is added to a container, it will be stored in a logical partition. Once the physical partition hosting that logical partition grows out of bounds, the logical partition is automatically moved to a new physical partition.
You want a partition key that yields as many distinct values as possible. Ideally, you want distinct values in the hundreds or thousands. This allows Azure Cosmos DB to logically store multiple partition keys in one physical partition, without having to move partitions behind the scenes.
Take the following rules into consideration when deciding on partition keys:

• A single logical partition has an upper limit of 10 GB of storage. If you max out the limit of this partition, you'll need to manually reconfigure and migrate data to another container. This is a disaster situation you don't want!

• To prevent the issue raised above, choose a partition key that has a wide range of values and access patterns that are evenly spread across logical partitions.

• Choose a partition key that spreads the workload evenly across all partitions and evenly over time.

• The partition key value of a document cannot change. If you need to change the partition key, you'll need to delete the document and insert a new one.

• The partition key set on a collection can never change and has become mandatory.

• Each collection can have only one partition key.
Let's try a couple of examples to make sense of how we would choose partition keys, how hot partitions
work and how granularity in partition keys can have positive or negative effects on the partitioning of your Azure Cosmos DB data. Let's take the sample volcano data again to visualize the possible scenarios.
Partition key “Type”
When I first looked through the volcano data, “type” immediately stood out as a possible partition key. Our
partitioned data looks something like this:
Figure 11: Volcanoes by type
To be clear, I'm by no means a volcano expert; the data repo is for sample purposes only. I have no idea if the situation I invented is realistic - it is only for demo purposes.
Okay, now let's say we are not only collecting metadata for each volcano, but also collecting data for each eruption. Which means we look at the collection from a Big Data perspective. Let's say that "Lava Cone" volcanoes erupted three times as much this year as in previous years. But as displayed in the image, the "Lava Cone" logical partition is already big. This will result in Cosmos DB moving data into a new physical partition, or eventually, the logical partition will max out.

So "type", while seemingly a good partition key, is not such a good choice if you want to collect hour-to-hour data of eruption events. For the metadata itself, it should do just fine.
The same goes for choosing “Country” as a partition key. If for one reason or another, Russia gets more
eruptions than other countries, its size will grow. If the eruption data gets fed into the system in real-time,
you can also get a so-called hot partition, with Russia taking up all the throughput in the container - leaving
none left for the other countries.
If we were collecting data in real-time on a massive scale, the only good partition key in this case is by
“Name”. This will render the most distinct values, and should distribute the data evenly amongst the
partitions.
Microsoft created a very interesting and comprehensive case study on how to model and partition data on
Azure Cosmos DB using a real-world example. You can find the documentation here: https://docs.microsoft.com/en-us/azure/cosmos-db/partition-data.
Cross partition Queries
You will come across situations where querying by partition key will not suffice. Azure Cosmos DB can span
queries across multiple partitions, which basically means that it will need to look at all partitions to satisfy
the query requirements. I hope it’s clear at this point that you don't want this to happen too frequently, as it
will result in high RU costs.
When cross-partition queries are needed, Cosmos DB will automatically "fan out the query", i.e. it will visit all the partitions in parallel, gather the results, and then return all results in a single response.
This is more costly than running queries against single partitions, which is why these kinds of queries should be limited and avoided as much as possible. Azure Cosmos DB forces you to explicitly enable this behaviour by setting it in code or via the portal. Writing a query without the partition key in the WHERE clause, and without explicitly setting the "EnableCrossPartitionQuery" property, will fail.
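The property name quoted above comes from the older Microsoft.Azure.DocumentDB (v2) SDK; a rough sketch of what opting into a cross-partition query looks like there (database, collection and the Volcano POCO from the earlier sketch are placeholders for illustration only):

// A minimal sketch, assuming the Microsoft.Azure.DocumentDB (v2) SDK,
// where cross-partition queries must be opted into via FeedOptions.
using System.Linq;
using Microsoft.Azure.Documents.Client;

public static class CrossPartitionDemo
{
    public static void QueryByType(DocumentClient client)
    {
        var collectionUri = UriFactory.CreateDocumentCollectionUri("VolcanoDb", "VolcanoesByCountry");

        // Without EnableCrossPartitionQuery = true, this query would fail,
        // because "Type" is not the partition key of this collection.
        var feedOptions = new FeedOptions { EnableCrossPartitionQuery = true };

        var stratovolcanoes = client.CreateDocumentQuery<Volcano>(
                collectionUri,
                "SELECT * FROM c WHERE c.Type = 'Stratovolcano'",
                feedOptions)
            .ToList();
    }
}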
Back to the sample volcano data, we see the RU cost rise when we trigger a cross-partition query. As we saw earlier, the RU cost for querying a collection with partition key "Country" was 7.240 RUs. If we query that collection by type, the RU cost is 10.58. Now, this might not be a significant rise, but remember, the sample data is small, and the query is executed only once. In a distributed application model, this could be quite catastrophic!
Figure 12: Cross partition query RU cost.
Indexing
Azure Cosmos DB allows you to store data without having to worry about schema or index management. By
default, Azure Cosmos DB will automatically index every property for all items in your container.
This means that you don’t really need to worry about indexes, but let’s take a quick look at the indexing
modes Azure Cosmos DB provides:
• Consistent: This is the default setting. The indexes are updated synchronously as you create, update or delete items.

• Lazy: Updates to the index are done at a much lower priority level, when the engine is not doing any other work. This can result in inconsistent or incomplete query results, so using this indexing mode is not recommended.

• None: Indexing is disabled for the container. This is especially useful if you need to do big bulk operations, as removing the indexing policy will improve performance drastically. After the bulk operations are complete, the index mode can be set back to consistent, and you can check the IndexTransformationProgress property on the container (SDK) to track the progress.
You can include and exclude property paths, and you can configure several kinds of indexes for your data. Depending on how you are going to query your data, you might want to change the index on your columns:

• Hash index: Used for equality queries.

• Range index: This index is useful if you have queries where you use range operations (<, >, !=), ORDER BY and JOIN.

• Spatial index: This index is useful if you have GeoJSON data in your documents. It supports Points, LineStrings, Polygons, and MultiPolygons.

• Composite index: This index is used for filtering on multiple properties.
As indexing in Azure Cosmos DB is performed automatically by default, I won't include more information here. Knowing how to leverage these different kinds of indexes (and thus deviating from the default) in different scenarios can result in lower RU costs and better performance. You can find more info in the official Microsoft documentation, specific to a range of use cases.
Global distribution
As said in the intro, Azure Cosmos DB can be distributed globally and can manage data on a planet scale.
There are a couple of reasons why you would use Geo-replication for your Cosmos databases:
Latency and performance: If you stay within the same Azure region, the RU/s reserved on a container are
guaranteed by the replicas within that region. The more replicas you have, the more available your data will
become. But Azure Cosmos DB is built for global apps, allowing your replicas to be hosted across regions.
This means the data can be allocated closer to consuming applications, resulting in lower latency and high
availability.
Azure Cosmos DB has a multi-master replication protocol, meaning that every region supports both writes
and reads. The consuming application is aware of the nearest region and can send requests to that region.
This region is identified without any configuration changes. As you add more regions, your application does
not need to be paused or stopped. The traffic managers and load balancers built into the replica set, take
care of it for you!
Disaster recovery: By Geo-replicating your data, you ensure the availability of your data, even in the events
of major failure or natural disasters. Should one region become unavailable, other regions automatically
take over to handle requests, for both write and read operations.
Turnkey Global Distribution
In your Azure Cosmos DB Account, you can add or remove regions with the click of a mouse! After you select
multiple regions, you can choose to do a manual fail over, or select automatic fail over priorities in case
of unavailability. So, if a region should become unavailable for whatever reason, you control which region
behaves as a primary replacement for both reads and writes.
You can Geo-replicate your data in every Azure Region that is currently available:
Figure 13: Azure regions currently available
https://azure.microsoft.com/en-us/global-infrastructure/regions/
In theory, global distribution will not affect the throughput of your application. The same RUs will be used for requests between regions, i.e. a query in one region will have the same RU cost as the same query in another region. Performance is calculated in terms of latency, so actual data over the wire will be affected if the consuming application is not hosted in the same Azure region. So even though the costs will be the same, latency is a definite factor to take into consideration when Geo-replicating your data.

While there are no additional costs for global distribution, the standard Cosmos DB pricing will get replicated. This means that your provisioned RU/s for one container will also be provisioned for its Geo-replicated instances. So, if you provision 400 RU/s for one container in one region, you will reserve 1200 RU/s if you replicate that container to two other regions.
Multiple-region write accounts will also require extra RUs. These are used for operations such as managing write conflicts. These extra RUs are billed at the average cost of the provisioned RUs across your account's regions. Meaning, if you have three regions configured at 400 RU/s (billed around $24 per region, roughly $72 in total), you'll be billed an additional $24 for the extra write operations.

This can all be simulated fairly simply by consulting the Cosmos DB pricing calculator: https://azure.microsoft.com/en-us/pricing/calculator/?service=cosmos-db.
Replication and Consistency
As Azure Cosmos DB replicates your data to different instances across multiple Azure regions, you need
ways to get consistent reads across all the replicas. By applying consistency levels, you get control over
possible read version conflicts scattered across the different regions.
There are five consistency levels that you can configure for your Azure Cosmos DB Account:
Figure 14: Consistency levels
These levels, ranging from strong to eventual, allow you to take complete control of the consistency
tradeoffs for your data containers. Generally, the stronger your consistency, the lower your performance in
terms of latency and availability. Weaker consistency results in higher performance but has possible dirty
reads as a tradeoff.
Strong consistency ensures that you never have a dirty read, but this comes at a huge cost in latency. In
strong consistency, Azure Cosmos DB will enforce all reads to be blocked until all the replicas are updated.
Eventual consistency, on the other side of the spectrum, will offer no guarantee whatsoever that your data
is consistent with the other replicas. So you can never know whether the data you are reading is the latest
version or not. But this allows Cosmos DB to handle all reads without waiting for the latest writes, so
queries can be served much faster.
In practice, you only need to take consistency into consideration when you start Geo-replicating your Azure Cosmos databases. It's almost impossible to experience a dirty read if your replicas are hosted within one Azure region. Those replicas are physically located close to each other, resulting in data transfer almost always within 1 ms. So even with the strong consistency level configured, the delay should never be problematic for the consuming application.
The problem arises when replicas are distributed globally. It can take hundreds of milliseconds to move your data across continents. It is here that the chances of dirty reads increase dramatically, and where consistency levels come into play.
Let's look at the different levels of consistency:
• Strong: No dirty reads. Cosmos DB waits until all replicas are updated before responding to read requests.

• Bounded Staleness: Dirty reads are possible but are bounded by time and updates. This means that you can configure the threshold for dirty reads, only allowing them if the data isn't too out of date. The reads might be behind writes by at most "X" versions (i.e., "updates") of an item or by "Y" time interval. If the data is too stale, Cosmos DB will fall back to strong consistency.

• Session Consistency (default): No dirty reads for writers, but possible for other consuming applications.
  • All writers include a unique session key in their requests. Cosmos DB uses this session key to ensure strong consistency for the writer, meaning a writer will always read the same data as they wrote. Other readers may still experience dirty reads.
  • When you use the .NET SDK, it will automatically include a session key, making it by default strongly consistent within the session that you use in your code. So, within the lifetime of a document client, you're guaranteed strong consistency.

• Consistent Prefix: Dirty reads are possible but can never be out-of-order across your replicas. So if the same data gets updated six times, you might get version 4 of the data, but only if that version of the data has been replicated to all your replicas. Once version 4 has been replicated, you'll never get a version before that (1-3), as this level ensures that you get updates in order.

• Eventual consistency: Dirty reads possible, no guaranteed order. You can get inconsistent results in a random fashion, until eventually all the replicas get updated. You simply get data from a replica, without any regard for other versions in other replicas.
When Bounded Staleness and Session don't apply strong consistency, they fall back to Consistent Prefix, not Eventual consistency. So even in those cases, reads can never be out-of-order.
You can set the preferred consistency level for the entire Cosmos DB Account i.e. every operation for all
databases will use that default consistency strategy. This default can be changed at any time.
It is however possible to override the default consistency at the request level, but it is only possible to weaken the default. So, if the default is Session, you can only choose between the Consistent Prefix or Eventual consistency levels. The level can be selected when you create the connection in the .NET SDK.
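As a rough illustration with the Microsoft.Azure.Cosmos (v3) SDK (endpoint and key are placeholders, and the Volcano POCO is the hypothetical one from the earlier sketch), the account default can be weakened when creating the client, and individual requests can weaken it further via request options:

// A minimal sketch: weakening consistency at connection and request level,
// assuming the Microsoft.Azure.Cosmos (v3) SDK.
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ConsistencyDemo
{
    public static async Task<Volcano> ReadEventuallyConsistentAsync()
    {
        var client = new CosmosClient(
            "https://<your-account>.documents.azure.com:443/",
            "<your-key>",
            new CosmosClientOptions
            {
                // Must be equal to or weaker than the account default.
                ConsistencyLevel = ConsistencyLevel.Eventual
            });

        Container container = client.GetContainer("VolcanoDb", "VolcanoesByCountry");

        // Consistency can also be relaxed per request.
        ItemResponse<Volcano> response = await container.ReadItemAsync<Volcano>(
            "a6297b2d-d004-8caa-bc42-a349ff046bc4",
            new PartitionKey("Guatemala"),
            new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });

        return response.Resource;
    }
}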
Conclusion
Azure Cosmos DB is a rich, powerful and amazing database service that can be used in a wide variety of situations and use cases. With this article, I hope I was able to give you a simplified and easy-to-grasp idea of its features. Especially in distributed applications and in handling Big Data, Azure Cosmos DB stands out from the competition. I hope you enjoyed the read!
Tim Sommer
Author
Tim Sommer lives in the beautiful city of Antwerp, Belgium. He has been passionate about computers and programming for as long as he can remember. He's a speaker, teacher and entrepreneur. He is also a former Windows Insider MVP. But most of all, he's a developer, an architect, a technical specialist and a coach, with 8+ years of professional experience in the .NET framework.
Thanks to Mahesh Sabnis for reviewing this article.
.NET CORE
Damir Arh
WHAT’S NEW IN
.NET CORE 3.0?
This article is an overview of .NET Core 3.0 covering all
the new features and improvements it’s bringing to the
.NET ecosystem.
After more than two years since the release of .NET Core 2.0 in August 2017, .NET Core 3.0 was released as the next major version of .NET Core. It's been supported for use in production since July 2019, when Preview 7 of .NET Core 3.0 was released. The next long-term support (LTS) version will be .NET Core 3.1 in November 2019.

To develop for .NET Core 3.0, you need an up-to-date version of Visual Studio 2019. There's no support for it in Visual Studio 2017.

As a companion to .NET Core 3.0, .NET Standard 2.1 is also being released. In comparison to .NET Standard 2.0, the latest 2.1 includes many new classes and methods which were added to .NET Core 3.0, making it possible for cross-platform class libraries to use them as well.

Despite that, it's still a good idea to write your libraries for .NET Standard 2.0 whenever possible, because this will make them available to different .NET runtimes, e.g. .NET framework which isn't going to support .NET Standard 2.1, and previous versions of .NET Core. You can read more about .NET Standard in my previous article – .NET Standard 2.0 and XAML Standard.

.NET Core 3.0 is also the first .NET runtime to fully support C# 8.0. You can read more about the new language features that it introduces in my previous article – New C# 8 Features in Visual Studio 2019.

The so-called .NET Core Global Tools can also be installed as local tools in .NET Core 3.0. Global tools are command line utilities which can be easily installed and run using the dotnet command. They were originally introduced with .NET Core 2.1, but they could only be installed globally (as their name implies), which made them always available from the command line. In .NET Core 3.0, they can also be installed locally as a part of a .NET Core project or solution, and as such automatically available to all developers working on the same code repository. You can read more about .NET Core Global Tools in my previous article – .NET Core Global Tools - (What are Global Tools, How to Create and Use them).
Table 1: .NET Core application models and libraries
Windows-Specific Features
Since further development of .NET framework stopped with version 4.8, .NET Core 3.0 has a big focus on
Windows-specific features which were previously not supported. The goal is to make .NET Core a way
forward for Windows developers who currently use .NET framework. Unlike almost all the other parts of
.NET Core so far, these new features will not be cross-platform and will only work on Windows because they
heavily rely on services provided by the operating system.
Windows Desktop
The most prominent new feature in .NET Core 3.0 is support for development of Windows desktop
applications using Windows Forms or WPF (Windows Presentation Foundation). Both application models
are fully compatible with their .NET framework counterparts. Therefore, any existing .NET framework
applications using them can be ported to .NET Core unless they depend on some other feature that is not
supported in .NET Core, such as .NET Remoting or advanced WCF features for example.
Porting existing .NET framework applications to .NET Core is not trivial and requires some changes to the
source code as well as some testing afterwards. Because of this, it only really makes sense for applications
which are still being actively developed. Also, the designers in Visual Studio 2019 don’t work yet on .NET
Core projects. To use them, a companion .NET framework project is required which shares the files with the
.NET Core project. This workaround is required only temporarily, as support for designers will be added to
future versions of Visual Studio 2019.
Although Windows Forms and WPF in .NET Core 3.0 are mostly just ports from .NET framework, there are
two new features worth mentioning:
• Windows Forms has enhanced support for high DPI screens. Since it is not fully compatible with the high DPI behavior in .NET framework, it requires additional testing and potentially changes to source code.

• Both Windows Forms and WPF support XAML islands, i.e. the ability to host UWP (Universal Windows Platform) controls inside a Windows Forms or WPF application. This is particularly useful for (although not limited to) hosting first-party UWP controls such as WebView, Map Control, MediaPlayerElement and InkCanvas.
To simplify deployment of Windows desktop applications developed in .NET Core 3.0, the new application
package format MSIX can be used. It’s a successor to previous Windows deployment technologies, such as
MSI, APPX and ClickOnce which can be used to publish the applications to Microsoft Store or to distribute
them separately. A dedicated Windows Application Packaging Project template in Visual Studio is available
to generate the distribution package for your application.
Support for COM Components
The significance of COM (Component Object Model) components has declined with an increased adoption
of .NET framework. However, there are still applications which rely on it, e.g. automation in several
Microsoft Office products is based on it.
.NET Core 3.0 applications on Windows can act in both COM roles:
• As a COM client, activating existing COM components, for example to automate a Microsoft Office application.

• As a COM server, implementing a COM component which can be used by other COM clients.
Entity Framework
Entity Framework is Microsoft’s data access library for .NET. As part of .NET Core, a completely new version
of Entity Framework was developed from scratch – Entity Framework Core. Although its API feels familiar
to anyone who has used the original Entity Framework, it is neither source code compatible nor feature
equivalent to it. Any existing code using Entity Framework therefore needs to be rewritten in order to use
Entity Framework Core instead.
To make porting of existing Windows applications using Entity Framework to .NET Core easier, .NET Core 3.0
includes a port of the original Entity Framework in addition to a new version of Entity Framework Core.
Entity Framework 6.3 for .NET Core
Entity Framework 6.3 version in .NET Core is primarily meant for porting existing applications. New
applications should use Entity Framework Core instead which is still in active development.
The tooling for Entity Framework is currently only supported in Visual Studio 2019 on Windows. The
designer support will be added in a later Visual Studio 2019 update. Until then, the models can only be
edited from a .NET framework project similar to the approach required for the Windows Forms and WPF
designers.
However, the applications developed with Entity Framework 6.3 for .NET Core can run on any platform
which makes it suitable for porting not only Windows desktop applications but also web applications,
making them cross-platform in the process. Unfortunately, new providers are required for .NET Core.
Currently only the SQL Server provider is available. No support for SQL Server spatial types and SQL Server Compact is available or planned.
Entity Framework Core 3.0
A new version of Entity Framework Core is also included with .NET Core 3.0. Among its new features, the
most important are the improvements to LINQ which make it more robust and more efficient because
larger parts of the queries are now executed on the database server instead of in the client application. As
part of this change, client-side evaluation of queries as fallback when server-side query generation fails,
was removed. Only the projection from the final Select part of the query might still be executed on the
client. This can break existing applications. Therefore, full testing of existing applications is required when
upgrading to Entity Framework Core 3.0.
Additionally, Entity Framework Core 3.0 now includes support for Cosmos DB (implemented against its SQL
API) and takes advantage of new language features in C# 8.0 (asynchronous streams). As a result, it now
targets .NET Standard 2.1 instead of .NET Standard 2.0 and therefore can’t be used with .NET framework or
older versions of .NET Core anymore.
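As a rough illustration of these two points (Cosmos DB support and asynchronous streams), the sketch below assumes the Microsoft.EntityFrameworkCore.Cosmos provider; the endpoint, key, and entity names are placeholders, not code from this article.

// A minimal sketch, assuming the Microsoft.EntityFrameworkCore.Cosmos 3.0 provider.
// Endpoint, key and names are placeholders.
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public Guid Id { get; set; }
    public string Customer { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseCosmos(
            "https://<your-account>.documents.azure.com:443/",
            "<your-key>",
            databaseName: "OrdersDb");
}

public static class EfCoreDemo
{
    public static async Task PrintOrdersAsync()
    {
        using var context = new OrdersContext();

        // C# 8.0 asynchronous streams: results are consumed as they arrive.
        await foreach (var order in context.Orders.AsAsyncEnumerable())
        {
            Console.WriteLine(order.Customer);
        }
    }
}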
ASP.NET Core
ASP.NET Core is probably the most widely used application model in .NET Core as all types of web
applications and services are based on it. Like Entity Framework Core, its latest version works only in .NET
Core 3.0 and isn’t supported in .NET framework.
To make applications smaller by default and reduce the number of dependencies, several libraries were removed from the basic SDK and must now be added manually when needed:

• Entity Framework Core must now be added to the project as a standalone NuGet package. A different data access library can be used instead.

• Roslyn was used for runtime compilation of views. This is now an optional feature which can be added to a project using the Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation NuGet package.

• JSON.NET as the library for serializing and deserializing JSON has been replaced with a built-in JSON library with a focus on high performance and minimal memory allocation. JSON.NET can still be installed into the project and used instead (see the sketch after this list).
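A minimal sketch of opting back into those optional pieces in Startup.ConfigureServices, assuming the Microsoft.AspNetCore.Mvc.NewtonsoftJson and Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation packages have been added to the project:

// A minimal sketch: re-enabling JSON.NET and Razor runtime compilation in ASP.NET Core 3.0.
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllersWithViews()
            // Switch JSON serialization back to JSON.NET instead of the built-in library.
            .AddNewtonsoftJson()
            // Re-enable runtime (Roslyn) compilation of Razor views.
            .AddRazorRuntimeCompilation();
    }
}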
Endpoint routing which was first introduced in .NET Core 2.2 has been further improved. Its main advantage
is better interaction between routing and the middleware. Now, the effective route is determined before the
middleware is run. This allows the middleware to inspect the route during its processing. This is especially
useful for implementing authorization, CORS configuration and similar cross-cutting functionalities as
middleware.
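For context, a hedged sketch of what this looks like in a typical ASP.NET Core 3.0 Startup.Configure method: the route is resolved by UseRouting, so middleware registered after it can inspect the selected endpoint before UseEndpoints executes it.

// A minimal sketch of the ASP.NET Core 3.0 endpoint routing pipeline.
using Microsoft.AspNetCore.Builder;

public static class EndpointRoutingDemo
{
    public static void Configure(IApplicationBuilder app)
    {
        // Determines the effective endpoint (route) for the request.
        app.UseRouting();

        // Middleware placed here can inspect the selected endpoint,
        // e.g. to apply authorization (or CORS) policies attached to it.
        app.UseAuthorization();

        // Executes the selected endpoint.
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();
        });
    }
}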
Server-Side Blazor
Probably the most eagerly awaited new feature in ASP.NET Core 3.0 is Blazor. In .NET Core 3.0, only server-side Blazor is production ready (this feature was named Razor Components for a while in early previews of .NET Core).
Blazor makes client-side interactivity in a browser possible without any JavaScript code. If necessary,
JavaScript can still be invoked (e.g. to use browser APIs, such as geolocation and notifications). All the other
code is written in .NET instead. In server-side Blazor, this code runs on the server and manipulates the
markup in the browser using SignalR. For this to work, a constant connection between the browser and the
server is required. Offline scenario is not supported.
Figure 1: Blazor server-side execution model
Client-side Blazor is still in preview and will ship at an unannounced time in the future. As the name implies, all the code runs on the client in the browser, like in JavaScript-based SPAs (single page applications). The .NET code is compiled to WebAssembly so that browsers can execute it.
Figure 2: Blazor client-side execution model
You can read more about Blazor in an (old albeit relevant) article by Daniel Jimenez Garcia – Blazor - .NET in
the browser.
Worker Service Template
Worker Service is a new project template for an ASP.NET Core application hosting long-running background
processes instead of responding to client requests. There are support classes available to integrate such
applications better with the hosting operating system:
• On Windows, the application can act as a Windows Service.

• On Linux, it can run as a systemd service (a minimal sketch of such a worker follows below).
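A minimal sketch of such a worker; the Worker class name follows the template's convention, and UseWindowsService comes from the optional Microsoft.Extensions.Hosting.WindowsServices package (UseSystemd from Microsoft.Extensions.Hosting.Systemd would play the equivalent role on Linux).

// A minimal sketch of a .NET Core 3.0 Worker Service.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Worker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            Console.WriteLine($"Working at {DateTimeOffset.Now}");
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService() // integrate with the Windows Service Control Manager
            .ConfigureServices(services => services.AddHostedService<Worker>())
            .Build()
            .Run();
}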
Support for gRPC
gRPC is a modern high-performance contract-first RPC (remote procedure call) protocol developed by
Google and supported in many languages and on many platforms. It uses HTTP/2 for transfer and Protocol
Buffers (also known as Protobuf) for strongly-typed binary serialization. This makes it a great alternative to
WCF (Windows Communication Foundation) which has only limited support in .NET Core:
• The client libraries only support a subset of WCF bindings.

• There's only an early open-source .NET Core port of the server libraries available.
While Web API can be used to implement a REST service, gRPC is better suited to remote procedure call
scenarios which were the most common use case for WCF services. Its performance benefits are most
obvious when many RPC calls and large payloads are required.
Although a gRPC library for C# has been available for a while, .NET Core 3.0 and Visual Studio 2019 now
include first-class support with project templates and tooling to make development easier.
Editorial Note: This magazine edition contains a dedicated article on gRPC with ASP.NET Core 3.0 on Page 32.
Make sure to check it out.
Changes in Deployment Model
.NET Core supports two deployment models:
• Framework-dependent deployments require that a compatible version of the .NET Core runtime is installed on the target computer. This allows the deployment package to be smaller because it only needs to contain compiled application code and third-party dependencies. These are all platform independent, therefore a single deployment package can be created for all platforms.

• Self-contained deployments additionally include the complete .NET Core runtime. This way the target computer doesn't need to have the .NET Core runtime preinstalled. As a result, the deployment package is much larger and specific to a platform.
Before .NET Core 3.0, only self-contained deployments included an executable. Framework-dependent
deployments had to be run with dotnet command line tool. In .NET Core 3.0, framework-dependent
deployments can also include an executable for a specific target platform.
Additionally, in .NET Core 3.0, self-contained deployments support assembly trimming. This can be used
to make the deployment package smaller by only including the assemblies from the .NET Core runtime
which are used by the application. However, dynamically accessed assemblies (through Reflection) can’t
be automatically detected which can cause the application to break because of a missing assembly. The
project can be manually configured to include such assemblies, but this approach requires the deployment
package to be thoroughly tested to make sure that no required assembly is missing.
Both deployment models now also support the creation of single-file executables which include all their
dependencies (only third-party dependencies in framework-dependent deployments, also the .NET Core
runtime in self-contained deployments). Of course, these executables are always platform specific.
Startup-Time Improvements
In the field of performance, the following features focus on reducing the application startup-time:
• Two-tier JIT (just-in-time) compilation has been available for a while, but with .NET Core 3.0, it is enabled by default, although it can still be disabled. To improve startup-time, two-tier JIT performs a faster, lower-quality compilation first and makes a slower second pass at full quality later, when the application is already running.

• Ready-to-run images introduce AOT (ahead-of-time) compilation to .NET Core. Such deployment packages are larger because they include a native binary alongside the regular IL (intermediate language) assemblies, which are still needed. Because there's no need for JIT compilation before startup, the application can start even faster. For now, there's no cross-targeting support for creating ready-to-run images, therefore they must be built on their target platform.
Conclusion
A closer look at .NET Core 3.0 makes it clear that .NET Core is maturing.
Since there’s no further development planned for .NET framework, .NET Core is now the most advanced
.NET implementation. It’s cross-platform and more performant than .NET framework.
Most of the features from .NET framework are now also included in .NET Core.
For those features which aren't planned to be supported in .NET Core, there are alternatives available: for Web Forms, there's Blazor; for WCF and .NET Remoting, there are Web API and gRPC; and for Workflow Foundation, there's Azure Logic Apps.
Now is the time to start porting the .NET framework projects you’re still actively developing, if you haven’t
done so already. This way, you’ll be ready when .NET 5 arrives, as the unified .NET runtime, by the end of
2020.
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.
Thanks to Daniel Jimenez Garcia for reviewing this article.
ASP.NET CORE
Daniel Jimenez Garcia
GRPC WITH
ASP.NET
CORE 3.0
With the release of ASP.NET Core 3.0, amongst many other features, gRPC
will become a first-class citizen of the ecosystem with Microsoft
officially supporting and embracing it.
What this means for developers is that ASP.NET Core 3.0 now
ships with templates for building gRPC services, tooling for
defining service contracts using protocol buffers, tooling to
generate client/server stubs from said proto files and integration with
Kestrel/HttpClient.
After a brief introduction to gRPC, this article provides an overview on how
gRPC services can be created with ASP.NET Core and how to invoke these
services from .NET Core. Next, it will take a look at the cross-language
nature of gRPC by integrating with a Node.js service. We will finish by
exploring gRPC built-in security features based on TLS/SSL.
Companion code of this tutorial can be found on GitHub.
Editorial Note: This article was originally written using Preview 7 of ASP.NET Core 3.0.
While Daniel has updated this article to reflect the final ASP.NET Core 3.0 release, some
screenshots may still show Preview 7.
1. THE 5-MINUTE INTRODUCTION TO GRPC

gRPC is a framework designed by Google and open sourced in 2015, which enables efficient Remote Procedure Calls (RPC) between services. It uses the HTTP/2 protocol to exchange binary messages, which are serialized/deserialized using Protocol Buffers.
Protocol buffers is a binary serialization protocol also designed by Google. It is highly performant by
providing a compact binary format that requires low CPU usage.
Developers use proto files to define service and message contracts which are then used to generate the
necessary code to establish the connection and exchange the binary serialized messages. When using C#,
tooling will generate strongly typed message POCO classes, client stubs and service base classes. Being a
cross-language framework, its tooling lets you combine clients and services written in many languages like
C#, Java, Go, Node.js or C++.
Protocol buffers also allow services to evolve while remaining backwards compatible. Because each field in
a message is tagged with a number and a type, a recipient can extract the fields they know, while ignoring
the rest.
Figure 1, gRPC basics
The combination of the HTTP/2 protocol and binary messages encoded with Protocol Buffers lets gRPC
achieve much smaller payloads and higher performance, than traditional solutions like HTTP REST services.
But traditional RPC calls are not the only scenario enabled by gRPC!
The framework takes advantage of HTTP/2 persistent connections by allowing both server and client
streaming, resulting in four different styles of service methods:
• Unary RPC, where client performs a traditional RPC call, sending a request message and receiving a response.
• Server streaming RPC, where clients send an initial request but get a sequence of messages back.
• Client streaming RPC, where clients send a sequence of messages, wait for the server to process them and receive a single response back.
• Bidirectional streaming RPC, where both client and server send a sequence of messages. The streams are independent, so client and server can decide in which order to read/write the messages.
So far, it’s all great news. So now you might then be wondering why isn’t gRPC everywhere, replacing
traditional HTTP REST services?
It is due to a major drawback, its browser support!
The HTTP/2 support provided by browsers is insufficient to implement gRPC, requiring solutions like gRPC-Web, based on an intermediate proxy that translates standard HTTP requests into gRPC requests. For more
information, see this Microsoft comparison between HTTP REST and gRPC services.
Narrowing our focus back to .NET, there has been a port of gRPC for several years that lets you work with
gRPC services based on proto files.
So, what exactly is being changed for ASP.NET Core 3.0?
The existing Grpc.Core package is built on top of unmanaged code (the chttp2 library), and does not play
nicely with managed libraries like HttpClient and Kestrel that are widely used in .NET Core. Microsoft
is building new Grpc.Net.Client and Grpc.AspNetCore.Server packages that will avoid unmanaged code,
integrating instead with HttpClient and Kestrel. Tooling has also been improved to support a code-first
approach and both proto2 and proto3 syntax. Check out this excellent talk from Marc Gravell for more
information.
Let’s stop our overview of gRPC here. If you were completely new to gRPC, hopefully there was enough
to pique your interest. Those of you who have been around for a while might have recognized some WCF
ideas! Let’s then start writing some code so we can see these concepts in action.
2. CREATING A GRPC SERVICE
As mentioned in the introduction, ASP.NET Core 3.0 ships with a gRPC service template. This makes creating
a new service a very straightforward task.
If you are still following along using a preview release of ASP.NET Core, remember to enable .NET Core SDK preview
features in Visual Studio options:
Figure 2, enabling .NET Core SDK preview features
Start by creating a new ASP.NET Core project in Visual Studio 2019. Use Orders as the name of the project,
and a different name for the solution since we will be adding more projects later. Then select the gRPC
template from the ASP.NET Core 3.0 framework:
Figure 3, using the gRPC Service template with a new ASP.NET Core application
Congratulations, you have created your first fully working gRPC service! Let’s take a moment to look at
and understand the generated code. Apart from the traditional Program and Startup classes,
you should see a Protos folder containing a file named greet.proto and a Services folder containing a
GreeterService class.
Figure 4, gRPC service generated by the template
These two files define and implement a sample Greeter service, the equivalent of a “Hello World” program
for a gRPC service. The greet.proto file defines the contract: i.e. which methods does the service provide, and
what are the request/response message exchanged by client and server:
syntax = "proto3";
option csharp_namespace = "Orders";
package Greet;
service Greeter {
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
message HelloRequest {
string name = 1;
}
message HelloReply {
string message = 1;
}
The generated GreeterService class contains the implementation of the service:
public class GreeterService : Greeter.GreeterBase
{
    private readonly ILogger<GreeterService> _logger;

    public GreeterService(ILogger<GreeterService> logger)
    {
        _logger = logger;
    }

    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = "Hello " + request.Name
        });
    }
}
As you can see, it inherits from a suspicious looking class named Greeter.GreeterBase, while
HelloRequest and HelloReply classes are used as request and response respectively. These classes are
generated by the gRPC tooling, based on the greet.proto file, providing all the functionality needed to send/
receive messages so you just need to worry about the method implementation!
You can use F12 to inspect the class and you will see the generated code, which will be located at obj/
Debug/netcoreapp3.0/GreetGrpc.cs (service) and obj/Debug/netcoreapp3.0/Greet.cs (messages).
It is automatically generated at build time, based on your project settings. If you inspect the project file, you
will see a reference to the Grpc.AspNetCore package and a Protobuf item:
<ItemGroup>
<Protobuf Include="Protos\greet.proto" GrpcServices="Server" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Grpc.AspNetCore" Version="2.23.1" />
</ItemGroup>
Grpc.AspNetCore is a meta-package including the tooling needed at design time and the libraries
needed at runtime. The Protobuf item points the tooling to the proto files it needs to generate the service
and message classes. The GrpcServices="Server" attribute indicates that service code should be
generated, rather than client code.
Let’s finish our inspection of the generated code by taking a quick look at the Startup class. You will see
how the necessary ASP.NET Core services to host a gRPC service are added with services.AddGrpc(),
while the GreeterService is mapped to a specific endpoint of the ASP.NET Core service by using
endpoints.MapGrpcService<GreeterService>().
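To make the wiring concrete, here is a minimal sketch of what that generated Startup class looks like, trimmed down to the gRPC-relevant parts (your generated file may contain a few extra endpoints and comments):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers the services required to host gRPC endpoints.
        services.AddGrpc();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            // Exposes GreeterService as a gRPC endpoint over HTTP/2.
            endpoints.MapGrpcService<GreeterService>();
        });
    }
}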
2.1 Defining a contract using proto files
Let’s now replace the sample service contract with our own contract for an imaginary Orders service. This
will be a simple service that lets users create orders and later ask for the status:
syntax = "proto3";
option csharp_namespace = "Orders";
package Orders;
service OrderPlacement {
rpc CreateOrder (CreateOrderRequest) returns (CreateOrderReply) {}
rpc GetOrderStatus (GetOrderStatusRequest) returns (stream GetOrderStatusResponse) {}
}
message CreateOrderRequest {
string productId = 1;
int32 quantity = 2;
string address = 3;
}
message CreateOrderReply {
string orderId = 1;
}
message GetOrderStatusRequest {
string orderId = 1;
}
message GetOrderStatusResponse {
string status = 1;
}
As you can see, CreateOrder is a standard Unary RPC method while GetOrderStatus is a Server
streaming RPC method. The request/response messages are fairly self-descriptive but notice how each field
has a type and an order, allowing backwards compatibility when creating new service versions.
Let’s now rename the file to orders.proto and move it to a new Protos folder located at the solution
root, since we will share this file with a client project we will create later. Once renamed and moved to its
new folder, you will need to edit the Orders.csproj project file and update the Protobuf item to include the
path to the updated file:
<Protobuf Include="..\Protos\orders.proto" GrpcServices="Server" />
Let’s now implement the OrderPlacement service we just defined.
2.2 Implementing the service
If you build the project now, you should receive a fair number of errors since the tooling will no
longer generate classes for the previous Greeter service and instead will generate them for the new
OrderPlacement service.
Rename the GreeterService.cs file as OrdersService.cs and replace its contents with the following:
public class OrderPlacementService : OrderPlacement.OrderPlacementBase
{
    private readonly ILogger<OrderPlacementService> _logger;

    public OrderPlacementService(ILogger<OrderPlacementService> logger)
    {
        _logger = logger;
    }

    public override Task<CreateOrderReply> CreateOrder(CreateOrderRequest request, ServerCallContext context)
    {
        var orderId = Guid.NewGuid().ToString();
        this._logger.LogInformation($"Created order {orderId}");
        return Task.FromResult(new CreateOrderReply {
            OrderId = orderId
        });
    }

    public override async Task GetOrderStatus(GetOrderStatusRequest request,
        IServerStreamWriter<GetOrderStatusResponse> responseStream, ServerCallContext context)
    {
        await responseStream.WriteAsync(
            new GetOrderStatusResponse { Status = "Created" });
        await Task.Delay(500);
        await responseStream.WriteAsync(
            new GetOrderStatusResponse { Status = "Validated" });
        await Task.Delay(1000);
        await responseStream.WriteAsync(
            new GetOrderStatusResponse { Status = "Dispatched" });
    }
}
The implementation of CreateOrder is fairly similar to the previous Greeter one. The GetOrderStatus
one is a bit more interesting. Being a server streaming method, it receives a ResponseStream parameter
that lets you send as many response messages back to the client as needed. The implementation above
simulates a real service by sending some messages after certain delays.
2.3 Hosting the service with ASP.NET Core
The only bit we need to modify from the generated project in order to host our new OrderPlacement
service is to replace the endpoints.MapGrpcService<GreeterService>() in the Startup class with
endpoints.MapGrpcService<OrderPlacementService>().
The template is already adding all the gRPC infrastructure needed to host the service in ASP.NET via
the existing services.AddGrpc() call. You should be able to start the server with the dotnet run
command:
Figure 5, running the first gRPC service
The server will start listening on port 5001 using HTTPS (and 5000 using HTTP) as per the default settings.
Time to switch our focus to the client side.
3. CREATING A GRPC CLIENT
While there is no template to generate a client, the new Grpc.Net.Client and Grpc.Net.ClientFactory
packages combined with gRPC tooling makes it easy to create the code needed to invoke gRPC service
methods.
3.1 Using Grpc.Net.Client in a Console project
First, let’s create a simple console application that lets us invoke the methods of the OrderPlacement
service we defined in the previous section. Start by adding a new project named Client to your solution,
using the .NET Core Console Application template.
Once generated, install the following NuGet packages: Grpc.Tools, Grpc.Net.Client and Google.Protobuf. (If
using the GUI in Visual Studio, remember to tick the “Include prerelease” checkbox)
Next, manually edit the Client.csproj project file. First update the Grpc.Tools reference with the
PrivateAssets attribute, so .NET knows not to include this library in the generated output:
<PackageReference Include="Grpc.Tools" Version="2.23.0" PrivateAssets="All" />
Then, include a new ItemGroup with a Protobuf item that references the orders.proto file defined in the
previous section:
<ItemGroup>
<Protobuf Include="..\Protos\orders.proto" GrpcServices="Client" />
</ItemGroup>
Notice the GrpcServices attribute telling Grpc.Tools that this time we need client stub code instead of
service base classes!
Once you are done, rebuild the project. You should now be able to use the generated client code under the
Orders namespace. Let’s write the code needed to instantiate a client of our service:
var channel = GrpcChannel.ForAddress("https://localhost:5001");
var ordersClient = new OrderPlacement.OrderPlacementClient(channel);
We simply instantiate a GrpcChannel pointing at the URL of our service, using the factory method
provided by the Grpc.Net.Client package, and use it to create an instance of the generated class
OrderPlacementClient.
You can then use the ordersClient instance to invoke the CreateOrder method:
Console.WriteLine("Welcome to the gRPC client");
var reply = await ordersClient.CreateOrderAsync(new CreateOrderRequest
{
ProductId = "ABC1234",
Quantity = 1,
Address = "Mock Address"
});
Console.WriteLine($"Created order: {reply.OrderId}");
Now build and run the project while your server is also running. You should see your first client-server
interaction using gRPC.
Figure 6, client invoking a service method
Invoking the server streaming method is equally simple. The generated GetOrderStatus method of the
client stub returns an AsyncServerStreamingCall<GetOrderStatusResponse>. This is an object with
an async iterable that lets you process every message received by the server:
using (var statusReplies = ordersClient.GetOrderStatus(new GetOrderStatusRequest {
OrderId = reply.OrderId }))
{
while (await statusReplies.ResponseStream.MoveNext())
{
var statusReply = statusReplies.ResponseStream.Current.Status;
Console.WriteLine($"Order status: {statusReply}");
}
}
If you build and run the client again, you should see the streamed messages:
Figure 7, invoking the server streaming method
While this does not cover the four method styles available, it should give you enough information to
explore the remaining two on your own. You can also explore the gRPC examples in the official grpc-dotnet
repo, covering the different RPC styles and more.
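As a taster for the remaining styles, a client streaming method is not part of the Orders contract above, but if you added a hypothetical rpc UploadOrders (stream CreateOrderRequest) returns (CreateOrderReply) to the proto file, its server-side implementation would look roughly like this sketch (the method name and reply contents are assumptions for illustration only):

public override async Task<CreateOrderReply> UploadOrders(
    IAsyncStreamReader<CreateOrderRequest> requestStream, ServerCallContext context)
{
    var count = 0;

    // Read every message the client streams until it completes the call;
    // requestStream.Current holds the message that was just received.
    while (await requestStream.MoveNext(context.CancellationToken))
    {
        count++;
    }

    // A single reply is sent back once the incoming stream ends.
    return new CreateOrderReply { OrderId = $"batch-{count}" };
}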
3.2 Using Grpc.Net.ClientFactory to communicate between
services
We have seen how the Grpc.Net.Client package allows you to create a client using the well-known
HttpClient class. This might be enough for a sample console application like the one we just created.
However, in a more complex application you would rather use HttpClientFactory to manage your
HttpClient instances.
The good news is that there is a second package Grpc.Net.ClientFactory which contains the necessary
classes to manage gRPC clients using HttpClientFactory. We will take a look by exploring a not so
uncommon scenario, a service that invokes another service. Let’s imagine for a second that our Orders
service needs to communicate with another service, Shippings, when creating an Order.
3.2.1 Adding a second gRPC service
Begin by adding a new shippings.proto file to the Protos folder of the solution. This should look familiar by
now; it is another sample service exposing a traditional Unary RPC method.
syntax = "proto3";
option csharp_namespace = "Shippings";
package Shippings;
service ProductShipment {
rpc SendOrder (SendOrderRequest) returns (SendOrderReply) {}
}
message SendOrderRequest {
string productId = 1;
int32 quantity = 2;
string address = 3;
}
message SendOrderReply {
}
The only interesting bit is the empty reply message. It is expected for services to always return a message,
even if empty. This is just an idiom in gRPC; it will help with versioning if/when fields are added to the
response.
Now repeat the steps in section 2 of the article to add a new Shippings project to the solution that
implements the service defined by this proto file. Update its launchSettings.json so this service gets started
at https://localhost:5003 instead of its 5001 default.
If you have any trouble, check the source code on GitHub.
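The article relies on launchSettings.json for the port change. If you prefer pinning the port in code, a Kestrel configuration along these lines in the Shippings project’s CreateHostBuilder method would achieve the same result (a sketch only; the port and protocol choices are assumptions matching the setup above):

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.Extensions.Hosting;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.ConfigureKestrel(options =>
            {
                // Listen on 5003 with HTTPS; gRPC requires HTTP/2.
                options.ListenLocalhost(5003, listenOptions =>
                {
                    listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
                    listenOptions.UseHttps();
                });
            });
            webBuilder.UseStartup<Startup>();
        });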
3.2.2 Registering a client with the HttpClientFactory
We will now update the Orders service so it invokes the SendOrder method of the Shippings service
as part of its CreateOrder method. Update the Orders.csproj file to include a new Protobuf item
referencing the shippings.proto file we just created. Unlike the existing server reference, this will be a
client reference so Grpc.Tools generates the necessary client classes:
<ItemGroup>
<Protobuf Include="..\Protos\orders.proto" GrpcServices="Server" />
<Protobuf Include="..\Protos\shippings.proto" GrpcServices="Client" />
</ItemGroup>
Save and rebuild the project. Next install the Grpc.Net.ClientFactory NuGet package. Once installed, you
will be able to use the services.AddGrpcClient extension method to register the client. Update the
ConfigureServices method of the startup class as follows:
services.AddGrpc();
services
.AddGrpcClient<ProductShipment.ProductShipmentClient>(opts =>
{
opts.Address = new Uri("https://localhost:5003");
});
This registers a transient instance of the ProductShipmentClient, where its underlying HttpClient is
automatically managed for us. Since it is registered as a service in the DI container, we can easily request
an instance in the constructor of the existing OrdersPlacement service:
private readonly ILogger<OrderPlacementService> _logger;
private readonly ProductShipment.ProductShipmentClient _shippings;

public OrderPlacementService(ILogger<OrderPlacementService> logger,
    ProductShipment.ProductShipmentClient shippings)
{
    _logger = logger;
    _shippings = shippings;
}
Updating the existing CreateOrder method implementation so it invokes the new service is pretty easy as
well:
public override async Task<CreateOrderReply> CreateOrder(CreateOrderRequest
request, ServerCallContext context)
{
var orderId = Guid.NewGuid().ToString();
await this._shippings.SendOrderAsync(new SendOrderRequest
{
ProductId = request.ProductId,
Quantity = request.Quantity,
Address = request.Address
});
this._logger.LogInformation($"Created order {orderId}");
return new CreateOrderReply {
OrderId = orderId
};
}
Once you are done with the changes, start each service on its own console and run the client on a third:
Figure 8, trying our client and the 2 services
This concludes the basics for both implementing and calling gRPC services in .NET.
We have just scratched the surface of the different method styles available. For more information on
Authentication, Deadlines, Cancellation, etc. check the Microsoft docs and the gRPC examples in the official
grpc-dotnet repo.
4. CROSS-LANGUAGE GRPC CLIENT AND
SERVICE
One of the key aspects of gRPC is its cross-language nature. Given a proto file, client and service can be
generated using any combination of the supported languages. Let’s explore this by creating a gRPC service
using Node.js, a new Products service we can use to get the details of a given product.
4.1 Creating a gRPC server in Node
Like we did with the previous Orders and Shippings services, let’s begin by adding a new products.proto file
inside the Protos folder. It will be another straightforward contract with a Unary RPC method:
syntax = "proto3";
option csharp_namespace = "Products";
package products;
service ProductsInventory {
rpc Details(ProductDetailsRequest) returns (ProductDetailsResponse){}
}
message ProductDetailsRequest {
string productId = 1;
}
message ProductDetailsResponse {
string productId = 1;
string name = 2;
string category = 3;
}
Next create a new Products folder inside the solution and navigate into it. We will initialize a new
Node.js project by running the npm init command from this folder. Once finished, we will install the
necessary dependencies to create a gRPC service using Node:
npm install --save grpc @grpc/proto-loader
Once installed, add a new file server.js and update package.json main property as
"main": "server.js". This way we will be able to start the gRPC server running the npm run
command. It is now time to implement the service.
The actual method implementation is a fairly simple function:
const serviceImplementation = {
Details(call, callback) {
const { productId } = call.request;
console.log('Sending details for:', productId);
callback(null, {productId, name: 'mockName', category: 'mockCategory'});
}
};
Now we just need a bit of plumbing code using the installed grpc dependencies in order to get an HTTP/2
server started that implements the service defined by the proto file and the concrete implementation:
const grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader');
// Implement service
const serviceImplementation = { ... };
// Load proto file
const proto = grpc.loadPackageDefinition(
protoLoader.loadSync('../Protos/products.proto', {
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true
})
);
// Define server using proto file and concrete implementation
const server = new grpc.Server();
server.addService(proto.products.ProductsInventory.service, serviceImplementation);
// get the server started
server.bind('0.0.0.0:5004', grpc.ServerCredentials.createInsecure());
server.start();
Even though we are using a different language, the code listing should feel pretty familiar after
the .NET services. Developers are responsible for writing the service implementation. Standard gRPC
libraries are then used to load the proto file and create a server able to handle the requests, routing
them to the appropriate method of the implementation and handling the serialization/deserialization of
messages.
There is however an important difference!
Note the second argument grpc.ServerCredentials.createInsecure() to the server.bind
function. We are starting the server without using TLS, which means traffic won’t be encrypted across the
wire. While this is fine for development, it would be a critical risk in production. We will come back to this
topic in Section 5.
4.2 Communicate with the Node service
Now that we have a new service, let’s update our console application to generate the necessary client
stub code to invoke its methods. Edit the Client.csproj project file adding a new Protobuf element that
references the new products.proto file:
<Protobuf Include="..\Protos\products.proto" GrpcServices="Client" />
Once you save and rebuild the project, let’s add the necessary code to our client application in order to
invoke the service. Feel free to get a bit creative for your Console application to provide some interface that
lets you invoke either the Products or Order services.
The basic code to invoke the Details method of the products service is as follows:
var channel = GrpcChannel.ForAddress("http://localhost:5004");
var productsClient = new ProductsInventory.ProductsInventoryClient(channel);
var reply = await productsClient.DetailsAsync(
    new ProductDetailsRequest { ProductId = "ABC1234" });
Console.WriteLine($"Product details received. Name={reply.Name}, Category={reply.Category}");
This is very similar to the code that sends a request to the Orders service. Note however that since our
server does not have TLS enabled, we need to define the URL using the http:// protocol instead of
https://. If you now try to run the client and send a request to the server, you will receive the following
exception:
Figure 9, exception trying to invoke a service without TLS
This is because when using the HTTP/2 protocol, the HttpClient used by the service client will enforce
TLS. Since the Node server does not have TLS enabled, the connection cannot be established. In order to
allow this during development, we need to use the following AppContext switch as advised in the official
docs:
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport",
true);
Once we make that change, we are able to invoke the Node service from our .NET client:
Figure 10, invoking the Node.js gRPC service from the .NET client
As you can see, it does not matter which language (or platform, for that matter) the client and server are
running on. As long as both adhere to the contract defined in the proto files, gRPC will make the transport/
language choice transparent for the other side.
The final section of the article will look at TLS/SSL support in greater detail.
5. UNDERSTANDING TLS/SSL WITH GRPC
5.1 Server credentials for TLS/SSL
gRPC is designed with TLS/SSL (Transport Layer Security/Secure Sockets Layer) support in mind in order
to encrypt the traffic sent through the wire. This requires the gRPC servers to be configured with valid SSL
credentials.
If you have paid attention throughout the article, you might have noticed that both services implemented
in .NET and hosted in ASP.NET Core applications (Orders and Shippings) used TLS/SSL out of the box
without us having to do anything. In the case of the Node.js app, however, we used insecure connections
without TLS/SSL since we didn’t supply valid SSL credentials.
5.1.1 ASP.NET Core development certificates
When hosting a gRPC server within an ASP.NET Core application using Kestrel, we get the same default
TLS/SSL features for free as with any other ASP.NET Core application. This includes out of the box
development certificates and default HTTPS support in project templates, as per the official docs.
By default, development certificates are installed in the ${APPDATA}/ASP.NET/Https folder (Windows)
or the ${HOME}/.aspnet/https folder (Mac and Linux). There is even a CLI utility named dotnet dev-certs
that comes with .NET Core and lets you manage the development certificate. Check out this article
from Scott Hanselman for more information.
Essentially, this means when implementing gRPC services with ASP.NET Core, the SSL certificate is
automatically managed for us and TLS/SSL is enabled even in our local development environments.
5.1.2 Manually generating development certificates
You might find yourself working on a platform that does not provide you with development certificates by
default, like when we created the Node.js server.
In those cases, you will need to generate yourself a self-signed set of SSL credentials that you can then
provide to your server during startup. Using openssl, it is relatively easy to create a script that will generate
a new CA root certificate, and a public/private key pair for the server. (If you are in Windows, installing git and
the git bash is the easiest way to get it). Such a script will look like this:
openssl genrsa -passout pass:1234 -des3 -out ca.key 4096
openssl req -passin pass:1234 -new -x509 -days 365 -key ca.key -out ca.crt -subj "/C=CL/ST=RM/L=Santiago/O=Test/OU=Test/CN=ca"
openssl genrsa -passout pass:1234 -des3 -out server.key 4096
openssl req -passin pass:1234 -new -key server.key -out server.csr -subj "/C=CL/ST=RM/L=Santiago/O=Test/OU=Server/CN=localhost"
openssl x509 -req -passin pass:1234 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
openssl rsa -passin pass:1234 -in server.key -out server.key
The script first generates a self-signed CA (Certificate Authority) root, then generates the public (server.crt)
and private key (server.key) parts of an X509 certificate signed by that CA. Check the GitHub repo for full
mac/linux and windows scripts.
Once the necessary certificates are generated, you can now update the code starting the Node.js gRPC
service to load the SSL credentials from these files. That is, we will replace the line:
server.bind('0.0.0.0:5004', grpc.ServerCredentials.createInsecure());
..with:
const fs = require('fs');
...
const credentials = grpc.ServerCredentials.createSsl(
fs.readFileSync('./certs/ca.crt'),
[{
cert_chain: fs.readFileSync('./certs/server.crt'),
private_key: fs.readFileSync('./certs/server.key')
}],
/*checkClientCertificate*/ false);
server.bind('0.0.0.0:5004', credentials);
If you restart the Node service and update the address in the console client to https://localhost:5004, you
will realize something is still not working:
Figure 11, the self-signed certificate for the Node service is not trusted
This is because we have self-signed the certificate, and it is not trusted by our machine. We could add it to
our trusted store, but we could also disable the validation of the certificate in development environments.
We can provide our own HttpClient instance used by the GrpcChannel through the optional
GrpcChannelOptions parameter:
var httpHandler = new HttpClientHandler();
httpHandler.ServerCertificateCustomValidationCallback =
(message, cert, chain, errors) => true;
var httpClient = new HttpClient(httpHandler);
var channel = GrpcChannel.ForAddress("https://localhost:5004",
    new GrpcChannelOptions { HttpClient = httpClient });
var productsClient = new ProductsInventory.ProductsInventoryClient(channel);
And this way we can use SSL/TLS with our Node.js service.
5.2 Mutual authentication with client provided credentials
gRPC services can be configured to require client certificates within the requests, providing mutual
authentication between client and server.
Begin by updating the previous certificate generation script so it also produces client certificates using the
same CA certificate:
openssl genrsa -passout pass:1234 -des3 -out client.key 4096
openssl req -passin pass:1234 -new -key client.key -out client.csr -subj "/C=CL/ST=RM/L=Santiago/O=Test/OU=Client/CN=localhost"
openssl x509 -passin pass:1234 -req -days 365 -in client.csr -CA ca.crt -CAkey
ca.key -set_serial 01 -out client.crt
openssl rsa -passin pass:1234 -in client.key -out client.key
openssl pkcs12 -export -password pass:1234 -out client.pfx -inkey client.key -in
client.crt
Now edit the Client.csproj project file to copy the generated certificate files to the output folder:
<ItemGroup>
<None Include="..\Products\certs\*.*" LinkBase="ProductCerts">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
</ItemGroup>
Note that the folder from which you Include them will depend on where your script creates the certificate files. In
my case, it was a folder named certs inside the Products service.
Now we need to provide the generated client.pfx certificate when connecting with the Products GRPC
server. In order to do so, we just need to load it as an X509Certificate2 and tell the HttpClientHandler
used by the GrpcChannel to supply said certificate as the client credentials.
This might sound complicated, but it doesn’t change the code that initializes the Products client
instance much:
var basePath = Path.GetDirectoryName(typeof(Program).Assembly.Location);
var certificate = new X509Certificate2(
    Path.Combine(basePath, "ProductCerts", "client.pfx"), "1234");
var handler = new HttpClientHandler();
handler.ClientCertificates.Add(certificate);
var httpClient = new HttpClient(handler);
var channel = GrpcChannel.ForAddress(serverUrl,
    new GrpcChannelOptions { HttpClient = httpClient });
return new ProductsInventory.ProductsInventoryClient(channel);
Note that you need to supply the same password to the X509Certificate2 constructor that you used to
generate the client.pfx file. For the purposes of the article we are hardcoding it as 1234!
Once the client is updated to provide the credentials, you can update the checkClientCertificate
parameter of the Node server startup and verify that the client provides a valid certificate as well.
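On the Node side, that is the checkClientCertificate flag. If you were hosting the service in ASP.NET Core instead and wanted Kestrel to demand a client certificate, a configuration roughly like the following sketch would be the equivalent (an illustration based on Kestrel’s HTTPS options, not something the sample solution configures; validating the presented certificate against your CA is a separate step):

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Https;
using Microsoft.Extensions.Hosting;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.ConfigureKestrel(options =>
            {
                options.ConfigureHttpsDefaults(https =>
                {
                    // Ask connecting clients for a certificate during the TLS handshake.
                    https.ClientCertificateMode = ClientCertificateMode.RequireCertificate;
                });
            });
            webBuilder.UseStartup<Startup>();
        });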
5.3 Using development certificates with Docker for local
development
To finish the article, let’s take a quick look at the extra challenges that Docker presents in regards to
enabling SSL/TLS during local development.
To begin with, the ASP.NET Core development certificates are not automatically shared with the containers.
In order to do so, we first need to export the certificate as a pfx file, as per the commands described in the
official samples. For example, in Windows you can run:
dotnet dev-certs https -ep ${APPDATA}/ASP.NET/Https/aspnetapp.pfx -p mypassword
Exporting the certificate is just the beginning. When running your containers, you need to make this
certificate available inside the container. For example, when using docker-compose you can mount the
folder in which the pfx file was generated as a volume:
volumes:
- ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
Then setup the environment variables to override the certificate location and its password:
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=https://+;http://+
- ASPNETCORE_HTTPS_PORT=5001
- ASPNETCORE_Kestrel__Certificates__Default__Password=mypassword
- ASPNETCORE_Kestrel__Certificates__Default__Path=/root/.aspnet/https/aspnetapp.pfx
This should let you run your ASP.NET Core gRPC services using docker with TLS/SSL enabled.
Figure 12, using docker for local development with SSL enabled
One last caveat though! The approach we just saw works as long as you map the container ports to your
localhost, since the self-signed SSL certificates are created for localhost. However, when two containers
communicate with each other (as in the Orders container sending a request to the Shippings service),
they will use the name of the container as the host in the URL (i.e., the Orders service will send a request
to shippings:443 rather than localhost:5003). We need to bypass the host validation or the SSL
connection will fail.
How the validation is disabled depends on how the client is created. When using the
Grpc.Net.Client, you can supply an HttpClientHandler with a void implementation of its
ServerCertificateCustomValidationCallback. We have already seen this in section 5.1.2 in order to
accept the self-signed certificate created for the Node server.
When using the Grpc.Net.ClientFactory, there is a similar approach that lets you configure the
HttpClientHandler after the call to services.AddGrpcClient:
services
.AddGrpcClient<ProductShipment.ProductShipmentClient>(opts =>
{
opts.Address = new Uri(shippings_url);
}).ConfigurePrimaryHttpMessageHandler(() => {
return new HttpClientHandler
{
ServerCertificateCustomValidationCallback = (message, cert, chain, errors) =>
true
};
});
The second approach would be needed if you want to run both the Orders and Shippings services inside
docker.
Figure 13, services communicating with each other using SSL inside docker
Before we finish, let me remark that you only want to disable this validation for local development
purposes. Make sure these certificates and disabled validations do not end up being deployed in
production!
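One way to keep that safety net is to only register the relaxed handler when running in the Development environment. Here is a minimal sketch, assuming the ProductShipment client registration shown earlier and an IHostEnvironment passed in from the Startup class (the AddShippingsClient helper is a hypothetical name introduced for this example):

using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class ShippingsClientRegistration
{
    public static IServiceCollection AddShippingsClient(
        this IServiceCollection services, IHostEnvironment env, Uri shippingsUrl)
    {
        services
            .AddGrpcClient<ProductShipment.ProductShipmentClient>(opts => opts.Address = shippingsUrl)
            .ConfigurePrimaryHttpMessageHandler(() =>
            {
                var handler = new HttpClientHandler();

                if (env.IsDevelopment())
                {
                    // Trust any server certificate only while developing locally;
                    // production keeps the default certificate validation.
                    handler.ServerCertificateCustomValidationCallback =
                        HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;
                }

                return handler;
            });

        return services;
    }
}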
Conclusion
First-class support for gRPC in the latest ASP.NET Core 3.0 release is great news for .NET developers. They
will get easier than ever access to a framework that provides efficient, secure and cross-language/platform
Remote Procedure Calls between servers.
While it has a major drawback in the lack of browser support, there is no shortage of scenarios in which it
can shine or at least become a worthy addition to your toolset. Building microservices, native mobile apps
or Internet of Things (IoT) applications are just a few examples where gRPC would be a good fit.
Download the entire source code from GitHub at
bit.ly/dncm44-grpc
Daniel Jimenez Garcia
Author
Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience. He started as
a Microsoft developer and learned to love C# in general and ASP MVC in particular. In the latter half
of his career he worked on a broader set of technologies and platforms, while these days he is particularly
interested in .NET Core and Node.js. He is always looking for better practices and can be seen answering
questions on Stack Overflow.
Thanks to Dobromir Nikolov for reviewing this article.
VISUAL STUDIO
Dobromir Nikolov
STREAMLINING YOUR
VISUAL STUDIO
PROJECT SETUP
Creating and distributing Visual Studio
templates is hard. You need to get familiar
with custom XML formats, the VSIX project
type, and Visual Studio’s occasionally eccentric
behavior.
Don’t waste time with that.
Learn how you can instantly extract a ready-to-go template out of your existing solution using solution-snapshotter.
Creating custom multi-project templates for Visual Studio is hard. There’s
lots of tricky stuff you need to do in order for your template to have a
decent physical and solution structure. I wrote a tool to help you with that.
Enter solution-snapshotter - a tool that automatically exports a given solution
as a Visual Studio template. It takes care of preserving your solution and
folder structure, and also keeps any extra non-project files you may have
such as configurations, ruleset files, etc. The generated template will be
packaged in a Visual Studio extension for easy installation and distribution.
Go straight to the repo to see it in action, or read the rest of the article for a more detailed explanation of
what problems it solves and how you can use it.
Solution Snapshotter – Visual Studio tool
Have you ever found yourself setting up the same project structure, using the same framework, the same
ORM, the same logging library - over and over again?
I know I have.
This can potentially take up days until you get every tiny detail right.
When you’re using .NET with Visual Studio and you find yourself setting up projects often, the logical
solution would be to automate this process through a Visual Studio template. Like one of these:
To create such a template, there’s an Export Template button under the Project menu (on the top) inside
Visual Studio.
It creates a .zip file that you have to place inside the “%USER%/Documents/Visual Studio/Templates/
ProjectTemplates” folder. After doing that, you’ll have your project available as a template.
The problem with Visual Studio Export
Visual Studio export works only for single assemblies that don’t reference anything else, but you’d rarely
see a setup as simple as that.
Most modern projects adopt a multi-assembly approach, with solution structures often looking similar to
this:
And folder structures looking similar to this:
You can have templates like that. They are called multi-project templates.
The problem is that when you attempt to create one, things get unnecessarily complicated.
Through the rest of the article, we’ll go over the creation of such templates and the numerous caveats that
appear during the process.
Creating multi-project templates
Say you have the solution similar to the one we just saw and you have already set up various tools such as
XUnit, EntityFramework, Swagger, StyleCop, authentication/authorization, etc. You would like to export the
solution as a template so you can reuse the setup for other projects.
It shouldn’t be a big deal, right? Let’s check for ourselves.
Note: This particular project setup is available on the VS Marketplace. It is automatically generated from this
source project using solution-snapshotter - the tool we’ll be introducing.
Getting started
Note: The content that follows is not meant to be a tutorial on how to create multi-project templates. Don’t
worry if you feel lost at any point. For the more curious among you, I’ll be providing links where you can get
additional info on things that we’ll simply skim over. The main points are: there’s lots of tricky stuff to do; you can
avoid doing it by using the tool which we’ll introduce later.
The documentation for creating multi-project templates recommends us to use the built-in Export
Template functionality for each project in our solution. We should then combine the exported project
templates into a single, multi-project template using a special .vstemplate format.
After doing this for the setup that we introduced, we get to a folder looking like this:
Each MyProject.* folder is a project template that was exported by Visual Studio. What we did is simply
place them into a single directory. We also wrote the MyProjectTemplate.vstemplate file. It is what unites
everything into a single, multi-project template. Its contents are the following:
After zipping everything and placing it into Visual Studio’s ProjectTemplates folder, we have our template
available.
Let’s use it to create a project called TestTemplateProject.
Structure-wise, it looks okay. However, if we try to build it, we’ll quickly find out that the references are all
broken!
You see, our initial project had a clean and tidy folder structure, like this:
However, when using the built-in Export Template functionality, Visual Studio ignores all that and just
flattens the structure like so:
It doesn’t care to fix the reference paths inside the .csproj files that were meant for the original folder
structure, thus they get broken.
Through the error messages, we can also see that the reference names haven’t been updated. Even though
we’ve named our new project TestTemplateProject, the references in the .csproj files are all looking for
assemblies named MyProject.*, which was the original project name.
Caveat 1: Custom physical folder structure
The bad news is, the multi-project template format does not support custom physical folder structures.
The good news is, we can work around that by packaging our template into a Visual Studio extension
(VSIX). This will allow us to plug in during the project creation phase and execute custom logic using the
IWizard interface.
Note: I won’t get too much into how you can package your template or use the IWizard interface as that will
bloat the article significantly. If it interests you, there’s a tutorial on packaging in the docs. After getting through
it, read about using wizards. Then, you’ll need to get familiar with the _Solution interface and its derivatives Solution2, Solution3, and Solution4. Overall, working with these is clumsy. You often need to resort to explicit
typecasts and try-catch blocks. If you’re looking into it, these extension methods will come in handy.
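For a sense of what that plumbing looks like, here is a bare-bones IWizard skeleton (a sketch only; it assumes references to Microsoft.VisualStudio.TemplateWizard and EnvDTE, and the class name is hypothetical). This is exactly the kind of code solution-snapshotter aims to spare you from writing by hand:

using System.Collections.Generic;
using EnvDTE;
using Microsoft.VisualStudio.TemplateWizard;

public class FolderStructureWizard : IWizard
{
    // Called first; gives access to template parameters such as the chosen project name.
    public void RunStarted(object automationObject,
        Dictionary<string, string> replacementsDictionary,
        WizardRunKind runKind, object[] customParams)
    {
    }

    // Called when the wizard completes - a typical place to rearrange projects
    // into the desired physical folder structure and unpack any extra files.
    public void RunFinished()
    {
    }

    public void ProjectFinishedGenerating(Project project) { }
    public void ProjectItemFinishedGenerating(ProjectItem projectItem) { }
    public void BeforeOpeningFile(ProjectItem projectItem) { }
    public bool ShouldAddProjectItem(string filePath) => true;
}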
After packaging your template in a Visual Studio extension project, you’ll have something similar to this:
You can build this project to get a .vsix file, which you can run to install the extension to Visual Studio
instances of your choice.
After installing the built .vsix, the template will become available
in Visual Studio.
Again, there are quite a few steps to get to this point. We will not be digging further into those. Our goal is
to just skip over to the largest issues that arise during the template creation process. This is so we can then
clearly see how and why those issues are solved using solution-snapshotter - the tool we’ll introduce at the
end of the article.
If you would like to learn more about the manual process, the docs are a decent starting point.
We’re continuing the article under the following assumptions: we’ve successfully packaged our template
into a VSIX; we’ve successfully implemented custom logic that creates a correct folder structure and
rearranges our projects correspondingly; we’ve successfully fixed the broken reference paths; we’ve
successfully installed the VSIX to Visual Studio.
Our newly packaged template
Here is how our template looks after installing.
Let’s create a new project out of it called TestVSIXTemplate.
Looks pretty nice, we can also see that the generated folder structure is correct.
But again, even though we’ve fixed the folder structure and references… it doesn’t build.
Caveat 2: Extra files and folders
We can see from the error message that the stylecop.json file is missing. In the initial project, this file was
placed in a separate folder called configuration. It wasn’t directly referenced by any .csproj, but instead was
added as a link.
The configuration folder is however, missing. This is because when we initially exported each project using
Export Template, any extra files were lost. The .vstemplate format only allows you to include files that are
directly referenced by the .csproj files.
Again, there’s a workaround.
Since we’ve packaged our template into a Visual Studio extension and Visual Studio extensions are simply
.NET assemblies, we can include any extra files we like by either embedding them or including them as
assets. They can be then unpacked during the folder structure creation using custom logic placed inside the
IWizard interface implementation.
Not a very pleasant development experience.
Caveat 3: Maintenance
Even if we do all of the above correctly, we still have to maintain the template. That brings us to another
problem.
The template we have is no longer a runnable project, but instead is a bunch of tokenized source files
similar to this one.
This means that any changes we make, we can’t compile and test. Testing is done by:
1. Updating the template (means digging around broken files)
2. Updating the VSIX
3. Running the VSIX inside an experimental Visual Studio instance
4. Creating a new project with the template
5. Verifying everything works as expected
Great if you want to spend 30 minutes just updating a NuGet package and moving a few classes!
Forget all about this - use solution-snapshotter
All of the aforementioned problems are real. I encountered them while building and maintaining the Dev
Adventures template.
And it never felt right. It always seemed to me that all of this VSIX mumbo jumbo should happen
automatically inside a CI/CD pipeline.
This led me to write a tool that automatically does everything for you. It takes an initial project source and
gives you a ready to go VSIX project that has your project packaged as a template. And it takes care of all
the mentioned issues!
Your physical folder structure, references and extra files will be preserved without writing a single line of
code. Just run solution-snapshotter.exe and enjoy the magic.
See the Dev Adventures template extension. It is now 100% automatically generated and published to the
VS Marketplace. You can check out the source repository here. Note the input.config file; that’s all you really
need.
solution-snapshotter is open-source and written in F#. You can visit the GitHub repo by going to this link.
The usage is very simple, the readme should be enough to get you started.
How can it help, you may ask? Let’s see.
For the setup that we introduced in the beginning..
..we can simply call solution-snapshotter.exe. (see the Minimal Usage section for a download link and a
command you can test with)
..and receive a ready-to-go VSIX project!
We can build it.
And receive our .vsix installer.
After installing, the template will become available inside Visual Studio!
We can use it to create a new project.
..and have our setup ready in moments!
And of course, with all references being valid!
If you’re working for a service company, use solution-snapshotter to create and maintain standard
templates for different project types.
If you’re doing microservices, use it to create common templates for services and shared libraries.
Or if you just finished setting up a great project, use it to share it with the world by extracting a template
and shipping it to the VS Marketplace.
Again, solution-snapshotter is open-source! All contributions and feedback are most welcome. Don’t
hesitate to open an issue if there’s anything you think can be improved.
Enjoy your headache-free templates!
Download the entire source code from GitHub at
bit.ly/dncm44-snapshotter
Dobromir Nikolov
Author
Dobromir Nikolov is a software developer working mainly with Microsoft technologies, with his
specialty being enterprise web applications and services. Very driven towards constantly improving
the development process, he is an avid supporter of functional programming and test-driven
development. In his spare time, you’ll find him tinkering with Haskell, building some project
on GitHub (https://github.com/dnikolovv), or occasionally talking in front of the local tech
community.
Thanks to Yacoub Massad for reviewing this article.
DEVOPS
Subodh Sohoni
DEVOPS
TIMELINE
The concept of Agile teams is passé. DevOps teams is the new ‘in thing’. It is simple to understand that DevOps is just extending the concepts and scope of Agile to the operations and testing functions in the organization, if they are not already present.

Many organizations do not consider building a DevOps team. DevOps cannot be ‘implemented’. DevOps is the way teams are designed, and it becomes a part of the mindset of the team. These teams consist of representation from business, architecture, development, operations, testing and any other function that will make possible the agile promise of ‘early and frequent delivery of value to the customer’.
When I try and capture the context of DevOps
in one diagram, I usually end up with the one
shown in Figure 1 (see next page). Although it
shows all the activities in an order that is easy
to understand, it does not depict the correct
picture of how practically a team may execute
those activities.
Figure 1: Scope of DevOps
Issue at hand
When so many disciplines are going to work together, it becomes very difficult to imagine the coherence of
all activities of all of those disciplines. It is even more difficult to put those on the same timeline without
conflicting with the others, and at the same time, supporting others to improve the productivity of the
entire team.
The Scrum concept of Sprint gives one way of putting many activities in the timebox of sprint and provides
a guideline for a certain timeline. These activities start from a development team starting a sprint, with
sprint planning, and ends with the same team doing a retrospection to inspect and adapt to improve.
These activities are fairly linear and not many are on a parallel or diverging track. Maybe backlog
grooming is the only parallel track that can be thought of. Another observation is that all these activities
are related to managing the team, requirements and efforts involved. None of them are for agile practices
that focus on development and engineering.
I can easily list these activities for you, although it is not an exhaustive list:
• Practices related to version control like branching and merging
• Builds with automated triggers like continuous integration
• Code review
• Provisioning of environments to make them ready for deployment
• Deployment of packages to the provisioned environments
• Testing the product after it is deployed. These may be functional tests or non-functional tests like performance testing
• Monitoring the health of the product
How to put all these activities in the timebox of a sprint and on the same timeline as the earlier identified agile
practices is an issue that many teams fail to address successfully.
What it means is that teams follow agile planning and management, but cannot correlate those to
continuous delivery(CD) using agile development practices. I have observed many teams finding a solution
of their own for this issue. Those solutions are applicable to their situations and scenarios.
This article is an effort to showcase the best of such practices that I have observed on one timeline, if they
are generically applicable.
Let us first define the context of the problem. We have a timebox of sprint, which is fixed for a team
anywhere between two to four weeks. In this timebox, there are predefined activities:
• Sprint planning
• Daily Scrum
• Development, which may involve architecture changes, design, coding and code quality maintenance like code analysis, unit testing and code review
• Sprint Review
• Retrospective
Start of the timeline with known Scrum activities
I am going to put it on a timeline of the sprint for better visualization.
Figure 2: Scrum events on timeline
Now, on this timeline, I would like to put the other variables that I have listed earlier. I will begin with
practices of version control - branching and merging.
Version Control – Align branching and
merging in iteration (sprint)
Feature Branches
For a team that is developing new software, I recommend using GitFlow branching strategy which is based
upon features.
At the beginning of the sprint, for every feature that is to be developed, a branch is created. These branches
are created from a long running branch, called Dev branch as a convention. Some feature branches are
long running. They span multiple sprints and are continued from an earlier sprint. All the code that is to be
deployed in the sprint will be developed in feature branches, and once developed, they will be merged in
the Dev branch. This merge will be done after a code review, say by a pull request on the feature branch.
It is also suggested to have synchronization of code by forward and backward merge. Frequency of such
synchronization should not exceed two days to reduce the chances of large conflicts getting created.
I am not showing those synchronization merges to avoid clutter on my diagrams.
Release Branch
A short running Release branch is created from Dev branch once all the feature branches of the sprint are
merged in the Dev branch. Release branch is where integration testing will be done on the code. There are
no direct commits or check-ins in the Dev and Release branch. Development of long running features will
continue in their own branch. After the integration testing and any other non-functional testing, the release
branch is merged in another long running branch, conventionally called as Production or master branch.
Production (master) branch
The production environment receives code from this branch.
Hotfix branches
You can also take care of urgent fixes for critical bugs in the production code by creating hotfix branches
from the Production branch. These get merged back in the Production branch as well as in the Dev branch
and in the Release branch, if it is there at the time when the fix was created.
We can view these branches on the timeline as shown in Figure 3.
Figure 3: Branching strategy in iteration
Version Control – Branching and merging
strategy without iterative development
For a team that is maintaining a developed software by doing bug fixes and small enhancements, concept
of the timebox of sprint is not applicable.
We are now in the context of continuous delivery of value to customers without the period of development
of batch of requirements. Every bug or a change request has to be delivered as soon as its code is ready for
delivery. In such conditions, it is necessary to have a different timeline and a branching strategy. There is no
start and end of a timebox, like a sprint, and each bug or a change request will have its own branch from
the Dev branch. Hotfix branches will remain as in the earlier model.
Note: Although I am considering and showing this model here, in the subsequent topics and diagrams, I have
considered the timebox model only, since that is more prevalent.
Figure 4: Branching strategy without iterations
Build Strategy on the timeline
Now, let us think about the builds. A common build strategy is to have a build pipeline defined for each of
the branch.
• Trigger for build of the feature branches, as well as for hotfix branches, should be Continuous Integration (CI). This ensures that whenever code is pushed in the branch, it will be built to ensure its integration with the code that was pushed by other team members in the same branch.
• Triggers for build on Dev and Production branch need to be against the code being merged from other branches, which again is the trigger of Continuous Integration, but those branches will get built less often.
• Trigger for build of Release branch needs to be manual so that it gets built only as often as required.
Figure 5: Build strategy on timeline of iteration
Provisioning and Deployment
Let us now move our attention to the next important agile practice and that is continuous deployment (CD).
This practice is divided in two parts, Provisioning and Deployment.
Provisioning
For the CD to work, we need to ensure that the environments on which the deployment is to be done are in a
state where deployment is possible.
For example, if a web application is to be deployed, it is necessary that the server that is going to host the
application has the necessary prerequisites like IIS installed and configured. If an earlier build is already
installed on it, then that needs to be removed.
This preparatory work is called provisioning. We need to define environments for deployment for the purposes of development (smoke testing), testing, UAT, production, etc. These environments may need to be created from scratch, or may need to be refreshed every time a new set of software packages is to be deployed.
The Production environment usually will not change. Other environments may be provisioned once initially in the sprint, or can be provisioned every time a new build is to be deployed. Builds of feature branches and hotfix branches are deployed to the Dev environment, which is provisioned every time a build is to be deployed, just before the deployment is triggered.
For this activity to be idempotent (resulting in the same outcome every time it is executed), we need to automate it. This is done using ARM templates or any other technology that the platform supports. For more information, please go through my article at www.dotnetcurry.com/visualstudio/1344/vsts-release-management-azure-arm-template. Terraform is a popular tool for this that can target multiple platforms. Writing such a template that describes the desired state of an environment is also called "Infrastructure as Code". Along with these technologies, PowerShell DSC (Desired State Configuration) is an important part of it; PowerShell DSC is also used by Azure Automation.
Builds of the Dev branch are deployed to the QA or Test environment for integration testing. This environment is usually provisioned once initially in the sprint. Since it has deployments of earlier features which may be required for testing the integration, it is not advisable to provision it repeatedly as part of the deployment process. We can view these on the timeline now.
Deployment
The application is deployed on the provisioned servers. Automation of this part of the activity is possible through various tools, like the Release Management part of Azure Pipelines or the open source tool Jenkins (www.dotnetcurry.com/devops/1477/jenkins-for-devops).
These tools may use protocols and shells like HTTP, HTTPS, FTP, WinRM, Bash, etc. to copy the packages to the target (provisioned) hosts and to execute the installation scripts. Provisioning and deployment can be executed as two steps in succession for the Dev environment; whereas for the test and production environments, they are not clubbed together.
Figure 6: Provisioning and Deployment Strategy in an iteration
Adding Testing on the timeline
Test types and execution
Testing is an important part of DevOps. It is done at multiple stages. My observations from the processes followed by many organizations, with which I largely agree, are as follows:
• Unit testing is done along with development as well as part of the build. This is usually done when the code is in feature branches, but it also continues when the code is built in any other branch.
• Functional testing, including regression testing, is done in the Test environment. Some system integration testing may also be done in the Test environment before code is branched into the Release branch.
• Nonfunctional testing, like performance testing, penetration testing or system integration testing, is done by a team which may exist outside the DevOps team. It is done on an environment outside the context of this timeline. This is usually done once the Release branch is created.
• UAT is done on the Release branch in the Test environment.
• Availability testing is done continuously after the application is deployed to production.
This is not an exhaustive list of test types, but a list of commonly used test strategies. One should understand here that although testing is an integral part of DevOps, it is not necessary that all the testing be done by the DevOps team only. In many organizations there are specialized testing teams as part of QA that do complex testing like nonfunctional testing, which includes performance testing, penetration testing and system integration testing. It is often not physically and logistically possible to bring these specialists under the DevOps teams. Usually such testing is done as and when code from the Dev branch is branched into the Release branch.
Test Cases Creation – Planning the testing efforts
Testers not only have to run tests; much of their time goes into understanding the application changes and writing the test cases for new features. They start this part of their work as soon as the sprint starts. It is recommended to automate unit tests and functional tests. The same automated tests can be run as part of the build (unit tests) and the release (post-deployment functional tests). They will also run as part of the regression test suite in subsequent sprints.
Figure 7: Testing on a common timeline
Summary
There are many disciplines that contribute their efforts to DevOps.
In this article, I have shown that these activities and strategies of Agile engineering can be brought onto the same timeline, so that each team member playing any role in a DevOps team knows what she or he is supposed to do during the iterations. This strategy is one of many that can evolve.
I have drawn on my observations from the many organizations I consult for, and taken the best of them as practices that a team can follow. One of the essential things is not to take software engineering tasks out of the context of timeboxed agile management through a framework like Scrum.
Subodh Sohoni
Author
Subodh is a consultant and corporate trainer for the subjects of Azure DevOps. He has overall
32+ years of experience in aircraft production, sales, marketing, software development,
training and consulting. He is a Microsoft MVP – Azure DevOps since 2009, and is a Microsoft
Certified Trainer (MCT) since 2003. He is also a Microsoft Certified DevOps Engineer Expert.
He has conducted more than 350 corporate trainings and consulting assignments. He has also
executed trainings and consulting assignments on behalf of Microsoft.
Thanks to Gouri Sohoni for reviewing this article.
PATTERNS & PRACTICES
Yacoub Massad
FUNCTION
PARAMETERS
IN C# AND THE FLATTENED
SUM TYPE ANTI-PATTERN
In this tutorial, I will discuss function parameters in C#. I
will talk about how function parameters tend to become
unclear as we maintain our programs and how to fix
them.
Function Parameters - Introduction
Function parameters are part of a function’s signature and should be designed in a way that makes it easier for developers to understand the function.
However, badly designed function parameters are everywhere!
In this tutorial, I will go through an example to show how a function that starts with a relatively good
parameter list, ends up with a confusing one as new requirements come and the program is updated.
The sample application
The following is an example of a function’s signature:
public void GenerateReport(int customerId, string outputFilename)
This GenerateReport function exists in some e-commerce solution that allows customers to order
products. Specifically, it exists in a tool that allows people to generate reports about a particular customer.
Currently, the report only contains details about customer orders.
This GenerateReport function takes two parameters as inputs: customerId and outputFilename. The
customerId is used to identify the customer for which to generate the report. The outputFilename is the
location on the disk to store a PDF file that contains the generated report.
I have created a GitHub repository that contains the source code. You can find it here: https://github.com/ymassad/FunctionParametersExamples
Take a look at the Tool1 project. It is a WPF-based tool that looks like the following when you run it:
The Click event handler of the Generate Report button collects and validates the input and then invokes
the GenerateReport method.
Usually, functions start with a simple parameter list. However, as the program is updated to meet new
requirements, parameter lists become more complicated.
Let’s say that we have a new requirement that says that the report should optionally include details about
customer complaints. That is, the report would include a list of all complaints that the customer has filed in
the system. We decide to add a checkbox to the window to allow users of the tool to specify whether they want to include such details. Here is how the window in the Tool2 project looks:
Here is the signature of the GenerateReport function:
public static void GenerateReport(int customerId, string outputFilename, bool
includeCustomerComplaints)
The includeCustomerComplaints parameter will be used by the GenerateReport function to decide
whether to include the customer complaints in the report.
Let’s say that we have another requirement which says the user should be able to specify whether to
include all customer complaints or just complaints that are opened (not resolved).
Here is how the window in Tool3 looks:
We added a special checkbox to satisfy the requirement. Notice how this checkbox is disabled in the figure.
It becomes enabled only if the user checks the “Include customer complaints” checkbox. This makes sense
because the value of the second checkbox only makes sense if the first checkbox is checked. Take a look at
MainWindow.xaml in Tool3 to see how this is done.
Also notice how the new checkbox is moved a little to the right. This is to further indicate to the user that
this is somehow a child of the first checkbox.
Here is the signature of the GenerateReport method in Tool3:
void GenerateReport(int customerId, string outputFilename, bool
includeCustomerComplaints, bool includeOnlyOpenCustomerComplaints)
We added a boolean parameter called includeOnlyOpenCustomerComplaints to the method which
enables the caller to specify whether to include only open customer complaints or to include all customer
complaints.
Although the UI design makes it clear that the value of “Include only open complaints” makes sense only if
“Include customer complaints” is checked, the signature of GenerateReport method does not make that
clear.
A reader who reads the signature of the GenerateReport method might be confused about which
combinations of parameter values are valid, and which are not.
For example, we know that the following combination does not make sense:
GenerateReport(
    customerId: 1,
    outputFilename: "c:\\outputfile.pdf",
    includeCustomerComplaints: false,
    includeOnlyOpenCustomerComplaints: true);
Since includeCustomerComplaints is false, the value of includeOnlyOpenCustomerComplaints is
simply ignored.
I will talk about a fix later.
For now, let’s look at another requirement: the ability to include only a summary of the customer
complaints. That is, instead of including a list of all complaints in the report, a summary will be included
that contains only:
• The number of all complaints filed by the customer.
• The number of open complaints.
• The number of closed complaints.
• The number of open complaints about shipment times.
• The number of open complaints about product quality.
• The number of open complaints about customer service.
• The number of closed complaints about shipment times.
• The number of closed complaints about product quality.
• The number of closed complaints about customer service.
Before looking at the UI of Tool4, let’s first look at the signature of the GenerateReport method:
void GenerateReport(
int customerId,
string outputFilename,
bool includeCustomerComplaints,
bool includeOnlyOpenCustomerComplaints,
bool includeOnlyASummaryOfCustomerComplaints)
Please tell me if you understand the parameters of this method!
Which combinations are valid, and which are not? Maybe based on the descriptions of the requirements
I talked about so far, you will be able to know. But if you don’t already know about the requirements, the
signature of this method provides little help.
Let’s look at the UI in Tool4 now:
This is much better! We are used to radio buttons.
We know that we can select exactly one of these three options:
1. Do not include customer complaints
2. Include customer complaints
3. Include only a summary of customer complaints
We also know that if we select “Include customer complaints”, we can also choose one of the two options:
1. To include only open complaints.
2. To include all complaints, not just opened ones.
We know these options by just looking at the UI!
Now imagine that someone designed the UI of Tool4 like this:
Would users be able to understand the options displayed in the UI? Would such UI design pass customer
testing of the software?
Although you might have seen bad UI designs such as this one, I bet you have seen them fewer times than you have seen code like the GenerateReport method of Tool4.
Before discussing why this is the case and suggesting a solution, let’s discuss a new requirement first:
The user should be allowed to specify that they want only information relevant to open complaints when
generating the summary report. That is, if such an option is selected, then only the following details are
generated in the complaints summary section of the report:
• The number of open complaints.
• The number of open complaints about shipment times.
• The number of open complaints about product quality.
• The number of open complaints about customer service.
Here is how the UI looks in Tool5:
The options in the UI look understandable to me. I hope you think the same. Now, let’s look at the signature of the GenerateReport method:
void GenerateReport(
int customerId,
string outputFilename,
bool includeCustomerComplaints,
bool includeOnlyOpenCustomerComplaints,
bool includeOnlyASummaryOfCustomerComplaints)
Nothing changes from Tool4. Why?
Because when writing code for the new feature, we decide that it is convenient to reuse the includeOnlyOpenCustomerComplaints parameter. That is, this parameter will hold the value true if either of the two checkboxes in the UI is checked.
We decided this because these two checkboxes are somewhat the same. They both instruct the
GenerateReport method to consider only open complaints; whether when including a list of complaints,
or just a summary. Also, it could be the case that this parameter is passed to another function, say
GetAllComplaintsDataViaWebService that obtains complaints data from some web service. Such data
will be used in the two cases; when we just want a summary of complaints, or when we want the whole list
of complaints.
I am providing the following excerpt from a possible implementation of the GenerateReport function to explain this point:
void GenerateReport(
    int customerId,
    string outputFilename,
    bool includeCustomerComplaints,
    bool includeOnlyOpenCustomerComplaints,
    bool includeOnlyASummaryOfCustomerComplaints)
{
    //...
    if (includeCustomerComplaints)
    {
        var allComplaintsData = GetAllComplaintsDataViaWebService(
            customerId,
            includeOnlyOpenCustomerComplaints);

        var complaintsSection =
            includeOnlyASummaryOfCustomerComplaints
                ? GenerateSummaryOfComplaintsSection(allComplaintsData)
                : GenerateAllComplaintsSection(allComplaintsData);

        sections.Add(complaintsSection);
    }
    //...
}
Such an implementation pushes a developer to have a single includeOnlyOpenCustomerComplaints parameter in the GenerateReport method.
Take a look at MainWindow.xaml.cs in Tool5 to see how the input is collected from the UI and how the
GenerateReport method is called.
So, the signature of GenerateReport did not change between Tool4 and Tool5. What changed are the valid
(or meaningful) combinations of parameter values. For example, the following combination:
GenerateReport(
customerId: 1,
outputFilename: "c:\\outputfile.pdf",
includeCustomerComplaints: true,
includeOnlyOpenCustomerComplaints: true,
includeOnlyASummaryOfCustomerComplaints: true);
..is not meaningful in Tool4 but is meaningful in Tool5. That is, in Tool4, the value of
includeOnlyOpenCustomerComplaints is ignored when
includeOnlyASummaryOfCustomerComplaints is true; while in Tool5, the value of the same parameter
in the same case is not ignored.
Another confusing thing about the signature of GenerateReport in Tool4 and Tool5 is the relationship
between includeCustomerComplaints and includeOnlyASummaryOfCustomerComplaints. Is
includeOnlyASummaryOfCustomerComplaints considered only if includeCustomerComplaints is
true? Or should only one of these be true? The signature does not answer these questions.
The GenerateReport method can choose to throw an exception if it gets an invalid combination, or it may
choose to assume certain things in different cases.
Let’s go back to the question I asked before: why do we see so much bad code? Why is this lack of clarity seen more in code than in UI?
I have the following reasons in mind:
1. UI is what the end users see of the application, and thus fixing any lack of clarity in this area is given a higher priority.
2. Many programming languages do not give us a convenient way to model parameters correctly. That is, it is more convenient to model parameters incorrectly.
The first point is clear, and I will not discuss it any further.
The second point needs more explanation, I think.
A function parameter list is a product type
In a previous article, Designing Data Objects in C# and F#, I talked about Algebraic Data Types: sum types
and product types.
In a nutshell, a sum type is a data structure that can be any one of a fixed set of types.
For example, we can define a Shape sum type that has the following three sub-types:
1. Square (int sideLength)
2. Rectangle (int width, int height)
3. Circle (int diameter)
That is, an object of type Shape can either be Square, Rectangle, or Circle.
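Later in this article, such sum types are modeled in C# as an abstract class with a private constructor and nested sealed subclasses. As a minimal sketch only (the member names are taken from the list above; this code is illustrative and not part of the article’s repository), the Shape example could look like this in that style:

public abstract class Shape
{
    private Shape() { }

    public sealed class Square : Shape
    {
        public Square(int sideLength) => SideLength = sideLength;
        public int SideLength { get; }
    }

    public sealed class Rectangle : Shape
    {
        public Rectangle(int width, int height) { Width = width; Height = height; }
        public int Width { get; }
        public int Height { get; }
    }

    public sealed class Circle : Shape
    {
        public Circle(int diameter) => Diameter = diameter;
        public int Diameter { get; }
    }
}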
A product type is a type that defines a set of properties, that an instance of the type needs to provide
values for. For example, the following is a product type:
public sealed class Person
{
    public Person(string name, int age)
    {
        Name = name;
        Age = age;
    }

    public string Name { get; }
    public int Age { get; }
}
This Person class defines two properties; Name and Age. An instance of the Person class should specify a
value for Name and a value for Age. Any value of type string should be valid for the Name property and
any value of type int should be valid for the Age property.
The constructor of the Person class should not do any validation. That is, I would say that the following is
not a product type:
public sealed class Person
{
    public Person(string name, int age)
    {
        if (age > 130 || age < 0)
            throw new Exception(nameof(Age) + " should be between 0 and 130");

        Name = name;
        Age = age;
    }

    public string Name { get; }
    public int Age { get; }
}
It's not a product type of string and int: not every pair of string and int values can be used to construct a Person.
However, the following Person class is a product type:
public sealed class Person
{
    public Person(string name, Age age)
    {
        Name = name;
        Age = age;
    }

    public string Name { get; }
    public Age Age { get; }
}

public sealed class Age
{
    public Age(int value)
    {
        if (value > 130 || value < 0)
            throw new Exception(nameof(Age) + " should be between 0 and 130");

        Value = value;
    }

    public int Value { get; }
}
Person is a product type of string and Age. Any string and any Age can be used to construct a Person instance. Age itself is not a product type; it is a special type that models a person's age.
We should think about a parameter list as a product type: any combination of parameter values (i.e., arguments) should be valid. A function should not throw an exception because a certain combination of values passed to it is invalid. Instead, function parameters should be designed in a way that any combination is valid.
I am not saying that functions should not throw exceptions at all, that is a topic for a different article.
How to make all combinations valid?
The flattened sum type anti-pattern
Let’s refactor the GenerateReport method to take a parameter of type ComplaintsReportingSettings
instead of the three boolean parameters. This class would simply contain the three boolean values as
properties.
void GenerateReport(
    int customerId,
    string outputFilename,
    ComplaintsReportingSettings complaintsReportingSettings)

public sealed class ComplaintsReportingSettings
{
    public ComplaintsReportingSettings(
        bool includeCustomerComplaints,
        bool includeOnlyOpenCustomerComplaints,
        bool includeOnlyASummaryOfCustomerComplaints)
    {
        IncludeCustomerComplaints = includeCustomerComplaints;
        IncludeOnlyOpenCustomerComplaints = includeOnlyOpenCustomerComplaints;
        IncludeOnlyASummaryOfCustomerComplaints = includeOnlyASummaryOfCustomerComplaints;
    }

    public bool IncludeCustomerComplaints { get; }
    public bool IncludeOnlyOpenCustomerComplaints { get; }
    public bool IncludeOnlyASummaryOfCustomerComplaints { get; }
}
What is wrong with the ComplaintsReportingSettings type?
The problem is that not all combinations of the properties (or the constructor’s parameters) are valid. Let’s
make this more explicit:
public sealed class ComplaintsReportingSettings
{
    public ComplaintsReportingSettings(
        bool includeCustomerComplaints,
        bool includeOnlyOpenCustomerComplaints,
        bool includeOnlyASummaryOfCustomerComplaints)
    {
        if (includeOnlyOpenCustomerComplaints && !includeCustomerComplaints)
            throw new Exception(nameof(includeOnlyOpenCustomerComplaints) +
                " is relevant only if " + nameof(includeCustomerComplaints) + " is true");

        if (includeOnlyASummaryOfCustomerComplaints && !includeCustomerComplaints)
            throw new Exception(nameof(includeOnlyASummaryOfCustomerComplaints) +
                " is relevant only if " + nameof(includeCustomerComplaints) + " is true");

        IncludeCustomerComplaints = includeCustomerComplaints;
        IncludeOnlyOpenCustomerComplaints = includeOnlyOpenCustomerComplaints;
        IncludeOnlyASummaryOfCustomerComplaints = includeOnlyASummaryOfCustomerComplaints;
    }

    public bool IncludeCustomerComplaints { get; }
    public bool IncludeOnlyOpenCustomerComplaints { get; }
    public bool IncludeOnlyASummaryOfCustomerComplaints { get; }
}
The added statements make sure that combinations that are not valid or that are not meaningful, will cause
an exception to be thrown.
If we are designing this class for Tool4, then we would add another validation statement in the constructor:
if (includeOnlyOpenCustomerComplaints && includeOnlyASummaryOfCustomerComplaints)
    throw new Exception(nameof(includeOnlyOpenCustomerComplaints) +
        " should not be specified if " + nameof(includeOnlyASummaryOfCustomerComplaints) + " is true");
The ComplaintsReportingSettings class is an example of what I call the flattened sum type anti-pattern. That is, it is something that should be designed as a sum type but is instead flattened into what looks like a product type (but is actually not a product type).
Once you see a type that looks like a product type, but for which some combination (or combinations) of its properties is invalid or not meaningful, then you have identified an instance of this anti-pattern.
Take a look at the Tool6 project. Here is the signature of the GenerateReport method:
void GenerateReport(
int customerId,
string outputFilename,
ComplaintsReportingSettings complaintsReportingSettings)
But the ComplaintsReportingSettings type looks like this:
public abstract class ComplaintsReportingSettings
{
    private ComplaintsReportingSettings()
    {
    }

    public sealed class DoNotGenerate : ComplaintsReportingSettings
    {
    }

    public sealed class Generate : ComplaintsReportingSettings
    {
        public Generate(bool includeOnlyOpenCustomerComplaints)
        {
            IncludeOnlyOpenCustomerComplaints = includeOnlyOpenCustomerComplaints;
        }

        public bool IncludeOnlyOpenCustomerComplaints { get; }
    }

    public sealed class GenerateOnlySummary : ComplaintsReportingSettings
    {
        public GenerateOnlySummary(bool includeOnlyOpenCustomerComplaintsForSummary)
        {
            IncludeOnlyOpenCustomerComplaintsForSummary = includeOnlyOpenCustomerComplaintsForSummary;
        }

        public bool IncludeOnlyOpenCustomerComplaintsForSummary { get; }
    }
}
This is a sum type with three cases. I talked about modeling sum types in C# in the Designing Data Objects
in C# and F# article. In a nutshell, a sum type is designed as an abstract class with a private constructor.
Each subtype is designed as a sealed class nested inside the sum type class.
Please compare this to how the UI looks. Here it is again (the same as in Tool5):
Can you see the similarities between the sum type version of ComplaintsReportingSettings and the
UI? In some sense, radio buttons together with the ability to enable/disable certain controls based on user
selection of radio buttons, help us model sum types in the UI.
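As a hedged sketch only (not code from the Tool6 project; ReportSection is a placeholder type name, while the other helper names are the ones used in the earlier excerpt), code consuming the sum type could branch on the three cases like this:

private static void AddComplaintsSection(
    List<ReportSection> sections,
    int customerId,
    ComplaintsReportingSettings settings)
{
    if (settings is ComplaintsReportingSettings.Generate generate)
    {
        // Full list of complaints, optionally restricted to open ones.
        var data = GetAllComplaintsDataViaWebService(customerId, generate.IncludeOnlyOpenCustomerComplaints);
        sections.Add(GenerateAllComplaintsSection(data));
    }
    else if (settings is ComplaintsReportingSettings.GenerateOnlySummary summary)
    {
        // Summary only, optionally counting open complaints only.
        var data = GetAllComplaintsDataViaWebService(customerId, summary.IncludeOnlyOpenCustomerComplaintsForSummary);
        sections.Add(GenerateSummaryOfComplaintsSection(data));
    }
    // ComplaintsReportingSettings.DoNotGenerate: nothing is added to the report.
}

No caller of this method can ask for "only open complaints" without also asking for complaints, because that state simply cannot be constructed.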
It is easy to understand now why developers use the flattened sum type anti-pattern: it is more convenient!
It is much easier to add a boolean parameter to a method than to create a sum type in C#.
C# might get sum types in the future. See this GitHub issue: https://github.com/dotnet/csharplang/issues/113
Another reason why developers use this anti-pattern is that they are not aware of it. I hope that this article makes developers more aware of it.
This anti-pattern can be more complicated. For example, it might be the case that two sum types are
flattened into a single product type (in a parameter list). Consider this example:
void GenerateReport(
int customerId,
string outputFilename,
bool includeCustomerComplaints,
bool includeOnlyOpenCustomerComplaints,
bool includeOnlyASummaryOfCustomerComplaints,
bool includeCustomerReferrals,
bool includeReferralsToCustomersWhoDidNoOrders,
Maybe<DateRange> dateRangeToConsider)
This updated GenerateReport method allows users to specify whether they want to include details about
customer referrals in the report. Also, it allows the users to filter out referrals that yielded no orders. The
dateRangeToConsider is a bit special. This parameter is added to enable the caller to filter some data out
of the report based on a date range. The parameter is of type Maybe<DateRange>. Maybe is used to model
an optional value. For more information about Maybe, see the Maybe Monad article here
www.dotnetcurry.com/patterns-practices/1510/maybe-monad-csharp.
Although it is not clear from the signature, this parameter affects not just referral data; it also affects customer complaints data when the full list of complaints is to be included. That is, it affects the following:
• Customer referral data
• Complaints data when a full list is requested
And it does not affect the following:
• Complaints data when only a summary is requested.
The signature of the method in no way explains these details.
The last six parameters of the GenerateReport method should be converted to two sum types,
ComplaintsReportingSettings and ReferralsReportingSettings:
public abstract class ComplaintsReportingSettings
{
    private ComplaintsReportingSettings()
    {
    }

    public sealed class DoNotGenerate : ComplaintsReportingSettings
    {
    }

    public sealed class Generate : ComplaintsReportingSettings
    {
        public Generate(bool includeOnlyOpenCustomerComplaints, DateRange dateRangeToConsider)
        {
            IncludeOnlyOpenCustomerComplaints = includeOnlyOpenCustomerComplaints;
            DateRangeToConsider = dateRangeToConsider;
        }

        public bool IncludeOnlyOpenCustomerComplaints { get; }
        public DateRange DateRangeToConsider { get; }
    }

    public sealed class GenerateOnlySummary : ComplaintsReportingSettings
    {
        public GenerateOnlySummary(bool includeOnlyOpenCustomerComplaintsForSummary)
        {
            IncludeOnlyOpenCustomerComplaintsForSummary = includeOnlyOpenCustomerComplaintsForSummary;
        }

        public bool IncludeOnlyOpenCustomerComplaintsForSummary { get; }
    }
}

public abstract class ReferralsReportingSettings
{
    private ReferralsReportingSettings()
    {
    }

    public sealed class DoNotGenerate : ReferralsReportingSettings
    {
    }

    public sealed class Generate : ReferralsReportingSettings
    {
        public Generate(bool includeReferralsToCustomersWhoDidNoOrders,
            Maybe<DateRange> dateRangeToConsider)
        {
            IncludeReferralsToCustomersWhoDidNoOrders = includeReferralsToCustomersWhoDidNoOrders;
            DateRangeToConsider = dateRangeToConsider;
        }

        public bool IncludeReferralsToCustomersWhoDidNoOrders { get; }
        public Maybe<DateRange> DateRangeToConsider { get; }
    }
}
Note the following:
1. Only ComplaintsReportingSettings.Generate and ReferralsReportingSettings.Generate have a DateRangeToConsider property. That is, only in these cases is the original dateRangeToConsider parameter relevant.
2. The DateRangeToConsider property in ComplaintsReportingSettings.Generate is required (does not use Maybe), while in ReferralsReportingSettings.Generate it is optional. The original parameter list in GenerateReport did not explain such details; it couldn’t have!
Here is how an updated signature would look:
void GenerateReport(
int customerId,
string outputFilename,
ComplaintsReportingSettings complaintsReportingSettings,
ReferralsReportingSettings referralsReportingSettings)
Note that a function parameter list should always be a product type, but it should be a valid
one. Here, the parameter list of the GenerateReport method is a product type of int,
string, ComplaintsReportingSettings, and ReferralsReportingSettings. However,
ComplaintsReportingSettings and ReferralsReportingSettings are sum types.
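As a hedged usage sketch (the argument values and the DateRange constructor shown here are assumptions for illustration, not code from the repository), a caller now expresses its intent through the sum type cases, so an invalid combination cannot even be written:

// Hypothetical call: include the full list of open complaints for the last 30 days,
// and do not include referrals at all. The DateRange constructor is assumed here.
GenerateReport(
    customerId: 1,
    outputFilename: "c:\\outputfile.pdf",
    complaintsReportingSettings: new ComplaintsReportingSettings.Generate(
        includeOnlyOpenCustomerComplaints: true,
        dateRangeToConsider: new DateRange(DateTime.Today.AddDays(-30), DateTime.Today)),
    referralsReportingSettings: new ReferralsReportingSettings.DoNotGenerate());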
Conclusion:
In this tutorial, I talked about function parameters. I have demonstrated an anti-pattern which I call the
flattened sum type anti-pattern, via an example. In this anti-pattern, a type that should be designed as a
sum type (or more than one sum type) is designed as something that looks like a product type.
Because function parameter lists naturally look like product types, and because sum types don’t have much support in the C# language and the tooling around it, it is more convenient for developers to just add new parameters to the list than to correctly design some of the parameters as sum types.
Download the entire source code from GitHub at
bit.ly/dncm44-sumtype
Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.
Thanks to Damir Arh for reviewing this article.
.NET
Damir Arh
DEVELOPING
MOBILE APPLICATIONS
IN .NET
IN THIS TUTORIAL, I WILL DESCRIBE THE TOOLS AND
FRAMEWORKS AVAILABLE TO .NET DEVELOPERS FOR ALL
ASPECTS OF MOBILE APPLICATION DEVELOPMENT I.E.
FRONT-END, BACK-END AND OPERATIONS.
In many ways, Mobile development is like Desktop development.
Both Mobile and Desktop applications run natively on the device, and their user interface depends on the operating system they are running on.
However, there’s currently no operating system that runs both on desktop and mobile!
Therefore, the tools and frameworks used for application development are different. Additionally, because mobile applications don’t run natively on the same device as they are being developed on, the development experience isn’t as smooth as with desktop applications.
Front-end Development
When talking about mobile development, one
usually thinks of an application running on a mobile
device. This is usually referred to as front-end
development.
In the .NET ecosystem, Xamarin is the obvious choice
when deciding on a framework to develop a mobile
application in. Despite that, Unity or MonoGame
might be a better fit in some scenarios.
Xamarin
Xamarin is the .NET implementation for mobile devices. It was developed by a company with the same
name, which was acquired by Microsoft in 2016. Although Xamarin was originally a paid product, it has been available for free to all developers ever since the acquisition.
There are two approaches to developing user interfaces with Xamarin:
Approach #1: In the traditional Xamarin approach, you create a new project for a single platform (Android or iOS).
In this case, native technologies are used for creating the views (layouts in Android XML files for Android,
and storyboards for iOS). For example, the following Android layout file could be used in a traditional
Xamarin project or a native Android project without any changes:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:layout_behavior="@string/appbar_scrolling_view_behavior"
tools:showIn="@layout/activity_main">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:text="Hello World!" />
</RelativeLayout>
The code for the views is still written in C# instead of Java, Kotlin, Objective-C or Swift, and uses the .NET base class libraries. Thanks to the .NET binding libraries provided by the Xamarin platform, native platform-specific APIs can also be invoked directly from C#.
Approach #2: When creating a new project for multiple platforms, the Xamarin.Forms UI framework is used instead.
It’s somewhat similar to WPF and UWP in that it is XAML-based and works well with the MVVM pattern.
(You can read more about WPF and UWP, and XAML in my tutorial from the previous issue of DotNetCurry
Magazine – Developing desktop applications in .NET.) However, the control set is completely different
because under the hood it’s just an abstraction layer on top of the traditional Xamarin approach. This means
that individual controls are rendered differently on different platforms: each Xamarin.Forms control maps to
a specific native control on each platform.
Here’s an example of a simple Xamarin.Forms-based UI layout:
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
    xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
    xmlns:d="http://xamarin.com/schemas/2014/forms/design"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
x:Class="XamarinApp1.Views.MenuPage"
Title="Menu">
<StackLayout VerticalOptions="FillAndExpand">
<ListView x:Name="ListViewMenu"
HasUnevenRows="True">
<d:ListView.ItemsSource>
<x:Array Type="{x:Type x:String}">
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
</x:Array>
</d:ListView.ItemsSource>
<ListView.ItemTemplate>
<DataTemplate>
<ViewCell>
<Grid Padding="10">
<Label Text="{Binding Title}"
d:Text="{Binding .}"
FontSize="20"/>
</Grid>
</ViewCell>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
</StackLayout>
</ContentPage>
Although Xamarin.Forms initially imposed many limitations on the interfaces which could be developed
with them, they have improved a lot since then. They are already at version 4 and are probably used for
development of most new Xamarin applications today. In addition to Android and iOS, they also provide full
support for UWP (i.e. a single Xamarin application can run on Android, iOS and Windows). Along with the
three fully supported platforms, three more are currently in preview: WPF, macOS and GTK# (a wrapper on
top of the GTK cross-platform UI toolkit). Tizen .NET adds support for Tizen applications as well (Samsung’s
operating system for various smart devices).
In cases when the Xamarin.Forms control set doesn’t suffice, the so-called native views can be used to complement it. This feature gives access to the native UI controls of each individual platform. They can be declared alongside Xamarin.Forms controls in the same XAML file.
Of course, each native UI control will only be used in building the UI for the corresponding platform.
Therefore, a different native control will usually be used in the same place for each platform supported by
the application, as in the following example:
<StackLayout Margin="20">
<ios:UILabel Text="Hello World"
TextColor="{x:Static ios:UIColor.Red}"
View.HorizontalOptions="Start" />
<androidWidget:TextView Text="Hello World"
x:Arguments="{x:Static androidLocal:MainActivity.Instance}" />
<win:TextBlock Text="Hello World" />
</StackLayout>
On mobile devices, UI controls are not the only platform-specific part of the application. Some APIs are also platform-specific, mostly those for accessing different device sensors (such as geolocation, compass, accelerometer, etc.) and platform-specific functionalities (e.g. clipboard, application launcher, dialer, etc.). The Xamarin.Essentials library can make their use simpler by providing its own unified API for them across all supported platforms. Here’s an example of code for retrieving the current geolocation:
var request = new GeolocationRequest(GeolocationAccuracy.Medium);
var location = await Geolocation.GetLocationAsync(request);
All of this makes Xamarin a feature-complete framework for developing mobile applications for Android
and iOS. In this field, it is the only choice for .NET developers who want to leverage their existing .NET and
C# knowledge for mobile application development.
Xamarin.Forms support for other platforms might make the framework an appropriate choice when trying
to target other devices with the same application (especially once all supported platforms come out of
preview). However, there aren’t many applications which would make sense on such a wide variety of
devices with the same codebase, and this could be one of the limiting factors.
To read more about Xamarin, check out some tutorials at www.dotnetcurry.com/tutorials/xamarin as well as
at https://docs.microsoft.com/en-us/xamarin/.
Unity and MonoGame
Unity and MonoGame are also based on .NET and can be used to develop applications which run on
Android and iOS (but also on Windows, macOS and Linux, as well as most gaming consoles on the market
today). In contrast to Xamarin which is meant for general-purpose application development, Unity and
MonoGame primarily target (cross-platform) game development.
They take a different approach to enabling that:
- Unity is a 3D game engine with an easy-to-learn graphical editor. When developing a game, the editor is
used for designing scenes which usually represent game levels, although they are also used for creating
game menus, animations and other non-gameplay parts of a game.
Figure 1: Unity Editor main window
C#/.NET is only used for writing scripts which implement the logic to control the objects created through
the editor, and bring the game to life. Here’s a typical example of a script in Unity:
public class PlayerDeath : Simulation.Event<PlayerDeath>
{
    PlatformerModel model = Simulation.GetModel<PlatformerModel>();

    public override void Execute()
    {
        var player = model.player;
        if (player.health.IsAlive)
        {
            player.health.Die();
            model.virtualCamera.m_Follow = null;
            model.virtualCamera.m_LookAt = null;
            player.controlEnabled = false;

            if (player.audioSource && player.ouchAudio)
                player.audioSource.PlayOneShot(player.ouchAudio);

            player.animator.SetTrigger("hurt");
            player.animator.SetBool("dead", true);
            Simulation.Schedule<PlayerSpawn>(2);
        }
    }
}
- MonoGame is a much more low-level tool than Unity. It implements the API introduced by Microsoft XNA
Framework – a .NET wrapper on top of DirectX which was discontinued in 2014.
It’s not a full-blown game engine, but it does provide a content pipeline for managing and processing
the assets (sounds, images, models, etc.) used in the game. Still, developing a game in MonoGame mostly
consists of writing .NET code responsible for managing the game state as well as doing the rendering. As
you can see in the following example, MonoGame code is typically at a much lower level of abstraction
than Unity code:
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();
    spriteBatch.Draw(ballTexture, new Vector2(0, 0), Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
Apart from games, Unity and MonoGame can also be used for developing other types of applications,
especially when 3D visualization plays an important role in them. Good candidates are applications in the
field of augmented reality (AR), industrial design and education. Unity has recently been focusing on providing the tools to make the development of such applications easier.
When deciding to develop an application this way instead of using general-purpose frameworks like
Xamarin, it’s important to keep in mind the potential downsides of running a 3D engine inside your
application. These considerations are even more important on a mobile device with limited processing
power, connectivity, and battery life: higher battery usage, longer initial load time and larger application
size. It is important that the benefits in a specific scenario outweigh these downsides before deciding on such an approach.
Building and Running the Mobile Applications
Because mobile applications target a different device, they can’t simply be run on the computer where they are developed. They need to be deployed to an emulator or a physical device and run there.
For Android applications, this works on all desktop operating systems. Both Visual Studio and Visual Studio for Mac will install all the necessary dependencies to make it work, if you choose to. The built-in user interface allows you to manage the installed versions of the Android SDK and the virtual devices configured for the Android emulator. When starting an Android application from Visual Studio, you can select which emulated or physical device to use.
Figure 2: Android Device Manager in Visual Studio 2019
Because of Apple’s licensing restrictions, this is not possible for iOS applications. To build and deploy an iOS
application, a computer running macOS is required. To make development on a Windows computer more
convenient, Visual Studio can pair with a macOS computer which will be used to build the iOS application
and to deploy it to a physical iOS device (which must also be connected to the macOS computer).
Figure 3: Pairing to Mac in Visual Studio 2019
The iOS Simulator can also run only on a macOS computer. To avoid any interaction with macOS desktop
when developing on Windows, Xamarin comes with Remoted iOS Simulator for Windows. When Visual
Studio is paired with a macOS computer, it can run the iOS application in the iOS Simulator on Mac and
show its screen in the Remoted iOS Simulator window. This window can also be used to interact with the
iOS application as if the simulator was running locally on the Windows computer.
Back-end Mobile Development
In most cases, the application running on the mobile device is only a part of the complete solution. The
data shown and manipulated in the application must be retrieved from and stored somewhere. Unless
the application is a client for an existing public service, it will require a dedicated back-end service for this
functionality.
ASP.NET Core and ASP.NET Web API
The back-end service will typically expose a REST endpoint for the mobile application to communicate
with. As explained in more detail in my previous DotNetCurry article Developing Web Applications in .NET
(Different Approaches and Current State), these can be developed using ASP.NET Core or ASP.NET Web API.
From the viewpoint of the back-end service, a mobile application is no different from a web SPA (single-page application).
This approach makes most sense when the mobile application is developed as a companion or an
alternative to a web application with a similar functionality because the same back-end service can be
used for both. It’s also appropriate when the back-end is not just a simple data-access service but needs to
implement more complex business logic. The higher overhead of a dedicated web application gives full
freedom to what can be implemented inside it.
Mobile Apps in Azure App Service
A standalone web application as the back-end might be overkill if the mobile application only needs some standard functionality, such as authentication, cloud storage for user settings and other data, usage analytics, and similar. The Mobile Apps flavor of Azure App Service might be a good alternative in such cases.
It provides the following key functionalities implemented in dedicated server and client SDKs:
• Data access to any data source with an Entity Framework data provider (Azure or on-premise SQL Server, Azure Table Storage, MongoDB, Cosmos DB, etc.), with support for offline data synchronization.
• User authentication and authorization using social identity providers (Microsoft, Facebook, Google or Twitter) for consumer applications, or Active Directory for enterprise applications.
• Push notifications using Azure Notification Hubs.
The .NET Server SDK consists of a collection of Microsoft.Azure.Mobile.Server.* NuGet packages which
must be installed into an empty ASP.NET Web Application project for .NET Framework. The Microsoft.Azure.Mobile.Server.Quickstart NuGet package can be used for a basic setup with all supported functionalities, but
there are also separate NuGet packages available for each functionality. The SDK needs to be initialized in
an OWIN startup class:
[assembly: OwinStartup(typeof(MobileAppServer.Startup))]

namespace MobileAppServer
{
    public class Startup
    {
        // in OWIN startup class
        public void Configuration(IAppBuilder app)
        {
            HttpConfiguration config = new HttpConfiguration();

            new MobileAppConfiguration()
                .UseDefaultConfiguration()
                .ApplyTo(config);

            app.UseWebApi(config);
        }
    }
}
Individual functionalities can now be added with minimum code:
• For data access, an Entity Framework DbContext class must be created first, with DbSet<T> instances for every table to be exposed. Then, for each table, a TableController<T> can be created using the scaffolding wizard for creating a new Azure Mobile Apps Table Controller (a hedged sketch of such a controller is shown after this list):
Figure 4: Scaffolding wizard for a new Azure Mobile Apps Table Controller
• For authorization, the controller classes and action methods only need to be decorated with the standard Authorize attribute.
• For push notifications, the code sending them needs to instantiate the NotificationHubClient class and use its methods to send notifications.
• Additional custom API controllers can be exposed to the mobile application by decorating them with the MobileAppControllerAttribute class.
All of this is explained in more detail in the .NET Server SDK documentation.
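As a rough sketch only (the controller, entity, and context names used here, TodoItemController, TodoItem, and MobileServiceContext, are hypothetical, and the body mirrors what the scaffolding wizard typically generates, assuming TodoItem derives from EntityData), a scaffolded table controller looks roughly like this:

using System.Linq;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.OData;
using Microsoft.Azure.Mobile.Server;

public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        // Hypothetical DbContext containing a DbSet<TodoItem>.
        var context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request);
    }

    // GET tables/TodoItem
    public IQueryable<TodoItem> GetAllTodoItems() => Query();

    // PATCH tables/TodoItem/{id}
    public Task<TodoItem> PatchTodoItem(string id, Delta<TodoItem> patch) => UpdateAsync(id, patch);

    // POST tables/TodoItem
    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
}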
There’s a corresponding Client SDK available for several different technology stacks. Xamarin applications
can use the .NET Client SDK by installing the Microsoft.Azure.Mobile.Client NuGet package. All calls to the
Mobile Apps backend are available as methods of the MobileServiceClient class:
• To access data, the GetTable<T> and GetSyncTable<T> methods are available. The latter implements offline synchronization but requires some additional setup before it can be used (a combined usage sketch follows this list):
  ◦ The Microsoft.Azure.Mobile.Client.SQLiteStore NuGet package must be installed.
  ◦ An instance of the MobileServiceSQLiteStore class must be provided to the MobileServiceClient instance:
var localStore = new MobileServiceSQLiteStore(localDbPath);
localStore.DefineTable<TodoItem>();
mobileServiceClient.SyncContext.InitializeAsync(localStore);
• Authentication can be done using the LoginAsync method.
• To register an application for push notifications, additional platform-specific setup is required using the dedicated Google and Apple services. Only the corresponding device token is sent to the backend, using the RegisterAsync method on the platform-specific object returned by the call to the MobileServiceClient’s GetPush method. Handling of received push notifications is done using the standard APIs on each platform.
Microsoft has stopped further development of Azure Mobile Apps SDKs in favor of its Visual Studio App
Center solution for mobile application development. However, for most of the functionalities described
above, Visual Studio App Center doesn’t yet provide a functionally equivalent alternative which still makes
Azure Mobile Apps the only choice if you need any of its unique features. This will very likely change as
Visual Studio App Center is further developed.
Visual Studio App Center
Visual Studio App Center is a web portal solution with a primary focus on DevOps for mobile applications
as provided by three core services:
• Build can connect to a Git repository in Azure DevOps, GitHub, Bitbucket or GitLab and build an Android or iOS mobile application developed in one of the supported technology stacks (Xamarin and Unity are among them, of course).
• Test can run automated UI tests for mobile applications directly on hardware devices, not on emulators. A selection of several hundred devices is available to choose from. Multiple test frameworks are supported as well: Xamarin.UITest and Appium on both platforms, Espresso on Android and XCUITest on iOS. The service is the successor of Xamarin Test Cloud.
• Distribute is responsible for application deployment. In addition to the official public Google Play and App Store services, Microsoft’s Intune mobile device management solution is supported as well. The latter can be used for deploying internal applications in corporate environments.
All of these can be controlled and tracked from the web portal.
The next important parts of Visual Studio App Center are the Diagnostics and Analytics services. To take advantage of them, the App Center SDK must be added to the mobile application. For Xamarin
applications, the Microsoft.AppCenter.Analytics and Microsoft.AppCenter.Crashes NuGet packages need to
be installed.
To initialize basic functionality, a single call to AppCenter.Start during application startup is enough.
Additional methods are available for tracking custom events in applications and for reporting errors which
were handled in code and didn’t cause the application to crash.
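As a hedged sketch (the app secrets below are placeholders; the calls shown are the standard App Center SDK entry points), initialization and custom tracking typically look like this:

using System;
using System.Collections.Generic;
using Microsoft.AppCenter;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;

public static class AppCenterSetup
{
    // Called once during application startup (e.g. in the App class of a Xamarin.Forms project).
    public static void Initialize()
    {
        // Placeholder app secrets copied from the App Center portal.
        AppCenter.Start("android=<android-app-secret>;ios=<ios-app-secret>",
            typeof(Analytics), typeof(Crashes));
    }

    public static void TrackReportGenerated()
    {
        // Track a custom event with optional properties.
        Analytics.TrackEvent("Report generated",
            new Dictionary<string, string> { { "Category", "Sales" } });
    }

    public static void ReportHandledError(Exception ex)
    {
        // Report an exception that was handled in code and didn't crash the application.
        Crashes.TrackError(ex);
    }
}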
The web portal provides the tools to monitor and analyze all the analytical and diagnostic data submitted from the application. These can be used to track application stability, resolve reported crashes or errors, and explore application usage patterns to improve user experience and retention.
The remaining services to some extent overlap with Azure Mobile Apps:
• Data caters to the application’s data access needs in both online and offline scenarios. However, unlike the data access features in Azure Mobile Apps, it doesn’t build on top of Entity Framework data providers to provide this functionality. Instead, it relies on the features of the Azure Cosmos DB database service, which is the only supported data source.
• Auth provides user authentication by integrating the Azure Active Directory B2C identity management service. In comparison to user authentication in Azure Mobile Apps, it supports more social identity providers, but can’t be used in conjunction with a corporate Active Directory.
• Push provides direct integration with the Android and iOS native notification services (Firebase Cloud Messaging and Apple Push Notifications, respectively) and doesn’t use the Azure Notification Hubs service to achieve that, like Azure Mobile Apps does.
Like Diagnostics and Analytics services, these three services also have corresponding Client SDK packages
which make them easier to use. For Xamarin applications, these are the Microsoft.AppCenter.* NuGet
packages.
It will depend on your application requirements which data access, authentication and push notification
services will suit you better (the ones from Azure Mobile Apps or the ones from Visual Studio App Center).
If you can achieve your goals with either, then out of the two, Visual Studio App Center currently seems a
better choice for two reasons:
• There are additional DevOps services in Visual Studio App Center that are not available in Azure Mobile Apps. If you decide to take advantage of these, then using Visual Studio App Center also for services that have alternatives in Azure Mobile Apps will allow you to manage everything from a single location – the Visual Studio App Center web portal.
• Unlike Azure Mobile Apps, Visual Studio App Center is still being actively developed, which makes it a safer choice for the future. Currently, Azure Mobile Apps is still fully supported, but it’s impossible to say whether or when this will change.
One can also safely assume that Visual Studio App Center will only improve with time. Existing services will
get additional features and new services might be added.
Conclusion
For front-end application development, most .NET developers will choose Xamarin as the only general-purpose application framework. Unity and MonoGame are a great alternative for game development and can sometimes make sense for the development of other applications with a lot of 3D visualization, e.g. in the field of augmented reality.
For back-end development, the best choice is not as obvious. Visual Studio App Center is the most feature-complete offering, and Microsoft continues to invest heavily in it. This makes it a good choice for most new applications. Azure Mobile Apps offers only a subset of its functionalities, but can sometimes still be a better fit because it takes a different approach to them. Unfortunately, it's not actively developed anymore, which makes it a riskier choice.
ASP.NET Core and ASP.NET Web API can be a viable alternative to the cloud services if you already have
your own custom back-end or just want to have more control over it.
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.
Thanks to Gerald Verslius for reviewing this article.
AZURE
Gouri Sohoni
AUTOMATION
IN
MICROSOFT AZURE
The Azure Automation feature automates frequently required, time-consuming tasks.
Once automated, these tasks reduce the workload, decrease bugs and increase efficiency.
Azure Automation is also available in Hybrid mode, to automate tasks in other clouds or in on-premises environments.
In this tutorial, I will discuss various aspects of Azure Automation, including:
• Runbooks
• Schedules
• Jobs and webhooks for running runbooks
• Desired State Configuration (DSC) to create configurations and apply them to nodes
• Update Management for virtual machines
• Source Control Integration for keeping the configurations in source control
AZURE AUTOMATION - OVERVIEW
Azure Automation is a service in Microsoft Azure that lets you automate management-related tasks.
It also lets you create containers for Runbooks (a compilation of routine procedures and operations that a system administrator or operator carries out).
These runbooks contain scripts for executing certain tasks and can either be executed automatically using schedules or jobs, or be triggered using webhooks. There is also a way to specify the required state of all your virtual machines (DSC), and the option of making the VMs comply with a desired state by applying (and correcting, if required) the configuration to maintain that state.
AUTOMATION ACCOUNT
Creating an Automation Account is the first step to start using all the features of Azure Automation. Sign in to https://portal.azure.com to access all these services. If you do not have an Azure account, use this link to create a free account.
Create Automation Account
Click on Create a resource after you sign in to the Azure portal.
Select IT & Management Tools – Automation. Provide a name for the account and a resource group (a container for a number of resources), and select a location. Select Create Azure Run As account and click on Create.
For selecting a location, you may take a look at these Global Infra Services.
Select the account once it is created.
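If you prefer scripting over the portal, the account can also be created with the Az PowerShell module. The following is a minimal sketch and not part of the walkthrough above: the resource group name, account name and location are placeholders, and unlike the portal option it does not create a Run As account.

# Sign in interactively and select your subscription.
Connect-AzAccount

# Placeholder names - replace with your own resource group, account name and region.
$resourceGroup = 'rg-automation-demo'
$accountName   = 'dnc-automation-demo'
$location      = 'West Europe'

# Create the resource group if it does not exist yet.
New-AzResourceGroup -Name $resourceGroup -Location $location -Force

# Create the Automation account in that resource group.
New-AzAutomationAccount -ResourceGroupName $resourceGroup -Name $accountName -Location $location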
RUNBOOK
•	A runbook contains scripts which need to be executed frequently. Once it is created, provide a schedule for its execution, either by creating a schedule or by creating a job which in turn will trigger the execution.
•	Now that the automation account is ready, add the different frequently required tasks to it. There are a lot of tutorials available for runbooks; you can explore them by selecting Runbooks from the Process Automation tab. These tutorial runbooks are automatically created for you in the account.
Create a Runbook
•	Click on Create a runbook from Process Automation – Runbooks, provide a name and select the type of runbook as PowerShell. Provide an appropriate description and click on Create.
•	An IDE appears in which to write the script. You can write PowerShell commands as complex as required. I have just created a demo runbook; a minimal sketch of what such a runbook might look like is shown below.
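The exact contents of the demo runbook do not matter for the rest of the walkthrough. Purely as an illustration, a runbook body could be as simple as the following sketch; the parameter name and messages are made up for this example.

param(
    # Hypothetical parameter, shown only to illustrate how runbook parameters are prompted for in the Test pane.
    [string] $FolderName = 'C:\Temp'
)

# Anything written with Write-Output appears in the job / Test pane output.
Write-Output "Demo runbook started at $(Get-Date -Format u)."
Write-Output "The folder being monitored is: $FolderName"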
•	Save the runbook and click on the Test pane to view how it works. Start the runbook and you will get a message saying that it is Queued. If you have added any parameters, you need to provide their values. After successful execution, the test pane shows the result as follows:
•	When we want the runbook to be available via a schedule, or via a URL which can be given to a webhook, we have to publish it. Click on Publish; you'll see a message saying this will overwrite the runbook if it already exists (a scripted alternative for importing and publishing a runbook is sketched below).
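Publishing can also be scripted. As a rough sketch, assuming the placeholder resource group and account names used earlier and a local file DemoRunbook.ps1 containing the script, the Az.Automation cmdlets below import a runbook as a draft and then publish it.

# Import a local PowerShell script as a (draft) runbook.
Import-AzAutomationRunbook -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -Name 'DemoRunbook' -Type PowerShell -Path '.\DemoRunbook.ps1'

# Publish the draft so that schedules and webhooks can use it.
Publish-AzAutomationRunbook -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -Name 'DemoRunbook'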
For example, to monitor a particular folder and send a log message when any changes happen to it, create a recurring schedule that triggers the runbook automatically, say every 15 minutes.
Sometimes you may want to provide automated messages or information to other apps; this can be achieved by using webhooks, where the URL provided is used to trigger the runbook.
ADD A SCHEDULE AND VIEW JOB
•	Let us create a schedule for executing the published runbook. Select the Schedules tab from Resources, click on New Schedule, enter a name, specify the start time along with your time zone, specify whether you want the runbook to be triggered on a recurring basis or just once, and finally click on Create. As a best practice, keep a gap of at least 5 minutes between the time at which you create the schedule and its start time.
•	Link the schedule to the runbook.
•	We can find the schedule under the Schedules tab. After the specified time, we can see that a job has executed, which in turn has triggered the runbook (the same steps can also be scripted, as sketched below). Next, create a webhook to trigger the runbook.
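A hedged sketch of the same steps with the Az.Automation module is shown below. The schedule name and the hourly interval are example values, and the runbook name refers to the hypothetical DemoRunbook used earlier.

# Create a recurring schedule that starts 10 minutes from now and repeats every hour.
New-AzAutomationSchedule -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -Name 'DemoHourlySchedule' `
    -StartTime (Get-Date).AddMinutes(10) `
    -HourInterval 1

# Link the schedule to the published runbook so that a job is created at every occurrence.
Register-AzAutomationScheduledRunbook -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -RunbookName 'DemoRunbook' `
    -ScheduleName 'DemoHourlySchedule'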
TRIGGER RUNBOOK USING AN APPLICATION
You may need to trigger a runbook from an application. To achieve this, we can create a webhook for it. A webhook starts a runbook through an HTTP request, which in turn can be sent by any custom application or by services like Azure DevOps and GitHub.
•	Select Properties from Runbook settings and click on Add Webhook.
•	Provide a name, select the webhook expiration period and do not forget to copy the URL. Select Enabled, click Ok and finally click on Create.
•	The newly created webhook can be found under the Webhooks tab in Resources.
•	Now we need a way to trigger the runbook. You can use any of the above-mentioned options. I am going to use Postman to trigger it (we cannot use a browser, as this is an HTTP POST).
•	When I send a POST request via Postman, I can see another job triggered, which confirms that the URL works. You can view the job and look at the Input and Output provided by the runbook (a scripted way of sending the same request is sketched below).
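If you prefer a script over Postman, the same POST request can be sent from PowerShell. The webhook URL below is a placeholder for the URL copied while creating the webhook, and the body is an arbitrary example payload.

# Placeholder: paste the webhook URL that was copied when the webhook was created.
$webhookUrl = 'https://<your-region>.webhook.azure-automation.net/webhooks?token=<token>'

# Optional payload; the runbook can receive it through a WebhookData parameter if it declares one.
$body = @{ message = 'Triggered from a script' } | ConvertTo-Json

# Webhooks only accept POST, which is why simply opening the URL in a browser does not work.
$response = Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $body -ContentType 'application/json'

# The response contains the ID(s) of the job(s) that were started.
$response.JobIds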
The Overview tab gives us a look at the overall Job Statistics for the account.
STATE CONFIGURATION - DESIRED STATE CONFIGURATION (DSC)
Azure Automation provides the Desired State Configuration (DSC) feature (similar to the DSC Service Windows feature), which helps keep all the nodes (virtual machines) in a particular state.
Say, following the Continuous Deployment (CD) concept, we need all the machines in our environment to have IIS or Tomcat installed on them before a deployment can succeed; we can use DSC for this.
This feature can be used for receiving configurations, checking for the desired state (which, as per our requirement, can be the presence of an IIS server or an Apache Tomcat server) and reporting back the compliance status.
In order to apply it to nodes, we first have to create a configuration, compile it and later apply it to one or multiple nodes in our subscription. We can provide options for automatically applying the state and the frequency with which compliance should be checked. We can also specify whether a node that is not compliant needs to be corrected.
The prerequisite for this is at least one virtual machine which can be configured for DSC. I have created a machine with Windows Server 2012 Datacenter as the OS. As I will be applying the IIS configuration using a script, I have not configured the IIS server on it, to ensure that it gets added at the time of the compliance check.
Configuration: DSC configurations are PowerShell scripts. The Configuration keyword specifies the name of the configuration and is followed by Node blocks. There can be more than one node, separated by commas. After writing the script, we have to compile it, which in turn creates a MOF document.
Create a PowerShell script with the following code and save it with an appropriate name.
configuration LabConfig
{
    Node WebServer
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name = 'Web-Server'
            IncludeAllSubFeature = $true
        }
    }
}
•	Select Configurations from the Configuration Management – State configuration (DSC) tab.
•	Click on Add, select the .ps1 file we created earlier and click on Ok.
•	You will see that the new configuration has been added; select it and click on Compile (read the message carefully and click on Yes).
•	The configuration will now appear under Compiled Configurations.
•	Click on Nodes and select Add. We will get a list of all the virtual machines we have created with an ARM template.
•	Select the virtual machine which we want to configure for IIS (make sure that the machine is running).
•	Click on Connect and provide the required values. There are three values for Configuration Mode, namely ApplyAndMonitor (the default behaviour; if the node does not comply, DSC reports an error), ApplyOnly (DSC just tries to apply the configuration and does nothing further) and ApplyAndAutocorrect (if the target node ceases to be compliant, DSC reports it and also reapplies the configuration). Use the one required for your usage. Also provide the refresh frequency and configuration mode frequency values. Observe that there is a check box to reboot the node if required; finally, click on the Ok button.
•	The node will be monitored after a successful configuration. If by any chance the IIS server is deleted from the node, then the next time compliance is checked, it will either be corrected automatically or we will just get a message saying that the node is no longer compliant, depending on the configuration mode chosen.
Note that if the machine is stopped in between, DSC will fail. The same onboarding steps can also be scripted, as sketched below.
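The sketch below reuses the placeholder account names from earlier and assumes a local LabConfig.ps1 file containing the configuration shown above, plus a hypothetical VM named 'WebServerVM' in a resource group 'rg-vms'.

# Import the DSC configuration script into the Automation account and publish it.
Import-AzAutomationDscConfiguration -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -SourcePath '.\LabConfig.ps1' -Published -Force

# Compile the configuration; once the job finishes, it produces the node configuration 'LabConfig.WebServer'.
Start-AzAutomationDscCompilationJob -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -ConfigurationName 'LabConfig'

# Wait for the compilation job to complete (check the portal or Get-AzAutomationDscCompilationJob)
# before onboarding the node.

# Onboard the VM as a DSC node and assign the compiled node configuration.
Register-AzAutomationDscNode -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -AzureVMName 'WebServerVM' `
    -AzureVMResourceGroup 'rg-vms' `
    -NodeConfigurationName 'LabConfig.WebServer' `
    -ConfigurationMode 'ApplyAndAutocorrect' `
    -RebootNodeIfNeeded $true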
UPDATE MANAGEMENT
This Azure Automation feature can be used to manage operating system related updates for virtual machines running Windows or Linux, and even for on-premises machines.
It can be enabled from the automation account (it requires a Log Analytics workspace to be linked to your automation account) and can be applied to multiple subscriptions at a time.
The actual updates are taken care of by runbooks (though you cannot view them or change their configuration).
SOURCE CONTROL INTEGRATION
We created a runbook by writing a PowerShell script, and along similar lines we added a PowerShell file for DSC to apply to a node. Using source control integration, we can always use the latest changes made to any of these scripts. The source control can be GitHub or Azure DevOps (both TFVC and Git). You can create a folder in your repository which will hold all your runbooks.
Select the Source Control tab from Account Settings, provide a name, select the type and authenticate. Once authentication succeeds, we can have a look at all the repositories. Select the repository, the branch and the folder, and click on the Save button. We can also specify whether we want to sync all our runbooks automatically or not. If any runbooks are part of the source control, they will be updated and published after a successful sync. We can add as many source control connections as we have repositories containing runbooks (a scripted way to add such a connection is sketched below).
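A hedged sketch of adding such a connection with the Az.Automation module follows; the repository URL, branch, folder path and personal access token are placeholders, and GitHub is used as the example source type.

# Personal access token for the repository, converted to a secure string (placeholder value).
$token = ConvertTo-SecureString -String '<personal-access-token>' -AsPlainText -Force

# Connect a folder of a GitHub repository to the Automation account and enable automatic sync.
New-AzAutomationSourceControl -ResourceGroupName 'rg-automation-demo' `
    -AutomationAccountName 'dnc-automation-demo' `
    -Name 'RunbooksRepo' `
    -SourceType GitHub `
    -RepoUrl 'https://github.com/<organization>/<repository>.git' `
    -Branch 'master' `
    -FolderPath '/Runbooks' `
    -AccessToken $token `
    -EnableAutoSync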
CONCLUSION
In this article, we discussed how various tasks can be managed automatically by using Azure Automation. Automating such tasks helps in better and more efficient management.
Gouri Sohoni
Author
Gouri is a Trainer and Consultant specializing in Microsoft Azure DevOps. She has over 20 years of experience in training and consulting. She has been a Microsoft MVP for Azure DevOps since 2011 and is a Microsoft Certified Trainer (MCT). She is also certified as an Azure DevOps Engineer Expert and an Azure Developer Associate.
Gouri has conducted several corporate trainings on various Microsoft technologies. She is a regular author and has written articles on Azure DevOps (VSTS) and DevOps Server (VS-TFS) on www.dotnetcurry.com. Gouri also speaks frequently for Azure VidyaPeeth, and gives talks at conferences and events including Tech-Ed and the Pune User Group (PUG).
Thanks to Subodh Sohoni for reviewing this article.
Thank You
for the 44th Edition
@dani_djg
@damirarh
@yacoubmassad
@subodhsohoni
@sommertim
@gouri_sohoni
@jfversluis
dnikolovv
@suprotimagarwal
@maheshdotnet
@saffronstroke
Write for us - mailto: [email protected]