I was able to set up a VM in a vNet and RDP to it. That is the simplest Azure IaaS scenario. A virtual network provides isolation for a group of related resources that talk to each other. However, that is rarely how things look in the real world.
In practice, many services live in isolated environments and expose endpoints that other services communicate with. Note: I am not discussing microservices here. Whatever the term, each service stays inside a virtual machine in a virtual network. What would it take to make them talk to each other?
Following the steps from the previous post, I created another setup in Central US.
Network Peering
There are now two virtual networks in different locations, with different address spaces.
To connect two virtual networks, Azure offers virtual network peering. From each virtual network, create a peering to the other.
A peering can:
Connect two virtual networks in different regions (global VNet peering)
Connect virtual networks that belong to different subscriptions; it is possible to select another subscription when creating the peering
Creating a peering is pretty simple.
The first peering goes from ps-az300-vnet to ps-vnet. To complete the link, create another one in the opposite direction, from ps-vnet to ps-az300-vnet.
The peering is ready. Let’s see whether these virtual machines can talk to each other.
Let’s RDP into each machine and test a connection to the other. This picture makes my day.
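To double-check from code instead of eyeballing the RDP session, here is a minimal C# sketch that tries to open a TCP connection to the other VM’s private IP on the RDP port. The IP address below is a placeholder; use the private IP of the VM in the peered virtual network.

using System;
using System.Net.Sockets;

class PeeringConnectivityTest
{
    static void Main()
    {
        // Private IP of the VM in the peered virtual network (placeholder value).
        const string peerPrivateIp = "10.2.0.4";
        const int rdpPort = 3389; // opened by the NSG inbound rule

        using var client = new TcpClient();
        try
        {
            // If the peering works, the TCP handshake to the peer's private IP succeeds.
            client.Connect(peerPrivateIp, rdpPort);
            Console.WriteLine($"Connected to {peerPrivateIp}:{rdpPort} across the peering.");
        }
        catch (SocketException ex)
        {
            Console.WriteLine($"Cannot reach {peerPrivateIp}:{rdpPort} - {ex.Message}");
        }
    }
}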
So far, I am able to:
Create a virtual machine with its network setup; in more abstract terms, an isolated environment where I can deploy whatever I want
Connect two isolated environments via an Azure virtual network peering
Gateway and Hub-Spoke Topology
Another option is to use a VPN gateway or a hub-spoke topology. Those are more advanced topics that I do not really need to grasp at the moment; there are step-by-step guides on the MS Docs site.
Azure has been around for a while, and it is huge. I once said that I would study Azure. Then I started. Lost. There is so much material out there: the wonderful MS Docs site, excellent Pluralsight courses, and many personal blogs. “How do I start? Where do I start?” I asked.
I took the time to read around and tried to capture some core Azure concepts, especially the mindset. Without the right mindset, everything is a mess, and what I read only confuses me more.
Almost everything in Azure is a resource. To manage resources, there is Azure Resource Manager. Resources can be created and managed using templates, hence Resource Manager templates. As a developer, that part makes sense to me.
The design is modular and component-based. At a high level, it follows familiar software design principles.
A virtual machine is deployed to the cloud, and its connectivity is controlled by a Network Interface (NIC), a separate resource.
Let’s say we need to deploy a virtual machine in Azure and be able to remote (RDP) into it. How many resources do we need? What does it look like? Let’s find out.
All my resources start with ps-az300. The rest are auto-generated by Azure, or are leftovers from my mistakes while experimenting.
Resource group: rg-az300
Virtual network (vNet): ps-az300-vnet
Virtual machine: ps-az300
Network interface (NIC): ps-az300-nic
Network security group (NSG): ps-az300-nsg
Public IP address (PIP): ps-az300-pip
Resource Group
Resource groups are logical containers for everything. All resources used to set up this example are grouped in one resource group. Once the experiment is finished, deleting the resource group wipes out all of its resources.
Virtual Network
A virtual network (vNet) provides an isolated environment in which its resources can talk to each other. It improves security.
Network Security Group (NSG)
A network security group works like the firewall in Windows: it defines the inbound and outbound rules. Besides the default rules generated by Azure, the inbound rule “RDP_3389” is created to allow remote desktop connections.
Network Interface (NIC)
A network interface acts as the intermediary between a resource and the network. A virtual machine should not define its firewall directly; instead, a network interface is attached to it.
A network interface belongs to a vNet, has an NSG, and attaches to a virtual machine. It might also have a public IP address, defined by a public IP address resource (ps-az300-pip).
This network interface allows the VM (ps-az300) to communicate with other resources and over the internet. What it can communicate with depends on the NSG settings.
Its public IP address is configured under Settings -> IP configurations.
The interesting thing here is the public IP address. One can create a PIP easily; just remember to choose Static in the Assignment section.
Public IP Address (PIP)
As seen in the NIC section above, the public IP address for the NIC is 23.101.16.27, supplied by Azure. A dynamically assigned address can change when the VM is deallocated; that is why I chose the static assignment.
Virtual Machine (VM)
Just go through the Azure wizard and choose the settings: resource group, virtual network, network security group, and network interface. There is a separate course about creating virtual machines in Azure; I hope to write something about it soon.
Since I am learning about virtual networks, the most interesting part of the virtual machine setup is the Networking section.
The virtual machine is the actual resource that hosts the business services, for example a website or an internal web service. This VM will:
Use the network interface ps-az300-nic to communicate with the outside world
Run inside the virtual network, in the default subnet
Have a public IP address (23.101.16.27) and a private IP (10.1.0.13) inside its virtual network
Follow the inbound/outbound rules of the network security group ps-az300-nsg
With all of that set up, I can click the Connect button and download the RDP file.
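Before launching the RDP client, a quick way to confirm the setup is to test whether port 3389 on the public IP accepts a TCP connection. This is a minimal C# sketch, assuming the public IP from this post and a simple 5-second timeout:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class RdpPortCheck
{
    static async Task Main()
    {
        const string publicIp = "23.101.16.27"; // ps-az300-pip
        const int rdpPort = 3389;               // allowed by the RDP_3389 inbound rule

        using var client = new TcpClient();
        try
        {
            var connect = client.ConnectAsync(publicIp, rdpPort);

            // Wait at most 5 seconds for the TCP handshake.
            if (await Task.WhenAny(connect, Task.Delay(TimeSpan.FromSeconds(5))) != connect)
            {
                Console.WriteLine("Timed out: port 3389 is not reachable.");
                return;
            }

            await connect; // surfaces any connection error
            Console.WriteLine("Port 3389 is reachable; the RDP connection should work.");
        }
        catch (SocketException ex)
        {
            Console.WriteLine($"Cannot reach port 3389: {ex.Message}");
        }
    }
}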
There are many things in the process that I do not understand, and many concepts in the screenshots I pasted here. That’s OK; things are starting to make more sense to me.
The next challenge is to have two virtual machines in different virtual networks communicate with each other.
A year ago, I wrote Leaky Abstraction – Linq Usage. I have used that design whenever I meet the same challenge. The system works as expected, and I am happy with the design.
Recently, a need arose to order the collection. Say we need to order a TeacherCollection by years of experience, because there is a requirement to print the result in that order.
It is a pretty simple requirement. In the TeacherCollection constructor, this code will do the job.
public TeacherCollection(IEnumerable<Teacher> teachers)
{
    if (teachers != null)
    {
        // Filter and sort eagerly, every time a collection is constructed.
        _teachers = teachers.Where(x => x.IsStillAtWork)
                            .OrderBy(x => x.StartedOn)
                            .ToList();
    }
}
Everything works as expected.
Boom! The system becomes very slow with more data. The profiler shows that a large amount of time is spent on the ordering. In the real system, the ordering logic is much more complicated; it is not a pure Linq sort as in the example. Still, the sorting itself cannot be the problem. That is for sure.
The problem is that the collection is created and accessed too many times, and every construction pays the sorting cost. That is an expected result, because the collection is designed to filter data and to work safely in a pipeline.
The ordering logic should not be placed here, even though the collection itself has all the information needed to do both the sorting and the filtering.
What should we change in terms of design to solve the problem while still supporting the sorting?
Identify Responsibilities
In my opinion, this is the most difficult part of writing code. I have not found any exact formula to get it right. Identifying responsibilities is a heuristic process; experience matters here.
Filtering and ordering should be treated as two separate responsibilities. It is very easy to mix them in one implementation, which is error-prone. When defining a responsibility, one should consider at least two factors:
The purpose of each: one is for filtering, the other for sorting. They are two different operations.
When it is used and how often: filtering is used a lot, to extract sub-collections from the original collection. Ordering, on the other hand, is only used when a final result is presented to the end user in some form, such as a console screen or a Word document.
It is tricky to see them as separate responsibilities. In many cases I did not even bother to think about it; this incident proves that I was wrong. Sometimes it sounds cool and simple to just order the list.
Extract Explicit Interfaces
Before moving on, let’s take a look at the TeacherCollection. The additional feature we need is the ability to get the exact index of a teacher.
ITeacherCollection – the default interface, extracted from the current TeacherCollection. The School now holds an instance of ITeacherCollection instead of the TeacherCollection implementation. This refactoring step does not break anything.
IIndexedTeacherCollection – a simple interface that supplies only the GetIndex API. A key point is that a consumer cannot obtain it directly; the only way to get this API is through the ITeacherCollection.BuildIndex transition.
The sorting cost is paid only when the need arises, as the sketch below illustrates.
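Here is a minimal sketch of the idea, reconstructed from the names used in this post (ITeacherCollection, IIndexedTeacherCollection, BuildIndex, GetIndex). The exact members and bodies are my assumptions, not the original code:

using System;
using System.Collections.Generic;
using System.Linq;

public class Teacher
{
    public string Name { get; set; }
    public bool IsStillAtWork { get; set; }
    public DateTime StartedOn { get; set; }
}

// Filtering responsibility. Other filtering APIs of the original design are omitted here.
public interface ITeacherCollection
{
    IIndexedTeacherCollection BuildIndex();
}

// Ordering responsibility, reachable only through ITeacherCollection.BuildIndex().
public interface IIndexedTeacherCollection
{
    int GetIndex(Teacher teacher);
}

public class TeacherCollection : ITeacherCollection
{
    private readonly List<Teacher> _teachers;

    public TeacherCollection(IEnumerable<Teacher> teachers)
    {
        // Filter only; there is no sorting in the constructor anymore.
        _teachers = (teachers ?? Enumerable.Empty<Teacher>())
            .Where(x => x.IsStillAtWork)
            .ToList();
    }

    public IIndexedTeacherCollection BuildIndex()
    {
        // The sorting cost is paid here, once, and only when an index is actually needed.
        return new IndexedTeacherCollection(_teachers.OrderBy(x => x.StartedOn).ToList());
    }

    // Private implementation: consumers cannot instantiate it directly.
    private sealed class IndexedTeacherCollection : IIndexedTeacherCollection
    {
        private readonly List<Teacher> _ordered;

        public IndexedTeacherCollection(List<Teacher> ordered) => _ordered = ordered;

        public int GetIndex(Teacher teacher) => _ordered.IndexOf(teacher);
    }
}

The client code then reads as a transition between interfaces: call BuildIndex() to obtain an IIndexedTeacherCollection, then ask it for GetIndex.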
The actual implementation of the improved design is almost identical to the original version; all the major logic stays in the TeacherCollection class. The refactoring is safe because the compiler tells us when something goes wrong.
The client (the Program class) only deals with the interfaces and the transition between them.
Why didn’t I have ITeacherCollection the first time around? Well, we did not need it back then. We should not make things complicated when there is no demand. Design evolves.
2018 is almost in the books. I have had a wonderful year. Instead of spending time writing about it, I decided to spend time recalling it and writing a road map for 2019. The past is in the good care of memory; the future must be written.
The road map will drive my time and energy. It also helps me make better decisions: if anything goes against the road map, it is easy to say NO.
The road map is for my professional career. For my human career, the road map is always the same: be a good husband, a good dad, and a good self.
Project Management Professional (PMP)
I read about it six years ago and did not understand a single word of the material. After spending 10+ years building software and running teams and a company, it is time to get it right. I found it much easier this time.
I will start with the PMP book from Head First, then a series of courses on Pluralsight. I have always loved the Head First books.
Domain Driven Design (DDD), Refactoring, and Performance
The more code I write, the more value I recognize in DDD, and the less I feel I know about it. That last part, the less I know about DDD, is important: it tells me that I have started to understand it.
DDD is not just about code, not just about writing code. A proper understanding of DDD improves design skills and business analysis skills. When combined with Event Storming, the power is enormous.
The second edition of Refactoring by Martin Fowler has come out. I read the first edition 10 years ago, and it has helped me write better code for 10+ years. The second edition is definitely on my study list this year.
DDD and refactoring are universal and never become obsolete.
Building software that runs is easy; making it fast and stable is a different story. Paying attention to performance drives me to study threading, memory, performance patterns, scaling patterns, and more.
Active Sharing
Sharing is learning. Giving is receiving. I will actively offer my help on LinkedIn. If I am lucky, I can help someone.
Writing code is hard. Writing human-readable code is much harder. One contributing factor is that readability depends on who reads the code. Each developer has a different background, different experiences, a different coding style, and even a differently wired mind.
I do not believe there is code that is readable for every developer (when talking about code, we mean developers as the readers). However, there is code whose readability improves over time for the development team. It is hard for an external developer to understand the code just by looking at a few files or pull requests. Context really matters.
This post is purely my point of view on code readability, that is, the code that I consider readable. Many factors contribute to readability; method parameters are one of them. Let’s walk through some example code.
Let’s build a piece of code that simulates job applicant CV verification. The details do not matter.
When verifying an applicant, there are two pieces of information: the ApplicantCv and the SpecialNotes. Some candidates might get a direct introduction from top managers.
The consumer code lives in the Program class.
The interesting part is the Verifying method of SeniorPosition and JuniorPosition: the JuniorPosition passes the execution to a private implementation, whereas the SeniorPosition does not. A sketch of this structure follows below.
Let’s follow the code from the consumer side, the Program class. It builds a list of applicants wrapped in VerificationContext objects. Each context carries two important pieces of information: SpecialNotes and ApplicantCv.
The next level is the HrDepartment. All it does is pass the context to all implementations of IVerifyingByPosition: SeniorPosition and JuniorPosition. The HrDepartment depends on the VerificationContext; that is the information it uses to do its work, the logic of verifying applicants.
The next level is the implementations of IVerifyingByPosition. The JuniorPosition.Verifying method accepts the VerificationContext, but it delegates the logic to a private method that takes only the ApplicantCv. The scope is narrowed from VerificationContext down to ApplicantCv. Thanks to that private method, I know for sure that JuniorPosition depends only on the ApplicantCv, not on the entire VerificationContext.
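A minimal sketch of that structure, reconstructed from the names in this post (VerificationContext, ApplicantCv, SpecialNotes, IVerifyingByPosition, SeniorPosition, JuniorPosition); the method bodies are placeholders, not the original implementation:

using System;

public class ApplicantCv
{
    public string Name { get; set; }
    public int YearsOfExperience { get; set; }
}

public class VerificationContext
{
    public ApplicantCv ApplicantCv { get; set; }
    public string SpecialNotes { get; set; }
}

public interface IVerifyingByPosition
{
    void Verifying(VerificationContext context);
}

public class SeniorPosition : IVerifyingByPosition
{
    // Uses the whole context directly: a reader has to scan the body
    // to learn which parts of the context are really needed.
    public void Verifying(VerificationContext context)
    {
        if (context.ApplicantCv.YearsOfExperience >= 5 || !string.IsNullOrEmpty(context.SpecialNotes))
        {
            Console.WriteLine($"{context.ApplicantCv.Name}: considered for a senior position");
        }
    }
}

public class JuniorPosition : IVerifyingByPosition
{
    // Immediately narrows the scope: the private method's signature tells
    // the reader that only ApplicantCv is used, not the whole context.
    public void Verifying(VerificationContext context) => Verifying(context.ApplicantCv);

    private void Verifying(ApplicantCv cv)
    {
        if (cv.YearsOfExperience < 5)
        {
            Console.WriteLine($"{cv.Name}: considered for a junior position");
        }
    }
}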
Why is that important? Imagine that the VerificationContext is later expanded with more properties. If there is a lot of code in JuniorPosition, we have to read through it all to know which information is actually used.
It may sound confusing and not obvious at first. The rule itself is very simple: a method should take just enough information, and all parameters should be used. That part is easy to understand. What we often do not pay attention to are the properties of the objects passed in the parameter list. In the example, JuniorPosition takes VerificationContext in its method signature, but its implementation uses only one property of the context: the ApplicantCv.
When reading code, I usually pay attention to what information it consumes and depends on. Usually this is reflected in constructor or method parameters. The code should be refactored so that those dependencies become obvious for free.
Another factor in code readability is the proper use of instance versus static methods. If a method can be made static, it should be declared static. Why? Because with static, we know for sure that it does not depend on any instance fields; the dependency is narrowed to a smaller scope.
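A tiny illustration of that narrowing, using hypothetical names:

using System;

public class SalaryCalculator
{
    private readonly decimal _baseSalary;

    public SalaryCalculator(decimal baseSalary) => _baseSalary = baseSalary;

    // Instance method: a reader has to assume it may touch any field of the class.
    public decimal YearlyCost(decimal bonus) => Rounded((_baseSalary + bonus) * 12);

    // Static method: guaranteed to depend only on its parameters, nothing else.
    private static decimal Rounded(decimal amount) => Math.Round(amount, 2);
}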
What else could we do? Method visibility is an important factor as well. A public method means it is being used by other consumers. A private method is safer to refactor.
Code readability is important, and it is hard to get right. However, with some small tips and care, we can do better over time. Readability also takes the human factor into account: the one who writes the code and the one who reads it are both human, and they communicate via code.