Good day, fellas! After quite some time away from .NET code, it's time for me to release the extension methods I have been using, with a twist. I'm releasing them in conjunction with the latest version of .NET Core so that they will be usable in every application you might want to build (Windows, Linux, Android, iOS).
The main reason for this method came from coding extensively with String.Format while always failing to remember which index holds the exact object. String.Format requires you to specify the index of the object you want to format, and if you get it wrong you can end up with an error or incorrect formatting.
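A minimal sketch of the problem (this snippet is mine, not from the original post):

using System;

class FormatDemo
{
    static void Main()
    {
        var name = "Andy";
        var city = "Singapore";

        // Positional indexes are easy to mix up; this compiles fine but swaps the values.
        Console.WriteLine(String.Format("{1} lives in {0}.", name, city));

        // String interpolation avoids having to remember indexes at all.
        Console.WriteLine($"{name} lives in {city}.");
    }
}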
When you want to start working on a .NET Core Class Library, the first thing you should worry about is the attributes of the project you're working on. One of those things is versioning. It has always been a complicated ordeal, because typically you want to version your work automatically.
There are many ways to version your class library, from utilizing CI/CD pipelines to something as simple as letting your project file do it for you automatically. This post will help you with the latter.
Creating .NET Core Class Library Project
To create a .NET Core Class Library project, you can follow the PowerShell commands below.
md dotnet-lib
cd .\dotnet-lib\
dotnet new sln --name dotnet-lib
dotnet new classlib --name MyLibrary --framework net6.0 --language C#
dotnet sln add MyLibrary\MyLibrary.csproj
Those commands will get the project created quite easily, or you can use any edition of Visual Studio (Community, Enterprise, or any recent version).
Add Version Logic into the Project File
Once the project is created, it's time to modify the project file. Now, here's something that's debatable: how you want to version your project. One might say you shouldn't version based on the date and time and should stick with the traditional way of versioning, but I don't mind as long as it keeps me focused on what I should be doing.
//standard versioning
1.2.4039
//{manual-major}.{manual-minor}.{build-count}
//date and time versioning
2022.11.21.1159
//{year}.{month}.{day}.{hour-minute}
In a real-world scenario, some of us work on multiple projects at once, which can take away our focus and end up producing the wrong version number, and that's total madness. To each their own, though, and you should take part when a discussion about versioning takes place.
Edit your MyLibrary\MyLibrary.csproj file; you should see something similar to the code below.
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

</Project>
Still inside the Project element and below the PropertyGroup, add the following code.
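The original listing isn't reproduced here, but a minimal sketch of such an inline task (assuming the RoslynCodeTaskFactory that ships with the .NET SDK, and illustrative versioning logic) could look like this:

<UsingTask TaskName="SetVersionNumber"
           TaskFactory="RoslynCodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.Core.dll">
  <ParameterGroup>
    <CreationYear ParameterType="System.String" Required="true" />
    <GeneratedVersion ParameterType="System.String" Output="true" />
  </ParameterGroup>
  <Task>
    <Code Type="Fragment" Language="cs">
      <![CDATA[
        // Date and time versioning: {year}.{month}.{day}.{hour-minute}
        var now = System.DateTime.Now;
        GeneratedVersion = string.Format("{0}.{1}.{2}.{3:D2}{4:D2}",
            now.Year, now.Month, now.Day, now.Hour, now.Minute);
        // CreationYear is available here if you prefer, say, years-since-creation as the major part.
      ]]>
    </Code>
  </Task>
</UsingTask>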
What this UsingTask element does is create a custom code function within your project file, and that function returns the auto-generated version as an output. To make things clearer, the task in this example is named SetVersionNumber and accepts one input parameter, CreationYear, and one output parameter, GeneratedVersion.
On its own, this task won't do anything until we attach it to certain build targets. Put the following code just below that UsingTask element.
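Again, as a sketch rather than the original code (the target name and the CreationYear value are illustrative), the wiring could look like this:

<Target Name="SetVersionBeforeBuild" BeforeTargets="BeforeBuild">
  <!-- 1. Run the custom task and capture its output into GeneratedVersion -->
  <SetVersionNumber CreationYear="2022">
    <Output TaskParameter="GeneratedVersion" PropertyName="GeneratedVersion" />
  </SetVersionNumber>
  <!-- 2. Show the generated version in the build console -->
  <Message Importance="high" Text="Generated version: $(GeneratedVersion)" />
  <!-- 3. Apply the generated version to the various version properties -->
  <PropertyGroup>
    <Version>$(GeneratedVersion)</Version>
    <AssemblyVersion>$(GeneratedVersion)</AssemblyVersion>
    <FileVersion>$(GeneratedVersion)</FileVersion>
    <InformationalVersion>$(GeneratedVersion)</InformationalVersion>
  </PropertyGroup>
</Target>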
This Target element executes all of the elements within it sequentially. There are three major sections: executing the SetVersionNumber task and capturing its output as the GeneratedVersion variable, then writing a Message to the build console, and finally setting the different version properties from the GeneratedVersion variable.
That's it!
Your final csproj file simply combines the PropertyGroup, UsingTask, and Target elements above, and the class library generated by the build will carry the auto-generated version number.
Conclusions
This is definitely just one of the ways, but it's the easier one, and you don't need to work too hard to define your version. You just set it once and forget it for all other times, which gives you more focus on what you're doing.
I'll talk more about how you can combine this with NuGet packaging and publishing via GitHub Actions.
I've been working with SharePoint for more than half of my career, since SharePoint 2003. Now that the Online version is getting more recognition from organizations, we sometimes get questions from people who are experts in the on-premises version but unaware of the features of the Online one. This is understandable, because some organizations still want to keep their data on-premises and may not be ready to move to the cloud, and so remain unfamiliar with the Microsoft cloud environment.
Now, one question in particular was whether SharePoint can be migrated to Azure App Service and, if not, whether there is any documentation from Microsoft that clearly states it can't be done.
This post hopes to help anyone trying to understand why, in my personal opinion, it can't be done.
I did a little research - knowing it can't be done - to find any explicit statement in the Microsoft docs that says so. Unfortunately, as I suspected, I couldn't find any; maybe it's hidden under a rock somewhere. So I'll have to explain what Azure App Service is and what is needed in order to run SharePoint anywhere.
In a nutshell, SharePoint can’t run on Azure App Service because of the complex services it runs, most of which aren’t simply HTTP like a web application or web service.
Azure App Service is essentially the same as your typical web application sitting in IIS on Windows Server or Apache on Linux. It serves files stored under the IIS folder, such as HTML, JavaScript, or compiled .NET code as DLLs, over the HTTP/S protocol. From the infrastructure perspective, Azure App Service is basically the same as your Windows Server or Linux box. One big advantage of using App Service is that you don’t need to care about server configuration, domain joins, authentication providers, or ports, so you can focus on building the application you like and serving it directly. Just like staying in a hotel: you come, pay, and literally lie down.
Of course, then come the disadvantages: you can’t roam freely into the kitchen where the chefs work, you can’t force the hotel’s appearance and ambience to your liking, or sleep in the reception area as if it were your living room, let alone alter the entire hotel itself. App Service doesn’t allow you to install Windows Services, dictate how many IIS sites you need to add, or connect to any other servers you want. Your space is only that tiny little folder assigned to you via Azure App Service.
To continue the same analogy, SharePoint, on the other hand, is not just you moving in but also your furniture, electronics, appliances, cupboards, and kitchen area, each with very specific requirements to operate.
For instance, there is the SharePoint User Profile service that synchronizes users from your Active Directory, Timer Jobs that run essential scheduled jobs for SharePoint, not to mention SharePoint Search that crawls SharePoint content. These run as Windows Services, are served via particular ports, and are consumed by other SharePoint servers in the same farm.
For the SharePoint SQL Server databases, however, you do have the option to use Azure SQL Managed Instance (MI), with caveats. Forget about Azure SQL MI if your SharePoint farm was configured with Windows Authentication, but you can use it if it’s SQL Authentication. I’ve also seen articles saying you can convert a SharePoint database from Windows authentication to SQL authentication, but let’s not talk about that for now.
What’s feasible, then, if you ask? SharePoint on Azure is the answer, not via App Service but on traditional Virtual Machines (VMs). Lift and shift: load your existing VMs into Azure (downtime required), then set up your networking properly, make sure the servers can communicate with each other, and ensure the connection to the Domain Controller is also established.
With this explanation, you won’t get any complaints from the hotel about bringing your own fridge and kitchen appliances, because you simply can’t do so. 😉
Now you want to learn Power Automate but have no idea where to start? Below are the keys to mastering Power Automate in no time!
You can go to Microsoft Learn for self-paced courses at Get started with Power Automate - Learn | Microsoft Docs. But before that, here are the top four fundamental topics you need to master before jumping into Power Automate.
1. JSON
JSON, short for JavaScript Object Notation, is simply a lightweight data interchange format. Quoting json.org, JSON is easy for humans to read and write, mainly because the data structure is simple yet can hold many different types of data. Imagine JSON as water: very fluid, able to fit into many different containers. Much of the IT world now revolves around JSON as the format for transporting data from one system to another.
In Power Automate, JSON plays a crucial role when it comes to developing a solution. We often face errors while developing, and being able to read the JSON payloads involved is important for pinpointing the core of an issue before we take action to fix it.
Power Automate, and indeed the entire suite of Microsoft Azure products and Office 365, is built around JSON as the data payload. Calling a SharePoint site through its API, getting data from Dataverse, and even working with NoSQL databases such as Cosmos DB all use JSON. Below is an example.
{
"glossary": {
"title": "example glossary",
"GlossDiv": {
"title": "S",
"GlossList": {
"GlossEntry": {
"ID": "SGML",
"SortAs": "SGML",
"GlossTerm": "Standard Generalized Markup Language",
"Acronym": "SGML",
"Abbrev": "ISO 8879:1986",
"GlossDef": {
"para": "A meta-markup language, used to create markup languages such as DocBook.",
"GlossSeeAlso": [
"GML",
"XML"
]
},
"GlossSee": "markup"
}
}
}
}
}
My only reference for learning about JSON is from
json.org.
2. Expression
You must also learn how to write Power Automate expressions, which are essentially small pieces of code you call to get the value you want in any part of Power Automate. If you already know Power Automate, you often get data directly from the Dynamic content pane.
But here's a secret: NOT ALL data is presented in the Dynamic content pane. Sometimes you have to run the flow, inspect the preceding actions' or trigger's outputs presented in JSON format, and then consume them with an expression.
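The original screenshot isn't shown here, but an expression along these lines (assuming a manually triggered flow, which exposes an x-ms-user-name-encoded header) would do it:

triggerOutputs()['headers']['x-ms-user-name-encoded']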
The example above is the expression for getting data from the Power Automate trigger, in this case the username in an encoded format.
What if you want to check whether the trigger came from a specific IP address, which is stored in the X-Forwarded-For header? Well, it's the same process.
triggerOutputs()['headers']['X-Forwarded-For']
Now, what if you want to get a value that sometimes exists and sometimes doesn't? Just put a question mark in between.
triggerOutputs()['headers']?['X-Forwarded-For']
With the question mark, your expression will not throw an error if X-Forwarded-For isn't there; the value returned will simply be null or empty.
3. OData
Now you know the data is transported in JSON format, but how does it get transported? In some systems - if not every system nowadays - data is transported via an API, essentially a web service that you can call using a specific URL. OData itself is a standard that many organizations, including Microsoft, follow to build RESTful APIs.
SharePoint Online and Microsoft Dataverse use the OData standard to communicate and get data. This includes administrative tasks such as getting a site collection or getting an environment (in Dataverse), and even the underlying Power Automate designer tool uses OData while you are working. You can try pressing F12 in your browser, open up the Power Automate designer, and look under the Network tab: that's a whole bunch of OData.
What makes OData unique is the ability to select, filter, expand, and sort, although it all depends on whether the vendor conforms to the OData standard. My go-to place to learn about OData is always SharePoint Online, with the reference Get to know the SharePoint REST service | Microsoft Docs. If you don't have a tenant, you can create an Office 365 Developer tenant via this link: Developer Program | Microsoft 365 Dev Center.
Now, you can use Chrome or Edge (whichever you like) and install ModHeader - Chrome Web Store (google.com). This tool helps you send an HTTP request header along with your URL query. You need it because SharePoint Online returns XML by default unless you ask for JSON.
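For example (the site URL and list name here are made up), set the Accept header and then query a list with OData options:

Accept: application/json;odata=verbose

https://contoso.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Documents')/items?$select=FileLeafRef,Modified&$orderby=Modified desc&$top=5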
Set the request header just like above, and you're good to go.
4. HTTP Web Request
Knowing the concept of an HTTP web request is also crucial to better understand how things work in Power Automate. Every single action in Power Automate is essentially a block that runs an HTTP web request under the hood, depending on how the connector was developed.
Most important is the concept of the different types of requests (such as GET and POST), and understanding how to set up your HTTP web request's headers and body to retrieve the data you want according to your specification.
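As a rough, hypothetical illustration (the site and list names are made up), a GET request only needs a URL and headers to read data, while a POST also carries a body to create data:

GET https://contoso.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Tasks')/items?$top=5
Accept: application/json;odata=verbose

POST https://contoso.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Tasks')/items
Accept: application/json;odata=verbose
Content-Type: application/json;odata=verbose

{ "__metadata": { "type": "SP.Data.TasksListItem" }, "Title": "New item" }

(A real SharePoint POST also needs a request digest or bearer token for authentication, but the shape of the request is the point here.)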
After my previous post about the concept of the spider workflow, let’s put it into action. Let’s think about the multiple levels of approval that you might have in your requirements.
Let’s imagine we have this simple requirement for a workflow in Power Automate, and we need to express the logic using the spider workflow concept. First of all, let’s work on the trigger headers, because we might need some of the information in them, especially which account triggered the workflow.
Next, let’s work on the variables. As previously mentioned, we need a boolean variable that will be the stopper of the workflow, just like the cork of a wine bottle. Then comes the entire workflow configuration, just like I showed you earlier (read the previous post for more), but with a little twist.
If you want a quick escape, just copy and paste the configuration below.
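The original configuration isn’t reproduced here; a minimal configuration along the lines described in this post (the stage names, property spellings such as NextIndex, and index values are illustrative) might look like this:

[
  {
    "Index": 1,
    "Stage": "Manager Approval",
    "Status": "Pending Manager Approval",
    "Type": "BasicApproval",
    "SendAttachment": true,
    "AssignedTo": "Manager",
    "Outcomes": [
      { "Outcome": "Approve", "NextIndex": 2 },
      { "Outcome": "Reject", "NextIndex": 3 }
    ]
  },
  {
    "Index": 2,
    "Stage": "Finance Approval",
    "Status": "Pending Finance Approval",
    "Type": "CustomApproval",
    "SendAttachment": false,
    "AssignedTo": "FinanceTeam",
    "Outcomes": [
      { "Outcome": "Approve", "NextIndex": 3 },
      { "Outcome": "Reject", "NextIndex": 3 }
    ]
  },
  {
    "Index": 3,
    "Stage": "Completed",
    "Status": "Closed",
    "Type": "Closure",
    "SendAttachment": false,
    "AssignedTo": "Requestor",
    "Outcomes": []
  }
]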
Next up is to parse the JSON configuration, which will save time when you want to reference a specific JSON object property. The subsequent action is to initialize the index variable with the first entry of the stages.
For what it’s worth, Index is just one way to identify the current workflow stage. In the real world, you can use a string, a GUID, a number, or anything else that identifies the current stage, as long as it is unique.
Now, here’s the heart of the spider workflow. We can’t use a state machine, a pointer, a go-to, or anything like that in Power Automate to make the flow go back to the controller. To overcome this, we use a Do until action that loops until the [Keep Looping] variable turns false. You need [Stop the loop] in case of any error or timeout, so the flow doesn’t go into an endless loop. Optionally, you can also capture the controller’s error message.
[Stop the loop] basically just sets [Keep Looping] to false.
Inside the controller, you first need to find the current stage from the configuration. The next step is to filter the parsed body of the entire configuration that comes out of the [Parse Flow Configuration] action. This is where it gets interesting: you filter where [Index] is equal to [Stage Index], which starts at 1. Then, parse the JSON value simply by taking the body of [Filter Flow Configuration on Index], which returns an array, and taking the first item.
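A hedged example of the “take the first one” expression (the action name, and therefore the underscores, are assumed from the names used above):

first(body('Filter_Flow_Configuration_on_Index'))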
Parsing the JSON is always helpful, especially when you want to build the flow quickly without too much trial and error.
The next action simply notifies the requestor of the current stage coming up. It’s just a message saying “Hi requestor, your flow is now in the ___ stage with a status of ____”. Simple enough…
Then you [Switch on Type], which decides the next bus stop of your flow. In this case, we identify three different types based on the configuration: BasicApproval, CustomApproval, and Closure. We can put [Closure] under [Default], which makes the logic broader, as in “if the type is Closure or anything else, then stop the loop”. Let’s take a closer look at how the routing is done on the approval side.
Now you can clearly see that we filter the [Outcomes] array where the [Outcome] property is equal to the outcome of the approval, take the [Next Index] of that outcome, and assign it back to [Stage Index]. Eventually, the loop executes once again to see whether the next stage is the Closure or another step of approval.
No bluff, that’s it!
Last words
Now, all I can say is congratulations: you can now work with any kind of approval flow. Your stakeholder asks for more approvals? No problem, give ‘em that!
But don’t be too happy just yet; hear me out. If your workflow configuration is off, the entire workflow can go ugly too. So make sure you have worked through your thought process on what the configuration for this flow should be.
There are many patterns for implementing different types of workflow using Power Automate, but one that strikes me the most in terms of dynamicity is the spider workflow. There isn’t much reference for it out there; in my work history, it was implemented in a project for a prestigious client in Singapore, sparked by my technical architect.
It’s called a spider because… of course, it looks like a spider. This pattern can be repeated across many projects, and you can even implement a template for it in Power Automate. It mainly overcomes the problem of a workflow that is not well defined or is prone to change, providing a dynamic, ever-changing workflow without reworking the entire flow.
Implementing Spider Workflow
The main concept of this workflow is to loop the process back to the controller after deciding the next path. The first step to work this out is to define how many distinct tasks a process can perform. In the sample diagram above, there are five paths that go everywhere, with one of them proceeding to end the workflow. The other four go back to the controller to decide the next path after an outcome is reached. In the real world, it can spawn more or fewer than five, depending on the requirements.
In the second step, you need to run your thought process to link each outcome with the next path. For example, say the initial flow goes to Approval first (Path 1); it will have two outcomes, Approve or Reject. You could make a story where, after the first approval, the flow goes to another level of approval if the outcome is Approve (Path 1 again), and otherwise, if it’s Reject, you decide to flow it to End the workflow (Path 3). If you work through your thought process thoroughly, you could end up with the exact flow in the diagram below.
Now, how do we take care of the workflow’s most important mechanisms, such as notifying the requestor or setting the workflow status once a stage is completed? You can actually put these mechanisms under the controller itself, since it’s executed every time the flow passes through with another path decision.
Be very careful when you run your thought process, and avoid any process that can potentially go into an endless loop (because it can happen). The design itself may be perfect, but an endless loop can still happen due to misconfiguration. Wait, there’s configuration?
Spider Workflow Configuration
There is a configuration that accompanies this flow. You can leverage literally anything to store it. Back in my old days, all workflow data was stored as rows in a SQL Server database. Nowadays, you can just work out a JSON configuration and let the flow parse it. Let’s take a sample of how you can define the JSON structure for just one single stage.
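The original sample isn’t reproduced here; based on the properties summarized below, a single stage could look something like this (the values and the NextIndex spelling are illustrative):

{
  "Index": 1,
  "Stage": "Manager Approval",
  "Status": "Pending Manager Approval",
  "Type": "BasicApproval",
  "SendAttachment": true,
  "AssignedTo": "Manager",
  "Outcomes": [
    { "Outcome": "Approve", "NextIndex": 3 },
    { "Outcome": "Reject", "NextIndex": 6 }
  ]
}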
The above is just a simple sample; you could define your own JSON that depicts the flow process. To summarize:
Index – defines the index of this current path.
Stage – defines the current stage name (or whatever you want to define as the state of the current flow).
Status – defines the workflow status for the current path.
Type – defines which path to go down. If you link this up with the first diagram above, this is Path 1.
SendAttachment – typically the current stage’s configuration for sending attachments to the approver. You can also add other configuration of your own.
AssignedTo – also the current stage’s configuration, used to take the approver list dynamically from a table or database. The value can also be “Requestor”, in which case it assigns to the initiator of the flow.
Outcomes – defines the routing! You can read it simply as: if the outcome is “Approve”, go to the object whose index is 3; else, if it’s “Reject”, go to index 6.
Now, if you run your thought process, you can define a JSON configuration for the entire process as an array of such stage objects.
Lastly, this post is just a conceptual idea that you can implement in any project you work on, whether with Nintex, Power Automate, K2, or any other cloud workflow product out there.
Connecting with OAuth 2.0 authentication is a common and critical task for many developers, especially in the Microsoft Office 365 environment. It could be a connection to OneDrive, Excel, SharePoint, OneNote, or even the big developer platform, Microsoft Graph. It could also be a connection to other cloud app services such as Twitter, Facebook, Google, Pinterest, or any other cloud service in this world.
In a practical way, you can follow my step-by-step guide below if you find the reference article not clear enough or lacking examples. This post focuses on Microsoft Online credentials, which can be used with Azure Active Directory or a Microsoft Online account.
Registering Your App
Before you can make a connection to any cloud service using OAuth 2.0, you need two things: a Client ID and a Client Secret. Anything that connects using OAuth 2.0 is considered an app, and you need to register it before proceeding. Registration usually generates a Client ID, commonly along with a Client Secret, which act much like your own username and password.
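To give a feel for how those two values are used later (a sketch only, using the Azure AD v2.0 client credentials flow against a hypothetical tenant, with placeholder values):

POST https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=<your-client-id>
&client_secret=<your-client-secret>
&scope=https://graph.microsoft.com/.default
&grant_type=client_credentials

The JSON response contains an access_token, which you then send as an Authorization: Bearer header on subsequent API calls.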
Just a quick share for today, which I hope will help you out when you need it. I’ve written a few posts about PowerShell before, such as PowerShell Script upload large file. Here is a script to list all files and sub-folders in a SharePoint Document Library.
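The original script isn’t reproduced here, but a minimal sketch using the modern PnP.PowerShell module (the site URL and library name are placeholders) would be along these lines:

# Requires the PnP.PowerShell module: Install-Module PnP.PowerShell
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/dev" -Interactive

# List every file and folder in the Documents library, including sub-folders
Get-PnPListItem -List "Documents" -PageSize 500 | ForEach-Object {
    [PSCustomObject]@{
        Name = $_["FileLeafRef"]
        Path = $_["FileRef"]
        Type = $_.FileSystemObjectType   # File or Folder
    }
}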