UiPath Project Organization 5.3

Hello. Welcome to today’s tutorial, Project Organization. In this video you’ll learn how best to organise
your projects so as to maximise efficiency: yours, as well as that of your automations. The subject of organizing your project might
seem tedious, but it is a vital part of successful automation projects. What we’ll look at today are the lessons
we learned during our experience implementing large projects for some of the most influential
companies in the world. We distilled these lessons into a series of
best practices that will make your automation projects more reliable, efficient, maintainable,
and extensible. These are welcome qualities in most cases,
but in larger enterprise process automations, they become key requirements. Reliability is a characteristic we want
in all automations, and we discussed it in-depth in the previous video, Exception Handling. It deals with teaching your automation to
recognise and recover gracefully from various errors that might occur, internal or external. Because as we all know, they will. Efficiency is about cutting down development
time through various methods, but also about making sure that the automation runs smoothly
in production conditions. And having a workflow that’s easy to maintain
is essential in business environments, where collaboration and handovers are the normal
way of operating. So is extensibility, because there will
always be more use cases that can be incorporated into existing automations. These are just the main benefits of applying
best practices to your automation, but you will probably encounter other advantages too. So let’s see what these wise lessons are,
passed down from one generation to the next. First of all, when you start a project
make sure you take a moment to choose the appropriate layout for each workflow and its
sub-workflows. You can do most tasks in UiPath in either
a sequence or a flowchart, but in practice some things are better suited for flowcharts
and others for sequences. For example, the main workflow works better
as a flowchart for a few reasons: it’s easier to get an overview of the whole project, and
it’s also easier to test individual parts of the workflow by connecting them directly
to the start node. Sometimes it’s a good idea to use a state
machine for your main workflow. It works especially well if you have multiple
strict conditions for when your workflow can move from one state to another. We’ll take a look at that in a separate
video. And in general, higher-level decisions or
business-led choices work better in a flowchart. UI interactions are performed one after
the other, without much variation, so they’re better suited for a sequence. And try to avoid nested IF statements because
they can get a bit unwieldy and hard to follow. If you need more complex logic, the visual
nature of flowcharts makes them much easier to comprehend, so they’re much better suited for this
purpose. To understand why breaking an automation
into smaller components is important, you must remember that much of what we do as automation
developers consists of managing complexity. We implement very complex applications and
algorithms by breaking them up into smaller manageable tasks. That brings a lot of advantages:
It’s easier to develop and test the functionality of independent pieces. You can make sure each piece works on its
own, and then you can tie them all together. It cuts development time by reusing the same
functionality in different parts of the project. And it’s easier to collaborate with multiple
people on the same project, in a local or remote environment. Besides workflows and sequences, an even
better way of isolating pieces of automation is using the Invoke command; and later we’ll
take a closer look at how to use it. Exception handling should be used to capture
and handle errors, or at least log them for later analysis. You can put problematic sequences – like
let’s say, some sensitive UI interactions – inside a TryCatch block. This ensures that your automation will not
crash and stop completely for every minor error. TryCatch should also be used when invoking
external workflow files. It helps in debugging and in the process
of recovery. Automatic recovery sequences placed inside
Catch blocks can make your automation very reliable. They make the difference between an automation
that works most of the time and one that is rock solid and almost never fails. So whenever possible, implement a sequence
which will reset the application or process into a usable state in the event of an exception. We talked about keeping your projects
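The recover-and-retry idea can be sketched outside UiPath too. Here is a minimal Python illustration of the same control flow, where `enter_user` and `reset_application` are hypothetical stand-ins for the activities a real workflow would invoke:

```python
# Sketch of the recover-and-retry pattern from the text. enter_user and
# reset_application are hypothetical placeholders for UiPath activities;
# only the control flow is the point here.
def process_with_recovery(records, enter_user, reset_application, max_attempts=3):
    """Try each record; on an exception, reset the app and retry."""
    statuses = {}
    for record in records:
        for attempt in range(max_attempts):
            try:
                enter_user(record)            # the sensitive UI interaction
                statuses[record] = "OK"
                break
            except Exception as error:        # catch, log, then recover
                print(f"Attempt {attempt + 1} failed for {record}: {error}")
                reset_application()           # bring the app back to a usable state
        else:
            statuses[record] = "FAIL"         # every attempt exhausted
    return statuses
```

The `for…else` marks a record as failed only after every attempt is exhausted, which mirrors a Catch block that logs a FAIL status and then carries on with the next item.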
clear and readable in the previous episode too. It’s essential for efficient debugging,
and also in team projects where other people need to use your workflows. As you probably know, giving descriptive
names to all components and leaving clarifying notes and comments is imperative. And don’t forget to log the execution progress
too. It keeps you in the loop with what’s going
on, and helps pinpoint errors faster. Another useful trick when deploying large
projects is to keep settings of your workflow in a separate file. We recommend a JSON** file but you can use
a CSV, Excel, or text file if you prefer. It’s an easy method that simplifies changing
the parameters of your robot, even without starting UiPath. This one is indispensable: if you don’t
clean up by closing unused applications and windows, after a few iterations there will
be countless instances that will slow down and eventually crash the machine. Right, so these are the 5 “commandments”
of UiPath automation, if you will. Feel free to refer back to them if you need
to refresh your memory when starting a new project. But anyway, enough theory for now, let’s move
on to more practical aspects. We’ll come back to higher-level, organisational issues in a minute, but before that, I want
to quickly show you a few more details about the invoke command that we mentioned earlier
and in the previous video too. It’s one of the main tools for breaking up
a project into smaller components, and pretty powerful at that. It is available as an activity that you can
drag-and-drop – like any other one – but most of the time you will use it differently:
You can simply right click any sequence… or flowchart… or any other container or
activity… and choose Extract as Workflow. UiPath will automatically replace the selection
with an Invoke activity. All you have to do is choose an appropriate
name for the extracted workflow. Essentially, you just made these workflows
into functions that can be reused in other parts of the project, or maintained and improved
by other people, because they are simple files. And more importantly, each piece can be independently
tested. Right, so that’s pretty straightforward:
right click, extract workflow. But the real power of this technique comes
from the data that can be passed back and forth – as parameters, or arguments – between
the main workflow and its invoked descendants. For example, we have this workflow that enters
data into a myCRM app; with data coming from these 4 variables, declared globally in the
flowchart, not the current sequence. And there’s also a local Timeout variable. When we extract the workflow, all 4 global
variables become arguments, and the locally-declared Timeout variable will remain unchanged. And in the original parent workflow, we have
the Invoke activity. There are no arguments by default, but clicking
Import will bring in all those declared. Here, all you have to do is decide what data
to populate these arguments with. Or in our case, what data to enter in the
CRM app. You can obviously use raw values, and that
is useful on its own, but most often you’ll use variables. Basically you’ll run the process with different
data. You get the idea. In the Direction column you can choose to
pass data Into the child workflow, like we are doing here, or get data out of it, if
the called workflow generates some data that you need accessible from the parent. Or Both, in which case the data is passed
both ways. For this instance we need to pass data into
the process, so we’ll leave them all set to In. And let’s test it. Great, it entered the dummy data that we passed
as arguments. But sometimes you need to GET data from a
child-process, into the current workflow. Like this GetUser sequence that uses the privacy
website FakeNameGenerator.com to create fictitious user data. These Get actions extract data from the website
and store them in these globally declared variables: getName, getAddress, and getEmail. And here’s the result. The difference from the other workflow is
that now, after we extract this workflow, the imported arguments should be set to OUT,
because we want to get data out of the called workflow. UiPath automatically set them to IN and OUT,
which also works fine. We don’t need this new-line variable as
an argument, so we’ll delete it. Now we can go in the parent workflow and import
the GetUserData arguments. For Outgoing variables, the value is where
to store the produced data. We’ll put each one in it’s local corresponding
variable, left behind after the Extract workflow. So now, in the parent workflow we have these 2 invoke blocks: one that gets data from a
website, and one that enters data into another app. They are independent from each other, and
can be used and reused, however and wherever you need. To use them together, connect the two… then
go to the EnterUserData Invoke block, and in its arguments window, pass it the local
variables; the ones that are used to store the extracted data from the first invoke – GetUserData. Let’s see how it works. The name and address could use a bit of processing,
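Conceptually, the two Invoke blocks behave like functions: Out arguments play the role of return values, and In arguments the role of parameters. A rough Python analogy (all names here are invented for the sketch, not UiPath calls):

```python
# Analogy only: In arguments behave like parameters, Out arguments like
# return values. All names here are invented for the illustration.
def get_user_data():
    """Stands in for GetUserData: its Out arguments are the returned values."""
    return "Jane Doe", "1 Main St", "jane@example.com"

def enter_user_data(name, address, email):
    """Stands in for EnterUserData: its In arguments are the parameters."""
    return f"Entered {name} <{email}> at {address}"

# Wiring one Invoke's outputs into the next Invoke's inputs:
name, address, email = get_user_data()
result = enter_user_data(name, address, email)
```

Passing the local variables between the two Invoke blocks is exactly this wiring step, done in the arguments window instead of code.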
like splitting them up, but we’ll see about that later. As it happens, these 2 invoke commands are
a great introduction for the next section, project organization. We’ll build a full project, starting from
these 2 sub-workflows, to show you a real-world example of project organization. So, let’s say we want to make a robot that
generates 10 users and enters them in the myCRM. Similar to the workflow we just made, but
for multiple users, and we need it to keep a log of all the operations and their status
– as failed or succeeded, because it will run unattended. And because it will run in an enterprise environment,
it should be reliable, efficient, maintainable and extensible. So let’s plan this out a bit. We need to extract user data from a source
and load that data into a destination app; so we can break the workflow in 2. To get the users, we need to open the website,
generate a number of users using the workflow we created earlier, then close the browser. And for reliability, we’ll put everything
inside a TryCatch block. Pretty easy. The other half of the automation starts off
in a similar way: Open the app, insert X users, and close. Because we need individual status reports
for each user, the TryCatch block will be inside the Insert-X-Users loop. And here too we can use the other workflow
we created earlier, EnterUser. And in the Catch section, we’ll log the
success or failure of the operation in a separate excel file, and then initiate recovery methods. As you can see, this “sketch” already
gives us a good idea of what we have to do, so let’s give it a try. We’ll go over details faster than usual
because we want to give you a high-level overview of the process, without getting lost in the
technicalities. And you will be familiar with most low-level
aspects of what we’ll do anyway. We’ll start with an empty flowchart named
“Zero Main”, and creating the two sub-workflows: GetUsers and InsertUsers. We’ll actually make them separate workflows,
so they’re easier to test, and we’ll name them starting with numbers. This ensures they stay in the correct order
in the project view and maximises readability. This setup will give us a structure to build
upon. For GetUsers we’ll start with a TryCatch
block that’ll hold all the actions: for starters, an OpenBrowser. Now that we have that, let’s add the most
important part: the loop that will generate a number of users. The loop is just a simple DoWhile with a counter;
the important part is this: we’ll go to the workflow we made earlier, and bring in the
Invoke action that generates a single user. The arguments were copied along, but the local
variables don’t exist here; we’ll just create new ones, with the largest scope. Next, we need to somehow save the results
of the GetUserData sub-workflow. We’ll do that with a simple dataTable, one
row per user: we’ll create it before we open the browser, and write it to disk after
all the browser data extraction ends. In the BuildDataTable activity, we need to
set up all the columns for the user data: basically what we extract from the website, and an extra
one for the transaction status. And inside the loop, we’ll populate the
dataTable by adding each name we generate as a new row in the dataTable. We’ll simply use the GetUserData’s results
to create each row. Finally, we’ll write the dataTable to disk and also print it out for verification. Now that should be all. Let’s see how it works. Here it is generating each user… and these
are the results: the 3 individual users printed from the invoked GetUserData workflow, a warning
that we can ignore, and the saved dataTable, containing the 3 users. Right, so we’re done with user extraction. Next, let’s copy all these users from the
excel file where they were saved, to our custom CRM app. This time, in the InsertUsers workflow, we’ll
start by loading and setting up the data. After reading the file we saved earlier, there’s
a tricky part. We said that in the excel file we’ll also
keep the status of the copy operation, as successful or failed. Because we want to avoid any conflicts inside
the CRM app, we’ll filter just the users that weren’t entered at all. AKA those with an empty Status field. We’ll use an assign activity. The filtered dataTable will be saved in a
Rows variable, which is of type array of rows. And the value we’ll assign to it is our
dataTable, Select, all the rows with an empty status; or NULL. As easy as that. Select is a very useful tool especially if
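The same empty-status filter can be sketched in Python with ordinary rows (dummy data, just to show the idea; the actual Select filter-expression syntax belongs to .NET DataTables and is worth looking up before use):

```python
# Dummy rows standing in for the dataTable; keep only users whose Status
# is empty or missing, i.e. those not yet entered into the CRM.
rows = [
    {"Name": "Jane Doe", "Status": "OK"},
    {"Name": "John Roe", "Status": ""},    # not yet entered
    {"Name": "Ann Poe",  "Status": None},  # not yet entered
]

pending = [row for row in rows if not row["Status"]]
pending_names = [row["Name"] for row in pending]
```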
you work with spreadsheet data; you can find out more about its syntax by doing a web search
for “dataTable select”. Ok, we are done here. Up one level, we’ll just add an OpenApplication
for our CRM app. So, the processing. We want to make a flowchart loop, to add users
one by one, hopefully using the EnterUser workflow we created earlier. Basically we’ll ask “are there users to
be added?”, and if so, enter the loop and try to process the first one. If it succeeds, it will go ahead and process
the next one, and so on until all rows are processed. When there are no more users to add, close
the app and the workflow is done. Let’s take them one by one. To know if there are any users to add, we’ll
need a counter, and we’ll set one up and initialize it with zero. So for the condition we’ll have “counter…
is smaller than… total number of rows”. If the counter is smaller than the number
of rows it means we haven’t reached the end of the user list and we can start adding
the next user. The AddUser block is where the “meat”
of this action is. First of all, it’s not a sequence, but a
TryCatch block. This way, if an error occurs, we’ll be able
to catch it and write the appropriate status in the excel file; and then carry on with
the next user. We’ll start with a log message that will give
us real-time execution tracking of the process. This way when an error occurs, we’ll know
what was happening at the moment so we can locate and solve the problem quicker. Next, we need the most important part, the
one that inserts users in the CRM app. We’ll bring in the Invoke activity that
calls the workflow we built earlier: copy from here… to here. We just need to pass it the user data to enter
in the CRM app. Let’s see… in the LoadUsers sequence,
we load the data from an excel file and then select the rows which have an empty status;
and store all of them in the Rows variable, which is an array of rows. With the current user being held in the counter
variable we set up. So back in our Invoke command, the arguments
look like this; we’ll start with the address: the current row is Rows, at position Counter. This is the whole row, but we just need the
address. That is the column name we set up in the Build
DataTable action. And we’ll convert it to string. The others are very similar: Rows, at position
Counter, and the appropriate column name… to string. There is a small problem though: the CRM app
requires a first and last name, but the generated users have only 1 full name field. We need to split it in 2, using the Space
character. The result of the split operation is an array
of strings, so for the first name we’ll get the first element, and for the last name,
the second; zero and one being the first and second elements in zero-based numbering. This method isn’t 100% accurate for all
types of names, but it’ll work for our example. As you might have guessed, finding this correct
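Here is the same split-and-index step, sketched in Python with a sample name:

```python
# Split the full name on the space character; with zero-based indexing,
# element 0 is the first name and element 1 the last name. This breaks on
# middle names or single-word names - acceptable for generated demo data.
full_name = "Jane Doe"
parts = full_name.split(" ")
first_name = parts[0]
last_name = parts[1]
```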
formula sometimes takes a bit of trial and error and that is where breaking up the process,
as we did, has huge advantages. Because otherwise we’d have to run the whole
process every time we want to test it, including the part that generates users. And we certainly don’t want that: running
only this workflow is much quicker. Ok, that should be it. Next, we’ll set up a variable – we’ll call
it haveError – to keep track of how the operation completed – successfully or not. Since the process got up to here, it means
it was successful, so we’ll set it to false. Then, for debugging and progress tracking,
we’ll log the successful operation. But more importantly, we want to write the
status of the operation in the excel file. So we’ll add an excel ApplicationScope and
have it open the same “Users” file that we saved in the Extraction workflow. Then add a WriteCell action to write the OK
status. The actual location of the cell is a bit crafty,
here’s how it works: if we take a look at the excel file, we want to write in the D
cell, so our location should be D, plus the row of the current user, the one we just entered
into myCRM app. The current index of the user is stored in
Counter, but it is based on the array of rows we created using the select command. So we will search for the current user in
the dataTable containing ALL users, using the IndexOf command. And when one is found, it will return the
Index of that row; or user for us. But to be exact, we also have to add 2: one
because of the zero-based numbering, and one for the row of headers we have in the excel
file. And that is how we find the correct location
in the excel file. Again, a bit of trial and error was involved. As it is, this workflow is already functional,
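The cell-location arithmetic can be sketched in Python, with plain lists as dummy stand-ins for the full dataTable and the filtered Rows array:

```python
# Find the Status cell for the current user: column D, plus the user's row
# in the FULL table, plus 1 for zero-based indexing and 1 for the header row.
all_users = ["Jane Doe", "John Roe", "Ann Poe"]   # every row in the dataTable
pending   = ["John Roe", "Ann Poe"]               # the filtered Rows array
counter   = 1                                     # current position in pending

row_in_table = all_users.index(pending[counter])  # IndexOf against the full table
status_cell = "D" + str(row_in_table + 2)         # -> "D4" for Ann Poe
```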
so you can try it out, but we’ll go on to setup the Catch section. AKA what happens if there is an error. As usual we’ll start with logging the failure
of the transaction, with the complete Message and Source of the error. Then we’ll set the error flag to true. This is important because later we’ll use
this flag to recover from errors. Now, if there has been an error we need to
log that error in the excel file too. So we’ll just copy the excel section from
the Try block… and paste it here, changing only the message,
from OK to FAIL; the location remains the same. Back in the root level of the workflow, we
have one more activity to set-up: the Success or Error decision box. We already have the condition: it’s the
error flag we created in the AddUser sequence; since it’s a boolean value, that’s all
there is to it. To simulate a crash, we’ll manually close
the app while the workflow is running, so the recovery procedure will be simply reopening
the CRM app. That’s accomplished by connecting the True
port of the decision with the Open Application action. Depending on the type of error you are recovering
from, you can have more complex recovery methods, but in our case this will be enough. And finally, time to see how this workflow,
which is the second half of the project, actually runs. We’ll have the Users excel file open so
we can see the real-time feedback of the operations. Here it is entering the first user from the
excel file… and writing the OK status. And the second one… also successful. For the third one we’ll close the app to
mimic a crash… and here’s its Fail status, and then the app recovers and continues to
process the next user. If we kill it again… same thing. We’re gonna let it finish, we don’t want
to make it angry 🙂 We’re almost done with this workflow, let’s
just clean it up a bit to make it more readable and flexible. First, in the project folder, we can remove
the Parent Invoke workflow since we don’t need it anymore. And we’ll rename this getUserData file to
make it obvious that it is called by its parent, getUsers. Much better. And to make it more flexible, we’ll create
a new file to keep all the settings in it. You can use excel files, or CSV, but we’ll
go with the JSON format, since it’s the easiest to edit and to use inside UiPath. So let’s make a new file in the project
folder, we’ll name it Config; and using basic JSON syntax we’ll create 2 parameters:
the file-path to the Users excel file, because it will certainly change for each deployment,
and the number of users to extract at a time. And save. To use these variables in the workflows, we
need two actions. We’ll put them in the Load Users sequence
since it’s at the beginning of the process. The first one is Read Text File, that will
read our config file. The second action, “Deserialize Json”
is the interesting one: it takes the information from the text file, and outputs an object
containing all our variables from the config file, accessible like so: the name of the
JSON object, in our case – GlobalParameters, and the name of the variable we need, usersFilePath. And that is all. Then we’ll copy these 2 actions and paste
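A Config file along those lines, and the read-and-deserialize step, can be sketched like this. The JSON is embedded as a string to keep the Python sketch self-contained; the key names follow the video but the exact file shown on screen may differ:

```python
import json

# Config.json embedded as a string so the sketch is self-contained; in the
# project it would sit next to the workflows. Key names follow the video
# but the exact on-screen file may differ.
config_text = """
{
  "UsersFilePath": "C:\\\\RPA\\\\Users.xlsx",
  "NumberOfUsers": 3
}
"""

# Read Text File + Deserialize Json, expressed in plain Python:
global_parameters = json.loads(config_text)

users_file_path = global_parameters["UsersFilePath"]       # kept as a string
number_of_users = int(global_parameters["NumberOfUsers"])  # converted to Int
```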
them in the other workflow, getUsers, somewhere close to the start, and create the missing
variables. Here, we have the NumberOfUsers variable,
that controls the batch size. We’ll just assign it the value from the
config file, in much the same way, except now we’ll convert it to Int instead of String. And in the Write excel action, we’ll give
it the same global parameter, UsersFilePath. Now, you can edit the config file to change
the full path to the excel file, or the number of users it copies at a time, and the workflow
will adapt accordingly. So that’s all for today’s video. It was longer and it involved more
theory than usual, because it provides important lessons that will help you develop efficient
and reliable automations in any environment. Until next time, goodbye!
