Annotell provides fast annotations while keeping quality high. An important factor in making this possible is the use of efficient annotation workflows that utilize interactive automation features. The goal of introducing automation tools into the annotation workflow is to reduce the time and effort spent annotating while keeping data quality high. In this article, we introduce the main insights we gathered from implementing interactive machine learning services as part of the annotation user experience.
Through the years we have grown to a large number of annotators spread across the world. One type of data we work with a lot is a fusion of 2D images and LiDAR 3D point clouds. In order for our annotators to successfully annotate images, videos, and sequences, they receive definitions of what to annotate and guidelines on how to do so. Additionally, we continuously develop tools, features, and functions that assist the annotators in their annotation workflows.
What the 2D image interface looks like
What the 3D LiDAR interface looks like
We have put a lot of effort into avoiding and reducing time-consuming, inefficient interactions within the user workflows. To help the annotators annotate more efficiently, we have provided them with a variety of interactive automation features.
The Engineering Team at Annotell was tasked with creating a drawing tool that would allow the annotators to spend less time drawing and adjusting cuboids while annotating objects in LiDAR 3D point clouds. This tool, which we call the Machine Assisted 3D Box tool, calculates the size, position, and rotation of objects: the annotator provides an initial judgment, and the machine learning service fits a box that encapsulates all the points of the object. In this way we utilize the different strengths of human and machine.
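To make the idea concrete, here is a heavily simplified sketch of such a seed-to-box step. Everything in it is hypothetical (the function name, the radius-based point gathering, the PCA yaw estimate) — the actual service is a machine learning model, not this heuristic — but it shows the same division of labor: the human supplies a rough initial judgment, the machine tightens the box around the object's points.

```python
import numpy as np

def fit_box_from_seed(points, seed, radius=2.0):
    """Fit a yaw-aligned cuboid around the points near a seed click.

    points: (N, 3) LiDAR point cloud; seed: (3,) the annotator's
    initial judgment. Hypothetical stand-in for the learned service:
    gather nearby points, estimate yaw via 2D PCA, take tight extents.
    """
    # 1. Gather the points the annotator's click refers to.
    near = points[np.linalg.norm(points[:, :2] - seed[:2], axis=1) < radius]

    # 2. Estimate heading (yaw) from the dominant x-y direction.
    xy = near[:, :2] - near[:, :2].mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy.T))
    major = eigvecs[:, np.argmax(eigvals)]
    yaw = np.arctan2(major[1], major[0])

    # 3. Rotate into the box frame and take min/max extents,
    #    so the box encapsulates every gathered point.
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s], [s, c]])
    local = xy @ rot.T
    lo, hi = local.min(axis=0), local.max(axis=0)
    size = np.array([hi[0] - lo[0], hi[1] - lo[1], np.ptp(near[:, 2])])

    # Box center: midpoint of the extents, rotated back to world frame.
    mid_xy = near[:, :2].mean(axis=0) + ((lo + hi) / 2) @ rot
    center = np.array([mid_xy[0], mid_xy[1],
                       (near[:, 2].min() + near[:, 2].max()) / 2])
    return center, size, yaw
```

Even this toy version shows why the interaction can be fast: a single rough click replaces dragging out and fine-tuning eight corners by hand.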
In addition to the Machine Assisted 3D Box tool, a set of tracking features was developed for 3D cuboids. These features adjust an object's position, rotation, and size through a point cloud sequence. This reduces the time our annotators spend adjusting the cuboids in each frame of a sequence, and therefore saves time while keeping the annotation quality high.
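One way to picture a single tracking step: carry a cuboid into the next frame by following the points it contains. The sketch below is a hypothetical simplification (a centroid-shift update only) — a production tracker would also refine rotation and size, for example via point-cloud registration or a motion model:

```python
import numpy as np

def propagate_box(prev_points, next_points, center, size, yaw):
    """Shift a cuboid from one LiDAR frame to the next.

    Hypothetical, minimal tracking step: move the box by the
    displacement of the point centroid inside it. prev_points and
    next_points are (N, 3) clouds from consecutive frames.
    """
    size = np.asarray(size, dtype=float)

    def inside(points, scale=1.0):
        # Select points inside the (optionally enlarged) box.
        c, s = np.cos(-yaw), np.sin(-yaw)
        local = (points[:, :2] - center[:2]) @ np.array([[c, -s], [s, c]]).T
        half = scale * size / 2
        mask = (np.abs(local[:, 0]) <= half[0]) \
             & (np.abs(local[:, 1]) <= half[1]) \
             & (np.abs(points[:, 2] - center[2]) <= half[2])
        return points[mask]

    prev_in = inside(prev_points)
    # Search a slightly enlarged region in the next frame,
    # since the object has moved between frames.
    next_in = inside(next_points, scale=1.5)
    if len(prev_in) == 0 or len(next_in) == 0:
        return center  # nothing to track; keep the previous position
    return center + (next_in.mean(axis=0) - prev_in.mean(axis=0))
```

Applied frame by frame, a step like this turns per-frame cuboid adjustment into occasional corrections, which is where the time savings come from.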
In the early stage of introducing the users to the interactive automation tools, we had a low success rate in getting the users to utilize these features. This was the case even if we could conclude, both by observations and user metrics, that these tools sped up the annotation workflows.
To investigate the low conversion rate, we held interviews and sent out surveys to the users, but the response wasn't positive. According to the users, they had experienced the new tools as a slower, less accurate alternative and simply couldn't see how they would benefit their workflow. So we were left with a solution the engineers liked and the users ignored. This led us to question which factors could have landed us in this unfortunate situation.
The users didn't appreciate the automation tools because working with them didn't bring the expected result: efficiently created cuboids of high quality. When the tools fell short of their expectations, the users preferred to work with the manual tools and stay in control of the result.
How could we have prevented the negative experiences with the machine-assisted tool? According to Google's People + AI Guidebook, you build trust in a system by explaining it to the users. You decide at what level to explain the system, whether that means how the data behind the automated interaction works or how the users can include it in their personal workflows. Providing users with this information helps them build mental models that set reasonable expectations of how to interact and work with the system.
When we first introduced the interactive automation tools to the users, our engineers had some knowledge of what to expect from the interactions. They knew the limitations of the automation tools, along with how they were meant to make the user workflows more efficient. However, it seems we failed to prepare and inform the users enough to build the necessary trust in the tools.
What we have learned so far is that the automated interactions did not replace the users' own judgments in the way they expected. This left the users disappointed and a little confused as to why they should use them at all. How could we have communicated better and prepared them for the new types of interaction these tools came with?
Lessons we’ve learned so far:
By being honest, setting expectations, and using visual affordances, you can prepare and guide the users through the workflows. However, it is equally important that an automated workflow is more efficient and less complex than the manual workflow it replaces.
If you introduce a new workflow to the user, it is important that the user understands why it has been introduced. Changing a habit is never appreciated among users, especially when they can't see what's "wrong" with the old one. A lesson we learned was to make sure that the automation features and functions are easier to work with than the manual ones. More specifically, we have to make sure that the performance of the automated interactions brings positive, noticeable changes that the users appreciate.
Preparation and good communication can lead to a greater level of user acceptance when interacting with automation. However, it is difficult to change the users' feelings towards tools, features, and functions if their first experience was negative, and that in turn affects how willing they are to work with them in the future. Therefore, if you are aware that the automation feature you want to implement lacks good usability, start by introducing it to a small group of users. This lets them test it and provide feedback before you make it part of everyone's workflow.
So what did we learn from introducing this interactive automation into the annotation workflows of our users?