Ignoring the How in meeting product outcomes could hit your ROI

By Airwalk Reply Senior Product Delivery Consultant Anthony Condon

There is often a lack of awareness of how an internal product outcome can be met, regardless of the phase of the product lifecycle (from ideation through to growth and maturity). If the success of an internal product outcome matters enough to influence business ROI, grading products across a portfolio to understand the level at which each is performing can be a helpful prioritisation exercise.

Some internal products are built so that the internal user has no interaction with, or responsibility for, ensuring the product achieves its outcome. An example might be an internally automated backup system, backing up user files each night without the internal user being required to do anything. That internal product should be easy to implement, as most users aren't even aware they are following a backup process. The responsibility of having to do anything is taken away from them, and they can focus on doing their job while their data is backed up overnight. Almost by default, they adhere to the backup process with minimal input, protecting and securing the company's data against loss, corruption or accidental deletion.

Internal products can also be built to require user input in some way. Returning to the previous example, if the backup wasn't automated, internal users might instead be asked to periodically save essential documents or project files onto a designated storage device within the organisation's premises.

The former example is simple; it barely requires the user to do anything, and it's efficient because users do the right thing by default, i.e. back up their work every day, while the latter requires user input. In an ideal world, organisations would avoid the latter. However, there may have been a very particular reason why the latter was chosen over the former.

Larry Tesler, the computer scientist behind Copy and Paste, argued that complexity in a system is never lost; it simply moves elsewhere. When an internal product tries to tackle a complex problem, this is a challenge for the product development team: the complexity could be pushed into the user experience or into integration points throughout the software engineering and product development phases. Other complexities might arise as misalignment across functional areas, skills gaps, funding limitations or intricacies in the technology stack.

Agile is a complexity enabler. The Minimum Viable Product (MVP) allows teams to press on through the strategy and planning phase without having answers to these complexities, but there are costly consequences if those compromises are not improved or resolved later in the product's life. What happens when complexity leads to the release of an internal product that requires user input but leaves it up to the user to decide whether or not to use it as intended? How do you verify that employees save their files to the correct storage location? Let's explore two further examples of internal products, each showing a different degree of ensuring users adhere to the process.

Manage by Exception

Let's take an example of an Asset Management System, and let's propose the internal users are the Service Desk. At the outset of product development, the Asset Management System was meant to be tailored for fast onboarding, and a Radio Frequency Identification (RFID) reader to scan devices into the system was deemed the appropriate solution to meet that rapid onboarding requirement. The product is released, but the RFID reader turns out to be incompatible with the Asset Management System, so all new laptop details must now be onboarded into the system manually. It is a very busy Service Desk with minimal staff, and humans are neither perfectly accurate nor infallible; the RFID failure has defeated the product's purpose. An MVP doesn't have to be perfect, but complexity has moved towards our internal users, the Service Desk. How do we verify that each person is doing what they should be, so that the Asset Management System still reflects the current environment? The team creates dashboards to track laptop onboarding status. Yet it can be argued that the Service Desk's job is not to develop and maintain dashboards; that was the responsibility of the product team that created the Asset Management System in the first place.

Now suppose no measures are taken by anyone to actively detect the laptops' state, and the busy product development team hasn't yet prioritised a fix for the RFID scanner. You can guarantee that one way you will eventually find out whether the Asset Management System is accurate is through a compliance audit or an incident. It may only take one laptop to slip through the net for it to become non-compliant, and perhaps even a target for a malicious actor, because it is practically invisible to the internal patching product. Short of never knowing at all, this is probably the last way you want to find out that the product's outcome is not being achieved. If the manual effort required to input the laptops into the system was deemed considerable, compare that to identifying every computer missing from the system and, for each one, determining the required patch, applying it and verifying that the patching has worked. This domino effect could continue and result in an audit failure or a breach, either of which could significantly hit the company's revenue.
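The reconciliation at the heart of this example is simple to sketch: compare what a device discovery or patching feed can see against what the Asset Management System knows about. The data sources and serial numbers below are hypothetical, a minimal sketch rather than any particular tool:

```python
def find_unregistered(asset_register: set, seen_on_network: set) -> set:
    """Return device serials observed on the network but absent from the
    Asset Management System. Anything in this set is invisible to
    downstream controls such as patching. Data sources are hypothetical.
    """
    return seen_on_network - asset_register

# Example: a discovery feed vs. the manually maintained register.
register = {"LT-1001", "LT-1002"}
network = {"LT-1001", "LT-1002", "LT-1003"}
# find_unregistered(register, network) -> {"LT-1003"}
```

Running a check like this on a schedule would move the product up from "Manage by Exception" towards "Auto Detection", because drift would be found by the system rather than by an audit or an incident.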

Compiled Analysis

As hinted at in the previous example, some internal products push the complexity problem onto internal users, and that can result in effort being spent monitoring whether a product's outcome has been achieved.

Consider an internal onboarding training portal for an organisation of more than 4,000 users, designed so that new joiners are simply told to visit a URL to complete their training. In this example, the customers are the organisation's internal users. The product development team has not found a suitable way to integrate the training into the onboarding process itself. Instead of engineering a solution that deals with the complexity of the product, dashboards are built using HR data: new onboarders who have or haven't completed the training are identified by whether they hit the completion page presented at the end of the training.

Anyone who did not complete the training can be identified, but the Product Owner must then do manual work: through some channel, each user has to be told to go to their browser, type in the correct URL and sit through the mandatory onboarding training, an entirely manual process. A user who fails to complete the training within a few days is not automatically forced to sit it; that wouldn't be practical and could hinder their ability to do their job. On the other hand, such manual chasing could be pretty tedious at scale. Despite the difficulty this causes, it is undoubtedly better than not knowing whether the product is meeting its outcome, only to find out when it is too late through a critical revenue-impacting audit failure.
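The cross-reference behind those dashboards can be sketched in a few lines: take the new joiners from HR data and subtract everyone recorded as having reached the completion page. The field names and identifiers are illustrative assumptions, not any real HR schema:

```python
def outstanding_training(onboarders: dict, completions: set) -> dict:
    """Given new joiners (user id -> start date) from HR data and the set
    of user ids that hit the training completion page, return everyone the
    Product Owner still needs to chase. Field names are illustrative.
    """
    return {uid: start for uid, start in onboarders.items()
            if uid not in completions}
```

This is the "Compiled Analysis" pattern: the metric reliably points at who has drifted, but closing the gap (emailing and chasing each user) remains manual.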

Patterns in Internal Product Design

There are varying levels of ‘user inputs’ required across internal products, and they can be classified as patterns relating specifically to internal products. Each pattern has degrees of automation, efficiency, product complexity and scalability. Below is a table that grades the effectiveness of these patterns.
 

Level 7, Compliant by Default: The governance guardrails never allow users to drift from standards. A user cannot drift from the standard because of the system or process design, so the design will only be sustainable if it enables the user and avoids adverse user experiences such as long, tedious wait times. It is the most efficient way to design capabilities over time, but it can require enormous collaboration.

Level 6, Self Correcting: Both the detection of standards drift and the enforcement needed to bring the drift back into policy are automated. A user outside a policy is gently brought back into it. Drift can still occur with this design pattern, and mitigation or correction won't happen in real time.

Level 5, Auto Detection: Detection of standards drift is automated and accurate; if there is an alert, some control or activity is out of policy. Remediation of the identified drift must be completed manually.

Level 4, Compiled Analysis: Standards drift is indicated through metrics and measures. The collection methodology for detecting drift might range from fully automated to fully manual, depending on the control and the underlying systems involved. If the metrics are appropriate and accurate, they can point towards controls that are broken or non-existent, or towards standards drift. Think contextualised raw-data collection with KPIs.

Level 3, Raw Data Correlation: A manual search of raw data is required to find standards drift. Usually the data has been moved off the source system (or several systems) and generated in some automated way. Identifying drift requires manually searching data, alerts or emails, so it is only detected as frequently as the raw data is searched. Manual log searches or threat hunts are common examples of this design pattern.

Level 2, Manual Review: System data or configurations are reviewed manually. This normally happens on a single system, or less frequently across a number of systems using remote access tools. It can include controls such as configuration reviews.

Level 1, Manage by Exception: This pattern is in use when little effort is expended on identifying drift and some external event or activity alerts the organisation to it, mainly audit findings, reports from a user or incidents.


As the levels ascend, the verification method used to ensure the internal product meets its targeted outcomes becomes more automated, complexity is removed from users, and the solution becomes more scalable and operationally efficient.
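As a minimal sketch of the Self Correcting pattern (Level 6), assuming a simple key-value policy model and a hypothetical remediation callback, detection and automated correction might look like:

```python
def self_correct(desired: dict, actual: dict, apply_fix) -> list:
    """Self Correcting pattern: detect settings that have drifted from
    policy and invoke automated remediation for each. The policy model
    and the apply_fix callback are hypothetical assumptions.
    """
    drifted = [key for key, value in desired.items()
               if actual.get(key) != value]
    for key in drifted:
        apply_fix(key, desired[key])  # push the compliant value back
    return drifted
```

Run on a schedule, this both detects drift and corrects it without user involvement; the residual gap versus Level 7 is the window between drift occurring and the next run.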

Not all levels are achievable for every internal product, so this framework can be considered aspirational. It may also not be economically sensible to move from one level to another; it is up to Product decision-makers to judge whether the benefits of reaching a higher level justify the cost of the improvement.

If you want to discuss Product Development or an outcome-orientated approach to tackling your business problems, please contact Airwalk Reply.

 
