Here are 25 design faults that should trigger the check-engine light
I really don’t know much about cars.
Furthermore, with all the computers on them now, I probably never will. However, I do care when the “CEL” goes on. The CEL, or check-engine light, is that often cryptic, blood-pressure-raising notification that mostly just makes you ask one question: “How long/far can I keep driving without making it worse?” Unpleasant as it is, it probably does a decent job of triggering action: reminding people that they have damage, or will do damage, if they don’t treat the underlying issue.
You can also buy a scanner called an OBD2 reader that plugs into your car’s data port and exposes (via a mobile app) what code and fault triggered the CEL. That information is powerful: you might try to fix the issue yourself, explore cost estimates, or simply be more informed when you talk to your mechanic.
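To make the analogy concrete, here is a small sketch of what an OBD2 reader actually does under the hood when it shows you a fault code: it decodes two raw bytes from the car’s data port into the familiar “P0301”-style string. The function name and examples are illustrative, but the byte layout follows the standard diagnostic trouble code (DTC) encoding (SAE J2012 / ISO 15031).

```python
# Hypothetical sketch of how an OBD2 reader turns two raw bytes from the
# car's data port into the fault code string shown in a mobile app.
# The encoding follows the standard DTC format (SAE J2012 / ISO 15031).

def decode_dtc(high: int, low: int) -> str:
    """Decode a two-byte diagnostic trouble code into its text form."""
    systems = "PCBU"                      # Powertrain, Chassis, Body, Network
    letter = systems[(high >> 6) & 0b11]  # top two bits select the system
    digit1 = (high >> 4) & 0b11           # next two bits: first digit (0-3)
    digit2 = high & 0x0F                  # low nibble: second digit (hex)
    digit3 = (low >> 4) & 0x0F            # third digit (hex)
    digit4 = low & 0x0F                   # fourth digit (hex)
    return f"{letter}{digit1}{digit2:X}{digit3:X}{digit4:X}"

print(decode_dtc(0x03, 0x01))  # → P0301 (a cylinder misfire code)
print(decode_dtc(0x01, 0x43))  # → P0143
```

The point of the analogy: the code itself is terse, but once decoded, it points you at a specific fault you can investigate or bring to your mechanic.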
So, what if we applied this thinking to your current software/product development process? What if you had a big, orange, rectangular CEL above your cube that would “ding” and illuminate whenever the process (driven by your data or engineering culture) was impeding your ability to design a great user experience that would also yield business value?
Before you head to Amazon (I’m sure they sell a giant CEL sign), you first need to know what design faults should trigger the CEL during your software development. In my time working with companies primarily made up of technical, data, or engineering talent, I’ve identified cultural patterns that can run counter to the goal of building valuable, human-centered software. Before I continue, I want to recognize that these companies are often staffed with highly competent and intelligent technical experts. That said, human-centered design is a different skill set, and in large groups, a culture emerges, one that may sometimes be at odds with a human-centered approach to building software.
So, today, I want you to imagine that you are the “fault detection system” for your development process. As a leader, your job is to illuminate the CEL when one of the design faults is triggered. Of course, to do that, you need to know which design faults might get thrown in a data/engineering-driven organization, so that you can focus on clearing the CEL and moving ahead in a customer-centric manner.
Since there isn’t an OBD2 reader for this, I’m going to give you a list of the faults I’ve seen. But remember: if the CEL never goes on, most of the team won’t know there’s a fault at all, and you may be driving your service into a bigger risk down the road.
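As a playful sketch of the metaphor, the “process CEL” boils down to a set of named checks, and the light goes on if any of them fires. The fault names and flags below are illustrative stand-ins pulled from the list that follows, not a real diagnostic tool.

```python
# Toy sketch of a "CEL for your development process": each design fault
# is a named check, and the light goes on if any check is triggered.
# The fault names here are illustrative examples, not an exhaustive list.

def check_engine_light(faults: dict[str, bool]) -> list[str]:
    """Return the names of triggered faults; a non-empty list means CEL on."""
    return [name for name, triggered in faults.items() if triggered]

process = {
    "users engaging less than expected": True,
    "scope dictated by available data/APIs": False,
    "no UX benchmarks revisited over time": True,
}

triggered = check_engine_light(process)
if triggered:
    print("CEL ON:", ", ".join(triggered))
else:
    print("CEL off")
```

The value isn’t in the code, of course; it’s in agreeing as a team on what the checks are, which is what the list below is for.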
Possible Design Faults:
- You can see lots of design and/or engineering “effort,” but despite all these assets and outputs created, something feels off. You can’t put your finger on all the specifics, but you’re just not convinced that the product or solution you’re building is going to be compelling or valuable.
- Users aren’t engaging with your service as much as you expected.
- You say/hear, “We’ll make it better in v2”…but you rarely iterate. You just add more stuff to v1 since the backlog is endless...
- You set out to “do some data science,” but the users and/or business stakeholders are scratching their chins. The execs are wondering if this means there’s now an AI strategy in place.
- The UI or data visualization looks like it might be informative and useful to users, but you can’t quite tell what problems it is solving.
- You gave your customer exactly what they asked you for, but they either aren’t using your solution, don’t seem happy with it, or they changed their mind after seeing it.
- The sales team complains the product is hard to sell / doesn’t “sell itself.”
- You’re over-investing in training or documentation.
- The stakeholders want “iPhone/Apple-like” simplicity but have not invested in design at all, or the ratio of design to engineering staff is completely unrealistic (I’ve seen “hundreds-to-one” ratios before).
- “Testing” in your world means QA and unit testing only. Sprint demos center around showing off increments of technical effort instead of end-to-end tasks users need to perform.
- Design quality is measured by subjective opinions, usually focused on aesthetics or trivial interface concerns. There is no clear definition from leadership on how useful, usable, and valuable will be measured.
- Nobody has ever seen or helped to create a current-state or aspirational journey map to reflect and guide the entire customer experience.
- The IoT project focused its efforts on getting telemetry from the hardware and displaying it.
- The sprint / project / application’s scope was largely dictated and framed by what data/APIs were available since the project started with engineering.
- The word “design” is only associated with “data visualization and UI.”
- The service can be changed at any time by a last-minute swoop-and-poop from the HPP (highest-paid person) involved.
- The biggest influencers of design and UX decisions haven’t directly interacted with any end users in more than two weeks…or maybe ever.
- There are no UX benchmarks or checks that are revisited over time to ensure the solution/service isn’t drifting or getting out of touch.
- SMEs / department / domain experts were not involved at all, or were not properly engaged, during design.
- There’s a lot more talking about the right design than sketching and visually exploring what the right design might be.
- While there was an “MVP” mindset, the increments that were implemented were dictated by engineering or data boundaries.
- Use cases are poorly written and lack sufficient contextual information to help the team understand the real “why” behind the use case. Effectively, they are not design-actionable yet…but engineering charges ahead.
- The team working on additions/changes/features has no idea how they got prioritized in the backlog, but they proceed anyway, filling in the blanks.
- UX/UI choices are justified as good simply because they were copied from some template or famous brand/site/app.
- Consistency in your design trumps context every time.
The CEL is a nuisance, but the fault information behind it provides important data for troubleshooting the health of your software development process. While I’d rather see your CEL off because none of these faults has occurred, ignorance is not bliss. The first step to clearing the CEL is to understand what fault triggered it. Once you’ve done that, investing in design is your best mechanic.