2024-08-01

Recently, I finished teaching a class of the mandatory first-year undergraduate course on innovation at the Singapore Institute of Technology (SIT). Earlier this year, I became an adjunct faculty member for the course (UDE 1001), and taught a class of students from the Digital Communications and Integrated Media (DCIM) degree. The course ended a couple of weeks ago.

In addition to my own class, I helped out as a guest judge for three other classes that were also doing their final presentations. A few patterns cropped up across the four final presentations I sat in:

  • My most common feedback across all four presentations was a direct quote from Cedric Price: “Technology is the answer, but what was the question?” The problem definition for many teams was not clear enough, and it seemed to them that “since AI will solve everything, let’s use AI for everything!” Maybe the updated Price quote should be: “AI is the answer, but what was the question?”
  • Featuritis: many projects had a TON of features in their prototypes. It often felt as though they were attempting to create super-apps from the get-go. Often, this was caused by…
  • Taking on every piece of feedback given: some teams literally showed how “this feedback caused us to include this feature”.
  • Prototypes as milestones vs. prototypes as a learning process: many teams developed higher-fidelity prototypes even when the problem-solution fit wasn’t clear, because the submission requirements said they needed to “deliver” a higher-fidelity prototype. Sadly, much of the feedback they got from end-users was probably polite rather than candid.

If there’s a single biggest issue, it would be the first: the lack of clarity of problem definition, including who has the problem to be solved. To quote a wise monk, this is like “scratching your head to relieve an itch in your bum”.

After I critiqued their presentation as far too vague and high-level, one student came up to challenge me: their case was “different”, because their user was “everyone”. When I tried to explain that you need to focus on a specific user and understand their problem first, he blurted out, “Who has time to understand the end user first? That’s not how the real world works!” which made me laugh! (I thought of the famous quote “I’m not young enough to know everything”; I also thought this was probably my karmic payback for all the stupid questions I asked others in the past…)

Another teammate from this group came up to clarify my comments. After I explained my rationale with respect to the design thinking process, he looked extremely confused and shocked, and said something like “wait, so design thinking isn’t just about generating as many ideas as possible?? That’s what I was taught in poly(technic)!”


I have been mulling over all this for the past few weeks, especially since I was concurrently conducting a customized design thinking training for a client. I saw much of the same initial behaviour from the clients, but also saw a gradual shift as they experienced the design thinking process. I think the biggest shift came from their own first-hand experience of how a human-centred design focus, starting with empathy, yielded significantly different responses when they spoke with their end-users.

In the case of my clients, they were a team of 15 people willing to invest a significant amount of time and energy: to date, we have run 4 x 8 hr design training sessions, combining work on a mock project in the mornings (to expose them to the design process) with afternoons spent applying their learnings to their own project. I also coached them by sitting in on their research interviews and debriefing them afterwards. (This is in contrast with the SIT programme, which comprises 6 x 3 hr sessions, with the homework largely unsupervised.)

But what’s interesting for me was to observe the same anxieties and impulses from my clients at the start, and I began to suspect that maybe there is a larger societal conditioning at play.


Ex-poker player and Thinking in Bets author Annie Duke popularized the poker term “resulting”:

Pete Carroll was a victim of (PJ: my emphasis) our tendency to equate the quality of a decision with the quality of its outcome. Poker players have a word for this: “resulting”.

Basically, if the results were great, that was a GREAT decision. If the results sucked, that was a BAD decision.

And I think the root cause for the behaviour I’ve observed in design education really stems from Singapore being a “resulting” society: everything boils down to your results, because everyone judges everyone else by their results.

“Resulting” is a natural result of meritocracy.

Because results are paramount:

  • there’s no need to dive deeper into problem definition, when AI/apps/whatever magic bullet solution can solve everything; we just need to deliver the solution.
  • if one feature cannot deliver the desired results, let’s try as many features as possible: one of them has to work!
  • prototypes are results, which we can use to show other people that we are doing good work.

When results are the main way you are judged, then who has time to learn and understand root causes? Just show results!

Perhaps more importantly, if results are the only way you’re judged, then everyone will just focus on what is tangible and measurable, rather than what really matters.

It’s a little bit like marrying someone simply because their body measurements fit your ideal, their bank account has the required amount, and their personality test meets the requirements. The known-unknowns and the unknown-unknowns are what really matter here, and uncovering those requires a lot of exploration and dealing with ambiguity.

And the “resulting” mindset also extends to the educators’ side: the poor guy’s belief that his poly taught design thinking as “just generate as many ideas as possible” could simply be his (mis)interpretation. But it could also be symptomatic of an educational system focused on delivering results, in the form of “our students have learned design thinking/programming/psychology”, as well as “our students have progressed the fastest” and “our students have scored the best in XYZ”. Hence, you get posts like this:

In my opinion, this post is missing the point, which is: in assessments and tests like this OECD test, the scope of the problem is already defined.

But the hardest part of real-life, ambiguous problems, where creativity is most needed, is often asking the right questions and deciding if the problem is the RIGHT problem. Basically, the hardest part is often deciding what the exam question should be, rather than solving a question defined by someone else.

“Design thinking/programming/AI/creativity/whatever is the solution, but what was the question?”


Started on 1 Aug 24 at 0910hrs. Finished on 1 Aug 24 at 1047hrs.