Ways We Make: Quiddale O’Sullivan, Chapter One
Second in this series is a fabulously intriguing collection of thoughts and experiences from Quiddale, Q for short. It’s so rich that we couldn’t bear to cut it, so we’ve split it into two chapters. This week: Career Through Craft. Chapter Two out next week.
Q for short
Previously of Google, Meta, the UN, and Foster & Partners, Quiddale (Q for short) is a London-based product designer specialising in AR/VR and AI. He now works at Helsing, developing AI-powered civil defence systems.
Portfolio: https://www.qforshort.com/
LinkedIn: https://www.linkedin.com/in/quiddaleosullivan
Pronouns: he/him
Career
What was a pivotal “aha” moment in your career?
The Shift in Perspective (The "Aha")
The "aha" moment was realizing we were treating Tensorflow Lite (a Google product now known as Lite Runtime) as a technical data-collection challenge when it was actually a social contract problem. The blocker wasn't the hardware; it was trust. People in the real world don't want to be treated like lab subjects. My perspective shifted from "How can we get more data from people?" to "How can we turn participants into genuine partners and build a system so transparent that they want to help?"
The Radical New Path
Based on this, I advocated for a radical new path focused on transparency. Instead of just getting a signature on a consent form, we pushed to develop tools that gave participants real-time control, like a "delete" button for recent captures and a clear view of what data was being recorded. We reframed the project's primary goal from just hitting a data quota to creating a blueprint for ethical, large-scale data collection in public.
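To make that concrete, here is a minimal sketch, in Python, of what participant-facing controls like these could look like. The class and method names are invented for illustration; nothing here is taken from the actual project.

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Capture:
    timestamp: datetime
    kind: str      # e.g. "video", "depth", "audio"
    frames: int

class CaptureSession:
    """Holds recent captures locally until the participant approves upload."""

    def __init__(self, recent_window: int = 50):
        self._recent: deque[Capture] = deque(maxlen=recent_window)

    def record(self, kind: str, frames: int) -> None:
        self._recent.append(Capture(datetime.now(timezone.utc), kind, frames))

    def what_is_being_recorded(self) -> list[str]:
        # The "clear view": a human-readable log the participant can inspect.
        return [f"{c.timestamp:%H:%M:%S} {c.kind} ({c.frames} frames)"
                for c in self._recent]

    def delete_recent(self, n: int = 1) -> int:
        # The "delete button": drop the last n captures before they leave the device.
        deleted = 0
        while self._recent and deleted < n:
            self._recent.pop()
            deleted += 1
        return deleted
```

The design point is that deletion happens on the device, before any upload, so the participant's control is real rather than a request to a server.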
The Outcome & Learning
The outcome was transformative. While our data collection rate was slower at first, the quality and diversity of the data we received were far higher because our research partners were more engaged. More importantly, we proved that the social and technical challenges of Augmented Reality must be solved in parallel, not sequentially. The biggest success wasn't the dataset itself, but the trusted methodology we created. That was the true 10x breakthrough.
What do you wish someone had told you earlier in your journey?
I wish someone had told me that the most important role of a leader is not to have the great idea, but to be the chief defender of that idea against the gravitational pull of mediocrity.
Your biggest enemy isn't the competition. It's the noise, the committees, the market research, and the well-meaning people who will tell you a thousand reasons why it can't be done. The real battle is protecting a fragile, beautiful idea and the small team of A-players carrying it until it's strong enough to stand on its own.
Do you feel like the industry values the same things you do?
Not entirely, and that difference is fundamental to our mission.
The broader industry is largely driven by the immediate cadence of product cycles and commercial application. It's incentivized to find a clever use for a known capability and scale it as quickly as possible. That is important work, but it's fundamentally about application.
We believe our primary purpose is discovery. Our mission is to solve intelligence itself, to understand the fundamental mechanisms of learning and thought. We see ourselves as a scientific endeavor, more akin to a modern-day Bell Labs or CERN for AI. Our "products" are breakthroughs like AlphaFold, which aren't designed to capture market share, but to fundamentally advance science for the benefit of humanity.
The industry often looks at the weather; we are trying to build a better barometer. While there's a healthy and necessary overlap, and the industry is increasingly recognizing the power of fundamental research, our core values are oriented toward a much longer-term scientific horizon.
Has your relationship to design changed over time? How?
Early on, I was obsessed with the object itself: the form, the material, the finish. The goal was to perfect the tangible thing in front of you. It's a natural starting point.
Over time, especially within our studio, I realized the object is simply an artifact of the process. My fascination shifted to how we work, that incredible dialogue between engineering, software, and industrial design. The "how" became more interesting than the "what."
Now, my focus is almost entirely on the people and the culture. The greatest design challenge isn't the product; it's creating and nurturing a studio environment where a diverse group of talented people can do their life's best work, together. My relationship is now with the garden, not just the flowers that grow in it.
What’s a risk you took in your design career that shaped you?
The risk was insisting on rigorous user research, late in the schedule, for a design everyone internally loved. That research was a humbling disaster. The design completely failed when put in front of real users: they couldn't find core features and were frustrated.
Because we caught this before launch, we were able to pivot and fix the fundamental architectural problems, not just the surface-level bugs. The launch was delayed by a month, but it was a massive success. That experience shaped my entire career. It taught me that the biggest design risk isn't shipping a product late; it's shipping a product that doesn't respect its users. My job isn't just to manage timelines; it's to be the chief advocate for the user's reality, no matter how inconvenient.
Belief
What’s a belief about design you hold that others might find surprising?
I believe that the integrity of a product is defined by the fanatical care invested in the parts you will never see, because at planetary scale, that is the only thing that prevents the entire system from degrading into chaos.
Unseen Integrity vs. Scaled Chaos
The 'Unseen' Part: The crucial, unseen component is the data anonymization pipeline and the user-controlled deletion mechanism. This isn't just a feature; it's the ethical bedrock of the entire project. Fanatical care here means the anonymization isn't just good; it's rigorously tested against adversarial attacks. The delete function doesn't just hide data; it provides verifiable proof that the user's data has been irrecoverably purged.
Preventing 'Chaos': The 'chaos' at a planetary scale isn't a server crash; it's a systemic collapse of trust. If this unseen integrity fails, if faces aren't properly blurred or a user's 'deleted' data is found on a server, the public and regulatory backlash would be catastrophic. The product wouldn't just fail; it would poison the well for the entire future of AR.
Therefore, the ultimate design is defined not by its camera resolution, but by the verifiable integrity of its data handling: the part the user must trust but will never truly see.
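One established pattern for making deletion verifiable rather than cosmetic is crypto-shredding: encrypt each person's data under its own key, and have "delete" destroy the key so the ciphertext becomes unrecoverable. The sketch below (Python, using the cryptography package) is purely illustrative, not a description of the system discussed above.

```python
import hashlib
from cryptography.fernet import Fernet  # assumed dependency: pip install cryptography

class UserVault:
    """Per-user encrypted store; destroying the key renders the data unrecoverable."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self._key: bytes | None = Fernet.generate_key()
        self._blobs: list[bytes] = []

    def store(self, data: bytes) -> None:
        if self._key is None:
            raise RuntimeError("vault has been purged")
        self._blobs.append(Fernet(self._key).encrypt(data))

    def purge(self) -> str:
        """Destroy the key and return a tombstone hash as the user's receipt."""
        receipt = hashlib.sha256(f"purged:{self.user_id}".encode()).hexdigest()
        self._key = None       # without the key, the ciphertexts are noise
        self._blobs.clear()
        return receipt
```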
Craft
Are you more of a sketchpad thinker or a keyboard executor?
Prototyping as thinking
For world models, the false dichotomy is between the abstract cognitive architecture (the "sketch" of how the model represents concepts like objects, physics, and causality) and the computational substrate (the "keyboard" work of the neural network architecture, data pipelines, and simulation speed).
The "keyboard as sketchpad" approach means you treat these as a single, rapid feedback loop.
Isolate a core hypothesis: You don't start with "how does the world work?" You start with a tiny, falsifiable question like, "Can a transformer-based model predict the next 10 frames of a video of a ball bouncing more efficiently than a convolutional neural network?"
Build the prototype: You immediately code both simplified models. This isn't about building the final system; it's about building a cheap experiment to test your architectural idea.
Measure the results: You measure everything: training speed, prediction accuracy, and computational cost.
The performance of that prototype is the most valuable insight you can have. An elegant architectural idea for representing the world is worthless if it's too slow to run or impossible to train. The prototype, therefore, isn't just an implementation of your idea; it's the most truthful sketch of it.
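For a flavour of what such a cheap experiment can look like, here is a hedged sketch in PyTorch, simplified to one-step-ahead frame prediction on synthetic data. The architectures, sizes, and dataset are illustrative stand-ins, not the real models.

```python
import time
import torch
import torch.nn as nn

def make_synthetic_clips(n=64, t=11, size=32):
    # Random stand-in for the bouncing-ball videos; a real experiment
    # would render actual physics.
    return torch.rand(n, t, 1, size, size)

class ConvPredictor(nn.Module):
    """Predicts frame t+1 from frame t with a small convolutional stack."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):  # x: (B, 1, H, W)
        return self.net(x)

class TransformerPredictor(nn.Module):
    """Splits each frame into patch tokens and runs a tiny transformer."""
    def __init__(self, size=32, patch=8, dim=64):
        super().__init__()
        self.size, self.patch = size, patch
        self.embed = nn.Linear(patch * patch, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.unembed = nn.Linear(dim, patch * patch)
    def forward(self, x):  # x: (B, 1, H, W)
        b, p, h = x.shape[0], self.patch, self.size // self.patch
        tokens = x.unfold(2, p, p).unfold(3, p, p).reshape(b, -1, p * p)
        out = self.unembed(self.encoder(self.embed(tokens)))
        out = out.reshape(b, 1, h, h, p, p).permute(0, 1, 2, 4, 3, 5)
        return out.reshape(b, 1, self.size, self.size)

def benchmark(model, clips, steps=20):
    # Measure exactly what the loop above calls for:
    # training speed, prediction accuracy, and computational cost.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = clips[:, :-1].flatten(0, 1), clips[:, 1:].flatten(0, 1)
    start = time.time()
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return {"final_loss": round(loss.item(), 4),
            "secs_per_step": round((time.time() - start) / steps, 3),
            "params": sum(p.numel() for p in model.parameters())}

clips = make_synthetic_clips()
for name, m in [("conv", ConvPredictor()), ("transformer", TransformerPredictor())]:
    print(name, benchmark(m, clips))
```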
How do you approach designing for something invisible, like AI or ethics?
That's the fundamental challenge. We approach it by making the invisible visible through principled design. You don't design the final property, like "intelligence" or "ethics"; you design the tangible process from which it can emerge.
For Intelligence: We take inspiration from neuroscience. We don't try to hand-code "intelligence" itself. Instead, we design the visible mechanisms (the learning algorithms and network architectures) that allow intelligence to emerge when the system interacts with data. We design the seed and the learning environment, not the final tree.
For Ethics: The principle is the same. You can't simply program a model to "be ethical." Instead, you design an explicit, auditable framework of rules, constraints, and oversight mechanisms. You build visible tools for interpretability to understand the AI's reasoning, and you rigorously test for unintended behaviours. The ethical conduct arises from this robust, transparent framework.
What’s a tool or workflow trick that changed everything for you?
The most pivotal tool for me was never a piece of software, but an analog one: a simple, lined notebook that I use for time-block planning.
Before I adopted this, my days were guided by a to-do list. I'd arrive at my desk and ask, "What should I work on next?" This seems productive, but it's a trap. Your brain will almost always choose the easier, more immediate, "shallow" task over the harder, more valuable "deep" work. The to-do list paradigm puts you in a constant state of reactivity.
Time-block planning changed everything because it forces you to make decisions about your time in advance. At the start of each day, I give every minute a job. I schedule the deep work first: large, uninterrupted blocks for writing or research. The shallow tasks, like email, are relegated to small, pre-determined blocks. I'm no longer asking "what's next?"; I'm just executing a thoughtful plan.
This single workflow trick inverted my relationship with work. It shifted me from a state of professional reactivity to one of proactive intentionality.
Do you think your best work is the most technically perfect, or the most emotionally resonant?
The honest answer is that the two converge. The most technically demanding work I've seen produced a profoundly emotional experience for people: for the first time, they could ask a question and get a relevant answer almost instantly. It felt like magic. That feeling of empowerment and instant access to knowledge was the emotional resonance, but it wasn't a feature we designed. It was an emergent property of the underlying technical excellence.
My view is that the best work is when the technology is so good it disappears, leaving only the user and their solved problem.
How do you balance intuition and research?
Intuition is the explorer who returns insisting there is a new continent just over the horizon. Research is the skeptical cartographer we send on the next ship. Their job is not to admire the explorer's vision; their job is to find the fatal flaws in it. They rigorously map the terrain, specifically looking for the impassable mountains or uncrossable rivers that will kill the project. The goal of research, especially early on, isn't to prove the intuition right; it's to find the fastest, cheapest way to prove it wrong.
This partnership, which we call "enthusiastic skepticism," is the engine of my work. You need both the wild dream and the rigorous, evidence-based attempt to kill that dream.
What's your first move when you're stuck?
The best way to get unstuck is to find a physical analog for the digital problem. If we're designing a new way to organize photos, we'll print them out and watch how people naturally sort them on a real table. If we're working on a notification system, we'll explore the real-world, physical ways people get each other's attention: from a polite tap on the shoulder to an urgent shout.
This isn't about copying the real world directly, but about rediscovering the fundamental, intuitive human interaction that has been refined over centuries. It brings a sense of familiar humanity back to the work and almost always breaks the creative logjam.
What makes your work especially exciting or hard?
The Challenge: Fighting Physics
Unlike cloud-based AI where you can often solve a problem with more computing power, on-device ML is a constant battle against the physical limits of hardware. Every byte of RAM, every CPU cycle, and every milliwatt of battery power is incredibly precious. A model that works perfectly on a server can completely fail on a mobile phone or microcontroller. You're not just designing an algorithm; you're designing it to fit within a tiny, unchangeable box.
The Excitement: Forced Ingenuity
That extreme constraint is precisely what makes it so exciting. You can't rely on brute force, so you're forced to be incredibly clever. The thrill comes from finding a new optimization or a more efficient quantization technique that makes the impossible possible. It's the ultimate puzzle: taking the power of a massive neural network and elegantly compressing it to run in real-time on a device that fits in your hand. The satisfaction comes from that final moment when a powerful AI capability works instantly, right on the device, using almost no power.
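As one small, concrete example of that kind of optimization, here is a minimal sketch of post-training dynamic-range quantization using the TensorFlow Lite converter. The toy model and file path are placeholders; a real pipeline would also validate accuracy after conversion.

```python
import tensorflow as tf

# Any trained Keras model would do; this toy one stands in for a real network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic-range quantization: weights stored as int8, roughly 4x smaller,
# with faster integer kernels where the hardware supports them.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:  # placeholder path
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```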
How do constraints – like regulation, materials, or data privacy – shape your creativity?
How Constraints Force Better Solutions
A blank slate is often a trap. It's too big, too undefined. Constraints turn a vague problem into a sharp puzzle, forcing a higher level of ingenuity.
Data Privacy: A regulation like GDPR isn't just a legal hurdle; it's a creative prompt. It forces you to ask, "How can we provide a deeply valuable service while being maximally respectful of the user?" This leads to more clever and trustworthy designs that often rely on on-device processing or privacy-preserving techniques.
Materials: Being told you can only use a sustainable or recycled material forces you to invent new manufacturing processes and design forms you would never have considered otherwise.
Constraints eliminate the lazy, brute-force answers and leave you with a much more interesting challenge, which is where the real creativity begins.
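To ground that with one concrete example of a privacy-preserving technique, here is a minimal sketch of the Laplace mechanism from differential privacy. The query and the epsilon value are made up for illustration.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic with epsilon-differential privacy by adding
    Laplace noise scaled to the query's sensitivity."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# e.g. a count over user records (sensitivity 1) at a made-up epsilon of 0.5
print(laplace_mechanism(true_value=412, sensitivity=1.0, epsilon=0.5))
```

The constraint does the creative work: because you can never release the raw count, you are forced to decide exactly how much of the signal the service truly needs.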
What's a piece of advice you give often, but find hard to follow yourself?
When pursuing ambitious goals, it’s easy to become emotionally attached to the solutions you create, whether it’s a product, a project, or a method. After all, these solutions represent your team’s hard work, passion, and hopes. However, the key to true innovation is to fall in love with the problem, not the solution. This means staying focused on the ultimate goal – like “clean energy for everyone” or “radically new ways to learn” – and being willing to pivot, scrap work, or embrace new approaches if they better serve that mission.

While it’s intellectually clear that abandoning a solution not on a 10x trajectory is the right move, emotionally, it can feel like failure. Letting go of something you’ve poured your life into is one of the hardest parts of the job, but it’s often the moment that opens the door to real breakthroughs. The most successful innovators are those who can detach from their solutions and remain loyal to the problem, even when it means walking away from months or years of effort.
Loved this? Subscribe for more thoughtful content from Good Maven and don’t forget to come back next week to get the rest of Q’s download.