Many people claim generative AI will make all stories real time and interactive. What is your take?
Liya Safina: Google recently released Genie, which generates interactive environments from a few prompts. It shows how world building will become accessible to everyone. Every story, brand or show is a form of world building, with rules, characters and physics. That “power” used to sit with a very small group of people who knew the tools and had the budgets. Fan fiction existed because people wanted to take characters and make new versions of them. Generative environments open that up. If I love a movie from the 80s, I could soon watch it inside a world I generate on the fly. The tool is neutral, like a hammer that can build or harm; society shapes the outcome. Designers must anticipate harms and protect users, especially as people take existing stories in new directions. I think real time generative worlds are coming, but the responsibility lies in designing guardrails rather than limiting creativity.
What kinds of collaborations are needed to unlock the next breakthroughs in immersive tech?
Liya Safina: Beyond engineers and designers, we need psychologists and anthropologists to guide how we design for human behavior. We also need policymakers involved earlier. I am a big admirer of Tristan Harris from the Center for Humane Technology. His work shows how tech can create harms designers never intended. He compares it to defensive driving, where you move through traffic anticipating danger instead of reacting to it. Technology needs that approach. Policies should not fight innovation; they should evolve alongside it. Right now policymakers are too far removed from development. If they were embedded in research sessions or visited companies monthly, the understanding would be deeper and regulations more effective. With the right structure, companies could collaborate without exposing trade secrets. We need a triangle between users, policymakers and tech creators, with communication happening continuously rather than after harms appear.
From your experience, would companies even want policymakers inside during development?
Liya Safina: In my experience at Google, I was surprised by how much the company genuinely cares about user choice and safety. There is a voluntary desire to prevent harm, not because someone forces it but because it aligns with their values. If the right frameworks existed, I think collaboration would be possible. It would take careful setup to protect competition and confidentiality, but it could be done. That is not true for all companies, though, and there are places I would not work regardless of how advanced their technology is. The ones I admire already try to build protections before problems arise. It will not happen in one year, but with two years of structured planning and trust building, regular policymaker collaboration could become normal. Innovation would move faster if the regulatory perspective were integrated earlier rather than introduced after public incidents.
What does responsible innovation look like to you?
Liya Safina: I can only speak from my overall experience across companies. For me, the non-negotiable is protecting children. That must be in place from the beginning, even if other aspects iterate later. Some technologies should simply not be available to kids without supervision, and parents need clear opt-in controls. Beyond that, no company intends harm. Often an unexpected case becomes a trigger event that forces revisions of guardrails. Responsibility means acknowledging that early versions will be imperfect but committing to continuous improvement. In my view, children require stricter boundaries while other features can evolve through iteration. As long as companies recognize this hierarchy of safety and treat it seriously, responsible innovation becomes a realistic standard.
Why do certain products become habits while others remain one-time novelties?
Liya Safina: Products become sticky when they serve a clear primary use case and genuinely help people achieve goals. Access, convenience and time savings matter. Smartwatches did not change the world, but they nailed a core purpose around health, fitness and everyday utility. With smart glasses the hardware is still not matching people’s expectations. Fit, style, overheating and battery life are huge factors. Some of my UX concepts cannot be built until the hardware catches up, which may take several years (especially for spatial UX). There is also the unknown trigger event. QR codes have existed since the 1990s but only took off during COVID, when people suddenly needed touchless interaction. Smart glasses may need a similar trigger. My speculation is they might become essential once self-driving cars are common. As a cyclist you rely on eye contact with a driver to know you are safe. With no driver, glasses could become the communication layer. That type of moment can shift a niche product into mass adoption.
What are the concerns around smart glasses and how should society approach them?
Liya Safina: Privacy is the biggest concern. If someone wears glasses, how do the people around them know whether they are being recorded? It took society years to collectively agree that holding a phone upright means you are filming. Early technologies require time for norms to form. Big companies take privacy seriously, but they cannot get everything right in the first version. The expectation that the first release must solve all issues is unrealistic. Early adopters help refine the product. What hurt progress was when companies were punished for early attempts, as when Google stopped releasing glasses for a decade. That halted innovation. People also need to understand this is an iterative process where public feedback, policy involvement and continuous improvement shape the path. We should not expect perfection at launch. Instead, we should design systems that evolve responsibly as society learns and adapts.