Name Dropping is a Q&A series that aims to elevate the stories of leaders who identify as women or nonbinary in the tech space. The idea came from Angela DeFranco, a former VP of Product at HubSpot, who said one way to be better allies is to name drop underrepresented voices in discussions of achievement, inspiration, and disruptors in tech, instead of referencing, time and again, the same set of (often male) leaders.

This edition of Name Dropping features Jen Gennai, Founder & Director of the Responsible Innovation Group, Google.

You're Google's Director and Founder of Responsible Innovation. What does 'responsible innovation' mean to you?

There are enormous opportunities to use AI for positive impact, and there is also the potential for harm; the key is ethical deployment. So at Google we favor a sensible, informed discussion. “Responsible Innovation” for me means taking deliberate steps to ensure technology works the way it’s intended and doesn’t lead to malicious or unintended negative consequences. It is the application of ethical decision-making processes and proactive consideration of the effects of advanced technology on society and the environment, throughout the research and product development lifecycle. 

You can’t just assume that nothing bad will happen or that noble goals will be realized; you have to actively work at it. This involves developing smart practices, repeatable, trusted processes, and a governance structure for accountability. We are also careful not to call ourselves the “ethics” team, because “being ethical” can mean different things to different people, and my threshold for what’s ethical may differ from yours. So we don’t mean to imply that people are being “unethical”; rather, we need to be conscious of who will be affected by our technology and how they may be exposed to unintended or malicious effects.

My Responsible Innovation team handles day-to-day operations and initial assessments for Google’s AI Principles, and includes user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, legal experts, and former diplomats on both a full- and part-time basis. This approach deliberately allows for diversity and inclusion of perspectives and disciplines, while we work with a number of teams and experts across functions in the company, including the newly announced group under Marian Croak, the Center for Responsible AI in the Research organization. This group complements our policy, operations, and governance focus with its technical and research expertise.

You were also the lead on co-writing and publishing Google's AI Principles. What was your team's main goal with this document, and what are examples of some of the principles you consider key to the future of AI?

As we put it in the original blog post when we announced the Principles: “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So [we’ve announced] seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.” At the same time we made it clear that we will not design or deploy AI in four application areas to ensure we have guardrails around what we will not do. 

I spent many years at Google on the Trust & Safety team, leading the User Research team and a team called User Advocacy, so taking a human-centered approach to technology and understanding its effects on people has been a focus of mine for years. Getting to inform the scope and definition of the AI Principles allowed me to apply years of learning if and where technology can go wrong to a set of ethical principles that would guide Google in the years ahead. My personal approach with the AI Principles was to ensure they focused on the impacts on people and society, not the company or the technology.

I’m biased to say all of the AI Principles are critical for ensuring responsible AI now and in the future, but I’ll say the notion of considering harm is key. Anyone building or deploying technology must consider the impact of that technology on individuals and on society more broadly, especially potential effects on historically marginalized communities and those less represented in tech companies, whose lived experiences may therefore be less obvious to those designing and deploying these advanced technologies.

What's one of the most exciting innovations you think we're going to see from the tech world in the next few years? 

AI can bring enormous benefits to society in areas as diverse as healthcare, science, the environment, education, and business. I’m most excited that technology, and specifically AI, can help us understand and address global problems like never before: identifying new ways of mitigating the worst effects of climate change, helping cities, businesses, and agriculture adapt, finding new sources of clean energy, and improving the health and longevity of human lives.

In addition to a degree in management and information systems, you hold certificates in subjects like 'Greening the Economy' and 'Environmental Law.' What role does sustainability play in how you make business decisions? 

I see sustainability as predominantly about building for the future and for the benefit of the world, encompassing both human civilization and Earth’s ecosystems and biosphere. I apply those notions of future planning, optimizing global benefits, and mitigating severe negative effects in my responsible innovation efforts. Both sustainability and responsible innovation involve thinking about people beyond yourself now, as well as a future that won’t involve you. That helps me make decisions: you get comfortable planning, on a global scale, for futures in which you can’t predict everything but must work to mitigate as many scenarios as you can.

The concepts of system dynamics and the interconnectedness of different ecosystems’ inputs and outputs are also useful when thinking about advanced technologies: the technologies we work on don’t operate in a vacuum, and there are many stakeholders involved throughout the lifecycle. And finally, I have had to accept that you can never “solve” sustainability, just as you can’t solve “ethics” or “fairness”; you keep iterating, working on it, and making it better day to day, sometimes incrementally but also in leaps and bounds.

Sustainability efforts are also paramount at Google, and recently we announced a variety of ways our products can help users reduce their carbon footprint, as well as how the company is also reducing its environmental impact.

What is one quality that you think every leader should have in order to generate impact, and lead effectively? 

Humility. No one knows everything, so it’s important for everyone, but especially leaders, to be comfortable accepting that you don’t, and can’t, know everything. You need to listen to others, recognize what you don’t know, notice who is not in the room, and identify what expertise and lived experiences are needed. Lead by example, and ask as many questions as you give answers or advice. Be open to constructive feedback, which will help you and your teams or business reach better outcomes.

What career achievement are you most proud of so far? 

I’m most proud of making the case for, and subsequently building, the Responsible Innovation team from scratch. My team is made up of amazing people, and they’re doing something that’s never been done before: creating an effective governance structure within a large, global tech company to ensure our technology aligns with our ethical AI charter and that all employees are empowered and equipped to apply a critical ethical lens to their work, day to day and throughout the product design lifecycle.

My team is working to ensure that this complex, innovative, global company develops and deploys technology in the best known way for the short- and long-term benefit of individual users and society more broadly. That’s really hard! There’s no roadmap for this, and every day my team shows up to tackle some of the most complex issues of our time, despite the scrutiny, the pushback, and external stakeholders’ suspicions about our motivations and accountability. They do it because they believe in this work, they have seen the impact they’ve had, and they take seriously their responsibility to ensure AI has a positive impact on the world. I’m grateful to have been given the opportunity to establish and grow this team, and to get to work with such amazing people every day. I learn something new from each and every one of them all the time.

What's one book you think everyone working in tech should read? 

The Clapback, which isn’t a technology book, but it will help people who work in technology think more deeply about the sources and implications of stereotypes. I’m biased, though, because it was written by a former colleague and friend, Elijah Lawal.

Who’s one woman or nonbinary person in technology you’d like to name drop and why?

I’m not sure if this is strictly technology enough, but Mercury Stardust is a TikTok star known as the Trans Handy Ma'am. She calls herself a "Jane of all trades," and she’s so good at making DIY projects accessible and fun. At Google our founders often said “you can be serious without a suit,” and I think she exemplifies that you can be anything you want; just play to your strengths. I want to name drop her because I’ve learned so much from her, and she has a non-traditional parallel career as a burlesque star. You can find her as Trans Handy Ma'am on TikTok at @mercurystardust.

What's your preferred method of self-care? It could be anything: meditation, a funny show, family time, long walks, etc. 

I love decompressing and feeling fulfilled by taking a hike through forests or up mountains, preferably with a nice big dinner and drinks at the end that I can happily feel I’ve earned. But since I can’t always get a long hike in during the week, laughing and eating with friends or family always makes me feel better.

Know another woman or nonbinary person whose name we should drop? Tweet us at @HubSpotDev with ideas.

Interested in working with people who care about thoughtful leadership? Check out our open positions and apply.
